CN116567205A - Video injection-based intelligent automobile multi-path camera on-loop testing method - Google Patents

Info

Publication number
CN116567205A
CN116567205A (Application CN202310575576.XA)
Authority
CN
China
Prior art keywords
test
scene
camera
parameters
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310575576.XA
Other languages
Chinese (zh)
Inventor
朱冰
黄殷梓
赵健
高质桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202310575576.XA priority Critical patent/CN116567205A/en
Publication of CN116567205A publication Critical patent/CN116567205A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent automobile multi-path camera on-loop test method based on video injection, which comprises test object parameter selection, test scene design, edge scene extraction and test result evaluation.

Description

Video injection-based intelligent automobile multi-path camera on-loop testing method
Technical Field
The invention relates to an intelligent automobile camera on-loop testing method, in particular to an intelligent automobile multi-path camera on-loop testing method based on video injection.
Background
The camera sensor is one of the most widely used hardware devices on intelligent automobiles. Its cost is lower than that of lidar and ultrasonic radar, and its rich capture of scene features has made it the preferred perception hardware of automobile manufacturers; moreover, as intelligent automobile functions become richer and more complex, the number of cameras required keeps increasing. However, compared with lidar, millimeter-wave radar and ultrasonic radar, the camera is a passive sensor and is strongly affected by external factors in the scene, such as complex lighting conditions and complex weather conditions such as rainfall, fog and snowfall. For these complex conditions, a large amount of related testing work is needed before an intelligent automobile equipped with camera sensors can be deployed, so as to meet its safety requirements. If this work is carried out with the traditional real-vehicle mileage-based test method, a great deal of time and economic cost is required; one effective solution is to test on the basis of simulated scenes. A purely simulated test, however, deviates to a certain extent from a real-vehicle test. Hardware-in-the-loop testing can therefore overcome the high cost, poor repeatability and low safety of real-vehicle tests while also alleviating the limited fidelity of purely virtual simulation tests. Hardware-in-the-loop testing means embedding a hardware entity into a virtual simulation test environment and using the response of the real hardware to carry out test tasks on the intelligent automobile system; camera-in-the-loop testing embeds a physical camera into the virtual simulation test environment and comes in two forms, the projection-screen camera-in-the-loop test and the injection-type camera-in-the-loop test. The projection-screen test is severely affected by external light, while the injection-type test faces problems such as the infinitely rich parameters that influence camera test results, low parameter coverage, slow construction of an edge test scene library, and the lack of evaluation of that library.
Disclosure of Invention
In order to solve the technical problems, the invention provides an intelligent automobile multi-path camera on-loop testing method based on video injection, which comprises the following steps:
test object parameter selection
Firstly, the parameters of the test object, namely the parameters of the camera under test, are selected. The test object parameters mainly comprise two aspects: camera installation parameters and camera performance parameters. The camera installation parameters comprise the installation position and the installation angle, and the camera performance parameters comprise the camera focal length and the camera photosensitive element size.
The installation position of the camera refers to the translation coordinates (x_a, y_a, z_a) in a frame with the vehicle centroid as the origin, the forward motion direction of the vehicle as the x axis, the left as the y axis and the vertical upward as the z axis, where the subscript a denotes the number of the corresponding camera. The installation angle of the camera refers to the rotation angles (σ_a, β_a, γ_a) about the x, y and z axes, respectively. The camera focal length f refers to the focal length of the lens, namely the distance from the rear optical principal point of the lens to the focal point, in millimeters. The camera photosensitive element size (X_a, Y_a) refers to the physical size of the CMOS photosensitive element to be simulated, in millimeters.
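To make the parameterization concrete, the sketch below (Python; the class, field names and example values are illustrative assumptions, not taken from the patent) groups the installation and performance parameters of one camera channel and derives its horizontal field of view from the standard pinhole relation FOV = 2·arctan(X_a / (2f)):

```python
import math
from dataclasses import dataclass

@dataclass
class CameraConfig:
    """Illustrative container for one camera channel (names are assumed, not from the patent)."""
    cam_id: int              # a = 1..4 for a four-channel setup
    position_m: tuple        # (x_a, y_a, z_a), vehicle-centroid frame, metres
    rotation_deg: tuple      # (sigma_a, beta_a, gamma_a) about the x, y, z axes
    focal_length_mm: float   # f
    sensor_size_mm: tuple    # (X_a, Y_a), physical CMOS size

    def horizontal_fov_deg(self) -> float:
        # Standard pinhole relation: FOV = 2 * atan(sensor_width / (2 * f))
        return math.degrees(2 * math.atan(self.sensor_size_mm[0] / (2 * self.focal_length_mm)))

front_cam = CameraConfig(1, (1.8, 0.0, 1.3), (0.0, 0.0, 0.0), 6.0, (5.76, 3.24))
print(f"front camera horizontal FOV ≈ {front_cam.horizontal_fov_deg():.1f}°")
```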
(II) test scene design
The test scene parameters of the invention comprise the motion state of the vehicle, meteorological types and target objects;
(1) The motion state of the vehicle comprises the speed v_vut of the vehicle under test;
(2) The weather comprises illumination conditions and complex weather: the illumination condition mainly describes the position of the solar light source in the scene and is described by an azimuth angle α and a polar angle θ; the complex weather parameters include rainfall weather, heavy fog weather and snowfall weather;
Rainfall weather is described by the rainfall intensity μ and the raindrop diameter D_rain;
Through these parameters the corresponding rainfall scene parameters are set in the virtual simulation software;
Heavy fog weather is described by the visibility D_fog;
Snowfall weather is described by the snowflake diameter D_snow;
(3) The target object parameters comprise the target object class T, the target object running direction Dri, the target movement speed v_tar and the collision position P;
The target object class T comprises pedestrians and vehicles; the target running direction Dri comprises far-side, near-side and longitudinal for pedestrians, and driving in the same direction as the ego vehicle or crossing its path for vehicles; the target movement speed v_tar takes 5 to 8 km/h for pedestrians and, for vehicles, is taken as stationary, 20 km/h or 50 km/h with a braking deceleration of −2 to −6 m/s²; the collision position P is the position, across the front of the vehicle under test, at which the vehicle under test and the target object finally collide given the speed of the vehicle under test; P takes 25%–75% for pedestrians and −50% for vehicles.
Each test scene is described by an array of scene parameters; the test scene parameter vector Π is specifically expressed as: Π = [v_vut, α, θ, μ, D_rain, D_fog, D_snow, T, Dri, v_tar, P].
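A minimal sketch of one such scene parameter vector Π as a Python mapping is shown below; the field names, units and example values are illustrative assumptions used only to make the parameter array concrete:

```python
# One test scene parameter vector Π (illustrative; field names and values are assumed).
scene = {
    "v_vut_kmh": 40.0,            # ego (vehicle-under-test) speed v_vut
    "sun_azimuth_deg": 135.0,     # α
    "sun_polar_deg": 60.0,        # θ
    "rain_intensity_mmh": 8.0,    # μ
    "raindrop_diameter_mm": 1.5,  # D_rain
    "fog_visibility_m": 300.0,    # D_fog
    "snowflake_diameter_mm": 0.0, # D_snow (0 = no snowfall)
    "target_class": "pedestrian", # T
    "target_direction": "near_side",  # Dri
    "v_target_kmh": 6.0,          # v_tar
    "collision_position_pct": 50.0,   # P
}
```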
(III) edge scene extraction
The invention defines an edge test scene Π_tar as a scene in which at least one test scene parameter satisfies the condition that changing that parameter by a certain threshold causes a large change in the behaviour or decision of the object under test; the corresponding change threshold is set according to the test accuracy and test requirements.
The screening steps of the edge test scene Π_tar are as follows:
the first step: initializing parameters of a camera to be tested and discontinuous variable values in a test scene;
The second step: chaos possesses a certain randomness and sensitivity to initial values, which gives the algorithm a faster convergence rate; a random chaotic sequence is therefore generated with the Sine chaotic map to initialize the continuous test scene parameters in a test scene Π, with the specific expression:
x_{i+1} = δ·sin(π·x_i) (2)
where δ is a system parameter, δ ∈ [0,1], and chaos occurs when δ ∈ (0.87, 0.93) ∪ (0.95, 1); x_i and x_{i+1} are iteration sequence values; x_0 ∈ (0,1). Because the chaotic map outputs values in (0,1), the minimum value Ω_min and the maximum value Ω_max of each scene parameter are obtained, and the test scene parameter Ω is initialized by inverse normalization:
Ω = Ω_min + (Ω_max − Ω_min)·x_i, i ∈ [0,100] (3)
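A small sketch of this initialization step, assuming the Sine map and inverse normalization exactly as in formulas (2) and (3) (function names and the example parameter range are illustrative):

```python
import math

def sine_chaos_sequence(x0: float, delta: float, n: int):
    """Sine chaotic map x_{i+1} = delta * sin(pi * x_i); chaotic for delta in (0.87,0.93) U (0.95,1)."""
    xs, x = [], x0
    for _ in range(n):
        x = delta * math.sin(math.pi * x)
        xs.append(x)
    return xs

def denormalize(x: float, omega_min: float, omega_max: float) -> float:
    """Inverse normalization of a (0,1) chaotic value onto a scene-parameter range (formula 3)."""
    return omega_min + (omega_max - omega_min) * x

# Assumed example: initialize the ego speed v_vut in [10, 80] km/h.
seq = sine_chaos_sequence(x0=0.37, delta=0.97, n=5)
print([round(denormalize(x, 10.0, 80.0), 1) for x in seq])
```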
The third step: the scene is tested on the video-injection camera-in-the-loop test platform, and when the test result fails, the reverse operation is performed according to the following steps:
In order to quickly determine the edge test scene search space Σ, the invention quickly explores the test scene search space on the basis of reverse (opposition-based) learning: for any continuous test scene parameter Ω there exists a corresponding reverse number, which is calculated as shown in the following formula:
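In opposition-based learning the reverse number is conventionally defined over the interval [Ω_min, Ω_max] as Ω̃ = Ω_min + Ω_max − Ω; that standard form is assumed here.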
However, the invention does not perform reverse learning on all test scene parameters: when the test result passes, only those test scene parameters that may cause the test result not to pass are automatically selected for reverse learning, and the test is carried out with the reversed scene parameters. If the test result still passes after reverse learning has been applied to all scene parameters that may lead to a different result, the procedure returns to the second step and a new set of initial scene parameter values is selected for the next cycle. If the object under test fails the test scene after the scene parameters are updated, all parameters Ω whose change altered the test result are recorded together with their reverse numbers; the scene constructed from the reverse numbers is defined as the reverse test scene, and the intervals between the reverse test scene and the corresponding Π are determined as the edge test scene search space Σ. Σ may be a space of different dimensionality, depending on how many parameters Ω cause the object under test to fail through reverse learning; if other parameters also change the test result, Σ grows by the corresponding number of dimensions.
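As a hedged illustration of this exploration step, the sketch below (Python; function and field names are assumptions) reverses each continuous parameter of a scene in turn and records, for every parameter whose reversal flips the test outcome, the interval that becomes one dimension of the search space Σ:

```python
def reverse_number(omega: float, omega_min: float, omega_max: float) -> float:
    """Opposition-based reverse number of a continuous scene parameter (standard form assumed)."""
    return omega_min + omega_max - omega

def explore_search_space(scene: dict, bounds: dict, run_test) -> dict:
    """Reverse each continuous parameter of a scene one at a time and keep the intervals
    whose reversal flips the result.  `run_test(scene)` returns True if the object under
    test passes the injected scene; all names here are illustrative assumptions."""
    search_space = {}
    for name, (lo, hi) in bounds.items():
        candidate = dict(scene)
        candidate[name] = reverse_number(scene[name], lo, hi)
        if not run_test(candidate):                  # reversal causes a failure
            interval = sorted((scene[name], candidate[name]))
            search_space[name] = tuple(interval)     # one dimension of Σ
    return search_space
```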
The fourth step: the above steps determine the search space Σ of the edge scene; to determine the specific edge test scene Π_tar, a search within Σ is required. The invention provides a test scene parameter selection method based on a greedy learning algorithm: test scene parameters are uniformly sampled in their corresponding parameter intervals to generate a combined test case set, and the greedy algorithm is realized in an in-parameter-order (factor-by-factor expansion) manner. The search for Π_tar is treated as a test case set whose scene parameters are gradually expanded: a test case set satisfying 100% pairwise coverage is first generated for a small number of test scene parameters, then new scene parameters are gradually added while the original test case set is expanded and modified to cover all newly added factors and their relevant combinations. The operation steps comprise:
(1) Select any two factors and generate a combined test case set containing all value combinations of the two factors; these combinations form the current set of all pairs;
(2) Expand in the horizontal direction, i.e. add another factor and select a value for it that covers the largest number of not-yet-covered pairwise combinations of scene factor values;
(3) If uncovered pairwise combinations remain after horizontal expansion, expand in the vertical direction to generate new test cases;
The fifth step: the stop conditions of the test include: the number of cyclic tests meets the test requirement, or the number of output Π_tar meets the test requirement; when either condition is met, the loop is exited and the final edge test scene library is output.
(IV) evaluation of test results
(1) Scene parameter impact assessment
In the test process, the collision detector or the overlapping of the target object and the boundary frame of the host vehicle in the virtual simulation software is used as the occurrence of a collision accident, and after the design of the parameters of the test scene, the video injection camera is used for testing the system to be tested in the ring test platform;
in order to evaluate the influence degree of different scene parameters on the collision experiment result, the method analyzes the experiment result by using chi-square analysis so as to evaluate the influence degree of a single scene parameter on the experiment result.
Chi-square analysis, also known as the chi-square test, is a method for judging whether an association exists between two or more groups of categorical variables. The variables are first assumed to be mutually independent and uncorrelated, and the resulting set of ideal data is defined as the null hypothesis; the value of each group under this hypothesis is called the expected frequency T_i, while the value of each group under the actual conditions is called the observed frequency A_i. The chi-square statistic is calculated as follows:
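Under the standard Pearson definition, which is assumed here since only a verbal description appears above, the statistic is χ² = Σ_i (A_i − T_i)² / T_i.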
The result of the chi-square analysis reflects the degree of difference between the expected and observed frequencies: the larger the chi-square value, the less tenable the null hypothesis, i.e. the stronger the correlation among the selected groups of variables, indicating that the scene factor has a larger influence on the collision result. To quantify this degree of correlation, the chi-square distribution function is introduced:
where Γ is the Gamma function and n is the degree of freedom of the chi-square distribution; for data of characteristic dimension (a×b):
n=(a-1)×(b-1) (7)
in the application of chi-square analysis, a method of comparing chi-square distribution critical value tables is generally adopted.
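A minimal sketch of this evaluation (Python, assuming a contingency table of scene-parameter level versus collision outcome; the counts and level names are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def chi_square_test(observed: np.ndarray):
    """Pearson chi-square test of independence on an (a x b) contingency table
    of scene-parameter level vs. collision / no-collision counts (illustrative)."""
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row @ col / observed.sum()        # expected frequencies T_i under independence
    stat = ((observed - expected) ** 2 / expected).sum()
    dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
    p_value = chi2.sf(stat, dof)                 # compare against the critical-value table
    return stat, dof, p_value

# Assumed example: collision counts under three rainfall-intensity levels.
table = np.array([[12, 38],    # light rain:  12 collisions, 38 passes
                  [21, 29],    # medium rain
                  [34, 16]])   # heavy rain
print(chi_square_test(table))
```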
The relation between a single scene parameter and a collision result can be obtained by chi-square analysis of a test result, however, in order to evaluate the influence of multiple scene parameters on the collision result, the method surrounds weather category factors, is combined with other scene factors, and adopts a two-factor variance method to analyze the influence of the interaction effect on the collision condition, wherein the interaction effect refers to the performance of the influence of one independent variable on the dependent variable on different values of the other independent variable. The two-factor variance method is a statistical analysis method for analyzing whether different levels of two factors have a significant effect on a result and whether an interaction effect exists between the two factors. Assuming that there are r and s horizontal levels for the two scene parameters A, B, respectively, repeating t times of experiments under each horizontal combination to obtain an experimental result x ijk Then sequentially calculating the total dispersion square sum SSAB, the error dispersion square sum SSE and the degree of freedom d f And finally comparing the mean square sum MS with the threshold value of the F distribution table to determine the influence degree.
The sum of squares of the dispersion reflects the discrete condition of the interaction effect or random error of the scene parameters A, B, and the calculation method is as follows:
wherein ,
The degrees of freedom include the degree of freedom df_AB of the total influence after the interaction of A and B, and the error degree of freedom df_E:
df_AB = (r−1)(s−1) (14)
MSE = SSE/df_E = SSE/(rs(t−1)) (15)
Finally, obtaining a test value F:
F=MSAB/MSE (16)
The two scene factors are further evaluated by consulting the F distribution critical value table (α = 0.1), i.e. the check value F is compared with the critical value at the corresponding degrees of freedom when the confidence that the joint interaction of scene factors A and B influences the result is 90%.
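A sketch of this interaction F-test (Python; the data shape and example values are assumptions, and the interaction and error sums of squares follow the standard two-factor ANOVA definitions consistent with formulas (14)-(16)):

```python
import numpy as np
from scipy.stats import f as f_dist

def interaction_f_test(x: np.ndarray):
    """Two-factor ANOVA interaction test on data shaped (r, s, t):
    r levels of A, s levels of B, t repeated experiments per cell (illustrative sketch)."""
    r, s, t = x.shape
    grand = x.mean()
    cell = x.mean(axis=2)                    # cell means over the t repeats
    a_mean = x.mean(axis=(1, 2))             # level means of factor A
    b_mean = x.mean(axis=(0, 2))             # level means of factor B
    ssab = t * ((cell - a_mean[:, None] - b_mean[None, :] + grand) ** 2).sum()
    sse = ((x - cell[:, :, None]) ** 2).sum()
    df_ab, df_e = (r - 1) * (s - 1), r * s * (t - 1)
    msab, mse = ssab / df_ab, sse / df_e
    F = msab / mse                           # formula (16)
    return F, f_dist.ppf(0.9, df_ab, df_e)   # compare with the alpha = 0.1 critical value

# Assumed example: 2 weather levels x 3 target-speed levels x 4 repeated runs.
rng = np.random.default_rng(0)
F, crit = interaction_f_test(rng.normal(size=(2, 3, 4)))
print(f"F = {F:.2f}, critical value (alpha = 0.1) = {crit:.2f}")
```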
(2) Static evaluation of target detection algorithm
The invention formulates evaluation indexes corresponding to the target recognition function and uses their specific values as the basis for updating the scene parameters.
For target recognition, the camera outputs four types of classification results: True Positives (TP), i.e. positive samples correctly identified as positive; True Negatives (TN), i.e. negative samples correctly identified as negative; False Positives (FP), i.e. negative samples incorrectly identified as positive; and False Negatives (FN), i.e. positive samples incorrectly identified as negative. TP, FP and FN are distinguished by the IOU value, where the IOU is the ratio of the overlap area between a rectangular frame output by the target recognition algorithm under test and the minimum circumscribed rectangular frame GT of the real target to the area of their union. The number of detection frames with IOU greater than 0.5 is recorded as TP; the number of detection frames with IOU <= 0.5, or of redundant detection frames detecting the same GT, is defined as FP. The detection accuracy Pre can be expressed by the following formula:
The detection recall Rel, representing the proportion of positive samples that are correctly identified out of the total number of positive samples, can be expressed by the following formula:
When the confidence of each sample's detection frame is used as the threshold to judge positive and negative samples, as many pairs of detection precision and recall values are obtained as there are samples, and a precision-recall (P-R) curve can be drawn. Ideally, if a network can identify every positive sample with a confidence of 1 and identifies negative samples with a confidence of 0, the P-R curve passes only through the two points (0, 1) and (1, 1), and the area enclosed by the P-R curve and the coordinate axes is 1; in all other cases this area is larger than 0. The area enclosed by the P-R curve and the coordinate axes is used to characterize the performance of the target detection network, i.e. the mAP, with the expression:
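A compact sketch of these static indexes (Python; the box format and matching rule are assumptions, with the standard definitions Pre = TP/(TP+FP), Rel = TP/(TP+FN), and AP taken as the area under the P-R curve):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter + 1e-9)

def average_precision(detections, gts, iou_thr=0.5):
    """AP for one class: detections = [(confidence, box)], gts = [box] (illustrative)."""
    detections = sorted(detections, key=lambda d: -d[0])
    matched, tp, fp = set(), [], []
    for conf, box in detections:
        best_j = max(range(len(gts)), key=lambda j: iou(box, gts[j]), default=None)
        if best_j is not None and iou(box, gts[best_j]) > iou_thr and best_j not in matched:
            matched.add(best_j); tp.append(1); fp.append(0)   # IOU > 0.5 and first match: TP
        else:
            tp.append(0); fp.append(1)   # IOU <= 0.5 or duplicate detection of the same GT: FP
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gts), 1)                 # Rel = TP / (TP + FN)
    precision = tp / np.maximum(tp + fp, 1e-9)     # Pre = TP / (TP + FP)
    return np.trapz(precision, recall)             # area under the P-R curve
```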
(3) Dynamic evaluation of target detection algorithm
The invention proposes the first detection distance d_min and the minimum safe-distance detection accuracy d_p as dynamic evaluation indexes of the target detection algorithm.
The first detection distance refers to the actual distance between the target object and the ego vehicle when the target detection algorithm correctly identifies the front target object for the first time during the execution of a test case. The larger the first detection distance d_min, the earlier the target detection algorithm detects the target object, which helps the intelligent automobile system take control actions earlier. The minimum safe-distance detection accuracy d_p refers to the absolute value of the difference between the distance output by the ranging module of the target detection algorithm and the true distance, at the moment during a test case when the true distance between the target object and the ego vehicle equals the minimum safe distance at the current speed. The smaller the minimum safe-distance detection accuracy, the closer the distance estimated by the ranging module is to the actual distance between the target object and the ego vehicle, which is more favourable for the intelligent automobile to brake along the preset trajectory.
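A sketch of how the two dynamic indexes might be computed from a per-frame test log (Python; the log fields and the kinematic minimum-safe-distance model are illustrative assumptions, not the patent's definition):

```python
def dynamic_indicators(log, v_ego_ms, reaction_s=1.0, decel_ms2=6.0):
    """`log` is a list of per-frame records
    {"detected": bool, "true_dist_m": float, "est_dist_m": float};
    the minimum-safe-distance model (reaction + braking distance) is assumed."""
    # First detection distance d_min: true distance at the first correct detection.
    d_min = next((f["true_dist_m"] for f in log if f["detected"]), None)

    # Minimum safe distance at the current ego speed (assumed kinematic model).
    d_safe = v_ego_ms * reaction_s + v_ego_ms ** 2 / (2.0 * decel_ms2)

    # d_p: ranging error at the frame whose true distance is closest to d_safe.
    closest = min(log, key=lambda f: abs(f["true_dist_m"] - d_safe))
    d_p = abs(closest["est_dist_m"] - closest["true_dist_m"])
    return d_min, d_p
```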
The invention has the beneficial effects that:
The invention provides a video-injection-based intelligent automobile multi-path camera in-the-loop test method oriented towards four channels of camera data, in which virtual simulation image data are injected into the ECU under test through a video injection board and a fault injection board for testing; at the same time, to address the problem that the parameters influencing camera test results are infinitely rich, the invention provides an edge test scene search method based on an improved greedy algorithm.
Drawings
FIG. 1 is a schematic diagram of the overall flow of the present invention;
FIG. 2 is a hardware schematic of the video injection camera in-loop test platform according to the present invention;
FIG. 3 is a schematic diagram of a hardware connection scheme according to the present invention;
fig. 4 is a schematic diagram of random chaotic sequence generation according to the present invention.
Detailed Description
As shown in FIG. 1, in the video-injection-based intelligent automobile multi-path camera in-the-loop test method provided by the invention, the hardware of the video-injection camera-in-the-loop test platform is shown in FIG. 2 and comprises an upper computer, the ECU hardware under test, a video injection board and a fault injection board. The initialization of the test platform comprises two aspects: selection of the parameters of the camera under test and selection of the initialization parameters of the test scene. The upper computer is used for constructing the virtual simulation scene and deploying the vehicle dynamics model; the ECU hardware under test deploys the camera perception algorithms to be tested, including the target detection and target tracking algorithms; the video injection board converts the four channels of camera image data output by the upper computer from the HDMI interface to the GMSL interface; the fault injection board applies noise simulation to the four channels of image data, where the simulated noise includes global noise and local noise. The specific hardware connection is shown in FIG. 3. First, the upper computer outputs the four channels of vehicle-mounted video signals rendered by the virtual simulation scene software from the HDMI interface of the host, converts the HDMI video signal format to the GMSL video format through the video injection board (Camera Injection Module, CIM), and outputs it to the fault injection board, where the video pictures are displayed and global noise and colour blocks are simulated to further test the identification and tracking performance of the vehicle-mounted camera under poor contact or lens damage. The video data is then output from the serializer and input through the GMSL interface into the external deserializer of the ECU under test, where the target identification and tracking algorithm in the ECU is tested and verified. After the ECU performs target identification and tracking, the object identification type (class), the accuracy and the four-point coordinate parameters of the BoundingBox are returned as dictionary values to the ADAS model in the ECU, with the corresponding timestamps as the dictionary keys. After receiving the result output by the ECU, the virtual vehicle in the upper computer outputs control signals through the vehicle dynamics model so that the vehicle updates its position in real time in the virtual simulation scene.
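A minimal sketch of the per-frame result dictionary passed to the ADAS model, as described above (the field names, key format and values are illustrative assumptions):

```python
# Per-frame ECU output: dictionary keys are timestamps, values hold class,
# accuracy and the four BoundingBox corner coordinates (all names/units assumed).
ecu_result = {
    "1684911023.433": [
        {
            "class": "pedestrian",   # object identification type
            "accuracy": 0.91,        # detection confidence
            "bounding_box": [(412, 233), (506, 233), (506, 398), (412, 398)],  # four corner points (px)
        },
    ],
}
```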
The invention comprises the following steps:
test object parameter selection
First, the parameters of the test object, namely the parameters of the camera under test, are selected; before each test starts, the camera parameters are set according to the test requirements and the camera under test is initialized on the in-the-loop test platform, and after the camera parameters are selected they remain unchanged during the scene updating process;
The parameters of the test object mainly comprise two aspects: camera installation parameters and camera performance parameters; the camera installation parameters comprise the installation position and the installation angle, and the camera performance parameters comprise the camera focal length and the camera photosensitive element size.
The installation position of the camera refers to the translation coordinates (x_a, y_a, z_a) in a frame with the vehicle centroid as the origin, the forward motion direction of the vehicle as the x axis, the left as the y axis and the vertical upward as the z axis; the subscript a denotes the number of the corresponding camera, and since an intelligent automobile with four camera channels is tested, a = 1, 2, 3, 4. The installation angle of the camera refers to the rotation angles (σ_a, β_a, γ_a) about the x, y and z axes respectively, a = 1, 2, 3, 4. The camera focal length f refers to the focal length of the lens, namely the distance from the rear optical principal point of the lens to the focal point, in millimeters. The camera photosensitive element size (X_a, Y_a) refers to the physical size of the CMOS photosensitive element to be simulated, in millimeters, a = 1, 2, 3, 4.
(II) test scene design
The test scenes selected by the invention are designed on the basis of the EURO-NCAP-AEB-2023 test procedure. Because that procedure is oriented towards closed-field testing, whereas in virtual simulation the scene parameter boundaries can be expanded at no cost, a parameter space with a wider parameter range is selected when designing the test scenes for the algorithm under test.
Table 1 scene parameters table
As shown in table 1, the intelligent automobile camera sensor belongs to a passive sensor, is obviously influenced by external environment, has more factors influencing the camera sensor in a scene, and comprises the motion state of the automobile, weather and target objects;
(1) The motion state of the vehicle comprises the speed of the vehicle; in the invention the vehicle speed v_vut takes values of 10–80 km/h;
(2) The weather includes illumination conditions and complex weather: the illumination condition mainly describes the position of the solar light source in the scene and is described by an azimuth angle α and a polar angle θ; the complex weather parameters include rainfall weather, heavy fog weather and snowfall weather;
Rainfall weather is described by the rainfall intensity μ (unit: mm/h) and the raindrop diameter D_rain (unit: mm);
Through these parameters the corresponding rainfall scene parameters can be set in the virtual simulation software;
Heavy fog weather is described by the visibility D_fog (unit: m);
Snowfall weather is described by the snowflake diameter D_snow (unit: mm).
(3) The target object parameters comprise the target object class, the target object running direction, the target movement speed and the collision position;
The target object class T comprises pedestrians or vehicles used for testing; the target running direction Dri comprises far-side, near-side and longitudinal for pedestrians, and driving in the same direction as the ego vehicle or crossing its path for vehicles; the target movement speed v_tar takes 5 to 8 km/h for pedestrians and, for vehicles, is taken as stationary, 20 km/h or 50 km/h with a braking deceleration of −2 to −6 m/s²; the collision position P is the position, across the front of the vehicle under test, at which the vehicle under test and the target object finally collide given the speed of the vehicle under test; P takes 25%–75% for pedestrians and −50% for vehicles.
Each test scene is described by an array of scene parameters, specifically expressed as: Π = [v_vut, α, θ, μ, D_rain, D_fog, D_snow, T, Dri, v_tar, P].
(III) edge scene extraction
Apart from the target object class T and the target object running direction Dri, the above scene parameters are continuous variables, and a traversal test would require a very large computational cost. The invention defines an edge test scene Π_tar as a scene in which at least one test scene parameter satisfies the condition that changing that parameter by a certain threshold causes a large change in the behaviour or decision of the object under test; the corresponding change threshold is set according to the test accuracy and test requirements.
The screening steps of the edge test scene Π_tar are as follows:
the first step: initializing parameters of a camera to be tested and discontinuous variable values in a test scene;
The second step: chaos possesses a certain randomness and sensitivity to initial values, as shown in fig. 4, which gives the algorithm a faster convergence rate; a random chaotic sequence is therefore generated with the Sine chaotic map to initialize the continuous test scene parameters in a test scene Π, with the specific expression:
x_{i+1} = δ·sin(π·x_i) (2)
where δ is a system parameter, δ ∈ [0,1], and chaos occurs when δ ∈ (0.87, 0.93) ∪ (0.95, 1); x_i and x_{i+1} are iteration sequence values; x_0 ∈ (0,1), and the number of iterations i is chosen as 100 in this example. Because the chaotic map outputs values in (0,1), the minimum value Ω_min and the maximum value Ω_max of each scene parameter are obtained, and the test scene parameter Ω is initialized by inverse normalization:
Ω = Ω_min + (Ω_max − Ω_min)·x_i, i ∈ [0,100] (3)
The third step: the scene is tested on the video-injection camera-in-the-loop test platform;
table 2 edge test scene screening method
Table 2 shows how to search for edge scenes when the test result passes; when the test result fails, the reverse operation can be performed at each step as follows:
In order to quickly determine the edge test scene search space Σ, the invention quickly explores the test scene search space on the basis of reverse (opposition-based) learning: for any continuous test scene parameter Ω there exists a corresponding reverse number, which is calculated as shown in the following formula:
However, the invention does not perform reverse learning on all test scene parameters: when the test result passes, only those test scene parameters that may cause the test result not to pass are automatically selected for reverse learning, and the test is carried out with the reversed scene parameters. If the test result still passes after reverse learning has been applied to all scene parameters that may lead to a different result, the procedure returns to the second step and a new set of initial scene parameter values is selected for the next cycle. If the object under test fails the test scene after the scene parameters are updated, all parameters Ω whose change altered the test result are recorded together with their reverse numbers; the scene constructed from the reverse numbers is defined as the reverse test scene, and the intervals between the reverse test scene and the corresponding Π are determined as the edge test scene search space Σ. Σ may be a space of different dimensionality depending on how many parameters Ω cause the object under test to fail through reverse learning. For example, when the initial value of v_vut is chosen as 30 km/h and the test result passes, while its reverse number 60 km/h makes the test fail, v_vut ∈ [30, 60] is determined as one dimension of the edge test scene search space Σ; if only reverse learning of v_vut changes the test result, Σ is a one-dimensional space, and if other parameters also change the test result, Σ grows by the corresponding number of dimensions.
The fourth step: the above steps determine the search space Σ of the edge scene; to determine the specific edge test scene Π_tar, a search within Σ is required. The invention provides a test scene parameter selection method based on a greedy learning algorithm, realized in an in-parameter-order (factor-by-factor expansion) manner: the search for Π_tar is treated as a test case set whose scene parameters are gradually expanded. A test case set satisfying 100% pairwise coverage is first generated for a small number of test scene parameters; new scene parameters are then gradually added while the original test case set is expanded and modified to cover all newly added factors and their relevant combinations.
Assuming that the dimension of Σ is 3, take a three-dimensional test scene parameter in Σ as an example: let the factors be A, B, C with 2, 2 and 3 possible values respectively. A combination test is performed with the method provided by the invention (the same method also applies if the value ranges increase). The basic operation steps are as follows:
(1) Select any two factors and generate a combined test case set containing all value combinations of the two factors; these combinations form the current set of all pairs:
TABLE 3 two factor pairwise aggregation
(2) Expand in the horizontal direction, i.e. add another factor and select a value for it that covers the largest number of not-yet-covered pairwise combinations of scene factor values;
As shown in Table 4, another factor C is added; three test cases already cover part of the pairwise combinations, and Table 5 lists the uncovered ones: (A1, C2), (A2, C1), (A2, C3), (B1, C3), (B2, C1), (B2, C2). The value of the vacant cell in Table 4 is not yet determined; substituting C1, C2 and C3 in turn and checking the coverage of the resulting test case against the uncovered pairs in Table 5 shows that C1 covers two uncovered pairs while C2 and C3 each cover only one, so the vacant cell takes the value C1.
Table 4 Horizontal expansion

     A    B    C
     A1   B1   C1
     A2   B1   C2
     A1   B2   C3
     A2   B2   (vacant)
Table 5 uncovered combination
(3) If uncovered pairwise combinations remain after horizontal expansion, expansion proceeds in the vertical direction. Filling the vacant cell of Table 4 yields four test cases; Table 5 is then updated and the still-uncovered pairwise combinations are shown in Table 6; vertically expanding to cover these four pairwise combinations gives the result in Table 7.
Table 6 paired combinations that remain uncovered after horizontal expansion
Table 7 Test cases after vertical expansion to cover the remaining paired combinations
Through the steps, the finally obtained test case set is as follows:
TABLE 8 final test case set
         A    B    C
     1   A1   B1   C1
     2   A2   B1   C2
     3   A1   B2   C3
     4   A2   B2   C1
     5   A1   B2   C2
     6   A2   B1   C3
This method ensures that Π_tar can be searched rapidly on the premise of full pairwise coverage of the test cases, and the edge test scene library is then constructed on this basis.
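The in-parameter-order expansion can be approximated by a simple greedy pairwise generator; the sketch below (Python; not the patent's exact IPO procedure, and all names are assumptions) builds a small test case set covering every two-factor value pair for the A, B, C example above:

```python
from itertools import combinations, product

def pairwise_suite(factors: dict) -> list:
    """Greedy pairwise (2-way) covering suite; a simplified stand-in for IPO expansion."""
    names = list(factors)
    uncovered = set()
    for f1, f2 in combinations(names, 2):
        for v1 in factors[f1]:
            for v2 in factors[f2]:
                uncovered.add(((f1, v1), (f2, v2)))
    suite = []
    while uncovered:
        best, best_gain = None, -1
        # Enumerate all full combinations; acceptable for small factor spaces like this one.
        for combo in product(*(factors[n] for n in names)):
            case = dict(zip(names, combo))
            gain = sum(1 for a, b in uncovered
                       if case[a[0]] == a[1] and case[b[0]] == b[1])
            if gain > best_gain:
                best, best_gain = case, gain
        suite.append(best)
        uncovered = {(a, b) for a, b in uncovered
                     if not (best[a[0]] == a[1] and best[b[0]] == b[1])}
    return suite

factors = {"A": ["A1", "A2"], "B": ["B1", "B2"], "C": ["C1", "C2", "C3"]}
for i, case in enumerate(pairwise_suite(factors), 1):
    print(i, case)
```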
The fifth step: the stop condition of the test is determined by two aspects, namely the number of cyclic tests meets the test requirement or the number of output Π_tar meets the test requirement; when either condition is met, the loop is exited and the final edge test scene library is output.
(IV) evaluation of test results
(1) Scene parameter impact assessment
In the test process, a collision accident is considered to occur when the collision detector in the virtual simulation software is triggered or the bounding boxes of the target object and the host vehicle overlap; after the test scene parameters are designed, the system under test is tested on the video-injection camera-in-the-loop test platform. In order to evaluate the degree of influence of different scene parameters on the collision experiment result, the invention analyses the experimental results by chi-square analysis so as to evaluate the degree of influence of a single scene parameter on the experimental result. Chi-square analysis, also known as the chi-square test, is a method for judging whether an association exists between two or more groups of categorical variables. The variables are first assumed to be mutually independent, and the resulting set of ideal data is defined as the null hypothesis; the value of each group under this hypothesis is called the expected frequency T_i, while the value of each group under the actual conditions is called the observed frequency A_i. The chi-square statistic is calculated as follows:
The result of the chi-square analysis reflects the degree of difference between the expected and observed frequencies: the larger the chi-square value, the less tenable the null hypothesis, i.e. the stronger the correlation among the selected groups of variables, indicating that the scene factor has a larger influence on the collision result. To quantify this degree of correlation, the chi-square distribution function is introduced:
where Γ is the Gamma function and n is the degree of freedom of the chi-square distribution; for data of characteristic dimension (a×b):
n=(a-1)×(b-1) (7)
in the application of chi-square analysis, a method of comparing chi-square distribution critical value tables is generally adopted.
Chi-square analysis of the test results yields the relation between a single scene parameter and the collision result. To evaluate the influence of multiple scene parameters on the collision result, however, the method centres on the weather category factor, combines it with other scene factors, and uses the two-factor analysis-of-variance method to analyse the influence of the interaction effect on the collision outcome, where the interaction effect refers to how the influence of one independent variable on the dependent variable behaves at different values of the other independent variable. The two-factor analysis of variance is a statistical method for analysing whether different levels of two factors have a significant effect on the result and whether an interaction effect exists between the two factors. Assuming that the two scene parameters A and B have r and s levels respectively, the experiment is repeated t times under each level combination to obtain the experimental results x_ijk; the total (interaction) sum of squared deviations SSAB, the error sum of squared deviations SSE and the degrees of freedom d_f are then calculated in turn, and finally the mean square sum MS is compared with the critical value of the F distribution table to determine the degree of influence.
The sum of squares of the dispersion reflects the discrete condition of the interaction effect or random error of the scene parameters A, B, and the calculation method is as follows:
wherein ,
The degrees of freedom include the degree of freedom df_AB of the total influence after the interaction of A and B, and the error degree of freedom df_E:
df_AB = (r−1)(s−1) (14)
MSE = SSE/df_E = SSE/(rs(t−1)) (15)
Finally, obtaining a test value F:
F=MSAB/MSE (16)
The two scene factors are further evaluated by consulting the F distribution critical value table (α = 0.1), i.e. the check value F is compared with the critical value at the corresponding degrees of freedom when the confidence that the joint interaction of scene factors A and B influences the result is 90%.
(2) Static evaluation of target detection algorithm
The evaluation of the collision result is an important basis for the direction of test scene parameter updating and for constructing the edge scene library. In practical applications, however, the evaluation indexes for the camera should not be limited to the collision outcome: the collision outcome can describe how "good or bad" an algorithm is to a certain extent, but it obviously cannot guide and serve as a reference for regression testing and algorithm refinement. The invention therefore formulates evaluation indexes corresponding to the target recognition function and uses their specific values as the basis for updating the scene parameters.
For target recognition, the camera outputs four types of classification results: True Positives (TP), i.e. positive samples correctly identified as positive; True Negatives (TN), i.e. negative samples correctly identified as negative; False Positives (FP), i.e. negative samples incorrectly identified as positive; and False Negatives (FN), i.e. positive samples incorrectly identified as negative. TP, FP and FN are distinguished by the IOU value, where the IOU is the ratio of the overlap area between a rectangular frame output by the target recognition algorithm under test and the minimum circumscribed rectangular frame GT of the real target to the area of their union. The number of detection frames with IOU greater than 0.5 is recorded as TP; the number of detection frames with IOU <= 0.5, or of redundant detection frames detecting the same GT, is defined as FP. The detection accuracy Pre can be expressed by the following formula:
The detection recall Rel, representing the proportion of positive samples that are correctly identified out of the total number of positive samples, can be expressed by the following formula:
When the confidence of each sample's detection frame is used as the threshold to judge positive and negative samples, as many pairs of detection precision and recall values are obtained as there are samples, and a precision-recall (P-R) curve can be drawn. Ideally, if a network can identify every positive sample with a confidence of 1 and identifies negative samples with a confidence of 0, the P-R curve passes only through the two points (0, 1) and (1, 1), and the area enclosed by the P-R curve and the coordinate axes is 1; in all other cases this area is larger than 0. The area enclosed by the P-R curve and the coordinate axes is used to characterize the performance of the target detection network, i.e. the mAP, with the expression:
(3) Dynamic evaluation of target detection algorithm
The static indexes can evaluate the basic performance of the algorithm under test; when the algorithm under test, the vehicle dynamics model, the control algorithm and so on work cooperatively on the test platform, dynamic indexes formulated according to the intelligent automobile functions are needed to evaluate the performance of the algorithm in a specific test scene. The invention proposes the first detection distance d_min and the minimum safe-distance detection accuracy d_p as dynamic evaluation indexes of the target detection algorithm.
The first detection distance refers to the actual distance between the target object and the ego vehicle when the target detection algorithm correctly identifies the front target object for the first time during the execution of a test case. The larger the first detection distance d_min, the earlier the target detection algorithm detects the target object, which helps the intelligent automobile system take control actions earlier. The minimum safe-distance detection accuracy d_p refers to the absolute value of the difference between the distance output by the ranging module of the target detection algorithm and the true distance, at the moment during a test case when the true distance between the target object and the ego vehicle equals the minimum safe distance at the current speed. The smaller the minimum safe-distance detection accuracy, the closer the distance estimated by the ranging module is to the actual distance between the target object and the ego vehicle, which is more favourable for the intelligent automobile to brake along the preset trajectory.

Claims (6)

1. An intelligent automobile multi-path camera on-loop testing method based on video injection is characterized in that: the method comprises the following steps:
test object parameter selection
Firstly, selecting parameters of a test object, namely parameters of a test camera, wherein the parameters of the test object mainly comprise two aspects, namely camera installation parameters and camera performance parameters; the camera installation parameters comprise an installation position and an installation angle, and the camera performance parameters comprise a camera focal length and a camera photosensitive element size;
(II) test scene design
The test scene parameters comprise the motion state of the vehicle, meteorological types and target objects;
(1) The motion state of the vehicle comprises the speed v_vut of the vehicle;
(2) The weather includes illumination conditions and complex weather: the illumination condition mainly describes the position of a solar light source in a scene, and is described by an azimuth angle alpha and a polar angle theta; the complex weather parameters include rainfall weather, heavy fog weather and snowfall weather;
Rainfall weather is described by the rainfall intensity μ and the raindrop diameter D_rain;
Through these parameters the corresponding rainfall scene parameters are set in the virtual simulation software;
Heavy fog weather is described by the visibility D_fog;
Snowfall weather is described by the snowflake diameter D_snow;
(3) The target object parameters comprise the target object class T, the target object running direction Dri, the target movement speed v_tar and the collision position P;
Each test scene is described by an array of scene parameters; the test scene parameter vector Π is specifically expressed as: Π = [v_vut, α, θ, μ, D_rain, D_fog, D_snow, T, Dri, v_tar, P];
(III) edge scene extraction
The edge test scene Π_tar is defined as a scene in which at least one test scene parameter satisfies the condition that changing that parameter by a certain threshold causes a large change in the behaviour or decision of the object under test; the corresponding change threshold is set according to the test accuracy and test requirements;
The screening steps of the edge test scene Π_tar are as follows:
the first step: initializing parameters of a camera to be tested and discontinuous variable values in a test scene;
The second step: a random chaotic sequence is generated with the Sine chaotic map, and the continuous test scene parameters in a test scene Π are initialized, with the specific expression:
x_{i+1} = δ·sin(π·x_i) (2)
where δ is a system parameter, δ ∈ [0,1], and chaos occurs when δ ∈ (0.87, 0.93) ∪ (0.95, 1); x_i and x_{i+1} are iteration sequence values; x_0 ∈ (0,1); because the chaotic map outputs values in (0,1), the minimum value Ω_min and the maximum value Ω_max of each scene parameter are obtained, and the test scene parameter Ω is initialized by inverse normalization:
Ω = Ω_min + (Ω_max − Ω_min)·x_i, i ∈ [0,100] (3)
The third step: the scene is tested on the video-injection camera-in-the-loop test platform, and when the test result fails, the reverse operation is performed according to the following steps:
based on reverse learning, the test scene search space is quickly explored, and corresponding reverse numbers exist for any one continuous test scene parameter omegaSpecific reverse number->The method of calculation is shown in the following formula:
When the test result passes, only those test scene parameters that may cause the test result not to pass are automatically selected for reverse learning, and the test is carried out with the reversed scene parameters; if the test result still passes after reverse learning has been applied to all scene parameters that may lead to a different result, the procedure returns to the second step and a new set of initial scene parameter values is selected for the next cycle; if the object under test fails the test scene after the scene parameters are updated, all parameters Ω whose change altered the test result are recorded together with their reverse numbers; the scene constructed from the reverse numbers is defined as the reverse test scene, and the intervals between the reverse test scene and the corresponding Π are determined as the edge test scene search space Σ; Σ may be a space of different dimensionality, depending on how many parameters Ω cause the object under test to fail through reverse learning, and if other parameters also change the test result, Σ grows by the corresponding number of dimensions;
The fourth step: through a test scene parameter selection method based on a greedy learning algorithm, the test scene parameters are uniformly sampled in their corresponding parameter intervals to generate a combined test case set; the greedy algorithm is realized in an in-parameter-order (factor-by-factor expansion) manner, and the search for Π_tar is treated as a test case set whose scene parameters are gradually expanded: a test case set satisfying 100% pairwise coverage is generated for a small number of test scene parameters, then new scene parameters are gradually added while the original test case set is expanded and modified to cover all newly added factors and their relevant combinations;
The fifth step: the stop conditions of the test include: the number of cyclic tests meets the test requirement, or the number of output Π_tar meets the test requirement; when either condition is met, the loop is exited and the final edge test scene library is output;
(IV) evaluation of test results
(1) Scene parameter impact assessment
Analyzing the experimental result by using chi-square analysis so as to evaluate the influence degree of a single scene parameter on the experimental result; obtaining a relation between a single scene parameter and a collision result through chi-square analysis test results;
around weather category factors, combining the weather category factors with other scene factors, and analyzing the influence of the interaction effect on the collision condition by using a two-factor variance method, wherein the interaction effect refers to the performance of the influence of one independent variable on the dependent variable on different values of the other independent variable;
(2) Static evaluation of target detection algorithm
Target identification is a commonly used function of the camera; an evaluation index corresponding to this function is formulated, and its specific value is used as the basis for updating the scene parameters;
(3) Dynamic evaluation of target detection algorithm
The first detection distance d_min and the minimum safe-distance detection accuracy d_p are used as dynamic evaluation indexes of the target detection algorithm;
The first detection distance refers to the actual distance between the target object and the ego vehicle when the target detection algorithm correctly identifies the front target object for the first time during the execution of a test case; the larger the first detection distance d_min, the earlier the target detection algorithm detects the target object, which helps the intelligent automobile system take control actions as early as possible; the minimum safe-distance detection accuracy d_p refers to the absolute value of the difference between the distance output by the ranging module of the target detection algorithm and the true distance at the moment during a test case when the true distance between the target object and the ego vehicle equals the minimum safe distance at the current speed; the smaller the minimum safe-distance detection accuracy, the closer the distance estimated by the ranging module is to the actual distance between the target object and the ego vehicle, which is more favourable for the intelligent automobile to brake along the preset trajectory.
2. The intelligent automobile multi-path camera on-loop testing method based on video injection as claimed in claim 1, wherein the method comprises the following steps: in the step (one), the parameters of the test object mainly comprise two aspects including a camera installation parameter and a camera performance parameter; the camera installation parameters comprise an installation position and an installation angle, and the camera performance parameters comprise a camera focal length and a camera photosensitive element size;
wherein the installation position of the camera refers to the translation coordinates (x_a, y_a, z_a) in a frame with the vehicle centroid as the origin, the forward motion direction of the vehicle as the x axis, the left as the y axis and the vertical upward as the z axis, where the subscript a denotes the number of the corresponding camera; the installation angle of the camera refers to the rotation angles (σ_a, β_a, γ_a) about the x, y and z axes respectively; the camera focal length f refers to the focal length of the lens, namely the distance from the rear optical principal point of the lens to the focal point, in millimeters; the camera photosensitive element size (X_a, Y_a) refers to the physical size of the CMOS photosensitive element to be simulated, in millimeters.
3. The intelligent automobile multi-path camera on-loop testing method based on video injection as claimed in claim 1, wherein: in the test scene design, the target object class T comprises pedestrians and vehicles; the target running direction Dri comprises far-side, near-side and longitudinal for pedestrians, and driving in the same direction as the ego vehicle or crossing its path for vehicles; the target movement speed v_tar takes 5 to 8 km/h for pedestrians and, for vehicles, is taken as stationary, 20 km/h or 50 km/h with a braking deceleration of −2 to −6 m/s²; the collision position P is the position, across the front of the vehicle under test, at which the vehicle under test and the target object finally collide given the speed of the vehicle under test; P takes 25%–75% for pedestrians and −50% for vehicles.
4. The intelligent automobile multi-path camera on-loop testing method based on video injection as claimed in claim 1, wherein the method comprises the following steps: in the step (three) of edge scene extraction, the fourth operation step comprises:
(1) Arbitrarily select two factors and generate a combined test case set by pairing every value of one factor with every value of the other, forming the current set of all pairwise combinations;
(2) Expand in the horizontal direction, namely add another factor and, for each existing test case, select the value of that factor which covers the largest number of still-uncovered pairwise combinations;
(3) If uncovered pairwise combinations remain after the horizontal expansion, expand in the vertical direction by generating new test cases that cover them (an illustrative sketch of this procedure is given after this claim).
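The horizontal/vertical expansion described above corresponds to an in-parameter-order style pairwise generation strategy; a minimal Python sketch under that reading is given below, with a simple greedy value-selection rule; the data structures and the greedy rule are illustrative assumptions rather than the patent's prescribed algorithm.

from itertools import product

def pairwise_cases(factors: dict) -> list:
    # Greedy, in-parameter-order style pairwise generation (illustrative sketch).
    names = list(factors)
    # (1) start from all value pairs of the first two factors
    cases = [{names[0]: a, names[1]: b}
             for a, b in product(factors[names[0]], factors[names[1]])]
    for k in range(2, len(names)):
        new = names[k]
        # every (existing factor value, new factor value) pair that must be covered
        uncovered = {(f, v, w) for f in names[:k] for v in factors[f] for w in factors[new]}
        # (2) horizontal expansion: give each existing case the value of the new
        #     factor that covers the most still-uncovered pairs
        for case in cases:
            best = max(factors[new],
                       key=lambda w: sum((f, case[f], w) in uncovered for f in names[:k]))
            case[new] = best
            uncovered -= {(f, case[f], best) for f in names[:k]}
        # (3) vertical expansion: add new (partial) cases for any pairs still uncovered;
        #     in practice the remaining factor values would be filled with arbitrary levels
        for f, v, w in sorted(uncovered, key=str):
            cases.append({f: v, new: w})
    return cases

# Example with three small placeholder factors; prints 4 generated test cases
print(len(pairwise_cases({"A": [1, 2], "B": ["x", "y"], "C": [True, False]})))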
5. The intelligent automobile multi-path camera on-loop testing method based on video injection as claimed in claim 1, wherein the method comprises the following steps: in the test result evaluation of the step (four), chi-square analysis, also called the chi-square test, is a method for judging whether a correlation exists between two or more groups of categorical variables; first, the variables are assumed to be independent and uncorrelated, and the resulting set of ideal data is defined as the null hypothesis; the value of each group under this hypothesis is called the expected frequency T_i, and the value actually observed for each group is called the observed frequency A_i; the chi-square statistic is calculated as:

χ² = Σ_i (A_i − T_i)² / T_i

the result of the chi-square analysis reflects the degree of difference between the expected and observed frequencies: the larger the chi-square value, the less tenable the null hypothesis, i.e. the stronger the correlation between the selected groups of variables, indicating that the scene factors have a larger influence on the collision result; to quantify this degree of correlation, the chi-square distribution density function is introduced:

f(x; n) = x^(n/2 − 1) · e^(−x/2) / (2^(n/2) · Γ(n/2)),  x > 0

wherein Γ is the Gamma function and n is the degree of freedom of the chi-square distribution; for data of characteristic dimension (a×b):
n=(a-1)×(b-1) (7)
when the chi-square analysis is applied, the computed chi-square value is generally compared against a chi-square distribution critical-value table;
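A minimal sketch of this chi-square procedure, assuming an a×b contingency table of observed frequencies and a standard critical-value table at 90% confidence, might look as follows (the example data are invented):

def chi_square_test(table: list) -> tuple:
    # Pearson chi-square statistic and degrees of freedom for an a x b contingency
    # table of observed frequencies A_i; expected frequencies T_i follow from the
    # independence (null) hypothesis. Illustrative sketch only.
    a, b = len(table), len(table[0])
    row = [sum(r) for r in table]
    col = [sum(table[i][j] for i in range(a)) for j in range(b)]
    total = sum(row)
    chi2 = 0.0
    for i in range(a):
        for j in range(b):
            expected = row[i] * col[j] / total          # T_i under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2, (a - 1) * (b - 1)

# 90%-confidence critical values from a standard chi-square table (alpha = 0.10)
CHI2_CRIT_90 = {1: 2.706, 2: 4.605, 3: 6.251, 4: 7.779}

# Example: collision outcome (columns) versus two levels of a scene factor (rows)
stat, dof = chi_square_test([[30, 10], [18, 22]])
print(stat, dof, stat > CHI2_CRIT_90[dof])   # True -> the factor influences the result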
the two-factor analysis of variance proceeds as follows: suppose the two scene parameters A and B have r and s levels respectively and the experiment is repeated t times under each combination of levels, giving the experimental results x_ijk; the interaction sum of squares SS_AB, the error sum of squares SS_E, the degrees of freedom df, the mean squares MS and the test statistic F are then calculated in turn, and F is compared with the critical value of the F-distribution table to determine the degree of influence;
these sums of squares reflect, respectively, the dispersion caused by the interaction of the scene parameters A and B and that caused by random error, and are calculated as:

SS_AB = t · Σ_{i=1..r} Σ_{j=1..s} (x̄_ij − x̄_i· − x̄_·j + x̄)²
SS_E = Σ_{i=1..r} Σ_{j=1..s} Σ_{k=1..t} (x_ijk − x̄_ij)²

wherein x̄_ij is the mean of the t repeated results at level combination (i, j), x̄_i· and x̄_·j are the means of all results at level i of A and at level j of B respectively, and x̄ is the overall mean;
the degrees of freedom comprise the interaction degrees of freedom df_AB of the combined influence of A and B and the error degrees of freedom df_E:

df_AB = (r−1)(s−1)    (14)

MS_E = SS_E / df_E = SS_E / (rs(t−1))    (15)

and correspondingly MS_AB = SS_AB / df_AB;
finally, the test statistic F is obtained:

F = MS_AB / MS_E    (16)
the influence of the two scene factors is then evaluated by consulting the F-distribution critical-value table: to judge, at a confidence of 90%, whether the interaction of the scene factors A and B influences the result, the computed test statistic F is compared with the critical value of the F distribution at the corresponding degrees of freedom (df_AB, df_E); if F exceeds the critical value, the interaction is judged to have a significant influence on the result.
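A minimal computational sketch of equations (14)-(16), assuming the repeated results are stored as a nested list x[i][j][k], is given below; the input layout and the example data are assumptions for illustration.

from statistics import mean

def two_factor_interaction_F(x: list) -> tuple:
    # F statistic for the A x B interaction in a two-factor experiment with
    # r x s level combinations and t repetitions each (x[i][j][k] = x_ijk).
    r, s, t = len(x), len(x[0]), len(x[0][0])
    cell = [[mean(x[i][j]) for j in range(s)] for i in range(r)]            # cell means
    row = [mean([v for j in range(s) for v in x[i][j]]) for i in range(r)]  # A-level means
    col = [mean([v for i in range(r) for v in x[i][j]]) for j in range(s)]  # B-level means
    grand = mean([v for i in range(r) for j in range(s) for v in x[i][j]])  # overall mean
    ss_ab = t * sum((cell[i][j] - row[i] - col[j] + grand) ** 2
                    for i in range(r) for j in range(s))
    ss_e = sum((v - cell[i][j]) ** 2
               for i in range(r) for j in range(s) for v in x[i][j])
    df_ab, df_e = (r - 1) * (s - 1), r * s * (t - 1)
    return (ss_ab / df_ab) / (ss_e / df_e), df_ab, df_e

# Example (invented data): 2 levels of A, 2 levels of B, 3 repetitions each
data = [[[1.0, 1.2, 0.9], [2.1, 2.0, 2.2]],
        [[1.1, 1.0, 1.3], [3.0, 3.2, 2.9]]]
F, df1, df2 = two_factor_interaction_F(data)
print(F, df1, df2)  # compare F with the F-table critical value at (df1, df2), e.g. 90% confidence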
6. The intelligent automobile multi-path camera on-loop testing method based on video injection as claimed in claim 1, wherein the method comprises the following steps: in the test result evaluation of the step (four), the four types of target recognition classification results output by the camera are True Positives (TP), i.e. positive samples correctly identified as positive; True Negatives (TN), i.e. negative samples correctly identified as negative; False Positives (FP), i.e. negative samples incorrectly identified as positive; and False Negatives (FN), i.e. positive samples incorrectly identified as negative; TP and FP are distinguished by the value of the IOU, where the IOU is the ratio of the overlap area between a bounding box output by the target recognition algorithm under test and the minimum circumscribed rectangle GT of the real target to the area of their union; detection boxes with IOU > 0.5 are counted as TP, while detection boxes with IOU <= 0.5, as well as redundant detection boxes matching the same GT, are counted as FP; the detection precision Pre is expressed by the following formula:

Pre = TP / (TP + FP)
The detection recall Rel, representing the proportion of positive samples correctly identified out of the total number of positive samples, is expressed by the following formula:

Rel = TP / (TP + FN)
taking the confidence of each sample's detection box in turn as the threshold for judging positive and negative samples yields as many detection precision and recall values as there are samples, from which a precision-recall (P-R) curve can be drawn; ideally, if a network identifies every positive sample with a confidence of 1 and every negative sample with a confidence of 0, the P-R curve passes only through the two points (0, 1) and (1, 1) and the area enclosed by the curve and the coordinate axes equals 1; in all other cases the enclosed area is greater than 0 but less than 1; this area enclosed by the P-R curve and the coordinate axes is used to characterize the performance of the target detection network, namely the mAP, expressed as:

mAP = ∫_0^1 P(R) dR
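For concreteness, a short Python sketch of the IOU, Pre/Rel and area-under-P-R computations follows; the trapezoidal approximation of the enclosed area and the example values are illustrative assumptions rather than the patent's prescribed implementation.

def iou(box_a: tuple, box_b: tuple) -> float:
    # Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    # Pre = TP/(TP+FP), Rel = TP/(TP+FN)
    return (tp / (tp + fp) if tp + fp else 0.0,
            tp / (tp + fn) if tp + fn else 0.0)

def average_precision(points: list) -> float:
    # Area under a P-R curve given as (recall, precision) points sorted by recall,
    # approximated by the trapezoidal rule; a minimal stand-in for the mAP integral
    area = 0.0
    for (r0, p0), (r1, p1) in zip(points, points[1:]):
        area += (r1 - r0) * (p0 + p1) / 2.0
    return area

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))               # 25 / 175, about 0.143
print(precision_recall(tp=8, fp=2, fn=4))                 # (0.8, about 0.667)
print(average_precision([(0.0, 1.0), (0.5, 0.9), (1.0, 0.6)]))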
CN202310575576.XA 2023-05-22 2023-05-22 Video injection-based intelligent automobile multi-path camera on-loop testing method Pending CN116567205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310575576.XA CN116567205A (en) 2023-05-22 2023-05-22 Video injection-based intelligent automobile multi-path camera on-loop testing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310575576.XA CN116567205A (en) 2023-05-22 2023-05-22 Video injection-based intelligent automobile multi-path camera on-loop testing method

Publications (1)

Publication Number Publication Date
CN116567205A true CN116567205A (en) 2023-08-08

Family

ID=87499881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310575576.XA Pending CN116567205A (en) 2023-05-22 2023-05-22 Video injection-based intelligent automobile multi-path camera on-loop testing method

Country Status (1)

Country Link
CN (1) CN116567205A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117075587A (en) * 2023-10-16 2023-11-17 北京茵沃汽车科技有限公司 Electric control unit testing device and system
CN117075587B (en) * 2023-10-16 2024-01-26 北京茵沃汽车科技有限公司 Electric control unit testing device and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination