CN113989768B - Automatic driving test scene analysis method and system

Automatic driving test scene analysis method and system

Info

Publication number
CN113989768B
Authority
CN
China
Prior art keywords
scene
data
frame
difficulty
index
Prior art date
Legal status
Active
Application number
CN202111149096.4A
Other languages
Chinese (zh)
Other versions
CN113989768A (en)
Inventor
何露
王劲
Current Assignee
Tianyi Transportation Technology Co., Ltd.
Original Assignee
Tianyi Transportation Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Tianyi Transportation Technology Co., Ltd.
Priority to CN202111149096.4A
Publication of CN113989768A
Application granted
Publication of CN113989768B
Legal status: Active (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an automatic driving test scene analysis method, relating to the technical field of unmanned driving. The method comprises: establishing a database; extracting frame-by-frame information of a scene to be analyzed and parsing scene feature data, judging the similarity between the scene feature data of each frame and the scene feature data stored in the database, and selecting the scene category with the highest similarity as the scene category of the frame; screening out continuous multi-frame data belonging to the same scene category as a target scene segment; calculating the difficulty of each frame of data in the target scene segment; and screening out continuous multi-frame data meeting the data difficulty requirement as a required scene segment. The invention determines the target segment through preliminary screening by scene similarity and quantitative evaluation of difficulty, avoiding the problem that scene extraction and cutting cannot meet test requirements in the early stage of development, when the data volume is insufficient and scenes cannot be finely divided, and effectively reducing redundant scene segments.

Description

Automatic driving test scene analysis method and system
Technical Field
The invention relates to the technical field of unmanned driving, and in particular to an automatic driving test scene analysis method and system.
Background
At present, the processing of test scene data for automatic driving scenes mostly adopts one of two schemes. The first is manual screening and cutting, which is time-consuming, labor-intensive, and inefficient. The second is to process the relevant feature quantities of the raw data, analyze them against the extracted features and the original scene category characteristics, take the start and stop time points corresponding to the frames that meet the conditions (and their preceding and following frames) as the start and stop points of the scene data to be preserved, and slice automatically on that basis. The drawback of this scheme is that a fine division and cutting of automatic driving scenes requires classification results from a large set of historical scene data, and the difficulty of scenes within the same category cannot be well quantified and analyzed.
Disclosure of Invention
The invention aims to solve the technical problem of overcoming the defects of the prior art and providing an automatic driving test scene analysis method and system.
In order to solve the technical problems, the technical scheme of the invention is as follows:
An automatic driving test scene analysis method comprises:
establishing a database, wherein different historical scene categories are stored in the database, and corresponding scene characteristic data are extracted for each historical scene category to serve as scene data references for initial comparison;
extracting frame-by-frame information of a scene to be analyzed and analyzing scene feature data, judging the similarity between the scene feature data of each frame and the historical scene feature data of scene categories stored in a database, and selecting the scene category corresponding to the scene feature data with the highest similarity as the scene category of the frame;
screening out continuous multi-frame data belonging to the same scene category as a target scene segment in the whole time span of the scene to be analyzed;
based on the screened target scene segments, adopting a fuzzy analytic hierarchy process to convert the linguistic descriptions by which a plurality of scoring experts judge the relative importance of different indexes or categories into corresponding fuzzy numbers, constructing a fuzzy judgment matrix, and defuzzifying the fuzzy judgment matrix to obtain index weights based on subjective factors; then adopting the entropy weight method to calculate the information entropy of each index and obtain the data-driven weight of each index, multiplying these weights by the index weights obtained from the fuzzy analytic hierarchy process and normalizing them, thereby determining the weight of each index with respect to the difficulty of the corresponding frame; and then adopting the TOPSIS method to calculate the distances between the related indexes and the optimal and worst schemes, and combining these with the per-index weights to obtain the difficulty of each frame of data;
and screening out continuous multi-frame data meeting the data difficulty requirement as a required scene segment.
In a preferred embodiment of the automatic driving test scene analysis method of the present invention, the scene feature data include surrounding traffic participants, road identifications, and traffic light status information.
In a preferred embodiment of the automatic driving test scene analysis method of the present invention, screening out n consecutive frames of data belonging to the same scene category within the whole time span of the scene to be analyzed as a target scene segment comprises:
screening out n consecutive frames of data belonging to the same scene category within the whole time span of the scene to be analyzed;
setting a similarity threshold range;
and judging whether the calculated similarity of each frame of data falls within the set similarity threshold range, and if so, taking the n consecutive frames of data as a target scene segment.
In a preferred embodiment of the automatic driving test scene analysis method of the present invention, screening out continuous multi-frame data meeting the data difficulty requirement as a required scene segment comprises:
recording the start and stop time points of the continuous multi-frame data meeting the data difficulty requirement to obtain a target time period, expanding the target time period forwards and backwards by a time t, and intercepting data from the time span of the scene to be analyzed with the expanded target time period as a reference to obtain the required scene segment.
In a preferred embodiment of the automatic driving test scene analysis method of the present invention, the time t is generally 1-7 s.
The invention also provides an automatic driving test scene analysis system, which comprises,
a scene database layer in which a plurality of historical scene categories and historical scene feature data corresponding to each scene category are stored;
a scene category judging layer, which is used for extracting frame-by-frame information and analyzing scene feature data of a scene to be analyzed, judging the similarity of the scene feature data of each frame and the historical scene feature data of the scene categories stored in the database, selecting the scene category corresponding to the historical scene feature data with the highest similarity as the scene category of the frame, and screening out continuous multi-frame data belonging to the same scene category as a target scene segment;
the scene difficulty evaluation layer adopts a fuzzy analytic hierarchy process to convert the linguistic descriptions by which a plurality of scoring experts judge the relative importance of different indexes or categories into corresponding fuzzy numbers, constructs a fuzzy judgment matrix, and defuzzifies the fuzzy judgment matrix to obtain index weights based on subjective factors; it then adopts the entropy weight method to calculate the information entropy of each index and obtain the data-driven weight of each index, and multiplies these weights by the index weights obtained from the fuzzy analytic hierarchy process and normalizes them to determine the weight of each index with respect to the difficulty of the corresponding frame; it then adopts the TOPSIS method to calculate the distances between the related indexes and the optimal and worst schemes, and combines these with the per-index weights to obtain the difficulty of each frame of data; and
the scene interception layer is used for screening out continuous multi-frame data meeting the data difficulty requirement and intercepting the data to obtain the required scene segment.
The beneficial effects of the invention are as follows:
(1) The invention provides a layered automatic scene screening scheme that determines target segments through preliminary screening by scene similarity and quantitative evaluation of difficulty. This avoids the problem that scene extraction and cutting cannot meet test requirements in the early stage of development, when the data volume is insufficient and scenes cannot be finely divided, effectively reduces redundant scene segments, and improves the efficiency of evaluation and analysis.
(2) The invention enables quantitative analysis of scene difficulty and avoids problems such as judgment differences caused by purely subjective factors.
(3) The fuzzy analytic hierarchy process, the entropy weight method, and the TOPSIS method are used to evaluate and calculate the difficulty of each frame of data in the target scene segment, combining the advantages of expert subjective experience and of the data's feature distribution, so the result of the quantitative difficulty evaluation is more convincing. In addition, the system is highly extensible: newly added scene categories, as well as new indexes or categories affecting the quantitative evaluation of difficulty, can be superimposed directly on the existing system, giving good compatibility and strong flexibility.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of an automatic driving test scene analysis method provided by the invention.
Detailed Description
In order that the invention may be more readily understood, a more particular description thereof will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
This embodiment provides an automatic driving test scene analysis method, which comprises the following steps S101 to S105:
step S101: a database is established, a plurality of different historical scene categories are stored in the database, and corresponding historical scene characteristic data are extracted for each historical scene category to serve as scene data references for initial comparison.
Specifically, historical data are stored in the database, and the historical data include N scene categories and the corresponding historical scene feature data: a first scene category through an Nth scene category, together with historical first scene feature data corresponding to the first scene category, historical second scene feature data corresponding to the second scene category, ..., and historical Nth scene feature data corresponding to the Nth scene category.
Step S102: extracting frame-by-frame information of a scene to be analyzed and analyzing scene feature data, judging the similarity between the scene feature data of each frame and the historical scene feature data of different scene types stored in a database, and selecting the scene type corresponding to the scene feature data with the highest similarity as the scene type of the frame.
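As an illustration of steps S101 and S102, the sketch below matches one frame's feature vector against per-category historical reference vectors and picks the most similar category. The feature encoding, the cosine-similarity measure, and names such as `scene_database` and `classify_frame` are assumptions made for this example, not details fixed by the embodiment.

```python
# Minimal sketch of steps S101-S102 (hypothetical feature encoding and names).
import numpy as np

# Hypothetical database: scene category -> historical reference feature vector
scene_database = {
    "intersection_straight": np.array([4.0, 1.0, 1.0, 0.0]),
    "lane_change":           np.array([2.0, 0.0, 0.0, 1.0]),
    "pedestrian_crossing":   np.array([1.0, 3.0, 1.0, 0.0]),
}

def cosine_similarity(a, b):
    """Similarity measure used here for the per-frame comparison."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_frame(frame_features):
    """Return (best_category, best_similarity) for one frame of scene feature data."""
    scores = {cat: cosine_similarity(frame_features, ref)
              for cat, ref in scene_database.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Example: one frame extracted from the scene to be analyzed
category, similarity = classify_frame(np.array([3.0, 1.0, 1.0, 0.0]))
print(category, round(similarity, 3))
```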
Step S103: and screening out continuous multi-frame data belonging to the same scene category as a target scene segment in the whole time span of the scene to be analyzed.
Specifically, a similarity threshold range is set first; the similarity of the n consecutive frames of data screened as belonging to the same scene category is then calculated, and whether the similarity calculated for each frame falls within the set threshold range is judged; if so, the n consecutive frames of data are taken as the target scene segment.
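A minimal sketch of step S103 follows: given per-frame (category, similarity) results, it collects runs of consecutive frames that share one category and whose similarities all fall inside the threshold range. The function name `find_target_segments`, the minimum run length, and the concrete threshold values are illustrative assumptions.

```python
# Minimal sketch of step S103 (names and thresholds are illustrative).
def find_target_segments(frame_results, sim_low, sim_high, min_len):
    """Return (start, end, category) index ranges where consecutive frames share one
    category and every similarity lies inside [sim_low, sim_high]."""
    segments, start = [], None
    for i, (cat, sim) in enumerate(frame_results):
        ok = sim_low <= sim <= sim_high
        if start is None:
            if ok:
                start = i
        elif not ok or cat != frame_results[start][0]:
            if i - start >= min_len:
                segments.append((start, i - 1, frame_results[start][0]))
            start = i if ok else None
    if start is not None and len(frame_results) - start >= min_len:
        segments.append((start, len(frame_results) - 1, frame_results[start][0]))
    return segments

frames = [("lane_change", 0.91), ("lane_change", 0.88), ("lane_change", 0.90),
          ("intersection_straight", 0.95)]
print(find_target_segments(frames, 0.85, 1.0, min_len=3))  # [(0, 2, 'lane_change')]
```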
Step S104: based on the screened target scene segments, a fuzzy analytic hierarchy process is adopted to convert the linguistic descriptions by which a plurality of scoring experts judge the relative importance of different indexes or categories into corresponding fuzzy numbers, a fuzzy judgment matrix is constructed, and the fuzzy judgment matrix is defuzzified to obtain index weights based on subjective factors; the entropy weight method is then adopted to calculate the information entropy of each index and obtain the data-driven weight of each index, and these weights are multiplied by the index weights obtained from the fuzzy analytic hierarchy process and normalized to determine the weight of each index with respect to the difficulty of the corresponding frame; the TOPSIS method is then adopted to calculate the distances between the related indexes and the optimal and worst schemes, and these are combined with the per-index weights to obtain the difficulty of each frame of data.
Step S105: screening out continuous multi-frame data meeting the data difficulty requirement as a required scene segment.
Specifically, continuous multi-frame data meeting the data difficulty requirement are first screened out; the start and stop time points of the multi-frame data are then recorded to obtain a target time period; the target time period is expanded forwards and backwards by a time t, and data are intercepted from the time span of the scene to be analyzed with the expanded target time period as a reference, yielding the required scene segment. Expanding the target time period forwards and backwards by a short interval adds a leading link and a subsequent link to the extracted target scene segment, making the extracted scene more complete. The time t may vary from scene to scene and is typically 1-7 s; in this embodiment, the expansion time t is 5 s.
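The clipping logic of step S105 can be sketched as follows: the longest consecutive run of frames whose difficulty meets the requirement is located, and its start and stop times are expanded by t seconds on each side (5 s in this embodiment). The name `extract_required_segment` and the "longest run" selection rule are assumptions made for the example.

```python
# Minimal sketch of step S105 (hypothetical names; the "longest run" rule is illustrative).
def extract_required_segment(timestamps, difficulties, diff_min, pad_s=5.0):
    """Find the longest consecutive run of frames with difficulty >= diff_min,
    then expand its start/stop times by pad_s seconds on both sides."""
    best, start = None, None
    for i, d in enumerate(difficulties + [float("-inf")]):   # sentinel closes the last run
        if d >= diff_min and start is None:
            start = i
        elif d < diff_min and start is not None:
            if best is None or (i - start) > (best[1] - best[0] + 1):
                best = (start, i - 1)
            start = None
    if best is None:
        return None
    t0 = max(timestamps[0], timestamps[best[0]] - pad_s)      # expand backwards, clamp to data
    t1 = min(timestamps[-1], timestamps[best[1]] + pad_s)     # expand forwards, clamp to data
    return t0, t1

ts = [i * 0.1 for i in range(100)]                 # 10 Hz data over 10 s
diff = [0.2] * 30 + [0.8] * 40 + [0.3] * 30        # a difficult stretch in the middle
print(extract_required_segment(ts, diff, diff_min=0.7))   # difficult run padded by 5 s each side
```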
Therefore, this embodiment provides a layered automatic scene screening scheme that determines target segments through preliminary screening by scene similarity and quantitative evaluation of difficulty. It avoids the problem that scene extraction and cutting cannot meet test requirements in the early stage of development, when the data volume is insufficient and scenes cannot be finely divided, effectively reduces redundant scene segments, and improves the efficiency of evaluation and analysis.
The embodiment also provides an automatic driving test scene analysis system, which comprises a scene database layer, a scene category judgment layer, a scene difficulty evaluation layer and a scene interception layer.
The scene database layer stores a plurality of historical scene categories and the scene feature data corresponding to each historical scene category: a first scene category through an Nth scene category, first scene feature data corresponding to the first scene category, second scene feature data corresponding to the second scene category, ..., and Nth scene feature data corresponding to the Nth scene category.
The scene category judging layer is used for extracting frame-by-frame information and analyzing scene feature data of a scene to be analyzed, judging the similarity of the scene feature data of each frame and the scene feature data of the historical scene categories stored in the database, selecting the scene category corresponding to the scene feature data with the highest similarity as the scene category of the frame, and screening out continuous multi-frame data belonging to the same scene category as a target scene segment.
In this embodiment, the scene feature data includes, but is not limited to, surrounding traffic participants (such as actual movement conditions of different vehicles and pedestrians around), road signs (such as whether to pass through an intersection, lane lines, speed limit and other information), and traffic light status information.
It should be noted that, when screening the target scene segment, a similarity threshold range is set first; the similarity of the screened continuous multi-frame data belonging to the same scene category is then calculated, and whether the similarity calculated for each frame falls within the set threshold range is judged; if all of them fall within the set similarity threshold range, the continuous multi-frame data are taken as the target scene segment.
The scene difficulty evaluation layer is used for converting the linguistic descriptions by which the scoring experts judge importance into corresponding fuzzy numbers using a fuzzy analytic hierarchy process, constructing a fuzzy judgment matrix, and defuzzifying the fuzzy judgment matrix to obtain index weights; the entropy weight method is then used to calculate the information entropy of each index and obtain the weight of each index, which is multiplied by the index weights obtained from the fuzzy analytic hierarchy process and normalized to determine the weight of each index with respect to the difficulty of the corresponding frame; the TOPSIS method is then used to calculate the distances between the related indexes and the optimal and worst schemes, and these are combined with the per-index weights to obtain the difficulty of each frame of data.
The scale values adopted by the fuzzy analytic hierarchy process are fuzzy numbers. The linguistic descriptions of the scoring experts' importance judgments are converted into corresponding fuzzy numbers to construct fuzzy judgment matrices; different scoring experts' results yield different fuzzy judgment matrices, and when the final fuzzy judgment matrix is calculated, the matrices of the multiple scoring experts are integrated as fuzzy numbers to obtain an overall fuzzy evaluation result. After the scoring results of the multiple experts are integrated into the final fuzzy judgment matrix, the matrix is defuzzified and converted into the final index weights.
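The fuzzy-AHP step above can be illustrated with triangular fuzzy numbers (l, m, u). Element-wise averaging across experts, centroid defuzzification, and geometric-mean row weighting are one common variant, used here only as a sketch; the embodiment does not fix these particular formulas, and the numeric judgments below are hypothetical.

```python
# Minimal sketch of the fuzzy AHP step (triangular fuzzy numbers, illustrative formulas).
import numpy as np

def fuzzy_ahp_weights(expert_matrices):
    """expert_matrices: list of (n, n, 3) arrays of triangular fuzzy judgments (l, m, u)."""
    agg = np.mean(np.stack(expert_matrices), axis=0)        # integrate the experts' matrices
    crisp = agg.mean(axis=2)                                 # centroid defuzzification: (l+m+u)/3
    gm = np.prod(crisp, axis=1) ** (1.0 / crisp.shape[1])    # row geometric means
    return gm / gm.sum()                                     # normalized index weights

# Two hypothetical experts judging three indices (diagonal entries are (1, 1, 1))
e1 = np.array([[[1, 1, 1], [2, 3, 4], [4, 5, 6]],
               [[1/4, 1/3, 1/2], [1, 1, 1], [1, 2, 3]],
               [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]]])
e2 = np.array([[[1, 1, 1], [1, 2, 3], [3, 4, 5]],
               [[1/3, 1/2, 1], [1, 1, 1], [2, 3, 4]],
               [[1/5, 1/4, 1/3], [1/4, 1/3, 1/2], [1, 1, 1]]])
print(fuzzy_ahp_weights([e1, e2]).round(3))
```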
For the entropy weight method, the index weights are determined according to the degree of order of the information contained in each evaluation index: the information entropy of each index is calculated and the weight of each index is derived from it, and after this weight is multiplied by the weight obtained from the fuzzy analytic hierarchy process and normalized, the weight of each index with respect to the difficulty of the corresponding frame is determined.
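A sketch of the entropy-weight calculation and of the multiplication-and-normalization with the fuzzy-AHP weights is given below, assuming `data` is a matrix of positive index values with one row per frame of the target segment; the function names are illustrative.

```python
# Minimal sketch of the entropy weight method and the weight combination (illustrative names).
import numpy as np

def entropy_weights(data):
    """Data-driven weights: indexes with lower information entropy get larger weights."""
    p = data / data.sum(axis=0, keepdims=True)                 # column-wise proportions
    p = np.clip(p, 1e-12, None)                                # avoid log(0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(data.shape[0])   # normalized entropy per index
    d = 1.0 - e                                                # degree of divergence
    return d / d.sum()

def combined_weights(fahp_w, entropy_w):
    """Multiply the subjective and data-driven weights, then re-normalize."""
    w = fahp_w * entropy_w
    return w / w.sum()

data = np.array([[0.2, 3.0, 10.0],      # one row per frame, one column per index
                 [0.4, 3.1, 40.0],
                 [0.3, 2.9, 25.0]])
print(combined_weights(np.array([0.5, 0.3, 0.2]), entropy_weights(data)).round(3))
```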
On this basis, considering the differences in the dimensions of the indexes and their deviation from the ideal sequence, the TOPSIS method is adopted to process the related indexes and calculate the distances between them and the optimal and worst schemes. The reference sequences of the optimal and worst schemes are derived from the historical scene feature data of the same scene category in the database. The weight of each index with respect to the difficulty of the corresponding frame, obtained from the previous calculation, is then combined with these distances to obtain the difficulty of the corresponding frame.
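The TOPSIS step can be sketched as follows, assuming the best and worst reference rows are taken from historical data of the same scene category as stated above. Treating the relative distance to the optimal scheme as the per-frame difficulty is an illustrative choice rather than the embodiment's exact formula.

```python
# Minimal sketch of the TOPSIS-based per-frame difficulty (illustrative difficulty definition).
import numpy as np

def topsis_difficulty(frame_matrix, weights, best_ref, worst_ref):
    """frame_matrix: (n_frames, n_indices). Returns one score in [0, 1] per frame."""
    v = frame_matrix * weights                                 # weighted index values per frame
    d_best = np.linalg.norm(v - best_ref * weights, axis=1)    # distance to the optimal scheme
    d_worst = np.linalg.norm(v - worst_ref * weights, axis=1)  # distance to the worst scheme
    return d_best / (d_best + d_worst + 1e-12)                 # nearer the worst scheme -> harder

frames = np.array([[0.9, 0.2, 0.5],
                   [0.4, 0.8, 0.9]])
w = np.array([0.5, 0.3, 0.2])
print(topsis_difficulty(frames, w,
                        best_ref=np.array([1.0, 0.0, 0.0]),
                        worst_ref=np.array([0.0, 1.0, 1.0])).round(3))
```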
The scene interception layer is used for screening out continuous multi-frame data meeting the data difficulty requirement and intercepting to obtain a required scene fragment.
The automatic driving test scene analysis system above is an automatic driving scene analysis system based on a similarity and difficulty analysis scheme. The scheme neither relies on manual extraction and screening nor requires massive historical scene data for model training and feature extraction, which improves the overall efficiency of scene data interception; at the same time, the complexity of each scene can be quantitatively evaluated, so that the automatic driving scene data set is finely cut and extracted, partially redundant scene segments are effectively removed, and the efficiency of evaluation and analysis is improved.
In addition to the above embodiments, the present invention may have other embodiments; all technical schemes formed by equivalent substitution or equivalent transformation fall within the protection scope of the invention.

Claims (6)

1. An automatic driving test scene analysis method, characterized by comprising:
establishing a database, wherein different scene categories are stored in the database, and corresponding historical scene characteristic data is extracted for each scene category to serve as scene data references for initial comparison;
extracting frame-by-frame information of a scene to be analyzed and analyzing scene feature data, judging the similarity between the scene feature data of each frame and the historical scene feature data of scene categories stored in a database, and selecting the scene category corresponding to the historical scene feature data with the highest similarity as the scene category of the frame;
screening out continuous multi-frame data belonging to the same scene category as a target scene segment in the whole time span of the scene to be analyzed;
based on the screened target scene segments, adopting a fuzzy analytic hierarchy process to convert the linguistic descriptions by which a plurality of scoring experts judge the relative importance of different indexes or categories into corresponding fuzzy numbers, constructing a fuzzy judgment matrix, and defuzzifying the fuzzy judgment matrix to obtain index weights based on subjective factors; then adopting the entropy weight method to calculate the information entropy of each index and obtain the data-driven weight of each index, multiplying these weights by the index weights obtained from the fuzzy analytic hierarchy process and normalizing them, thereby determining the weight of each index with respect to the difficulty of the corresponding frame; and then adopting the TOPSIS method to calculate the distances between the related indexes and the optimal and worst schemes, and combining these with the per-index weights to obtain the difficulty of each frame of data;
and screening out continuous multi-frame data meeting the data difficulty requirement as a required scene segment.
2. The automatic driving test scene analysis method of claim 1, wherein the scene feature data include surrounding traffic participants, road identifications, and traffic light status information.
3. The automatic driving test scene analysis method of claim 1, wherein screening out n consecutive frames of data belonging to the same scene category within the whole time span of the scene to be analyzed as a target scene segment comprises:
setting a threshold range of the similarity;
screening out n frames of continuous data belonging to the same scene category in the whole time span of the scene to be analyzed;
and judging whether the calculated similarity of each frame of data falls within the set similarity threshold range, and if so, taking the n consecutive frames of data as a target scene segment.
4. The automatic driving test scene analysis method of claim 1, wherein screening out continuous multi-frame data meeting the data difficulty requirement as a required scene segment comprises:
recording the start and stop time points of the continuous multi-frame data meeting the data difficulty requirement to obtain a target time period, expanding the target time period forwards and backwards by a time t, and intercepting data from the time span of the scene to be analyzed with the expanded target time period as a reference to obtain the required scene segment.
5. The automatic driving test scene analysis method of claim 4, wherein the time t is 1-7 s.
6. An automatic driving test scene analysis system, characterized by comprising:
a scene database layer in which a plurality of scene categories and historical scene feature data corresponding to each scene category are stored;
a scene category judging layer, which is used for extracting frame-by-frame information and analyzing scene feature data of a scene to be analyzed, judging the similarity of the scene feature data of each frame and the historical scene feature data of the scene categories stored in the database, selecting the scene category corresponding to the historical scene feature data with the highest similarity as the scene category of the frame, and screening out continuous multi-frame data belonging to the same scene category as a target scene segment;
the scene difficulty evaluation layer adopts a fuzzy analytic hierarchy process to convert the linguistic descriptions by which a plurality of scoring experts judge the relative importance of different indexes or categories into corresponding fuzzy numbers, constructs a fuzzy judgment matrix, and defuzzifies the fuzzy judgment matrix to obtain index weights based on subjective factors; it then adopts the entropy weight method to calculate the information entropy of each index and obtain the data-driven weight of each index, and multiplies these weights by the index weights obtained from the fuzzy analytic hierarchy process and normalizes them to determine the weight of each index with respect to the difficulty of the corresponding frame; it then adopts the TOPSIS method to calculate the distances between the related indexes and the optimal and worst schemes, and combines these with the per-index weights to obtain the difficulty of each frame of data; and
the scene interception layer is used for screening out continuous multi-frame data meeting the data difficulty requirement and intercepting the data to obtain the required scene segment.
CN202111149096.4A 2021-09-29 2021-09-29 Automatic driving test scene analysis method and system Active CN113989768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111149096.4A 2021-09-29 2021-09-29 Automatic driving test scene analysis method and system (granted as CN113989768B)


Publications (2)

Publication Number Publication Date
CN113989768A CN113989768A (en) 2022-01-28
CN113989768B (en) 2024-03-29

Family

ID=79737180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111149096.4A Automatic driving test scene analysis method and system 2021-09-29 2021-09-29 Active (granted as CN113989768B)

Country Status (1)

Country Link
CN (1) CN113989768B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114740759A (en) * 2022-04-18 2022-07-12 中国第一汽车股份有限公司 Test method and device for automatic driving system, storage medium and electronic device
CN116401111B (en) * 2023-05-26 2023-09-05 中国第一汽车股份有限公司 Function detection method and device of brain-computer interface, electronic equipment and storage medium
CN117649635A (en) * 2024-01-30 2024-03-05 湖北经济学院 Method, system and storage medium for detecting shadow eliminating point of narrow water channel scene


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10262239B2 (en) * 2016-07-26 2019-04-16 Viisights Solutions Ltd. Video content contextual classification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146204A (en) * 2018-09-27 2019-01-04 浙江中海达空间信息技术有限公司 A kind of wind power plant booster stations automatic addressing method of comprehensiveestimation
CN112132424A (en) * 2020-09-07 2020-12-25 国网河北省电力有限公司经济技术研究院 Large-scale energy storage multi-attribute decision type selection method
CN113064839A (en) * 2021-06-03 2021-07-02 中智行科技有限公司 System evaluation method and device
CN113420975A (en) * 2021-06-17 2021-09-21 中智行科技有限公司 System performance evaluation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on a complexity evaluation method for hazardous driving condition scenarios; 董汉, 舒伟, 陈超, 孙灿, 尤超; Automotive Engineering; 2020-06-24 (No. 06); full text *
Applicability evaluation of local communication technologies based on the fuzzy analytic hierarchy process; 张乐平, 金鑫, 万路; Electric Power Information and Communication Technology; 2018-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN113989768A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN113989768B (en) Automatic driving test scene analysis method and system
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN111353413B (en) Low-missing-report-rate defect identification method for power transmission equipment
CN109767423B (en) Crack detection method for asphalt pavement image
CN115063796B (en) Cell classification method and device based on signal point content constraint
CN111160481B (en) Adas target detection method and system based on deep learning
CN110956207B (en) Method for detecting full-element change of optical remote sensing image
CN116168356B (en) Vehicle damage judging method based on computer vision
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN115880529A (en) Method and system for classifying fine granularity of birds based on attention and decoupling knowledge distillation
CN115719475A (en) Three-stage trackside equipment fault automatic detection method based on deep learning
CN114332559A (en) RGB-D significance target detection method based on self-adaptive cross-modal fusion mechanism and depth attention network
CN112528058B (en) Fine-grained image classification method based on image attribute active learning
CN111582191B (en) Pouring amount estimation method in concrete pouring based on artificial intelligence video analysis
CN113869433A (en) Deep learning method for rapidly detecting and classifying concrete damage
CN116311088B (en) Construction safety monitoring method based on construction site
CN116204791B (en) Construction and management method and system for vehicle behavior prediction scene data set
CN115376315B (en) Multi-level bayonet quality control method for road network emission accounting
CN115271565B (en) DEA-based method, device and equipment for evaluating highway pavement maintenance measures
CN116434054A (en) Intensive remote sensing ground object extraction method based on line-plane combination
CN114331206A (en) Point location addressing method and device, electronic equipment and readable storage medium
CN115063602A (en) Crop pest and disease identification method based on improved YOLOX-S network
CN112581177A (en) Marketing prediction method combining automatic feature engineering and residual error neural network
CN110928861A (en) Auxiliary analysis and evaluation method and system for vehicle road noise
CN117035434B (en) Suspicious transaction monitoring method and suspicious transaction monitoring device

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 2022-11-02

Address after: Room 808, 8/F, Building 9A, launch area of the Yangtze River Delta International R&D Community, No. 286 Qinglonggang Road, High-Speed Rail New Town, Xiangcheng District, Suzhou, Jiangsu Province, 215000
Applicant after: Tianyi Transportation Technology Co., Ltd.

Address before: 1/F, Building 28, 6055 Jinhai Highway, Fengxian District, Shanghai, 201403
Applicant before: Zhongzhixing (Shanghai) Transportation Technology Co., Ltd.

GR01 Patent grant