CN116225921A - Visual debugging method and device for detection algorithm - Google Patents
- Publication number
- CN116225921A (application CN202310101348.9A)
- Authority
- CN
- China
- Prior art keywords
- video frame
- picture
- detection
- visual
- debugging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/362—Software debugging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Hardware Design (AREA)
- Quality & Reliability (AREA)
- General Engineering & Computer Science (AREA)
- Debugging And Monitoring (AREA)
Abstract
The application discloses a visual debugging method and device for a detection algorithm, wherein the visual debugging method is applied to verification of an embedded-end algorithm of a vehicle and comprises the following steps: reading video frame pictures in the TF card; preprocessing the video frame picture to obtain a first picture to be detected; identifying the first picture to be detected through a first detection algorithm to obtain a first detection result; summarizing the first detection results to obtain a first detection result set; visualizing the first detection result set according to a preset drawing rule to obtain a first visualization effect image; and outputting an initial debugging result according to the first visual effect image, thereby providing a visual debugging scheme for a vehicle's embedded-end detection algorithm that improves debugging efficiency.
Description
Technical Field
The application relates to the technical field of detection algorithm debugging, in particular to a visual debugging method and device for a detection algorithm.
Background
Today, as automobiles become more intelligent, powerful algorithms see ever wider use. In automatic driving, for example, a real-time picture is acquired by a camera and sent to a detection and recognition algorithm module; after model processing, the algorithm module outputs recognition results for lane lines, pedestrians, vehicles, and the like, which are sent to a vehicle control algorithm module that controls the car body and trajectory in real time. At present, algorithm engineers debug on a PC and only move to board-end testing once debugging is complete; because of the computing-power difference between the PC and the actual board-end chip, an ideal effect is difficult to achieve, and the cause of the deviation cannot be found in time. Meanwhile, on-road testing of an immature algorithm in a real vehicle is dangerous, and the differences between the PC and the board end make actual problems hard to find. In general, the current verification of embedded-end algorithms applied to vehicles is inefficient.
Disclosure of Invention
The present application recharges continuous road-surface video frame pictures to the board end via a TF card, simulating the pictures shot by a real-time camera, and draws the detection and recognition results to the screen in real time. The algorithm results under the actual board-end computing power can thus be observed while saving the debugging time of frequently switching to a real vehicle, effectively solving the technical problem that current embedded-end algorithm verification for vehicles is inefficient.
The visual debugging method for the detection algorithm is applied to verification of an embedded end algorithm of a vehicle and comprises the following specific steps:
reading video frame pictures in the TF card;
preprocessing the video frame picture to obtain a first picture to be detected;
identifying the first picture to be detected through a first detection algorithm to obtain a first detection result;
summarizing the first detection results to obtain a first detection result set;
the first detection result set is visualized according to a preset drawing rule, and a first visualization effect image is obtained;
and outputting an initial debugging result according to the first visual effect image.
Further, the video frame picture is in YUV format.
Further, the visual effect image comprises lane lines, frame-selected moving targets, tracking frames, and position data text.
Further, the method also comprises the following steps:
updating the first detection algorithm according to the initial debugging result to obtain a second detection algorithm;
reading the video frame picture in the TF card;
preprocessing the video frame picture to obtain a second picture to be detected;
identifying the second picture to be detected through the second detection algorithm to obtain a second detection result;
summarizing the second detection results to obtain a second detection result set;
the second detection result set is visualized according to the preset drawing rule, and a second visualization effect image is obtained;
and outputting a final debugging result according to the second visual effect image.
Further, the method also comprises the following steps:
and updating the second detection algorithm according to the final debugging result.
Further, the method for reading the video frame picture in the TF card comprises the following specific steps:
judging whether a video frame picture meeting preset conditions exists in the TF card;
when the video frame pictures meeting the preset conditions exist in the TF card, the video frame pictures in the TF card are read.
The application also provides a visual debugging device for the detection algorithm, which is applied to the verification of the embedded end algorithm of the vehicle and comprises:
the reading module is used for reading the video frame pictures in the TF card;
the preprocessing module is used for preprocessing the video frame pictures to obtain first pictures to be detected;
the identification module is used for identifying the first picture to be detected through a first detection algorithm to obtain a first detection result;
the summarizing module is used for summarizing the first detection results to obtain a first detection result set;
the visualization module is used for visualizing the first detection result set according to a preset drawing rule to obtain a first visual effect image;
and the output module is used for outputting an initial debugging result according to the first visual effect image.
Further, the video frame picture is in YUV format.
Further, the visual effect image comprises lane lines, frame-selected moving targets, tracking frames, and position data text.
Further, the visual debugging device further comprises:
the algorithm updating module is used for updating the first detection algorithm according to the initial debugging result to obtain a second detection algorithm;
the reading module is also used for re-reading the video frame pictures in the TF card;
the preprocessing module is further used for preprocessing the video frame picture again to obtain a second picture to be detected;
the identification module is further used for identifying the second picture to be detected through the second detection algorithm to obtain a second detection result;
the summarizing module is further configured to summarize the second detection result to obtain a second detection result set;
the visualization module is further configured to perform visualization processing on the second detection result set according to the preset drawing rule, so as to obtain a second visualization effect image;
and the output module is also used for outputting a final debugging result according to the second visual effect image.
The technical scheme provided by the application has the following beneficial effects:
by reading the video frame pictures in the TF card, the real-vehicle road-test picture can be simulated. The first visual effect image is obtained through visualization processing according to the preset drawing rule, so algorithm data and effects can be seen in real time. This achieves a simulation debugging effect similar to that on a PC, avoids the unpredictability caused by cross-platform differences, and effectively improves the efficiency of verifying the embedded-end algorithm applied to the vehicle.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a flowchart of a method for visual debugging of a detection algorithm according to an embodiment of the present application;
fig. 2 is a software flow diagram of a visual debugging method for a detection algorithm according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a visual debugging device for a detection algorithm according to an embodiment of the present application.
11. A reading module; 12. a preprocessing module; 13. an identification module; 14. a summarizing module; 15. a visualization module; 16. an output module; 100. and a visual debugging device.
Detailed Description
For the purposes, technical solutions and advantages of the present application, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, the visual debugging method for a detection algorithm provided by the present application is applied to verification of an embedded end algorithm of a vehicle, and includes the following specific steps:
s100: and reading the video frame picture in the TF card.
In the automatic driving of an automobile, a real-time picture is acquired through a camera and sent to a detection and recognition algorithm module; after model processing, the algorithm module outputs recognition results for lane lines, pedestrians, vehicles, and the like, which are sent to a vehicle control algorithm module that controls the car body and trajectory in real time according to the recognition results. The method recharges continuous road-surface video frame pictures to the board end via a TF card, simulating the pictures shot by the real-time camera, and draws the detection and recognition results to the screen in real time, so the algorithm results under the actual board-end computing power can be observed while the debugging time of frequently switching to a real vehicle is saved. The video frame picture can be understood as a frame of the road-surface video. Reading video frame pictures from the TF card is a continuous process.
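As a rough illustration of this recharge-and-replay idea (the patent targets an embedded board; the mount path, `.yuv` suffix, and frame rate below are assumptions for the sketch, not details from the application), the continuous read can be written as a generator that replays frame files from the card in filename order at a camera-like pace:

```python
import time
from pathlib import Path

def frame_source(tf_mount, fps=30):
    """Replay road-surface frame files from the TF card mount point in
    filename order, pacing them like a live camera so the board-end
    algorithm sees a camera-like stream. Path and rate are illustrative."""
    for path in sorted(Path(tf_mount).glob("*.yuv")):
        yield path.read_bytes()
        time.sleep(1.0 / fps)  # pace playback like a real camera
```

A consumer would simply iterate `for raw in frame_source("/mnt/tf"):` in place of reading from the camera driver.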
Further, the video frame picture is in YUV format.
It is understood that YUV is a color encoding method widely used in image processing and video processing.
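For reference, a single YUV pixel maps to RGB with the standard full-range BT.601 coefficients; this per-pixel helper is purely illustrative (real board-end code would convert whole YUV420 planes, typically in C or with SIMD):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel (0-255 components) to RGB."""
    d = u - 128  # chroma offsets are centered at 128
    e = v - 128
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```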
Further, the method for reading the video frame picture in the TF card comprises the following specific steps:
judging whether a video frame picture meeting preset conditions exists in the TF card;
when the video frame pictures meeting the preset conditions exist in the TF card, the video frame pictures in the TF card are read.
It should be noted that the technical scheme in the application is applied to embedded-end algorithm verification for a vehicle, and testing can switch directly to the actual road surface after the test based on the simulated real-vehicle road-test pictures is completed. Referring to fig. 2, when performing visual debugging, the embedded board is powered on and the embedded device system is started; it is then determined whether a video frame picture meeting a preset condition exists in the TF card. If no such video frame picture exists, a camera video-streaming mode can be adopted to execute the subsequent image preprocessing step; if one exists, the video frame pictures in the TF card can be read (for example, reading the YUV pictures of the TF card in fig. 2), and the subsequent image preprocessing step is then executed. The preset condition may be a judgment on the format of the video frame pictures, on their number, or on other conditions set as needed.
When the preset condition is that the video frame picture must be in the YUV format, the video frame pictures in the TF card are read if pictures in the YUV format exist on the card. When the preset condition is that the number of video frame pictures must reach a specific number, for example 100, the video frame pictures are read if pictures exist on the card and their number exceeds 100. When both conditions apply, the pictures are read only if YUV-format pictures exist on the card and their number exceeds 100. In a specific implementation, the preset conditions may be set according to actual needs. As long as video frame pictures meeting the preset conditions exist in the TF card, they are continuously read.
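The combined preset condition (format plus minimum count) can be sketched as a small guard function; the mount path, `.yuv` suffix, and threshold of 100 are the example values above, not fixed by the patent:

```python
from pathlib import Path

def frames_ready(tf_mount, min_count=100, suffix=".yuv"):
    """Return the sorted list of frame files on the TF card mount point
    if at least `min_count` files of the expected format exist,
    otherwise None (the caller then falls back to camera streaming)."""
    frames = sorted(Path(tf_mount).glob(f"*{suffix}"))
    return frames if len(frames) >= min_count else None
```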
S200: and preprocessing the video frame picture to obtain a first picture to be detected.
It should be noted that the preprocessing of the video frame picture can be a conventional picture preprocessing mode, or the preprocessing used when a picture is processed by a neural network algorithm. Referring to fig. 2, the image preprocessing in the figure preprocesses the video frame pictures to obtain the first picture to be detected. Preprocessing is performed once each time a video frame picture is read; since reading video frame pictures is a continuous process, the preprocessing here is also continuous. The first picture to be detected obtained through preprocessing can therefore be understood as a plurality of continuously obtained frames.
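A minimal sketch of this step, under the assumption of a planar YUV420 frame: only luma extraction and scaling to [0, 1] are shown; a real pipeline would typically also resize and color-convert for the detector's input layer.

```python
def preprocess_yuv420(raw, width, height):
    """Extract the luma (Y) plane of one planar YUV420 frame and scale
    it to [0, 1]. A YUV420 frame holds a full-size Y plane followed by
    quarter-size U and V planes, i.e. width * height * 3 / 2 bytes."""
    expected = width * height * 3 // 2
    if len(raw) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(raw)}")
    y_plane = raw[: width * height]
    return [b / 255.0 for b in y_plane]
```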
S300: and identifying the first picture to be detected through a first detection algorithm to obtain a first detection result.
It should be noted that the first detection algorithm may be a conventional image recognition algorithm commonly used in automatic driving of a vehicle, or a neural network algorithm applied to automatic driving. Referring to fig. 2, after image preprocessing the image data is sent to the algorithm for detection and identification; this algorithm can be understood as the first detection algorithm. Since the first picture to be detected is a plurality of continuous pictures obtained through preprocessing, the corresponding first detection results are likewise a plurality of detection results.
S400: and summarizing the first detection results to obtain a first detection result set.
Referring to fig. 2, after the calculation result is obtained through the algorithm, the calculation result can be forwarded to the visual drawing functional module through data summarization. It should be noted that, when the first detection results are summarized, a plurality of detection results obtained through the first detection algorithm may be continuously obtained, and finally the first detection result set is assembled.
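The summarizing step can be pictured with a hypothetical data structure (the field names `label`, `box`, and `track_id` are illustrative assumptions, not terms from the patent): per-frame detections are appended continuously until the first detection result set is assembled.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One detected object: class label, bounding box, optional track id."""
    label: str
    box: tuple          # (x, y, w, h) in pixels
    track_id: int = -1  # -1 means the object is not yet tracked

@dataclass
class ResultSet:
    """Accumulates per-frame detection results into the detection result
    set that is later handed to the visualization module."""
    frames: list = field(default_factory=list)

    def add(self, frame_index, detections):
        self.frames.append((frame_index, detections))
```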
S500: and carrying out visualization processing on the first detection result set according to a preset drawing rule to obtain a first visualization effect image.
It should be noted that the preset drawing rule may be a drawing rule conventional in the autopilot field, used to visualize the data in the first detection result set. Referring to fig. 2, in the visualization processing, the result data in the first detection result set, such as lane lines, frame-selected pedestrians and vehicles, tracking frames, and position data text, are drawn by image-visualization superposition according to the preset drawing rule. The first visualization effect image may be understood as the real-time picture obtained after the visualization processing.
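The superposition idea can be sketched with an in-memory rectangle overlay; on the real board end the drawing would target the screen framebuffer, and the image representation here (rows of RGB tuples) is an assumption made to keep the example self-contained. Using one color per object type illustrates the preset drawing rule:

```python
def draw_box(img, box, color):
    """Draw a rectangle outline onto an RGB image stored as a list of
    rows of (r, g, b) tuples, leaving the interior untouched so the
    underlying picture remains visible inside the frame."""
    x, y, w, h = box
    for i in range(x, x + w):
        img[y][i] = color          # top edge
        img[y + h - 1][i] = color  # bottom edge
    for j in range(y, y + h):
        img[j][x] = color          # left edge
        img[j][x + w - 1] = color  # right edge
    return img
```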
Further, the visual effect image comprises lane lines, frame-selected moving targets, tracking frames, and position data text.
S600: and outputting an initial debugging result according to the first visual effect image.
It can be understood that the first visual effect image corresponds to a real-time picture. From this picture it can be clearly seen whether there is position deviation, whether the object types correspond, whether tracking is correct and real-time, and whether there are false or missed detections, so a corresponding initial debugging result can be output.
Further, the method also comprises the following steps:
updating the first detection algorithm according to the initial debugging result to obtain a second detection algorithm;
reading the video frame picture in the TF card;
preprocessing the video frame picture to obtain a second picture to be detected;
identifying the second picture to be detected through the second detection algorithm to obtain a second detection result;
summarizing the second detection results to obtain a second detection result set;
the second detection result set is visualized according to the preset drawing rule, and a second visualization effect image is obtained;
and outputting a final debugging result according to the second visual effect image.
It should be noted that the first detection algorithm in the present application may iteratively update its corresponding algorithm model according to the initial debugging result, thereby obtaining the second detection algorithm, which is then re-debugged. Referring to fig. 2, after the real-time picture is obtained, problem analysis and modification can be performed on its basis, the initial debugging result is output accordingly, and the first detection algorithm is updated during the problem analysis and modification to obtain the updated second detection algorithm. After the problem analysis and modification, the embedded board end can be powered on again and the corresponding visual debugging method executed.
Further, the method also comprises the following steps:
and updating the second detection algorithm according to the final debugging result.
It can be understood that the second detection algorithm is further updated on the basis of the final debugging result, so that the corresponding algorithm can be more accurate, and the reliability of the algorithm is improved.
The method mainly uses the TF-card frame-recharge mode to simulate real-vehicle road-test pictures and draws the visualized data at the board end, such as dashed lines for lane lines and boxes for pedestrians and vehicles; during visualization processing, object types can be distinguished by different colors as actually selected. Algorithm data and effects are thus seen in real time, achieving the same simulation debugging effect as on a PC and avoiding the unpredictability caused by cross-platform differences. The application is a debugging scheme for verifying an algorithm on an embedded terminal, aimed at the technical problems that, after an algorithm is developed and debugged on a PC and transplanted to the board end, the effect is inconsistent because of platform differences, problems are difficult to reproduce, and the development period is long. Recharging video frame pictures with a TF card and visually simulating a real-vehicle road test solves the cross-platform problem and reduces the safety risks of real-vehicle road testing.
Referring to fig. 3, the present application further provides a visual debugging device 100 for a detection algorithm, which is applied to checking an embedded end algorithm of a vehicle, and includes:
the reading module 11 is used for reading the video frame pictures in the TF card;
a preprocessing module 12, configured to preprocess the video frame picture to obtain a first picture to be detected;
the identifying module 13 is configured to identify the first picture to be detected by using a first detection algorithm, so as to obtain a first detection result;
a summarizing module 14, configured to summarize the first detection results to obtain a first detection result set;
the visualization module 15 is configured to perform visualization processing on the first detection result set according to a preset drawing rule, so as to obtain a first visual effect image;
and the output module 16 is used for outputting an initial debugging result according to the first visual effect image.
Further, the reading module 11 is specifically configured to determine whether a video frame picture that meets a preset condition exists in the TF card; when the video frame pictures meeting the preset conditions exist in the TF card, the video frame pictures in the TF card are read.
Further, the video frame picture is in YUV format.
Further, the visual effect image comprises lane lines, frame-selected moving targets, tracking frames, and position data text.
Further, the visual debugging device 100 further includes:
the algorithm updating module is used for updating the first detection algorithm according to the initial debugging result to obtain a second detection algorithm;
the reading module 11 is further configured to re-read the video frame picture in the TF card;
the preprocessing module 12 is further configured to re-preprocess the video frame picture to obtain a second picture to be detected;
the identifying module 13 is further configured to identify the second picture to be detected by using the second detection algorithm, so as to obtain a second detection result;
the summarizing module 14 is further configured to summarize the second detection result to obtain a second detection result set;
the visualization module 15 is further configured to perform visualization processing on the second detection result set according to the preset drawing rule, so as to obtain a second visual effect image;
the output module 16 is further configured to output a final debugging result according to the second visual effect image.
Furthermore, the algorithm updating module is further configured to update the second detection algorithm according to the final debugging result.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.
Claims (10)
1. The visual debugging method for the detection algorithm is applied to the verification of an embedded end algorithm of a vehicle and is characterized by comprising the following specific steps of:
reading video frame pictures in the TF card;
preprocessing the video frame picture to obtain a first picture to be detected;
identifying the first picture to be detected through a first detection algorithm to obtain a first detection result;
summarizing the first detection results to obtain a first detection result set;
the first detection result set is visualized according to a preset drawing rule, and a first visualization effect image is obtained;
and outputting an initial debugging result according to the first visual effect image.
2. The visual debugging method of claim 1, wherein the video frame picture is in YUV format.
3. The visual debugging method of claim 1, wherein the visual effect image comprises a lane line, a frame-selected moving object, a tracking frame, and location data text.
4. The visual debugging method of claim 1, further comprising the steps of:
updating the first detection algorithm according to the initial debugging result to obtain a second detection algorithm;
reading the video frame picture in the TF card;
preprocessing the video frame picture to obtain a second picture to be detected;
identifying the second picture to be detected through the second detection algorithm to obtain a second detection result;
summarizing the second detection results to obtain a second detection result set;
the second detection result set is visualized according to the preset drawing rule, and a second visualization effect image is obtained;
and outputting a final debugging result according to the second visual effect image.
5. The visual debugging method of claim 4, further comprising the steps of:
and updating the second detection algorithm according to the final debugging result.
6. The visual debugging method of claim 1, wherein the step of reading the video frame picture in the TF card comprises the following specific steps:
judging whether a video frame picture meeting preset conditions exists in the TF card;
when the video frame pictures meeting the preset conditions exist in the TF card, the video frame pictures in the TF card are read.
7. The visual debugging device for the detection algorithm is applied to the verification of an embedded end algorithm of a vehicle and is characterized by comprising the following components:
the reading module is used for reading the video frame pictures in the TF card;
the preprocessing module is used for preprocessing the video frame pictures to obtain first pictures to be detected;
the identification module is used for identifying the first picture to be detected through a first detection algorithm to obtain a first detection result;
the summarizing module is used for summarizing the first detection results to obtain a first detection result set;
the visualization module is used for visualizing the first detection result set according to a preset drawing rule to obtain a first visual effect image;
and the output module is used for outputting an initial debugging result according to the first visual effect image.
8. The visual debugging apparatus of claim 7, wherein the video frame picture is in YUV format.
9. The visual debugging apparatus of claim 7, wherein the visual effect image comprises a lane line, a frame-selected moving object, a tracking frame, and location data text.
10. The visual debugging apparatus of claim 9, further comprising:
the algorithm updating module is used for updating the first detection algorithm according to the initial debugging result to obtain a second detection algorithm;
the reading module is also used for re-reading the video frame pictures in the TF card;
the preprocessing module is further used for preprocessing the video frame picture again to obtain a second picture to be detected;
the identification module is further used for identifying the second picture to be detected through the second detection algorithm to obtain a second detection result;
the summarizing module is further configured to summarize the second detection result to obtain a second detection result set;
the visualization module is further configured to perform visualization processing on the second detection result set according to the preset drawing rule, so as to obtain a second visualization effect image;
and the output module is also used for outputting a final debugging result according to the second visual effect image.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202310101348.9A | 2023-01-20 | 2023-01-20 | Visual debugging method and device for detection algorithm |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN116225921A | 2023-06-06 |
Family
ID=86580019
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202310101348.9A (Pending) | Visual debugging method and device for detection algorithm | 2023-01-20 | 2023-01-20 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN116225921A (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN117316248A | 2023-10-13 | 2023-12-29 | 广东全芯半导体有限公司 | TF card operation intelligent detection system based on deep learning |
| CN117316248B | 2023-10-13 | 2024-03-15 | 广东全芯半导体有限公司 | TF card operation intelligent detection system based on deep learning |
Legal Events

| Date | Code | Title |
| --- | --- | --- |
| | PB01 | Publication |