CN112966334A - Method, system and device for generating traffic scene attribute label

Publication number: CN112966334A
Application number: CN202110242214.XA
Authority: CN (China)
Legal status: Pending (the legal status is an assumption made by Google Patents and is not a legal conclusion; no legal analysis has been performed)
Prior art keywords: traffic, label, traffic scene, scene data, attribute
Original language: Chinese (zh)
Inventors: 康亚滨, 郭倍豊, 朱振夏, 罗洋, 王晶, 沈浴竹
Current and original assignee: Beijing Voyager Technology Co Ltd (the listed assignee may be inaccurate; no legal analysis has been performed)
Application filed by Beijing Voyager Technology Co Ltd
Priority to CN202110242214.XA

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/10 - Geometric CAD
    • G06F 30/15 - Vehicle, aircraft or watercraft design
    • G06F 30/20 - Design optimisation, verification or simulation
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/2431 - Multiple classes
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/26 - Government or public services

Abstract

The embodiment of the present application discloses a method for generating a traffic scene attribute label, which comprises the following steps: acquiring a preset label structure table, wherein the label structure table comprises a plurality of label layers, each label layer comprises a plurality of nodes, each node corresponds to a preset label category, and each node, acting as a child node, is uniquely mapped to a parent node in the previous label layer; acquiring a plurality of pieces of traffic scene data, wherein the traffic scene data at least comprise test parameters of a plurality of traffic participants; and determining a plurality of attribute labels of each traffic participant in each traffic scene based on the test parameters of the plurality of traffic participants and the label structure table, wherein the plurality of attribute labels correspond to the label categories of the respective nodes in the label structure table.

Description

Method, system and device for generating traffic scene attribute label
Technical Field
The present disclosure relates to the field of intelligent traffic technologies, and in particular, to a method, a system, and an apparatus for generating a traffic scene attribute tag.
Background
With the development of intelligent technologies, automatic driving has been developed and applied. The popularization of automatic driving technology can reduce traffic accidents, relieve urban congestion, improve road traffic efficiency, and free people from the time spent on manual driving. At the present stage, the perception, decision-making and control technologies of automatic driving are gradually maturing, and the main obstacle to further popularization of automatic driving is the concern about its safety. To ensure the safety of autonomous vehicles, the mainstream approach is to place the automatic driving strategy in traffic scenes for simulation in order to test its safety. However, unlike conventional tests of a vehicle's passive-safety performance, verifying the safety of automatic driving strategies requires each strategy to be verified against a large number of traffic scenes, and each traffic scene involves many parameter combinations. As a result, the autonomous vehicle has many test scenes, the tests are time-consuming, and the cost is high. This not only increases the development cost of the automatic driving strategy, but also, owing to the complexity of the scene parameters, makes it impossible to quickly acquire the target traffic scenes required for the automatic driving test.
Therefore, a method is needed to accurately and quickly acquire a target traffic scene required for a test.
Disclosure of Invention
One embodiment of the present specification provides a method for generating a traffic scene attribute label, including: acquiring a preset label structure table, wherein the label structure table comprises a plurality of label layers, each label layer comprises a plurality of nodes, each node corresponds to a preset label category, and each node, acting as a child node, is uniquely mapped to a parent node in the previous label layer; acquiring a plurality of pieces of traffic scene data, wherein the traffic scene data at least comprise test parameters of a plurality of traffic participants; and determining a plurality of attribute labels of each traffic participant in each traffic scene based on the test parameters of the plurality of traffic participants and the label structure table, wherein the plurality of attribute labels correspond to the label categories of the respective nodes in the label structure table.
One embodiment of the present specification further provides a traffic scene attribute tag generation system, the system including: a tag structure table acquisition module configured to acquire a preset tag structure table, wherein the tag structure table comprises a plurality of tag layers, each tag layer comprises a plurality of nodes, each node corresponds to a preset tag category, and each node, acting as a child node, is uniquely mapped to a parent node in the previous tag layer; a traffic scene data acquisition module configured to acquire a plurality of pieces of traffic scene data, the traffic scene data at least comprising test parameters of a plurality of traffic participants; and an attribute tag determination module configured to determine a plurality of attribute tags of each traffic participant in each traffic scene based on the test parameters of the plurality of traffic participants and the tag structure table, wherein the plurality of attribute tags correspond to the tag categories of the respective nodes in the tag structure table.
One of the embodiments of the present specification further provides an apparatus for determining a traffic scene attribute tag, the apparatus including a processor and a memory; the memory is configured to store instructions that the processor is configured to execute to implement a method of determining a traffic scene attribute tag.
One of the embodiments of the present specification also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement a method of determining a traffic scene attribute tag.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram illustrating an application scenario of a traffic scenario attribute tag generation system in accordance with some embodiments of the present description;
FIG. 2 is a system block diagram of a traffic scenario attribute tag generation system in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a traffic scenario attribute tag generation method, shown in accordance with some embodiments of the present description;
FIG. 4 is an exemplary schematic diagram of a preset tag structure table, shown in accordance with some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that the terms "system", "device", "unit" and/or "module" as used herein are one way of distinguishing different components, elements, parts, portions or assemblies at different levels. However, these terms may be replaced by other expressions that accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Rather, the steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more operations may be removed from them.
Embodiments of the present description may be applied to different unmanned transportation systems, including but not limited to one or a combination of land, marine, aviation, aerospace and similar systems, for example taxis, chauffeured cars, hitch rides, buses, designated-driving services, trains, railcars, high-speed rail, ships, airplanes, hot air balloons, and other unmanned vehicles. It is understood that the application scenarios of the system and method of the present specification are merely examples or embodiments, and it will be obvious to those of ordinary skill in the art that the present specification can also be applied to other similar scenarios according to the drawings without inventive effort, for example, other similar systems such as systems for guiding a user to park.
Fig. 1 is a schematic diagram illustrating an application scenario of a traffic scenario attribute tag generation system according to some embodiments of the present disclosure.
As shown in fig. 1, the traffic scenario attribute tag generation system 100 may specifically include a processing device 110, an autonomous vehicle 120, a storage device 130, a scenario simulation end 140, and a network 150.
In some embodiments, the processing device 110 may obtain data and/or information from the autonomous vehicle 120, the storage device 130, and the scene simulation end 140. For example, the processing device 110 may obtain virtual traffic scene data generated by the scene simulation end 140, wherein the traffic scene data includes at least test parameters of a plurality of traffic participants. As another example, the processing device 110 may obtain, from the storage device 130, a plurality of pieces of traffic scene data stored over a period of time. As another example, the processing device 110 may obtain a preset tag structure table from the storage device 130. As yet another example, the processing device 110 may obtain, from the autonomous vehicle 120, video and/or sensor information captured while the autonomous vehicle is traveling in the field.
Further, processing device 110 may perform one or more of the functions described herein based on the information and/or data obtained as described above. For example, the processing device 110 may determine traffic scene data based on video and/or sensor information collected while the autonomous vehicle is traveling at the actual site. For another example, the processing device 110 may determine the attribute tags of the traffic participants in each traffic scene based on the test parameters of the traffic participants in the traffic scene data and a preset tag structure table. For another example, the processing device 110 may obtain a filtering condition corresponding to the target traffic scene data, and obtain the target traffic scene data satisfying the filtering condition from the plurality of traffic scene data based on the filtering condition. As another example, the processing device 110 may perform a test of an autonomous driving maneuver based on the target traffic scenario data.
In some embodiments, the processing device 110 may be a stand-alone server or a group of servers. The group of servers may be centralized or distributed (e.g., the processing device 110 may be a distributed system). In some embodiments, the processing device 110 may be local or remote. For example, the processing device 110 may access information and/or data stored in the storage device 130 and the scene simulation end 140 via the network 150. In some embodiments, the processing device 110 may interface directly with the scene simulation end 140, the storage device 130, and/or the autonomous vehicle 120 to access the information and/or data stored therein. In some embodiments, the processing device 110 may execute on a cloud platform. For example, the cloud platform may include one or any combination of a private cloud, a public cloud, a hybrid cloud, a community cloud, a decentralized cloud, an internal cloud, and the like. In other embodiments, the processing device 110 may simultaneously serve as one of the scene simulation ends 140.
In some embodiments, processing device 110 may include one or more sub-processing devices (e.g., a single-core processor or a multi-core processor). By way of example only, processing device 110 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
In some embodiments, the autonomous vehicle 120 may include various types of vehicles, such as a sedan 120-1, an off-road vehicle 120-2, …, a sports car 120-n, and so forth. In some embodiments, an image capture device is provided on autonomous vehicle 120. The image capture device may include, but is not limited to, a video camera, a CCD camera, an infrared camera, and the like, and combinations thereof. The image capturing device may be configured to obtain video information around the transportation vehicle, and upload the obtained video to the storage device 130 or directly to the processing device 110.
In some embodiments, sensors may also be disposed on autonomous vehicle 120, including, but not limited to, positioning devices, optical sensors, infrared sensors, temperature and humidity sensors, position sensors, pressure sensors, distance sensors, speed sensors, acceleration sensors, gravity sensors, displacement sensors, moment sensors, gyroscopes, and the like, or any combination thereof. For example, temperature and humidity data of the external environment may be acquired based on a temperature and humidity sensor. For another example, acceleration information of the autonomous vehicle 120 may be acquired based on a pressure sensor/acceleration sensor/gravity sensor. For another example, the vibration information of the vehicle may also be acquired based on a gyro/gravity sensor.
In some embodiments, the scene simulation end 140 may be a device with data acquisition, storage, and/or transmission capabilities. For example, the scene simulation end 140 may include a mobile phone 140-1, a tablet computer 140-2, a notebook computer 140-3, and the like. In some embodiments, the scene simulation end 140 may edit the virtual traffic scene to verify the safety of the automatic driving strategy. For example, the scene simulation end 140 may edit dynamic information such as the driving direction and the driving speed of each pedestrian and vehicle in the virtual traffic scene; it may also edit static road information such as traffic light indication information, weather information, road condition information, and road driving information (e.g., speed limits, traffic restrictions).
Storage device 130 may store data and/or instructions. In some embodiments, the storage device 130 may be used to store data/information transmitted by the scene simulator end 140 (e.g., virtual traffic scene data generated by the scene simulator end 140). In some embodiments, storage device 130 may also be used to store video information collected by video collection equipment disposed on autonomous vehicle 120. In some embodiments, storage device 130 may also be used to store information collected by sensors disposed on autonomous vehicle 120.
In some embodiments, storage device 130 may also store data and/or instructions for execution or use by processing device 110 to perform the example methods described in this specification. For example, the storage device 130 may store instructions that cause the processing device 110 to obtain traffic scene data. As another example, the storage device 130 may store instructions for the processing device 110 to determine attribute tags for various traffic participants in each traffic scenario.
In some embodiments, the storage device 130 may be part of the processing device 110 or the scene simulation end 140. In some embodiments, the storage device 130 may be connected to the network 150 to communicate with one or more components (e.g., processing device 110, autonomous vehicle 120, scene simulation end 140) in the traffic scene attribute tag generation system 100. One or more components in the traffic scene attribute tag generation system 100 may access data or instructions stored in the storage device 130 via the network 150. In some embodiments, storage device 130 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. Exemplary mass storage may include magnetic disks, optical disks, solid state disks, and the like. Exemplary removable memory may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read-write memory can include Random Access Memory (RAM). Exemplary RAM may include Dynamic RAM (DRAM), double-data-rate synchronous dynamic RAM (DDR SDRAM), Static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitance RAM (Z-RAM), and the like. Exemplary ROMs may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. In some embodiments, storage device 130 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof.
The network 150 may facilitate the exchange of information and/or data. In some embodiments, one or more components in the traffic scene attribute tag generation system 100 (e.g., the processing device 110, the autonomous vehicle 120, the storage device 130, and the scene simulation end 140) may send and/or receive information and/or data to/from other components in the traffic scene attribute tag generation system 100 via the network 150. In some embodiments, the network 150 may be any form or combination of wired or wireless network. By way of example only, network 150 may include a cable network, a wireline network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, a Global System for Mobile communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a General Packet Radio Service (GPRS) network, an enhanced data rates for GSM evolution (EDGE) network, a Wideband Code Division Multiple Access (WCDMA) network, a High Speed Downlink Packet Access (HSDPA) network, a Long Term Evolution (LTE) network, a User Datagram Protocol (UDP) network, a Transmission Control Protocol/Internet Protocol (TCP/IP) network, a Short Message Service (SMS) network, a Wireless Application Protocol (WAP) network, an ultra-wideband (UWB) network, a mobile communication (1G, 2G, 3G, 4G, 5G) network, Wi-Fi, Li-Fi, narrowband Internet of Things (NB-IoT), and the like, or any combination thereof.
FIG. 2 is a system block diagram of a traffic scenario attribute tag generation system in accordance with some embodiments of the present description.
As shown in fig. 2, the traffic scenario attribute tag generation system 200 may include: a tag structure table acquisition module 210, a traffic scene data acquisition module 220, and an attribute tag determination module 230. In some embodiments, the traffic scenario attribute tag generation system 200 may be disposed on the processing device 110, wherein:
a tag structure table acquisition module 210, configured to acquire a preset tag structure table, wherein the tag structure table comprises a plurality of tag layers, each tag layer comprises a plurality of nodes, each node corresponds to a preset tag category, and each node, acting as a child node, is uniquely mapped to a parent node in the previous tag layer;
a traffic scene data acquiring module 220, configured to acquire a plurality of traffic scene data, where the traffic scene data at least includes test parameters of a plurality of traffic participants;
an attribute tag determination module 230, configured to determine a plurality of attribute tags of each traffic participant in each traffic scene based on the test parameters of the plurality of traffic participants and the tag structure table; the plurality of attribute tags correspond to tag categories for respective nodes in the tag structure table.
In some embodiments, the traffic scenario data acquisition module 220 is further configured to: acquiring video information acquired when an automatic driving vehicle drives in an actual field based on an image acquisition device on the automatic driving vehicle, and carrying out parametric representation on the video information to acquire test parameters of all traffic participants; and/or acquiring the test parameters of the traffic participants in the simulation scene based on the preset simulation scene of the automatic driving vehicle.
In some embodiments, the test parameters include: at least one of the type of traffic participant, driving speed, driving direction, driving lane.
In some embodiments, the traffic scene data further comprises road information comprising: at least one of traffic light information, weather information, road condition information, and road driving information.
In some embodiments, the traffic scenario data acquisition module 220 is further configured to: screening relevant participants related to the automatic driving strategy test based on preset judgment conditions; and acquiring the test parameters of the relevant participants, and determining the traffic scene data based on the test parameters of the relevant participants. In some embodiments, the preset determination condition includes: the distance of the traffic participant from the autonomous vehicle and/or the relationship between the traffic participant and the planned driving route of the autonomous vehicle and/or the positional relationship between the traffic participant and the autonomous vehicle.
In some embodiments, the system further comprises: and a target traffic scene data screening module 240. The target traffic scene data screening module 240 is configured to obtain a screening condition corresponding to the target traffic scene data, and based on the screening condition, obtain the target traffic scene data meeting the screening condition from the multiple traffic scene data.
Still further, the system may also include an automated driving maneuver testing module 250. The autopilot strategy testing module 250 is configured to perform autopilot strategy testing based on the target traffic scenario data.
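By way of example and not limitation, the cooperation between the above modules can be sketched in Python roughly as follows. This sketch is not part of the original disclosure; every class, method and field name in it is a hypothetical assumption that merely mirrors the module names used above.

    # Illustrative sketch only: a hypothetical Python skeleton mirroring modules 210-250.
    from typing import Any, Callable, Dict, List

    class TrafficSceneAttributeTagSystem:
        """Hypothetical skeleton of system 200; all names are assumptions."""

        def __init__(self, tag_structure_table: Dict[str, Any], scenes: List[Dict[str, Any]]):
            # Module 210: the preset tag structure table (e.g., loaded from storage device 130).
            self.tag_structure_table = tag_structure_table
            # Module 220: the acquired traffic scene data.
            self.scenes = scenes

        def determine_attribute_tags(self, tag_fn: Callable[[Dict[str, Any], Dict[str, Any]], Dict[str, str]]) -> None:
            # Module 230: map each participant's test parameters to tag categories.
            for scene in self.scenes:
                for participant in scene["participants"]:
                    participant["tags"] = tag_fn(participant, self.tag_structure_table)

        def screen_target_scenes(self, condition: Callable[[Dict[str, Any]], bool]) -> List[Dict[str, Any]]:
            # Module 240: keep only the scenes that satisfy the screening condition.
            return [scene for scene in self.scenes if condition(scene)]

        def run_strategy_test(self, strategy_fn: Callable[[Dict[str, Any]], bool], targets: List[Dict[str, Any]]) -> List[bool]:
            # Module 250: execute the automatic driving strategy on each target scene.
            return [strategy_fn(scene) for scene in targets]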
It should be appreciated that the system and its modules in one or more implementations of the present description may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules in this specification may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the processing device and its modules is merely for convenience of description and is not intended to limit the present description to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, any combination of modules or sub-system configurations may be used to connect to other modules without departing from such teachings.
FIG. 3 is an exemplary flow diagram of a traffic scene attribute tag generation method, shown in accordance with some embodiments of the present description. In some embodiments, one or more steps of method 300 may be implemented in the system 100 shown in FIG. 1. For example, one or more steps of the method 300 may be stored as instructions in the storage device 130 and invoked and/or executed by the processing device 110 (or the traffic scene attribute tag generation system 200).
Step 310, obtaining a preset label structure table. In some embodiments, step 310 may be performed by the tag structure table acquisition module 210.
In some embodiments, the tag structure table is already manually set, and is pre-stored in the corresponding memory (e.g., the storage device 130) and called by the tag structure table obtaining module 210.
The tag structure table is a preset, structured tag catalogue that comprises a plurality of tag layers. Each tag layer comprises a plurality of nodes, and each node corresponds to a set tag category. In some embodiments, nodes of adjacent tag layers may be set as parent-child nodes; a parent node may include multiple child nodes, but each child node corresponds to one and only one parent node.
In some embodiments, the tag structure table may be a tree structure (for example, a tree that can be stored and traversed in MongoDB using Graph Lookup). The tree structure comprises a root node, intermediate nodes and leaf nodes. The root node (acting as a parent node) is connected with one or more intermediate nodes (acting as child nodes) of the next layer; each intermediate node, in turn, acts as a new parent node connected with the intermediate nodes (or leaf nodes) of the next layer, and the process is repeated until all the nodes of the next layer are leaf nodes. It will be appreciated that the root node has no node above it, and a leaf node cannot act as a parent node connected to further nodes.
Fig. 4 is a schematic diagram of a configured tag structure table; it should be noted that the diagram is merely an exemplary illustration of the tag structure table and does not constitute a limitation of the present specification. In fig. 4, the traffic participant is the root node, which acts as a parent node connected with child nodes such as motor vehicles, non-motor vehicles, pedestrians, and pets. Further, these child nodes can in turn act as new parent nodes connected with nodes of the next layer; for example, child nodes such as "truck", "car", and "bus" can be set under the node "motor vehicle", and a series of child tags such as "vehicle speed", "driving direction", "driving lane", and "position relative to the autonomous vehicle" can be set under the node "car". Furthermore, the "driving direction" node can be connected with leaf nodes such as "left turn", "straight ahead", "left lane change", and "right turn".
It should be noted that fig. 4 only illustrates the branch "traffic participant - motor vehicle - car - driving direction" as an example; it can be understood that the other branches can be expanded in a similar manner by those skilled in the art, as sketched below. For example, labels such as "non-motor vehicle", "pedestrian", and "pet" may be expanded step by step according to the above method until leaf nodes are obtained. Specifically, the label "non-motor vehicle" can be split into "motorcycle", "electric bicycle", "tricycle", "bicycle", and the like; further, a series of child tags such as "speed", "driving direction", "driving lane", and "position relative to the autonomous vehicle" can be set under the node "motorcycle"; furthermore, the "driving direction" node can be connected with leaf nodes such as "left turn", "straight ahead", "left lane change", and "right turn".
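As a purely illustrative sketch (not part of the original disclosure), the tree-structured tag table of fig. 4 could be represented as nested parent-to-child mappings, for example in Python as below; the node names follow the example branch above, and the dictionary representation itself is an assumption rather than a required implementation.

    # Illustrative sketch: the preset tag structure table as nested parent-to-child mappings.
    # Only the branch "traffic participant - motor vehicle - car - driving direction" from
    # fig. 4 is filled in; the other branches would be expanded in the same way.
    TAG_STRUCTURE_TABLE = {
        "traffic participant": {
            "motor vehicle": {
                "car": {
                    "driving direction": ["left turn", "straight ahead", "left lane change", "right turn"],
                    "vehicle speed": [],                     # leaf categories omitted in this sketch
                    "driving lane": [],
                    "position relative to autonomous vehicle": [],
                },
                "truck": {},
                "bus": {},
            },
            "non-motor vehicle": {},
            "pedestrian": {},
            "pet": {},
        }
    }

    def iter_paths(table, prefix=()):
        """Yield every root-to-node path; by construction each child has exactly one parent."""
        for node, children in table.items():
            path = prefix + (node,)
            yield path
            if isinstance(children, dict):               # recurse into child layers only
                yield from iter_paths(children, path)

    for path in iter_paths(TAG_STRUCTURE_TABLE):
        print(" -> ".join(path))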
Step 320, obtaining a plurality of traffic scene data, wherein the traffic scene data at least comprises the test parameters of a plurality of traffic participants. In some embodiments, step 320 may be performed by the traffic scene data acquisition module 220.
The test parameters of a traffic participant may include at least one of the type, driving speed, driving direction, and driving lane of the traffic participant; these parameters characterize the attributes of the traffic participant.
In some embodiments, the test parameters of all traffic participants in a traffic scene, taken together, are the parameters that characterize the traffic scene. For example, if a bicycle and a car are present in a certain traffic scene, the whole traffic scene data can be formulated by describing the driving speed, driving direction, driving lane and other test parameters of both the bicycle and the car.
In some embodiments, the traffic scene data may also include road information. The road information includes at least one of traffic light indication information, weather information, traffic signs, and road condition information. Further, the traffic light indication information may include a red light, a yellow light, a green light, a yellow light flashing, a green light flashing, and the like. The weather information may include rainy, snowy, sunny days, etc. The traffic signs can include traffic indication information such as speed limit, height limit, traffic restriction and the like. The road condition information may include road surface gradient, side inclination, road surface adhesion coefficient, road surface wet skid, translation inertia of the vehicle during running, road surface running resistance, flatness, and the like. Road information may have an effect on the driving behavior of the traffic participants, and thus different driving behaviors may be generated in different states of the road information. For example, in rainy and snowy weather and wet and slippery road conditions, the driving behavior of the traffic participants is more cautious, and the driving speed and lane change of the traffic participants are possibly reduced.
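For illustration only, the traffic scene data described above (test parameters of traffic participants plus road information) might be organized as in the following Python sketch; all field names are assumptions introduced for this example, not part of the original disclosure.

    # Illustrative sketch: hypothetical containers for traffic scene data.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TrafficParticipant:
        participant_type: str                  # e.g., "car", "bicycle", "pedestrian"
        driving_speed: float                   # m/s
        driving_direction: str                 # e.g., "straight ahead", "left turn"
        driving_lane: Optional[str] = None

    @dataclass
    class RoadInformation:
        traffic_light: Optional[str] = None    # e.g., "red", "yellow flashing"
        weather: Optional[str] = None          # e.g., "rain", "snow", "clear"
        traffic_signs: List[str] = field(default_factory=list)   # e.g., ["speed limit 60"]
        road_condition: Optional[str] = None   # e.g., "wet", "gradient 5%"

    @dataclass
    class TrafficScene:
        participants: List[TrafficParticipant]
        road_info: RoadInformation

    # Example scene: one bicycle and one car in rainy weather.
    scene = TrafficScene(
        participants=[
            TrafficParticipant("bicycle", 4.0, "straight ahead", "non-motor lane"),
            TrafficParticipant("car", 12.0, "left lane change", "lane 2"),
        ],
        road_info=RoadInformation(traffic_light="green", weather="rain"),
    )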
In some embodiments, the traffic scene data may be obtained from video information captured while the autonomous vehicle is driving at the actual site (e.g., captured by an image capture device on the autonomous vehicle 120). The image capture device may include, but is not limited to, a video camera, a CCD camera, an infrared camera, and the like, and combinations thereof. In this embodiment, the image capture device obtains video information around the vehicle, and the traffic scene data acquisition module 220 then parameterizes the obtained video information to obtain the test parameters of the traffic participants.
In some embodiments, the video information acquired while driving at the actual site may not cover all possible traffic scenes. In this embodiment, extreme scenes that rarely occur during operation at an actual site can be simulated by a simulation method, thereby improving the diversity of the traffic scene data and making it possible to verify the safety of the automatic driving strategy under extreme conditions. Further, the traffic scene data acquisition module 220 may extract traffic scene data based on a simulated scene of the autonomous vehicle. It can be understood that when the virtual autonomous vehicle runs in the simulation scene, the parameters of the traffic participants and the road information in the scene are preset; in this case, the traffic scene data acquisition module 220 may directly call the corresponding parameters in the simulation scene to obtain the test parameters of the traffic participants and thereby obtain the corresponding traffic scene data.
In some embodiments, the parameterization of the video information collected during driving in the actual field may be based on a machine learning model. For example, the traffic scene data obtaining module 220 may obtain corresponding test parameters based on video information (e.g., video with a time length of 20 s) over a period of time and further based on a trained parameter extraction model.
The training process of the parameter extraction model is described below by taking the driving speed of a traffic participant as the test parameter. In this case, a plurality of pieces of video information that have been manually labeled (e.g., 1000 training samples) need to be obtained as the training set of the parameter extraction model.
The parameter extraction model may be a convolutional neural network, which may include two kinds of layers. One is the feature extraction layer: the input of each neuron is connected to the local receptive field of the previous layer, and local features are extracted. The other is the feature mapping layer: each computational layer of the network consists of a plurality of feature maps, each feature map is a plane, and all neurons on the plane share equal weights. Specifically, the input of the feature extraction layer may be the trajectory information corresponding to the video of a certain traffic participant in the training set, and the feature extraction layer may generate a trajectory matrix corresponding to the motion trajectory. For example, for a motion trajectory in 3-dimensional coordinates (an X-Y-Z coordinate system), the rows of the trajectory matrix may correspond to the motion length of the trajectory along the X axis, the columns may correspond to the motion height along the Y axis, and the elements may correspond to the motion along the Z axis. It can be understood that the feature extraction layer encodes the motion trajectory; through this arrangement, the parameter extraction model can construct the mapping relationship between the trajectory matrix and the motion trajectory. Furthermore, the feature mapping layer can extract the driving speed and/or the driving direction of the traffic participant based on the obtained trajectory matrix, compare them with the labeled values of the input training data to determine a loss function, and perform back-propagation based on the loss function, which constitutes one update of the parameter extraction model. The iterations of the parameter extraction model are repeated until the iteration process meets the requirements (e.g., the loss function is smaller than a threshold), thereby obtaining a well-trained, high-precision parameter extraction model.
It should be noted that the above training process of the parameter extraction model is an exemplary description in which the input is a picture/video and the output is a driving speed and/or a driving direction. It can be understood that when the labeled value is the type of the traffic participant (motor vehicle, non-motor vehicle, pedestrian, pet), the input of the feature extraction layer may be a picture corresponding to a certain traffic participant in the training set, from which a picture matrix is generated; the feature mapping layer can then perform classification based on the picture matrix and compare the result with the labeled value of the input training data to determine a loss function, so that a high-precision parameter extraction model can likewise be obtained through training.
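By way of a hedged illustration only, such a training loop could look roughly like the following sketch, which assumes the PyTorch library and randomly generated stand-in data; the layer sizes, the trajectory-matrix encoding and the stopping threshold are assumptions and do not reproduce the actual model of this disclosure.

    # Illustrative sketch only (assumes PyTorch): regressing driving speed from a trajectory matrix.
    import torch
    import torch.nn as nn

    class ParameterExtractionModel(nn.Module):
        def __init__(self):
            super().__init__()
            # "Feature extraction layer": convolutions over the trajectory matrix.
            self.feature_extraction = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            # "Feature mapping layer": maps the extracted features to the driving speed.
            self.feature_mapping = nn.Sequential(nn.Flatten(), nn.Linear(16 * 4 * 4, 1))

        def forward(self, trajectory_matrix):
            return self.feature_mapping(self.feature_extraction(trajectory_matrix))

    model = ParameterExtractionModel()
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy training loop over manually labeled samples (random tensors stand in for
    # the manually labeled trajectory matrices mentioned above).
    trajectories = torch.randn(32, 1, 64, 64)      # batch of trajectory matrices
    labeled_speed = torch.rand(32, 1) * 30.0       # manually labeled driving speeds
    for step in range(100):
        optimizer.zero_grad()
        loss = criterion(model(trajectories), labeled_speed)
        loss.backward()
        optimizer.step()
        if loss.item() < 0.5:                      # stop when the loss falls below a threshold
            break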
In some embodiments, sensors may also be provided on autonomous vehicle 120. The sensors include, but are not limited to, a positioning device, an optical sensor, an infrared sensor, a temperature and humidity sensor, a position sensor, a pressure sensor, a distance sensor, a velocity sensor, an acceleration sensor, a gravity sensor, a displacement sensor, a moment sensor, a gyroscope, or the like, or any combination thereof, or the like. In this scenario embodiment, the traffic scenario data acquisition module 220 may acquire the test parameters of the traffic participants based directly on the sensors. For example, the traffic scene data acquiring module 220 may acquire temperature and humidity data of the external environment based on a temperature and humidity sensor. For another example, the traffic scene data acquisition module 220 may acquire acceleration information of the autonomous vehicle 120 based on a pressure sensor/acceleration sensor/gravity sensor. For another example, the traffic scene data acquiring module 220 may also acquire the slope, the roll angle, the flatness, and the like of the road based on the gyroscope/gravity sensor.
In some embodiments, the traffic scene data acquisition module 220 may screen relevant participants related to the automated driving maneuver test based on preset decision conditions. It can be understood that the relevant participants are traffic participants strongly related to the automatic driving strategy test, and the influence of the traffic participants on the automatic driving strategy test is large.
In some embodiments, the preset determination conditions include: the distance of the traffic participant from the autonomous vehicle, and/or the relationship between the traffic participant and the planned driving route of the autonomous vehicle, and/or the positional relationship between the traffic participant and the autonomous vehicle. For example, if the distance between a traffic participant and the autonomous vehicle exceeds a certain threshold (e.g., 100 m), the traffic participant is not considered a relevant participant. As another example, when the autonomous vehicle is about to turn left and a traffic participant on its right side is turning right, that traffic participant is not a relevant participant. As another example, if a traffic participant is 30 m ahead of the autonomous vehicle (within a preset threshold) and is changing lanes into the lane of the autonomous vehicle, that traffic participant is a relevant participant.
The traffic scene data acquisition module 220 may remove traffic participants irrelevant to the autopilot strategy test by acquiring relevant participants according to preset determination conditions, and screen out traffic participants having a large influence on the autopilot strategy determination, so as to reduce the calculation amount in the subsequent test.
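A minimal, purely illustrative sketch of such screening is given below; the 100 m distance threshold and the turning rule follow the examples above, while the field names, coordinates and specific rules are assumptions introduced for this example.

    # Illustrative sketch: screening relevant participants by the preset determination conditions.
    import math

    ego_vehicle = {"x": 0.0, "y": 0.0, "intent": "left turn", "planned_lane": "lane 2"}
    participants = [
        {"id": 1, "x": 150.0, "y": 10.0, "side": "left",  "intent": "straight ahead",   "target_lane": "lane 1"},
        {"id": 2, "x": 20.0,  "y": 5.0,  "side": "right", "intent": "right turn",       "target_lane": "lane 3"},
        {"id": 3, "x": 30.0,  "y": 0.0,  "side": "front", "intent": "left lane change", "target_lane": "lane 2"},
    ]

    def is_relevant(participant, ego, distance_threshold=100.0):
        """Return True if the participant should be kept for the automatic driving strategy test."""
        if math.hypot(participant["x"] - ego["x"], participant["y"] - ego["y"]) > distance_threshold:
            return False                       # too far from the autonomous vehicle
        if ego["intent"] == "left turn" and participant["side"] == "right" \
                and participant["intent"] == "right turn":
            return False                       # turning away from the ego vehicle's planned route
        return True                            # e.g., changing lanes into the ego vehicle's planned lane

    relevant = [p for p in participants if is_relevant(p, ego_vehicle)]
    print([p["id"] for p in relevant])         # -> [3]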
Step 330, determining a plurality of attribute labels of each traffic participant in each traffic scene based on the test parameters of the plurality of traffic participants and the label structure table. In some embodiments, step 330 may be performed by attribute tag determination module 230.
The attribute tag determination module 230 may compare the test parameters of the plurality of traffic participants with the tag information in the tag structure table to obtain the attribute tags of each traffic participant. The attribute tags correspond to the tag categories of the respective nodes in the tag structure table (e.g., the tag structure table in fig. 4). For example, the attribute tags may mark the type of traffic participant 1 as a motor vehicle, its vehicle type as a car, its driving direction as a left lane change, and so on.
In some embodiments, the attribute tags for each of the traffic participants may be manually labeled and directly invoked by the attribute tag determination module 230. It can be understood that this determination is accurate, but the labor cost is high.
In some embodiments, the attribute tag determination module 230 may employ an attribute tag classification model for automatic classification. Specifically, the attribute tags of a portion of the traffic participants are labeled manually, and training is performed based on these labeled attribute tags to obtain a trained classification model. The test parameters of the remaining unlabeled traffic participants are then predicted by the trained classification model to obtain the attribute tags of each traffic participant. Automatically labeling the attribute tags of the traffic participants with a classification model avoids the high labor cost of manual expert labeling and effectively improves the labeling efficiency of the attribute tags.
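For illustration only, the idea of classifying from a manually labeled subset could be sketched as follows, assuming the scikit-learn library; the choice of a decision tree, the feature encoding and the toy data are assumptions and are not the classification model of this disclosure.

    # Illustrative sketch (assumes scikit-learn): train on manually labeled participants,
    # then predict attribute tags for the unlabeled ones.
    from sklearn.tree import DecisionTreeClassifier

    # Test parameters: [driving_speed (m/s), driving_direction code, driving_lane code]
    labeled_params = [[12.0, 0, 1], [1.2, 1, 2], [8.5, 2, 1], [0.9, 1, 2]]
    labeled_tags   = ["motor vehicle", "pedestrian", "motor vehicle", "pedestrian"]

    clf = DecisionTreeClassifier(max_depth=3)
    clf.fit(labeled_params, labeled_tags)

    # Remaining participants without manual labels.
    unlabeled_params = [[10.3, 0, 1], [1.0, 2, 2]]
    predicted_tags = clf.predict(unlabeled_params)
    print(list(predicted_tags))                # e.g., ['motor vehicle', 'pedestrian']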
In some embodiments, the attribute label classification model may be trained jointly with the parameter extraction model. In the joint training process, the attribute label classification model and the parameter extraction model may use the same training sample set (hereinafter referred to as a second training set). The second training set contains the captured video information and the labeled values, wherein the labeled values can be classes that have been labeled manually. For certain video information, the video information can be input into the parameter extraction model, and corresponding test parameters are output. And further, inputting the test parameters into an attribute label classification model to obtain a predicted classification value. Further, the predicted classification value is compared with the labeled value to form a loss value. In the process of joint training, a corresponding loss function (for example, in an average or weighted average manner) may be constructed based on the loss values of the attribute label classification model and the parameter extraction model, and back propagation may be performed based on the loss function, so as to form an update process of one-time joint training.
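A hedged sketch of one such joint update step is shown below, again assuming PyTorch; the two tiny linear stand-in models and the equal 0.5/0.5 weighting are assumptions used only to illustrate combining the two loss terms.

    # Illustrative sketch: one joint-training update combining both losses.
    import torch
    import torch.nn as nn

    extractor  = nn.Linear(8, 1)      # stand-in "parameter extraction" model: features -> speed
    classifier = nn.Linear(1, 3)      # stand-in "attribute label" model: speed -> 3 tag classes
    optimizer  = torch.optim.Adam(list(extractor.parameters()) + list(classifier.parameters()), lr=1e-3)

    video_features = torch.randn(16, 8)           # stand-in features from the second training set
    speed_labels   = torch.rand(16, 1) * 30.0     # manually labeled speeds
    tag_labels     = torch.randint(0, 3, (16,))   # manually labeled tag classes

    for step in range(50):
        optimizer.zero_grad()
        speed_pred = extractor(video_features)
        tag_logits = classifier(speed_pred)
        loss_extract  = nn.functional.mse_loss(speed_pred, speed_labels)
        loss_classify = nn.functional.cross_entropy(tag_logits, tag_labels)
        joint_loss = 0.5 * loss_extract + 0.5 * loss_classify   # weighted-average combination
        joint_loss.backward()                      # back-propagate through both models
        optimizer.step()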
In some embodiments, the traffic scene attribute tag generation system (e.g., system 200) may further establish a screening condition after acquiring the attribute tags corresponding to the traffic participants to acquire the target traffic scene meeting the test requirements. In this scenario embodiment, the method 300 further includes step 340 of determining a filtering condition corresponding to the target traffic scene data, and acquiring the target traffic scene data satisfying the filtering condition from the plurality of traffic scene data. In some embodiments, step 340 may be performed by target traffic scenario data screening module 240.
The target traffic scene data filtering module 240 may establish corresponding filtering conditions based on the target traffic scene data required to be obtained.
In some embodiments, the target traffic scenario data filtering module 240 may establish the screening condition based on an attribute tag. For example, the target traffic scenario data filtering module 240 may establish the screening condition "the type of traffic participant is a motor vehicle". As another example, the screening condition may be "the traffic participant is traveling on a non-motor-vehicle lane". For instance, when the automatic driving strategy to be tested is changing lanes and going straight on a road, all traffic scenes containing motor vehicles may be screened out based on the screening condition "the type of traffic participant is a motor vehicle" in order to test the strategy. As another example, when the automatic driving strategy is "turn right into the secondary lane", all traffic scenes in which traffic participants travel on a non-motor-vehicle lane may be acquired based on the screening condition "the traffic participant is traveling on a non-motor-vehicle lane" in order to test the strategy.
In some embodiments, the target traffic scenario data filtering module 240 may also collectively serve as a filtering condition based on a combination of multiple attribute tags. For example, a screening condition "the type of the traffic participant is motor vehicle and turns left" may be established. Also for example, a screening condition "traffic participant is a car and turns left and the traffic light is a yellow light" may also be established.
By adopting the combination mode of a plurality of attribute labels, on one hand, the traffic scene data can be accurately positioned, so that the traffic data of a certain specific scene can be acquired more quickly. For example, when testing a left turn of an autonomous vehicle, a "traffic participant is a car and turning left and the traffic light is yellow" situation may be screened out for more accurate testing.
On the other hand, a more complex traffic scene can be screened out by combining a plurality of attribute tags so as to verify the safety of the automatic driving strategy in the complex scene. For an updated automatic driving strategy, a more complex traffic scene can be preferentially tested to find problems for correction, so that repeated testing of a simple scene can be avoided, and the testing efficiency of the strategy is effectively improved.
In some embodiments, the target traffic scenario data screening module 240 may also establish screening conditions based on the number of traffic participants of the same type. For example, the screening condition may be "there are 3 cars, 2 trucks and 4 pedestrians, and the traffic light is a yellow light". Within this screening condition, the attribute tags of the "3 cars, 2 trucks, 4 pedestrians" (such as the driving speed, path, lane, and position relative to the autonomous vehicle of each car, truck and pedestrian) may be further specified. It can be understood that the greater the number of traffic participants, the greater the complexity of the resulting traffic scene, and the more demanding the scene is for verifying the safety of the automatic driving strategy.
The target traffic scene data screening module 240 may screen out the target traffic scene data that meets the requirements based on the determined screening conditions. For example, when the screening condition is "3 cars, 2 trucks, 4 pedestrians, and the traffic light is yellow light", the traffic scene data meeting the above requirements may be screened out as the target traffic scene data.
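For illustration only, screening target scenes by a combination of attribute tags could be sketched as follows; the scene dictionaries and tag names are assumptions that follow the "car, turning left, yellow light" example above.

    # Illustrative sketch: screening target traffic scenes by combined attribute tags.
    scenes = [
        {"id": 1, "traffic_light": "yellow",
         "participants": [{"type": "car", "driving_direction": "left turn"}]},
        {"id": 2, "traffic_light": "green",
         "participants": [{"type": "pedestrian", "driving_direction": "straight ahead"}]},
    ]

    def matches(scene):
        # Combined condition: "traffic participant is a car and turns left
        # and the traffic light is a yellow light".
        return scene["traffic_light"] == "yellow" and any(
            p["type"] == "car" and p["driving_direction"] == "left turn"
            for p in scene["participants"]
        )

    target_scenes = [s for s in scenes if matches(s)]
    print([s["id"] for s in target_scenes])    # -> [1]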
Further, after obtaining the target traffic scenario data obtained in step 340, the traffic scenario attribute tag generation system (e.g., system 200) may further perform a test. In an embodiment of this scenario, the method 300 further comprises: and 350, performing automatic driving strategy test based on the target traffic scene data. In some embodiments, step 350 may be performed by the autopilot strategy testing module 250.
The autopilot strategy testing module 250 may perform a test of the automatic driving strategy based on the screened target traffic scene data. For example, suppose the automatic driving strategy to be verified is the left-turn process of the autonomous vehicle when the traffic light is "yellow flashing". In this case, all traffic scene data in which a left turn is made and the traffic light is "yellow flashing" are screened out based on the screening condition; the traffic scene data are input into the corresponding simulation scene, and the autonomous vehicle is then controlled to execute the automatic driving strategy and complete the corresponding motion (e.g., a "turn left" instruction), so as to test whether the automatic driving strategy is executed correctly on the target traffic scene data.
It should be noted that the method 300 is described with the system 200 as the execution subject by way of example. It can be understood that, since steps 310 to 330 of the method 300 correspond to the generation of the attribute tags, step 340 corresponds to the acquisition of the target traffic scenes, and step 350 corresponds to performing the automatic driving strategy test on the target traffic scenes, different steps of the method 300 may also be performed by different execution subjects. For example, processor 1 (e.g., the system 200) may directly perform steps 310-330 of the method 300 to generate the attribute tags of the traffic participants, and processor 2 (a system different from processor 1) may perform steps 340 and 350 to acquire the corresponding target traffic scenes and carry out the test of the automatic driving strategy. As another example, after processor 1 (e.g., the system 200) directly performs steps 310-330 of the method 300 to generate the attribute tags of the traffic participants, processor 2 (another system different from processor 1) may perform step 340 to acquire the corresponding target traffic scenes, and processor 3 (another system different from processors 1 and 2) may perform step 350 to carry out the test of the automatic driving strategy.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: 1) determining a plurality of attribute labels of each traffic participant in each traffic scene based on a preset label structure table and test parameters of the traffic participants so as to realize rapid screening of target traffic scene data; 2) by adopting a mode of combining a plurality of attribute labels, a more complex traffic scene can be accurately obtained, and the testing efficiency is effectively improved; 3) extreme scenes which are difficult to appear in actual field operation are simulated through a simulation method, the integrity of traffic scene data is effectively supplemented, and the safety of an automatic driving strategy in the extreme scenes is improved. It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present description may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network format, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as a software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that, in the preceding description of the embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.
Some embodiments use numbers to describe quantities of components, attributes, and the like. It should be understood that such numbers used in the description of the embodiments are, in some instances, modified by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the stated number may vary by ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may change depending on the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and adopt a general method of retaining digits. Notwithstanding that the numerical ranges and parameters used to confirm the breadth of their ranges are approximations in some embodiments, such numerical values are set forth as precisely as practicable in the specific examples.
For each patent, patent application, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents thereof are hereby incorporated by reference into this specification. Excluded are application history documents that are inconsistent with or conflict with the contents of this specification, as well as documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if the description, definition, and/or use of a term in the materials accompanying this specification is inconsistent with or contrary to the content of this specification, the description, definition, and/or use of the term in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of generating a traffic scene attribute label, comprising:
acquiring a preset label structure table; the label structure table comprises a plurality of label layers, each label layer comprises a plurality of nodes, each node corresponds to a preset label category, and each node, as a child node, is uniquely mapped to a parent node in the previous label layer;
acquiring a plurality of traffic scene data, wherein the traffic scene data at least comprises test parameters of a plurality of traffic participants;
determining a plurality of attribute labels of each traffic participant in each traffic scene based on the test parameters of the plurality of traffic participants and the label structure table; the plurality of attribute labels correspond to the label categories of respective nodes in the label structure table.
2. The method of claim 1, wherein the acquiring a plurality of traffic scene data comprises:
acquiring, based on an image acquisition device on an autonomous vehicle, video information collected while the autonomous vehicle drives in an actual field, and parameterizing the video information to obtain the test parameters of the traffic participants;
and/or,
acquiring, based on a preset simulation scene of the autonomous vehicle, the test parameters of the traffic participants in the simulation scene.
3. The method of claim 2, wherein the test parameters comprise at least one of: a type of the traffic participant, a driving speed, a driving direction, and a driving lane.
4. The method of claim 1, wherein the traffic scene data further comprises road information, and the road information comprises at least one of: traffic light indication information, weather information, traffic signs, road condition information, and traffic markings.
5. The method of claim 1, wherein the acquiring a plurality of traffic scene data further comprises:
screening, based on a preset judgment condition, relevant participants related to the automatic driving strategy test;
acquiring the test parameters of the relevant participants, and determining the traffic scene data based on the test parameters of the relevant participants.
6. The method of claim 5, wherein the preset judgment condition comprises:
a distance between the traffic participant and the autonomous vehicle, and/or a relationship between the traffic participant and a planned driving route of the autonomous vehicle, and/or a positional relationship between the traffic participant and the autonomous vehicle.
7. The method of claim 1, further comprising:
obtaining a screening condition corresponding to target traffic scene data, and obtaining, from the plurality of traffic scene data, the target traffic scene data that meets the screening condition, wherein the screening condition comprises at least an attribute label corresponding to one traffic participant;
and performing an automatic driving strategy test based on the target traffic scene data.
8. A system for generating a traffic scene attribute label, the system comprising:
a label structure table acquisition module configured to acquire a preset label structure table; the label structure table comprises a plurality of label layers, each label layer comprises a plurality of nodes, each node corresponds to a preset label category, and each node, as a child node, is uniquely mapped to a parent node in the previous label layer;
a traffic scene data acquisition module configured to acquire a plurality of traffic scene data, the traffic scene data at least comprising test parameters of a plurality of traffic participants; and
an attribute label determination module configured to determine a plurality of attribute labels of each traffic participant in each traffic scene based on the test parameters of the plurality of traffic participants and the label structure table, wherein the plurality of attribute labels correspond to the label categories of respective nodes in the label structure table.
9. An apparatus for determining a traffic scene attribute label, the apparatus comprising a processor and a memory, wherein the memory is configured to store instructions to be executed by the processor to implement the method of generating a traffic scene attribute label according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores computer instructions which, when executed by a processor, implement the method of generating a traffic scene attribute label according to any one of claims 1 to 7.
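For illustration only, and not as part of the claims, the following Python sketch gives one plausible reading of the screening described in claims 5 to 7: relevant participants are first selected by a preset judgment condition (distance to the autonomous vehicle, or proximity to its planned driving route), and target traffic scene data are then retrieved by requiring a combination of attribute labels. All thresholds, field names, and example labels are assumptions made for this sketch.

```python
# Illustrative sketch only; thresholds, field names, and labels are assumptions.
import math
from typing import Dict, List, Set, Tuple


def is_relevant(participant: Dict, ego_position: Tuple[float, float],
                planned_route: List[Tuple[float, float]],
                max_distance: float = 50.0, route_margin: float = 5.0) -> bool:
    """Preset judgment condition (cf. claims 5-6): distance to the autonomous
    vehicle, or proximity to its planned driving route."""
    px, py = participant["position"]
    if math.hypot(px - ego_position[0], py - ego_position[1]) <= max_distance:
        return True
    return any(math.hypot(px - rx, py - ry) <= route_margin for rx, ry in planned_route)


def screen_scenes(scenes: List[Dict], required_labels: Set[str]) -> List[Dict]:
    """Screening condition (cf. claim 7): keep scenes in which some traffic
    participant carries every required attribute label."""
    return [
        scene for scene in scenes
        if any(required_labels.issubset(set(p["labels"])) for p in scene["participants"])
    ]


scenes = [
    {"scene_id": "s1", "participants": [
        {"labels": {"pedestrian", "crossing", "night"}, "position": (3.0, 4.0)}]},
    {"scene_id": "s2", "participants": [
        {"labels": {"vehicle", "cut-in"}, "position": (400.0, 0.0)}]},
]

# Relevant-participant screening around an assumed ego position and planned route.
ego, route = (0.0, 0.0), [(10.0, 0.0), (20.0, 0.0)]
relevant = [p for s in scenes for p in s["participants"] if is_relevant(p, ego, route)]

# Combining several attribute labels retrieves the more complex scene.
print([s["scene_id"] for s in screen_scenes(scenes, {"pedestrian", "crossing", "night"})])  # ['s1']
```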
CN202110242214.XA 2021-03-04 2021-03-04 Method, system and device for generating traffic scene attribute label Pending CN112966334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110242214.XA CN112966334A (en) 2021-03-04 2021-03-04 Method, system and device for generating traffic scene attribute label

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110242214.XA CN112966334A (en) 2021-03-04 2021-03-04 Method, system and device for generating traffic scene attribute label

Publications (1)

Publication Number Publication Date
CN112966334A true CN112966334A (en) 2021-06-15

Family

ID=76276772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110242214.XA Pending CN112966334A (en) 2021-03-04 2021-03-04 Method, system and device for generating traffic scene attribute label

Country Status (1)

Country Link
CN (1) CN112966334A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115048972A (en) * 2022-03-11 2022-09-13 北京智能车联产业创新中心有限公司 Traffic scene deconstruction classification method and virtual-real combined automatic driving test method
CN115048972B (en) * 2022-03-11 2023-04-07 北京智能车联产业创新中心有限公司 Traffic scene deconstruction classification method and virtual-real combined automatic driving test method
CN115440028A (en) * 2022-07-22 2022-12-06 中智行(苏州)科技有限公司 Traffic road scene classification method based on labeling
CN115440028B (en) * 2022-07-22 2024-01-30 中智行(苏州)科技有限公司 Traffic road scene classification method based on labeling

Similar Documents

Publication Publication Date Title
CN110647839B (en) Method and device for generating automatic driving strategy and computer readable storage medium
CN112703459B (en) Iterative generation of confrontational scenarios
CN109101907B (en) Vehicle-mounted image semantic segmentation system based on bilateral segmentation network
CN112465395A (en) Multi-dimensional comprehensive evaluation method and device for automatically-driven automobile
US20190164007A1 (en) Human driving behavior modeling system using machine learning
US20230150529A1 (en) Dynamic sensor data augmentation via deep learning loop
CN108921200A (en) Method, apparatus, equipment and medium for classifying to Driving Scene data
CN106080590A (en) Control method for vehicle and device and the acquisition methods of decision model and device
CN111797526B (en) Simulation test scene construction method and device
CN107316010A (en) A kind of method for recognizing preceding vehicle tail lights and judging its state
CN112966334A (en) Method, system and device for generating traffic scene attribute label
US11645360B2 (en) Neural network image processing
CN111275249A (en) Driving behavior optimization method based on DQN neural network and high-precision positioning
CN112987711B (en) Optimization method of automatic driving regulation algorithm and simulation testing device
CN111062405A (en) Method and device for training image recognition model and image recognition method and device
CN110874610B (en) Human driving behavior modeling system and method using machine learning
CN117056153A (en) Methods, systems, and computer program products for calibrating and verifying driver assistance systems and/or autopilot systems
CN116597690B (en) Highway test scene generation method, equipment and medium for intelligent network-connected automobile
Divakarla et al. Journey mapping—a new approach for defining automotive drive cycles
CN113252058A (en) IMU data processing method, system, device and storage medium
Li A scenario-based development framework for autonomous driving
CN114613144B (en) Method for describing motion evolution law of hybrid vehicle group based on Embedding-CNN
Marques et al. YOLOv3: Traffic Signs & Lights Detection and Recognition for Autonomous Driving.
CN111881232B (en) Semantic association lane acquisition method and device for traffic lights
CN116978236B (en) Traffic accident early warning method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination