WO2024040864A1 - Radar detection model determination method, system, electronic device and readable storage medium - Google Patents


Info

Publication number
WO2024040864A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud, detection model, radar detection, component, scene
Application number
PCT/CN2023/071958
Other languages
English (en)
French (fr)
Inventor
詹景麟
刘铁军
陈三霞
张晶威
Original Assignee
苏州元脑智能科技有限公司
Application filed by 苏州元脑智能科技有限公司
Publication of WO2024040864A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00: Arrangements for software engineering
    • G06F8/20: Software design
    • G06F8/24: Object-oriented
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to group G01S13/00

Definitions

  • This application relates to the field of radar algorithm development, and in particular to a radar detection model determination method, system, electronic device and non-volatile readable storage medium.
  • the autonomous driving industry is in a window period of rapid progress from L2 to L3 and L4.
  • Traditional cameras have inherent flaws in ranging, speed measurement, and low-light adaptability.
  • By introducing radar devices, a redundant multi-sensor fusion system can be built.
  • such a perception system provides an effective guarantee for the safety and reliability of autonomous driving in highly complex scenarios.
  • Figure 1 is a schematic diagram of the development of a traditional self-driving data closed-loop algorithm.
  • Data collected by vehicles in the real road environment for specific application scenarios is returned and stored on hard drives or cloud disks.
  • the data is then screened effectively and accurately through data analysis technology, and the screened data is manually annotated according to actual detection needs to build model training data sets; algorithms are developed in the cloud, applied to specific scenarios for testing, and failure data is again collected from the vehicles.
  • moreover, the high level of confidentiality of manufacturers' road test data creates obstacles for academic research aimed at breaking through existing technical bottlenecks, and is not conducive to the large-scale commercial implementation of autonomous driving.
  • this application provides a radar detection model determination method, including:
  • the point cloud data in the current environment is collected by the radar device configured on the collection vehicle, and the type of the current point cloud component in the current environment is determined based on the point cloud data;
  • the basic database includes point cloud components under each basic type generated using digital twin technology, and the target point cloud component is any point cloud component;
  • the radar detection model determination method further includes:
  • the process of constructing a virtual scene set based on the target point cloud components includes:
  • the first scene set is adjusted according to the working parameters to obtain a second scene set, and the second scene set is used as a virtual scene set.
  • in some embodiments, the basic database also includes a noise transformation matrix database;
  • the radar detection model determination method further includes:
  • the process of constructing a virtual scene set based on the target point cloud components includes:
  • the radar detection model determination method further includes:
  • the process of constructing a virtual scene set based on the target point cloud component and the target noise transformation matrix includes:
  • the operating parameters include detection angle, lateral resolution and longitudinal resolution of the radar device
  • the process of adjusting the first scene set according to the working parameters to obtain the second scene set includes:
  • the point cloud data in the first scene set is adjusted based on the coordinate range and density to obtain the second scene set.
  • the basic database includes a basic road point cloud component database, a basic detection object point cloud component database and a basic peripheral environment point cloud component database.
  • the radar detection model determination method further includes:
  • digital twin technology is used to generate all point cloud components under the type, and all generated point cloud components are added to the basic database.
  • the process of training a radar detection model based on a virtual scene set and deploying the radar detection model on the target vehicle includes:
  • the radar detection model is deployed on the target vehicle.
  • the radar detection model determination method further includes:
  • the radar detection model determination method further includes:
  • the process of deploying the radar detection model on the target vehicle includes:
  • the radar detection model is deployed on the target vehicle.
  • the radar detection model determination method further includes:
  • this application also provides a radar detection model determination system, including:
  • the first determination module is used to collect point cloud data in the current environment via the radar device configured on the collection vehicle, and to determine the type of the current point cloud component in the current environment based on the point cloud data;
  • the second determination module is used to determine the target point cloud component from the basic database based on the type.
  • the basic database includes point cloud components under each basic type generated using digital twin technology, and the target point cloud component is any point cloud component;
  • the first building module is used to construct a virtual scene set based on the target point cloud component and train the radar detection model based on the virtual scene set;
  • Deployment module used to deploy the radar detection model on the target vehicle.
  • this application also provides an electronic device, including:
  • Memory for storing computer-readable instructions
  • One or more processors configured to implement the steps of the radar detection model determination method as described in any one of the above when executing the computer readable instructions.
  • the present application also provides a non-volatile readable storage medium.
  • Computer-readable instructions are stored on the readable storage medium; when the computer-readable instructions are executed by a processor, the steps of the radar detection model determination method described in any one of the above are implemented.
  • Figure 1 is a schematic diagram of the development of a traditional self-driving data closed-loop algorithm provided in this application;
  • Figure 2 is a step flow chart of a method for determining a radar detection model provided by this application
  • Figure 3 is a step flow chart of another radar detection model determination method provided by this application.
  • Figure 4 is a schematic structural diagram of a radar detection model determination system provided by this application.
  • Figure 5 is a schematic diagram of the internal structure of an electronic device provided by this application.
  • the core of this application is to provide a radar detection model determination method, system, electronic device and readable storage medium. Digital twin technology is used to realize point cloud modeling of the point cloud components in the real environment and the construction of a basic database, and evaluation indicators drive the rapid supplement of missing scenes and missing point cloud components as well as iterative training of the model. This solves the problems of the high cost of creating and maintaining real-road data sets and the strong scenario limitations of such data sets in the process of building radar detection models, alleviates the problem of poor data openness, and improves the algorithm iteration efficiency of the data closed loop.
  • Figure 2 is a flow chart of a method for determining a radar detection model provided by this application.
  • the method for determining a radar detection model includes:
  • S101 Collect point cloud data in the current environment via the radar device configured on the collection vehicle, and determine the type of the current point cloud component in the current environment based on the point cloud data;
  • the radar detection model determination method also includes the operation of building a basic database based on digital twin technology.
  • the basic database includes point cloud components of various basic types generated using digital twin technology.
  • Basic types include but are not limited to intersections, motor vehicles, pedestrians, buildings, etc.
  • Each basic type can include one or more point cloud components.
  • for example, the intersection type includes point cloud components such as crossroads, T-intersections, and Y-intersections.
  • databases can be constructed separately for road scenes, detection objects, and peripheral environments.
  • the basic road point cloud component database R = {R_i | i = 0, 1, 2, ...} is divided into three categories according to the three mainstream road scenes of highway, urban road, and rural road, and a road component database is constructed for each road scene.
  • for example, the database for urban roads needs to include point cloud components corresponding to major road forms such as expressways, main roads, secondary roads and branch roads; point cloud components corresponding to cross, T-shaped, Y-shaped and other intersections; and point cloud components covering crosswalks, urban elevated roads, non-motorized lanes, traffic lights, various types of vehicle diversion, merging roads and other scenarios.
  • the basic detection object point cloud component database O = {O_i | i = 0, 1, 2, ...} can also be built based on digital twin technology. This database contains point cloud components corresponding to common motor vehicles (such as cars, SUVs, buses, and trucks), pedestrians (adults, children), and non-motor vehicles (such as bicycles and electric vehicles).
  • the basic peripheral environment point cloud component database E = {E_i | i = 0, 1, 2, ...} can likewise be built based on digital twin technology. This database contains point cloud components corresponding to common road surrounding environments, such as point cloud components for various green plants, buildings, etc.
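The three component databases above can be sketched as plain lookup tables. The concrete component names and the dict layout below are illustrative assumptions, not the patent's storage format; only the type-to-components mapping reflects the described structure:

```python
# R (road), O (detection object) and E (peripheral environment) databases,
# each mapping a basic type to its point cloud components.
R = {
    "intersection": ["crossroad", "t_junction", "y_junction"],
    "urban_road": ["expressway", "main_road", "secondary_road", "branch_road"],
}
O = {
    "motor_vehicle": ["car", "suv", "bus", "truck"],
    "pedestrian": ["adult", "child"],
    "non_motor_vehicle": ["bicycle", "electric_vehicle"],
}
E = {
    "green_plant": ["tree", "shrub"],
    "building": ["residential", "office"],
}

def database_covers(db, needed_types):
    """True when every needed component type is present in the database."""
    return all(t in db for t in needed_types)
```

A coverage check like `database_covers` is the programmatic form of asking whether the basic database meets the needs of the current environment.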
  • a collection vehicle equipped with a radar device is used to sample a small amount of point cloud data in the application environment of the radar algorithm, that is, the current environment. Based on the point cloud data, the type R_need of the road component, the type O_need of the detection object component, and the type E_need of the peripheral environment point cloud component in the current scene are counted; for example, R_need is an intersection, O_need is a motor vehicle, and E_need is a building.
  • S102 Determine the target point cloud component from the basic database based on the type.
  • the basic database includes point cloud components under each basic type generated using digital twin technology.
  • the target point cloud component is any point cloud component;
  • S103 Construct a virtual scene set based on the target point cloud component, train the radar detection model based on the virtual scene set, and deploy the radar detection model on the target vehicle.
  • it is judged whether the constructed basic database can meet the needs of the current environment, that is, whether formula (1) holds: R_need ∈ R ∧ O_need ∈ O ∧ E_need ∈ E (1). If formula (1) holds, the basic database constructed in the above steps meets the needs of the current environment.
  • in this case, the corresponding target point cloud components r, o, e are selected from the basic databases R, O, E based on the point cloud data, where r represents a specific point cloud component under the R_need type. Assuming R_need is an intersection, r can be a crossroad, a T-junction, etc. The number of r can be determined based on the point cloud data; for example, it can include 3 crossroads, 2 T-junctions, etc. o and e are determined in the same way as r.
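Once the coverage check passes, selecting r, o, e reduces to drawing components of the needed types from the databases. A minimal sketch, in which the database contents, the counts, and the seeded random draw are all assumptions made for illustration:

```python
import random

# Hypothetical database contents; only the selection logic matters here.
R = {"intersection": ["crossroad", "t_junction", "y_junction"]}
O = {"motor_vehicle": ["car", "suv", "bus"]}
E = {"building": ["residential", "office"]}

def select_components(db, needed_type, count, rng=random.Random(0)):
    """Draw `count` point cloud components of the needed type."""
    pool = db[needed_type]
    return [rng.choice(pool) for _ in range(count)]

r = select_components(R, "intersection", 5)  # e.g. crossroads and T-junctions
o = select_components(O, "motor_vehicle", 7)
e = select_components(E, "building", 4)
```

The counts (5, 7, 4) stand in for the quantities that the patent says are determined from the sampled point cloud data.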
  • the radar detection model determination method further includes:
  • digital twin technology is used to generate all point cloud components under the type, and all generated point cloud components are added to the basic database.
  • if the type of the current point cloud component is not included in the basic types of the basic database, that is, the current point cloud component is a special component not covered by the basic database, digital twin technology is used to generate all point cloud components under this type, and they are added to the basic database to update it, quickly supplementing the missing point cloud components. Further, a virtual scene set is constructed based on the target point cloud components determined in the above steps, a radar detection model is trained based on the virtual scene set, and the radar detection model is deployed on the target vehicle.
  • digital twin technology is used to realize point cloud modeling of the point cloud components in the real environment and the construction of a basic database, and evaluation indicators drive the rapid supplement of missing scenes and missing point cloud components as well as iterative training of the model. This solves the problem of the high cost of creating and maintaining real-road data sets in the process of building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop.
  • by flexibly constructing, from point cloud components, a simulation scenario corresponding to the current environment, the strong scenario limitations of data sets in the process of building radar detection models can be overcome on the one hand; on the other hand, data privacy issues are avoided and the problem of poor data openness is solved, which is conducive to open academic research and promotes breakthroughs in the relevant technical bottlenecks.
  • the radar detection model determination method also includes:
  • the process of constructing a virtual scene set based on the target point cloud components includes:
  • the first scene set is adjusted according to the working parameters to obtain a second scene set, and the second scene set is used as a virtual scene set.
  • the target point cloud components determined in S102 can be flexibly combined to construct an initial scene set. Assuming the target point cloud components determined in S102 based on the current environment are 3 intersections, 6 cars, 1 SUV and 4 buildings, a first scene set S0 can be flexibly constructed from the point cloud data of these target point cloud components. The first scene set S0 is then adjusted to match the working parameters of the radar device, yielding a second scene set S1 that is closer to the scenes actually collected by the radar device.
  • the working parameters of the radar device include but are not limited to detection angle, lateral resolution and longitudinal resolution.
  • the detection angle of the radar device is used to determine the coordinate range of the point cloud data.
  • based on this coordinate range, the point cloud data in the first scene set S0 can be filtered to remove points outside the range; that is, the second scene set S1 only includes point cloud data within the coordinate range.
  • the lateral resolution and longitudinal resolution of the radar device can be used to determine the density of the point cloud data at each position in the scene; for example, points farther away should be sparser, and points closer should be denser.
  • the point cloud data in the first scene set S0 may be uniformly distributed; therefore the density of the point cloud data at each position in S0 can be adjusted according to the determined density to obtain the second scene set S1.
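The two adjustments above (a coordinate-range filter derived from the detection angle, and a distance-dependent thinning derived from the resolutions) can be sketched as follows. The field-of-view half-angle, the reference range, and the step-based thinning rule are illustrative assumptions standing in for the radar's actual working parameters:

```python
import math

def adjust_scene(points, half_fov_deg=60.0, ref_range=10.0):
    """Filter points outside the detection angle, then thin distant points.

    `points` is a list of (x, y, z) tuples with the radar at the origin
    and +x as the boresight direction.
    """
    kept = []
    for i, (x, y, z) in enumerate(points):
        # 1) coordinate-range filter from the detection angle
        if x <= 0 or abs(math.degrees(math.atan2(y, x))) > half_fov_deg:
            continue
        # 2) density adjustment: farther points get a larger subsampling
        #    step, so the scene gets sparser with distance
        dist = math.sqrt(x * x + y * y + z * z)
        step = max(1, int(dist // ref_range))
        if i % step == 0:
            kept.append((x, y, z))
    return kept
```

Points within `ref_range` are always kept (step 1), while a point at 5x that range survives only one time in five, mimicking the resolution-driven density falloff.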
  • the basic database also includes a noise transformation matrix database T = {T_i | i = 0, 1, 2, ...}, which models the impact of various common noise sources (such as rain, snow, fog and dust) on point cloud data.
  • the radar detection model determination method further includes:
  • the process of constructing a virtual scene set based on the target point cloud components includes:
  • the possible noise type T_need can be obtained by collecting the weather information at the vehicle's location, and it can then be judged whether the constructed basic database meets the needs of the current environment, that is, whether formula (2) holds:
  • R_need ∈ R ∧ O_need ∈ O ∧ E_need ∈ E ∧ T_need ∈ T (2). If formula (2) holds, the basic database constructed in the above steps meets the needs of the current environment.
  • in this case, the corresponding target point cloud components r, o, e are selected from the databases R, O, E based on the point cloud data.
  • for T_need, the corresponding target noise transformation matrix t is selected from the database T; the number of target noise transformation matrices is determined according to the local weather.
  • otherwise, the missing noise type is modeled based on digital twin technology, a noise transformation matrix is constructed, and the basic database is updated accordingly.
  • the process of constructing a virtual scene set based on the target point cloud component and the target noise transformation matrix includes:
  • the working parameters of the radar device are first obtained.
  • the working parameters of the radar device include but are not limited to detection angle, lateral resolution and longitudinal resolution.
  • the detection angle of the radar device is used to determine the coordinate range of the point cloud data. Based on this coordinate range, the point cloud data in the first scene set S0 can be filtered to remove points outside the range; that is, the second scene set S1 only includes point cloud data within the coordinate range.
  • the lateral resolution and longitudinal resolution of the radar device can be used to determine the density of the point cloud data at each position in the scene: points farther away should be sparser, and points closer should be denser. The density of the point cloud data in the first scene set S0 can therefore be adjusted according to the determined density of each position to obtain the second scene set S1.
  • the random introduction in this step means that one or more types of noise may be randomly introduced into a given scene, or that no noise is introduced into it.
  • by applying the selected target noise transformation matrices (or the identity matrix, for no noise) to each scene s_j, j = 0, 1, 2, ..., the second scene set S1 = {s_j | j = 0, 1, 2, ...} is converted into the third scene set S2 = {s'_j | j = 0, 1, 2, ...}.
  • the third scene set S2 is used as the virtual scene set.
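The random noise introduction can be sketched as below. Representing a noise source (rain, snow, fog, dust) as a plain 3x3 linear transformation applied per scene is a deliberate simplification; the patent only specifies that each scene receives either a target noise transformation matrix or the identity matrix:

```python
import random

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def apply_matrix(points, m):
    """Apply a 3x3 matrix to every (x, y, z) point of a scene."""
    return [tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))
            for p in points]

def add_noise(scenes, noise_matrices, rng=random.Random(0)):
    """Build S2 from S1: each scene gets one random noise matrix or no noise."""
    out = []
    for scene in scenes:
        matrix = rng.choice(noise_matrices + [IDENTITY])
        out.append(apply_matrix(scene, matrix))
    return out
```

Including `IDENTITY` in the draw implements the "or no noise" branch of the step.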
  • the process of training a radar detection model based on a virtual scene set and deploying the radar detection model on the target vehicle includes:
  • the radar detection model is deployed on the target vehicle.
  • the virtual scene set is divided into a virtual scene training set and a virtual scene test set. The virtual scene training set is used to train the radar detection model, and the virtual scene test set is then used to conduct a preliminary test of the detection accuracy of the trained radar detection model.
  • for the evaluation, the detection accuracy AP_k of each type of detection object in the virtual scene set is calculated, and the first detection accuracy of the radar detection model is obtained from the per-type accuracies as AP1 = (1/m)·Σ_k AP_k, where m is the number of categories of detected objects and k is the index of the category.
  • if AP1 is greater than or equal to a first preset value AP_sim, the trained radar detection model passes the preliminary evaluation in the virtual scenes.
  • otherwise, the scenes to be adjusted and the corresponding target point cloud components to be adjusted are determined, and their layout parameters are adjusted; the layout parameters include but are not limited to the number of components laid out and the layout direction.
  • for example, if the radar detection model has a relatively low detection accuracy AP_k for motor vehicles, the layout of motor vehicle components can be adjusted: the first scene set is rebuilt, and the subsequent steps are repeated until the first detection accuracy AP1 is greater than or equal to the first preset value AP_sim.
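A sketch of this evaluation step, taking AP1 as the mean of the per-class accuracies AP_k over the m classes (mAP-style averaging, which the use of m and k suggests but the text does not spell out) and comparing it against the preset AP_sim, whose value here is an assumption:

```python
def first_detection_accuracy(ap_per_class):
    """AP1 as the mean of the per-class detection accuracies AP_k."""
    m = len(ap_per_class)          # m: number of detected-object categories
    return sum(ap_per_class.values()) / m

def passes_virtual_evaluation(ap_per_class, ap_sim=0.6):
    """Preliminary virtual-scene evaluation: pass when AP1 >= AP_sim."""
    return first_detection_accuracy(ap_per_class) >= ap_sim
```

A low individual AP_k (say, for motor vehicles) drags AP1 down, which is what triggers the scene-adjustment branch described above.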
  • after the trained radar detection model passes the preliminary evaluation in the virtual scenes, it continues to be evaluated based on a small amount of sampled road point cloud data, as follows:
  • the process of deploying the radar detection model on the target vehicle includes:
  • the radar detection model is deployed on the target vehicle.
  • the trained radar detection model is applied to the representative real scenes collected by the vehicle: a real scene test set is constructed based on the point cloud data of the current environment collected by the radar device, and the radar detection model is tested on the real scene test set.
  • the second detection accuracy AP2 of the radar detection model is calculated on this test set. If AP2 is greater than or equal to the second preset value AP_real, the radar detection model passes the evaluation on representative real scenarios and can be deployed on the target vehicle.
  • if the second detection accuracy AP2 is less than the second preset value AP_real, a cause analysis is performed to determine whether there is a missing scene and/or a missing component. In response to a judgment result that there is a missing component, the new point cloud component type in the real scene test set is determined, digital twin technology is used to generate all point cloud components under that type, the generated point cloud components are added to the basic database, and the operation of judging whether the basic database meets the needs of the current environment is repeated. In response to a judgment result that there is a missing scene, the proportion of hard-to-detect composite scenes and specific object components in scene generation is increased; specifically, the scenes to be adjusted and the corresponding target point cloud components to be adjusted are determined, the layout parameters of those components are adjusted, and the steps of constructing a virtual scene set based on the target point cloud components are repeated until the second detection accuracy is greater than or equal to the second preset value. This realizes the rapid, evaluation-indicator-driven supplement of missing scenes and missing point cloud components.
  • finally, the radar detection model is deployed on the vehicle to collect missed-detection and incorrect-detection cases, and rapid iteration of the algorithm is realized based on digital twin technology. If no missed or incorrect detections of detected objects occur in actual applications, the algorithm iteration is complete. If missed or incorrect detections do occur, the causes are analyzed: for the composite scenes and object components where they occur, their proportion in scene generation is increased, and the steps of constructing a virtual scene set from the target point cloud components and training the radar detection model on it are repeated; if basic point cloud components are missing, the operation of judging whether the constructed basic database meets the needs of the current environment is repeated.
  • Figure 3 is a step flow chart of a preferred method for determining a radar detection model provided by an embodiment of the present application.
  • the method for determining a radar detection model includes:
  • S202 Collect point cloud data in the current environment via the radar device configured on the collection vehicle, determine the type of the current point cloud component in the current environment based on the point cloud data, and determine the current noise type based on the weather information of the current environment;
  • S204 Determine the target point cloud component and the target noise transformation matrix from the basic database based on the type of the current point cloud component and the current noise type;
  • S206 Determine the working parameters of the radar device, and adjust the first scene set according to the working parameters to obtain the second scene set;
  • S207 Randomly add the target noise transformation matrix or unit matrix to each scene in the second scene set to obtain the third scene set;
  • S210 Determine the first detection accuracy of the radar detection model through the virtual scene test set
  • S212 Construct a real scene test set based on the point cloud data of the current environment collected by the radar device, and determine the second detection accuracy of the radar detection model through the real scene test set;
  • S215 Determine whether there is a missed or incorrect detection of the target object. In response to the judgment result being yes, execute S216. In response to the judgment result being no, end;
  • S216 Cause analysis. In response to the cause analysis result being that the scene is missing, S205 is executed. In response to the cause analysis result being that the component is missing, S202 is executed.
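The closed loop of Figure 3 can be summarized as a retraining loop. Every step function below is a placeholder standing in for the operations described above; only the control flow reflects the flow chart, and `max_rounds` is an added safeguard not present in the patent:

```python
def closed_loop(build_scenes, train, eval_virtual, eval_real,
                ap_sim, ap_real, max_rounds=10):
    """Retrain until the model passes both evaluations, or give up."""
    for _ in range(max_rounds):
        scenes = build_scenes()            # scene construction (S205 onward)
        model = train(scenes)
        if eval_virtual(model) < ap_sim:   # S210: first detection accuracy
            continue                       # rebuild scenes and retrain
        if eval_real(model) >= ap_real:    # S212: second detection accuracy
            return model                   # passes, deploy on target vehicle
        # otherwise the cause analysis (S216) leads back into the loop
    return None
```

Whether the loop re-enters at scene construction or at database construction depends on the S216 cause analysis; this sketch collapses both re-entry points into one.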
  • this application proposes a new autonomous driving radar algorithm development method based on digital twin technology to solve the current problems of the high construction cost of high-quality autonomous driving point cloud data sets, strong scenario limitations, and poor data openness. Digital twin technology is used to model the point cloud data of the components that make up real autonomous driving scenarios and to build the basic databases, reducing the cost of constructing and maintaining model development data sets and accelerating their iterative update. Through the flexible combination of point cloud components and noise matrices, algorithm models can be developed in an agile way for different scenarios, multiple functions, and different LiDAR configurations, achieving efficient iteration of algorithms for targeted scenarios and detection objects. Virtual simulation scene generation avoids data privacy issues and solves the problem of poor data openness, which is conducive to open academic research and promotes breakthroughs in the relevant technical bottlenecks.
  • FIG. 4 is a schematic structural diagram of a radar detection model determination system provided by this application.
  • the radar detection model determination system includes:
  • the first determination module 1 is used to collect point cloud data in the current environment via the radar device configured on the collection vehicle, and to determine the type of the current point cloud component in the current environment based on the point cloud data;
  • the second determination module 2 is used to determine the target point cloud component from the basic database based on the type.
  • the basic database includes point cloud components under each basic type generated using digital twin technology, and the target point cloud component is any point cloud component;
  • the first building module 3 is used to construct a virtual scene set based on the target point cloud component and train the radar detection model based on the virtual scene set;
  • Deployment module 4 is used to deploy the radar detection model on the target vehicle.
  • digital twin technology is used to realize point cloud modeling of the point cloud components in the real environment and the construction of a basic database, and evaluation indicators drive the rapid supplement of missing scenes and missing point cloud components as well as iterative training of the model. This solves the problem of the high cost of creating and maintaining real-road data sets in the process of building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop.
  • by flexibly constructing, from point cloud components, a simulation scenario corresponding to the current environment, the strong scenario limitations of data sets in the process of building radar detection models can be overcome on the one hand; on the other hand, data privacy issues are avoided and the problem of poor data openness is solved, which is conducive to open academic research and promotes breakthroughs in the relevant technical bottlenecks.
  • the radar detection model determination system also includes:
  • the third determination module is used to determine the working parameters of the radar device
  • the process of constructing a virtual scene set based on the target point cloud components includes:
  • the first scene set is adjusted according to the working parameters to obtain a second scene set, and the second scene set is used as a virtual scene set.
  • the basic database also includes a noise transformation matrix database modeling common noise sources;
  • the radar detection model determination system also includes:
  • the fourth determination module is used to determine the weather information of the current environment, determine the current noise type based on the weather information, and determine the target noise transformation matrix from all noise transformation matrices in the basic database according to the current noise type;
  • the process of constructing a virtual scene set based on the target point cloud components includes:
  • the radar detection model determination system also includes:
  • the fifth determination module is used to determine the working parameters of the radar device
  • the process of constructing a virtual scene set based on the target point cloud component and the target noise transformation matrix includes:
  • the working parameters include the detection angle, lateral resolution and longitudinal resolution of the radar device;
  • the process of adjusting the first scene set according to the working parameters to obtain the second scene set includes:
  • the point cloud data in the first scene set is adjusted based on the coordinate range and density to obtain the second scene set.
  • the basic database includes a basic road point cloud component database, a basic detection object point cloud component database and a basic peripheral environment point cloud component database.
  • the radar detection model determination system also includes:
  • the first judgment module is used to judge whether the type of the current point cloud component in the current environment is any basic type in the basic database, and triggers the first processing module in response to the judgment result being no;
  • the first processing module is used to use digital twin technology to generate all point cloud components under the type, and add all generated point cloud components to the basic database.
  • the process of training a radar detection model based on a virtual scene set and deploying the radar detection model on the target vehicle includes:
  • the radar detection model is deployed on the target vehicle.
  • the radar detection model determination system also includes:
  • the sixth determination module is used to determine, when the first detection accuracy is less than the first preset value, the scene to be adjusted and the target point cloud component to be adjusted corresponding to that scene;
  • the first building module 3 is also used to adjust the layout parameters of the target point cloud component to be adjusted, and then repeat the step of building a virtual scene set based on the target point cloud component.
  • the radar detection model determination system also includes:
  • the second building module is used to construct a real scene test set based on the point cloud data of the current environment collected by the radar device;
  • the process of deploying the radar detection model on the target vehicle includes:
  • the radar detection model is deployed on the target vehicle.
  • the radar detection model determination system further includes:
  • the second judgment module is also used to judge, when the second detection accuracy is less than the second preset value, whether there is a missing scene and/or a missing component; in response to a judgment result that a component is missing, it triggers the first processing module, and in response to a judgment result that a scene is missing, it triggers the second processing module;
  • the first processing module is also used to determine the type of new point cloud components in the real scene test set, use digital twin technology to generate all point cloud components under the type, and add all generated point cloud components to the basic database.
  • the second processing module is used to determine the scene to be adjusted and the target point cloud component to be adjusted corresponding to the scene to be adjusted, adjust the layout parameters of the target point cloud component to be adjusted, and then repeat the step of constructing a virtual scene set based on the target point cloud component.
  • this application also provides an electronic device, as shown in Figure 5, including:
  • Memory 10 for storing computer readable instructions
  • One or more processors 20 are configured to implement the steps of the radar detection model determination method as described in any of the above embodiments when executing computer readable instructions.
  • the memory includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system and computer-readable instructions
  • the internal memory provides an environment for the execution of the operating system and computer-readable instructions in the non-volatile storage medium.
  • when the processor executes the computer-readable instructions stored in the memory, the following steps can be implemented: collecting point cloud data of the current environment through the radar device configured on the collection vehicle, and determining the type of the current point cloud component in the current environment based on the point cloud data; and determining the target point cloud component from the basic database based on the type.
  • the basic database includes point cloud components under each basic type generated using digital twin technology.
  • the target point cloud component is any point cloud component; a virtual scene set is built based on the target point cloud component, the radar detection model is trained based on the virtual scene set, and the radar detection model is deployed on the target vehicle.
  • digital twin technology is used to perform point cloud modeling of the point cloud components in the real environment and to build the basic database, and, guided by the evaluation metrics, missing scenes and missing point cloud components are rapidly supplemented and the model is iteratively trained. This solves the problem of the high cost of creating and maintaining real road-collected data sets when building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop. By flexibly building simulation scenes corresponding to the current environment from point cloud components, it solves, on the one hand, the problem that data-set scenarios are strongly limited when building radar detection models and, on the other hand, avoids data privacy issues and solves the problem of poor data openness, which benefits open academic research and promotes breakthroughs in related technical bottlenecks.
  • the electronic device further includes:
  • the input interface is connected to the processor and is used to obtain externally imported computer-readable instructions, parameters and instructions, and save them to the memory under the control of the processor.
  • the input interface can be connected to an input device to receive parameters or instructions manually input by the user.
  • the input device may be a touch layer covered on the display screen, or may be a button, trackball or touch pad provided on the terminal housing.
  • the display unit is connected to the processor and used to display data sent by the processor.
  • the display unit may be a liquid crystal display or an electronic ink display.
  • the network port is connected to the processor and is used to communicate with external terminal devices.
  • the communication technology used in the communication connection can be wired communication technology or wireless communication technology, such as Mobile High Definition Link Technology (MHL), Universal Serial Bus (USB), High Definition Multimedia Interface (HDMI), Wireless Fidelity Technology (WiFi), Bluetooth communication technology, low-power Bluetooth communication technology, communication technology based on IEEE802.11s, etc.
  • this application also provides a non-volatile readable storage medium, which stores computer-readable instructions.
  • when the computer-readable instructions are executed by the processor, the steps of the radar detection model determination method described in any one of the above embodiments can be implemented.
  • the readable storage medium can include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • computer-readable instructions are stored on the storage medium; when the computer-readable instructions are executed by the processor, the following steps are implemented: collecting point cloud data of the current environment through the radar device configured on the collection vehicle, and determining the type of the current point cloud component in the current environment based on the point cloud data;
  • the target point cloud component is determined from the basic database based on the type;
  • the basic database includes point cloud components under each basic type generated using digital twin technology;
  • the target point cloud component is any point cloud component; and a virtual scene set is built according to the target point cloud component, the radar detection model is trained based on the virtual scene set, and the radar detection model is deployed on the target vehicle.
  • digital twin technology is used to perform point cloud modeling of the point cloud components in the real environment and to build the basic database, and, guided by the evaluation metrics, missing scenes and missing point cloud components are rapidly supplemented and the model is iteratively trained. This solves the problem of the high cost of creating and maintaining real road-collected data sets when building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop. By flexibly building simulation scenes corresponding to the current environment from point cloud components, it solves, on the one hand, the problem that data-set scenarios are strongly limited when building radar detection models and, on the other hand, avoids data privacy issues and solves the problem of poor data openness, which benefits open academic research and promotes breakthroughs in related technical bottlenecks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present application discloses a radar detection model determination method, system, electronic device, and readable storage medium, relating to the field of radar algorithm development. The radar detection model determination method uses digital twin technology to perform point cloud modeling of the point cloud components in the real environment and to build a basic database, and, guided by evaluation metrics, rapidly supplements missing scenes and missing point cloud components and iteratively trains the model.

Description

Radar detection model determination method, system, electronic device and readable storage medium
Cross-reference to related applications
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on August 25, 2022, with application number 202211022746.3 and entitled "Radar detection model determination method, system, electronic device and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of radar algorithm development, and in particular to a radar detection model determination method, system, electronic device, and non-volatile readable storage medium.
Background
At this stage, the autonomous driving industry is in a window period of striding from L2 toward L3 and L4. Conventional cameras have intrinsic deficiencies in ranging, speed measurement, and adaptation to low-light conditions. By introducing radar devices and building a redundant, multi-sensor-fusion perception system, effective guarantees are provided for the safety and reliability of autonomous driving in highly complex scenarios.
As shown in Figure 1, which is a schematic diagram of conventional data-closed-loop algorithm development for autonomous driving, a collection vehicle collects data of specific application scenarios in a real road environment; the data flows back and is stored on hard disks or cloud drives; data analysis techniques are used to accurately screen the valid data, and the screened data is manually annotated according to actual detection needs so as to build a model training data set and carry out cloud-side algorithm development; the developed algorithm is then applied to concrete scenarios for testing, the collection vehicle gathers failure data, and after a new round of data backflow, analysis, and annotation, the model training data set is updated for iterative algorithm development.
However, the inventors realized that the above scheme has the following drawbacks. First, the training data sets used for radar algorithm development are costly to build and maintain, and their update speed is limited by manual data analysis and annotation. Second, sensor data collected on the road is strongly scene-limited; adaptability is poor for road scenarios that were not collected or were collected only in small quantities, which poses great hidden dangers to the safety of the complete autonomous driving system. Third, affected by privacy, security, commercial competition, and other factors, the road-test data of each manufacturer is highly confidential, which obstructs academic research aimed at breaking through existing technical bottlenecks and is not conducive to large-scale commercial deployment of autonomous driving.
Therefore, how to provide a solution to the above technical problems is a problem that those skilled in the art currently need to solve.
Summary
To solve the above technical problems, the present application provides a radar detection model determination method, including:
collecting point cloud data of the current environment through a radar device configured on a collection vehicle, and determining the type of the current point cloud component in the current environment based on the point cloud data;
determining a target point cloud component from a basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology, and the target point cloud component is any of the point cloud components; and
building a virtual scene set according to the target point cloud component, training a radar detection model based on the virtual scene set, and deploying the radar detection model on a target vehicle.
In some embodiments, the radar detection model determination method further includes:
determining working parameters of the radar device;
correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
building a first scene set based on the target point cloud component; and
adjusting the first scene set according to the working parameters to obtain a second scene set, and taking the second scene set as the virtual scene set.
In some embodiments, the basic database further includes:
noise transformation matrices under each noise type generated using digital twin technology.
In some embodiments, after determining the type of the current point cloud component in the current environment based on the point cloud data, the method further includes:
determining weather information of the current environment;
determining the current noise type based on the weather information; and
determining a target noise transformation matrix from all the noise transformation matrices in the basic database according to the current noise type;
correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
building the virtual scene set according to the target point cloud component and the target noise transformation matrix.
In some embodiments, the method further includes:
determining working parameters of the radar device;
correspondingly, the process of building the virtual scene set according to the target point cloud component and the target noise transformation matrix includes:
building a first scene set based on the target point cloud component;
adjusting the first scene set according to the working parameters to obtain a second scene set; and
randomly adding the target noise transformation matrix or an identity matrix to each scene in the second scene set to obtain a third scene set, and taking the third scene set as the virtual scene set.
In some embodiments, the working parameters include the detection angle, lateral resolution, and longitudinal resolution of the radar device;
correspondingly, the process of adjusting the first scene set according to the working parameters to obtain the second scene set includes:
determining the coordinate range of the point cloud data according to the detection angle;
determining the density of the point cloud data at each position using the lateral resolution and the longitudinal resolution; and
adjusting the point cloud data in the first scene set based on the coordinate range and the density to obtain the second scene set.
In some embodiments, the basic database includes a basic road point cloud component database, a basic detection object point cloud component database, and a basic peripheral environment point cloud component database.
In some embodiments, after determining the type of the current point cloud component in the current environment based on the point cloud data, the method further includes:
judging whether the type of the current point cloud component in the current environment is any basic type in the basic database; and
in response to a judgment result of no, generating all point cloud components under the type using digital twin technology, and adding all generated point cloud components to the basic database.
In some embodiments, the process of training the radar detection model based on the virtual scene set and deploying the radar detection model on the target vehicle includes:
dividing the virtual scene set into a virtual scene training set and a virtual scene test set;
training the radar detection model with the virtual scene training set, and determining a first detection accuracy of the radar detection model with the virtual scene test set; and
in response to the first detection accuracy being greater than or equal to a first preset value, deploying the radar detection model on the target vehicle.
In some embodiments, after determining the first detection accuracy of the radar detection model with the virtual scene test set, the method further includes:
in response to the first detection accuracy being less than the first preset value, determining a scene to be adjusted and a target point cloud component to be adjusted corresponding to the scene to be adjusted; and
adjusting layout parameters of the target point cloud component to be adjusted, and then repeating the step of building a virtual scene set according to the target point cloud component.
In some embodiments, the method further includes:
building a real scene test set based on the point cloud data of the current environment collected by the radar device; and
determining a second detection accuracy of the radar detection model with the real scene test set;
correspondingly, the process of deploying the radar detection model on the target vehicle includes:
when the second detection accuracy is greater than or equal to a second preset value, deploying the radar detection model on the target vehicle.
In some embodiments, after determining the second detection accuracy of the radar detection model with the real scene test set, the method further includes:
when the second detection accuracy is less than the second preset value, judging whether a scene is missing and/or a component is missing; and
in response to a judgment result that a scene is missing, determining a scene to be adjusted and a target point cloud component to be adjusted corresponding to the scene to be adjusted, adjusting the layout parameters of the target point cloud component to be adjusted, and then repeating the step of building a virtual scene set according to the target point cloud component; or, in response to a judgment result that a component is missing, determining the type of the new point cloud component in the real scene test set, generating all point cloud components under the type using digital twin technology, and adding all generated point cloud components to the basic database.
To solve the above technical problems, the present application further provides a radar detection model determination system, including:
a first determination module, configured to collect point cloud data of the current environment through a radar device configured on a collection vehicle and determine the type of the current point cloud component in the current environment based on the point cloud data;
a second determination module, configured to determine a target point cloud component from a basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology and the target point cloud component is any of the point cloud components;
a first building module, configured to build a virtual scene set according to the target point cloud component and train a radar detection model based on the virtual scene set; and
a deployment module, configured to deploy the radar detection model on a target vehicle.
To solve the above technical problems, the present application further provides an electronic device, including:
a memory, configured to store computer-readable instructions; and
one or more processors, configured to implement, when executing the computer-readable instructions, the steps of the radar detection model determination method described in any one of the above items.
To solve the above technical problems, the present application further provides a non-volatile readable storage medium, on which computer-readable instructions are stored, where, when the computer-readable instructions are executed by a processor, the steps of the radar detection model determination method described in any one of the above items are implemented.
Brief description of the drawings
To explain the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic diagram of conventional data-closed-loop algorithm development for autonomous driving provided by the present application;
Figure 2 is a flowchart of the steps of a radar detection model determination method provided by the present application;
Figure 3 is a flowchart of the steps of another radar detection model determination method provided by the present application;
Figure 4 is a schematic structural diagram of a radar detection model determination system provided by the present application;
Figure 5 is a schematic diagram of the internal structure of an electronic device provided by the present application.
Detailed description
The core of the present application is to provide a radar detection model determination method, system, electronic device, and readable storage medium that use digital twin technology to perform point cloud modeling of the point cloud components in the real environment and to build a basic database, and, guided by evaluation metrics, rapidly supplement missing scenes and missing point cloud components and iteratively train the model. This can solve the problems of high cost of creating and maintaining real road-collected data sets, strong scenario limitations of the data sets, and poor data openness in the process of building radar detection models, and improves the algorithm iteration efficiency of the data closed loop.
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Referring to Figure 2, which is a flowchart of the steps of a radar detection model determination method provided by the present application, the method includes:
S101: collecting point cloud data of the current environment through a radar device configured on a collection vehicle, and determining the type of the current point cloud component in the current environment based on the point cloud data;
Specifically, before this step is performed, the method further includes an operation of building a basic database based on digital twin technology, where the basic database includes point cloud components under each basic type generated using digital twin technology. Basic types include but are not limited to intersections, motor vehicles, pedestrians, buildings, etc.; each basic type may include one or more point cloud components; for example, the intersection type includes point cloud components such as crossroads, T-junctions, and Y-junctions.
As an optional embodiment, databases may be built separately for road scenes, detection objects, and the peripheral environment. For example, a basic road scene point cloud component database R = {Ri | i = 0, 1, 2, ...} may be built based on digital twin technology. This point cloud component database is divided into three major categories according to the three mainstream road scenes: highways, urban roads, and rural roads, and a database of road components is built for each category of road scene. For example, the database for urban roads needs to contain point cloud components corresponding to the main road forms such as expressways, arterial roads, sub-arterial roads, and branch roads; point cloud components corresponding to various intersections such as crossroads, T-junctions, and Y-junctions; and point cloud components corresponding to scenes such as pedestrian crossings, urban viaducts, non-motorized lanes, traffic lights, and various vehicle diverging and merging roads. A basic detection object point cloud component database O = {Oi | i = 0, 1, 2, ...} may be built based on digital twin technology, containing point cloud components corresponding to the target objects that need to be detected, such as common motor vehicles (e.g. sedans, SUVs, buses, and trucks), pedestrians (adults, children), and non-motorized vehicles (e.g. bicycles and electric bikes). A basic peripheral environment point cloud component database E = {Ei | i = 0, 1, 2, ...} may be built based on digital twin technology, containing point cloud components corresponding to common road surroundings, such as various green plants and buildings.
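The three-way database organization described above can be sketched as plain type-to-component mappings. This is only an illustrative data layout under assumed names; none of the component names below come from the patent itself.

```python
# Illustrative sketch of the basic databases R, O, E as
# type -> list-of-component-names mappings (names are hypothetical).
R = {  # basic road point cloud component database
    "intersection": ["crossroad", "T-junction", "Y-junction"],
    "urban_road": ["expressway", "arterial", "sub_arterial", "branch"],
}
O = {  # basic detection object point cloud component database
    "motor_vehicle": ["sedan", "SUV", "bus", "truck"],
    "pedestrian": ["adult", "child"],
    "non_motor_vehicle": ["bicycle", "e_bike"],
}
E = {  # basic peripheral environment point cloud component database
    "vegetation": ["tree", "shrub"],
    "building": ["house", "office_block"],
}
```

In a real system each component name would map to stored point cloud data generated by the digital twin pipeline rather than a bare string.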
Specifically, a collection vehicle configured with a radar device takes a small number of samples of the point cloud data of the application environment of the radar algorithm, i.e. the current environment, and, based on the point cloud data, the types of road components Rneed, detection object components Oneed, and peripheral environment point cloud components Eneed in the current scene are counted; for example, Rneed is intersection, Oneed is motor vehicle, and Eneed is building.
S102: determining a target point cloud component from the basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology, and the target point cloud component is any point cloud component;
S103: building a virtual scene set according to the target point cloud component, training a radar detection model based on the virtual scene set, and deploying the radar detection model on a target vehicle.
Specifically, before S102 is performed, the method further includes an operation of judging whether the constructed basic database can meet the needs of the current environment; specifically, it can be judged whether formula (1) holds:
Rneed ⊆ R ∧ Oneed ⊆ O ∧ Eneed ⊆ E    (1)
If formula (1) holds, the basic database built in the above steps meets the needs of the current environment. At this point, the corresponding target point cloud components r, o, e are selected from the basic databases R, O, E based on the point cloud data, where r denotes a specific point cloud component under the type Rneed; assuming Rneed is intersection, then r may be a crossroad, a T-junction, etc. It can be understood that the number of r can be determined according to the point cloud data, for example 3 crossroads, 2 T-junctions, etc.; o and e are determined in the same way as r.
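The database-satisfaction check of formula (1) is essentially a set-membership test: every component type observed in the current environment must already be a type covered by the corresponding database. A minimal sketch (function and variable names are illustrative, not from the patent):

```python
def database_satisfies(r_need, o_need, e_need, R, O, E):
    """Formula (1): every needed type must appear as a key of the
    corresponding basic database (R_need ⊆ R, O_need ⊆ O, E_need ⊆ E)."""
    return (set(r_need) <= set(R)       # set(dict) yields its keys
            and set(o_need) <= set(O)
            and set(e_need) <= set(E))

# Toy databases keyed by type; values stand in for stored components.
R = {"intersection": None, "urban_road": None}
O = {"motor_vehicle": None, "pedestrian": None}
E = {"building": None}

print(database_satisfies(["intersection"], ["motor_vehicle"], ["building"], R, O, E))
```

If the check fails, the missing type is generated with the digital twin pipeline and added to the database before scene construction continues.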
As an optional embodiment, after determining the type of the current point cloud component in the current environment based on the point cloud data, the method further includes:
judging whether the type of the current point cloud component in the current environment is any basic type in the basic database;
in response to a judgment result of no, generating all point cloud components under the type using digital twin technology, and adding all generated point cloud components to the basic database.
Specifically, if the type of the current point cloud component is not included among the basic types of the basic database, i.e. the current point cloud component is a special component not contained in the basic database, then all point cloud components under this type are generated using digital twin technology and added to the basic database to update it, realizing rapid supplementation of missing point cloud components. Further, a virtual scene set is built according to the target point cloud components determined in the above steps, the radar detection model is trained based on the virtual scene set, and the radar detection model is deployed on the target vehicle.
It can be seen that in this embodiment, digital twin technology is used to perform point cloud modeling of the point cloud components in the real environment and to build the basic database, and, guided by evaluation metrics, missing scenes and missing point cloud components are rapidly supplemented and the model is iteratively trained. This solves the problem of the high cost of creating and maintaining real road-collected data sets in the process of building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop. Simulation scenes corresponding to the current environment are flexibly built from point cloud components, which, on the one hand, solves the problem of strong scenario limitations of the data sets in the process of building radar detection models and, on the other hand, avoids data privacy issues and solves the problem of poor data openness, benefiting open academic research and promoting breakthroughs in related technical bottlenecks.
On the basis of the above embodiment:
As an optional embodiment, the method further includes:
determining working parameters of the radar device;
correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
building a first scene set based on the target point cloud component;
adjusting the first scene set according to the working parameters to obtain a second scene set, and taking the second scene set as the virtual scene set.
Specifically, the target point cloud components determined in S102 can be flexibly combined to build an initial scene set. Assuming that the target point cloud components determined in S102 based on the current environment are 3 crossroads, 6 sedans, 1 SUV, and 4 buildings, then the first scene set S0 can be flexibly built from the point cloud data of these target point cloud components, and S0 is adjusted to match the working parameters of the radar device to obtain the second scene set S1, so that S1 is closer to the scenes actually collected by the radar device.
Specifically, the working parameters of the radar device include but are not limited to the detection angle, lateral resolution, and longitudinal resolution. The detection angle of the radar device is used to determine the coordinate range of the point cloud data; based on this coordinate range, the point cloud data in the first scene set S0 can be screened and the point cloud data outside the coordinate range removed, i.e. the second scene set S1 only includes point cloud data within the coordinate range. The lateral and longitudinal resolutions of the radar device can be used to determine the density of the point cloud data at each position in the scene; for example, points farther away should be sparser and points closer should be denser. Considering that the density of the point cloud data in the first scene set S0 may be uniformly distributed, the density of the point cloud data in S0 can be adjusted according to the determined density at each position to obtain the second scene set S1.
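The two adjustments above (screening by the detection angle, then thinning by range so density follows the sensor's resolution) can be sketched as follows. This is a hedged illustration under assumed conventions: points are (N, 3) arrays in sensor coordinates, the field of view is symmetric about the x-axis, and the simple 1/r keep-probability is a stand-in for a real resolution model.

```python
import numpy as np

def adapt_scene_to_radar(points, fov_deg, target_density):
    """Sketch of adapting a simulated scene S0 to the radar's working
    parameters: keep only points inside the detection angle, then thin
    points with distance so nearer points stay denser (all parameter
    names and the 1/r density law are illustrative assumptions)."""
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    inside = np.abs(az) <= fov_deg / 2.0     # coordinate range from detection angle
    pts = points[inside]
    rng_dist = np.linalg.norm(pts[:, :2], axis=1)
    # keep each point with probability ~ target_density / r, clipped to [0, 1]
    keep_prob = np.clip(target_density / np.maximum(rng_dist, 1e-6), 0.0, 1.0)
    keep = np.random.default_rng(0).random(len(pts)) < keep_prob
    return pts[keep]
```

A real implementation would derive the keep probability from the lateral and longitudinal angular resolutions rather than a single scalar.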
As an optional embodiment, the basic database further includes:
noise transformation matrices under each noise type generated using digital twin technology.
Specifically, considering that environmental noise affects the detection accuracy of the radar device, in this embodiment digital twin technology is also used to model various types of noise and build a noise transformation matrix database T = {Ti | i = 0, 1, 2, ...}, covering the effects of common noise sources (such as rain, snow, fog, and dust) on point cloud data.
As an optional embodiment, after determining the type of the current point cloud component in the current environment based on the point cloud data, the method further includes:
determining weather information of the current environment;
determining the current noise type based on the weather information;
determining a target noise transformation matrix from all the noise transformation matrices in the basic database according to the current noise type;
correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
building the virtual scene set according to the target point cloud component and the target noise transformation matrix.
It can be understood that the possible noise type Tneed can be obtained from the weather information at the location of the collection vehicle. The operation of judging whether the constructed basic database can meet the needs of the current environment can specifically judge whether formula (2) holds:
Rneed ⊆ R ∧ Oneed ⊆ O ∧ Eneed ⊆ E ∧ Tneed ⊆ T    (2)
If formula (2) holds, the basic database built in the above steps meets the needs of the current environment. At this point, the corresponding target point cloud components r, o, e are selected from the databases R, O, E based on the point cloud data, and the corresponding target noise transformation matrix t is selected from the database T based on Tneed; the number of target noise transformation matrices is determined according to the local weather.
Of course, if the current noise type is not included in the basic database, the noise type is modeled based on digital twin technology to construct a noise transformation matrix, and the basic database is updated accordingly.
Then the virtual scene set is built based on r, o, e, t, including:
determining working parameters of the radar device;
correspondingly, the process of building the virtual scene set according to the target point cloud component and the target noise transformation matrix includes:
building a first scene set based on the target point cloud component;
adjusting the first scene set according to the working parameters to obtain a second scene set;
randomly adding the target noise transformation matrix or an identity matrix to each scene in the second scene set to obtain a third scene set, and taking the third scene set as the virtual scene set.
Specifically, the working parameters of the radar device are first obtained, including but not limited to the detection angle, lateral resolution, and longitudinal resolution. The detection angle of the radar device is used to determine the coordinate range of the point cloud data; based on this coordinate range, the point cloud data in the first scene set S0 can be screened and the point cloud data outside the coordinate range removed, i.e. the second scene set S1 only includes point cloud data within the coordinate range. The lateral and longitudinal resolutions of the radar device can be used to determine the density of the point cloud data at each position in the scene; for example, points farther away should be sparser and points closer should be denser. Considering that the density of the point cloud data in the first scene set S0 may be uniformly distributed, the density of the point cloud data in S0 can be adjusted according to the determined density at each position to obtain the second scene set S1.
Then, for a given scene in the second scene set, the effects of rain, snow, fog, dust, and other noise are randomly introduced. Specifically, random introduction in this step includes randomly introducing one or more kinds of noise in a scene, or introducing no noise in the scene. Assuming the subset of target noise transformation matrices is t = {tj | j = 0, 1, 2, ...}, a random function f(t) is defined that randomly returns either one of the target noise transformation matrices in the subset t = {tj | j = 0, 1, 2, ...} or the identity matrix (no transformation). The second scene set S1 is then transformed into the third scene set S2 = {f(t)·s | s ∈ S1}, and the third scene set S2 is taken as the virtual scene set.
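The random function f(t) described above can be sketched as a sampler that returns either a noise transformation matrix or the identity. The matrices here are placeholders, not real rain/snow/fog models, and `p_clean` is an assumed parameter controlling how often a scene is left untouched:

```python
import random
import numpy as np

def make_noise_sampler(noise_mats, p_clean=0.5, seed=0):
    """Sketch of f(t): randomly return one of the target noise
    transformation matrices, or the identity matrix (no transform)."""
    rng = random.Random(seed)
    def f():
        if rng.random() < p_clean:
            return np.eye(3)            # identity: leave the scene clean
        return rng.choice(noise_mats)   # inject one sampled noise type
    return f

# S2 would then be built by applying f() scene by scene:
# S2 = [f() @ scene_points.T for scene_points in S1]
```

Applying the sampled matrix per scene yields a mix of clean and noisy scenes, which is what makes the trained model robust to weather noise.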
As an optional embodiment, the process of training the radar detection model based on the virtual scene set and deploying the radar detection model on the target vehicle includes:
dividing the virtual scene set into a virtual scene training set and a virtual scene test set;
training the radar detection model with the virtual scene training set, and determining a first detection accuracy of the radar detection model with the virtual scene test set;
in response to the first detection accuracy being greater than or equal to a first preset value, deploying the radar detection model on the target vehicle.
On the basis of the above embodiment, the virtual scene set S2 is split into a virtual scene training set Strain = {s | s ∈ S2} and a virtual scene test set Stest = S2 \ Strain. The radar detection model is trained with the virtual scene training set, and then the detection accuracy of the trained radar detection model is preliminarily evaluated with the virtual scene test set: the detection accuracy APk of each category of detected objects in the virtual scene set is computed, and the first detection accuracy AP1 of the radar detection model is obtained from the per-category accuracies APk:
AP1 = (1/m) Σk APk
where m is the number of categories of detected objects and k is the category index.
If the first detection accuracy AP1 is greater than or equal to the first preset value APsim, the trained radar detection model passes the preliminary evaluation on virtual scenes.
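The per-category accuracies are aggregated into AP1 as a plain mean over the m object categories. A minimal sketch (the category names and accuracy values are illustrative):

```python
def first_detection_accuracy(per_class_ap):
    """AP1 = (1/m) * sum_k AP_k over the m detected-object categories."""
    m = len(per_class_ap)
    return sum(per_class_ap.values()) / m

ap = {"car": 0.9, "pedestrian": 0.7, "cyclist": 0.8}
ap1 = first_detection_accuracy(ap)  # (0.9 + 0.7 + 0.8) / 3, i.e. about 0.8
```

The same aggregation applies to the second detection accuracy AP2 computed on the real scene test set; only the test data differs.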
As an optional embodiment, if the first detection accuracy AP1 is less than the first preset value APsim, the scene to be adjusted and the target point cloud component to be adjusted corresponding to that scene are determined, the layout parameters of the target point cloud component to be adjusted are adjusted, and then the step of building a virtual scene set according to the target point cloud component is repeated, where the layout parameters include but are not limited to the number and orientation of placements.
The target point cloud component to be adjusted is the point cloud component corresponding to detected objects with a low APk, and a higher sampling ratio is allocated to that category of detected objects. For example, if the radar detection model's detection accuracy APk for motor vehicles is relatively low, the number of point cloud components under the motor vehicle type is increased and the virtual scene set is rebuilt. Assuming that when the first scene set was first built the number of point cloud components under the motor vehicle type was 7, it can now be increased to 10; the first scene set is rebuilt and the subsequent steps are repeated until the first detection accuracy AP1 is greater than or equal to the first preset value APsim.
After the trained radar detection model passes the preliminary evaluation on virtual scenes, it continues to be evaluated based on the small sample of road point cloud data, including:
building a real scene test set based on the point cloud data of the current environment collected by the radar device;
determining a second detection accuracy of the radar detection model with the real scene test set;
correspondingly, the process of deploying the radar detection model on the target vehicle includes:
when the second detection accuracy is greater than or equal to a second preset value, deploying the radar detection model on the target vehicle.
Specifically, the trained radar detection model is applied to representative real scenes collected by the collection vehicle; a real scene test set is built based on the point cloud data of the current environment collected by the radar device, the radar detection model is further evaluated with the real scene test set, and its second detection accuracy AP2 is computed. If AP2 is greater than or equal to the second preset value APreal, the radar detection model passes the evaluation on representative real scenes and can be deployed on the target vehicle.
As an optional embodiment, if the second detection accuracy AP2 is less than the second preset value APreal, cause analysis is performed to judge whether a scene is missing and/or a component is missing. In response to a judgment result that a component is missing, the type of the new point cloud component in the real scene test set is determined, all point cloud components under the type are generated using digital twin technology and added to the basic database, and the above operation of judging whether the constructed basic database can meet the needs of the current environment is repeated. In response to a judgment result that a scene is missing, for composite scenes and specific object components that are difficult to detect, their proportion in scene generation is increased; specifically, the scene to be adjusted and the corresponding target point cloud component to be adjusted are determined, the layout parameters of that component are adjusted, and then the step of building a virtual scene set according to the target point cloud component is repeated until the second detection accuracy is greater than or equal to the second preset value, realizing rapid supplementation of missing scenes and missing point cloud components and iterative model training guided by the evaluation metrics.
Further, the radar detection model is deployed on the vehicle side, missed-detection and false-detection cases are collected, and rapid algorithm iteration is realized based on digital twin technology. If no missed or false detections of objects occur in practical application, the algorithm iteration is complete. If missed or false detections do occur in practical application, cause analysis is performed: for the composite scenes and object components where missed or false detections occur, their proportion in scene generation is increased, and the operations of building the virtual scene set according to the target point cloud components and training the radar detection model based on the virtual scene set are repeated; if sampling of basic point cloud components is missing, the operation of judging whether the constructed basic database can meet the needs of the current environment is repeated.
Referring to Figure 3, which is a flowchart of the steps of a preferred radar detection model determination method provided by an embodiment of the present application, the method includes:
S201: building the basic databases R, O, E and the noise transformation matrix database T;
S202: collecting point cloud data of the current environment through a radar device configured on a collection vehicle, determining the type of the current point cloud component in the current environment based on the point cloud data, and determining the current noise type based on the weather information of the current environment;
S203: judging whether the basic database meets the needs of the current environment; in response to a judgment result of yes, performing S204; in response to a judgment result of no, performing S201;
S204: determining the target point cloud components and the target noise transformation matrix from the basic database based on the type of the current point cloud component and the current noise type;
S205: building a first scene set according to the target point cloud components;
S206: determining the working parameters of the radar device, and adjusting the first scene set according to the working parameters to obtain a second scene set;
S207: randomly adding the target noise transformation matrix or an identity matrix to each scene in the second scene set to obtain a third scene set;
S208: dividing the virtual scene set into a virtual scene training set and a virtual scene test set;
S209: training the radar detection model with the virtual scene training set;
S210: determining the first detection accuracy of the radar detection model with the virtual scene test set;
S211: judging whether the first detection accuracy is greater than or equal to the first preset value; in response to a judgment result of yes, performing S212; in response to a judgment result of no, performing S205;
S212: building a real scene test set based on the point cloud data of the current environment collected by the radar device, and determining the second detection accuracy of the radar detection model with the real scene test set;
S213: judging whether the second detection accuracy is greater than or equal to the second preset value; in response to a judgment result of yes, performing S214; in response to a judgment result of no, performing S216;
S214: deploying the radar detection model on the target vehicle;
S215: judging whether any target object is missed or falsely detected; in response to a judgment result of yes, performing S216; in response to a judgment result of no, ending;
S216: performing cause analysis; in response to the cause being a missing scene, performing S205; in response to the cause being a missing component, performing S202.
In summary, based on digital twin technology, the present application proposes a new radar algorithm development method for autonomous driving that solves the current problems of high construction cost, strong scenario limitations, and poor openness of high-quality autonomous driving point cloud data sets. Digital twin technology is used to model the point cloud data of the components that make up real autonomous driving scenes and to build the basic database, reducing the construction and maintenance cost of model development data sets and accelerating their iterative updating. Through flexible combination of point cloud components and noise matrices, agile development of algorithm models for different scenes, multiple functions, and diverse LiDAR configuration schemes is realized, enabling efficient iteration of algorithms for targeted scenes and detection objects. Virtual simulation scene generation avoids data privacy issues and solves the problem of poor data openness, benefiting open academic research and promoting breakthroughs in related technical bottlenecks.
Referring to Figure 4, which is a schematic structural diagram of a radar detection model determination system provided by the present application, the system includes:
a first determination module 1, configured to collect point cloud data of the current environment through a radar device configured on a collection vehicle and determine the type of the current point cloud component in the current environment based on the point cloud data;
a second determination module 2, configured to determine a target point cloud component from a basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology and the target point cloud component is any point cloud component;
a first building module 3, configured to build a virtual scene set according to the target point cloud component and train a radar detection model based on the virtual scene set;
a deployment module 4, configured to deploy the radar detection model on a target vehicle.
It can be seen that in this embodiment, digital twin technology is used to perform point cloud modeling of the point cloud components in the real environment and to build the basic database, and, guided by evaluation metrics, missing scenes and missing point cloud components are rapidly supplemented and the model is iteratively trained. This solves the problem of the high cost of creating and maintaining real road-collected data sets in the process of building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop. Simulation scenes corresponding to the current environment are flexibly built from point cloud components, which, on the one hand, solves the problem of strong scenario limitations of the data sets in the process of building radar detection models and, on the other hand, avoids data privacy issues and solves the problem of poor data openness, benefiting open academic research and promoting breakthroughs in related technical bottlenecks.
As an optional embodiment, the radar detection model determination system further includes:
a third determination module, configured to determine the working parameters of the radar device;
correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
building a first scene set based on the target point cloud component;
adjusting the first scene set according to the working parameters to obtain a second scene set, and taking the second scene set as the virtual scene set.
As an optional embodiment, the basic database further includes:
noise transformation matrices under each noise type generated using digital twin technology.
As an optional embodiment, the radar detection model determination system further includes:
a fourth determination module, configured to determine weather information of the current environment, determine the current noise type based on the weather information, and determine a target noise transformation matrix from all the noise transformation matrices in the basic database according to the current noise type;
correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
building the virtual scene set according to the target point cloud component and the target noise transformation matrix.
As an optional embodiment, the radar detection model determination system further includes:
a fifth determination module, configured to determine the working parameters of the radar device;
correspondingly, the process of building the virtual scene set according to the target point cloud component and the target noise transformation matrix includes:
building a first scene set based on the target point cloud component;
adjusting the first scene set according to the working parameters to obtain a second scene set;
randomly adding the target noise transformation matrix or an identity matrix to each scene in the second scene set to obtain a third scene set, and taking the third scene set as the virtual scene set.
As an optional embodiment, the working parameters include the detection angle, lateral resolution, and longitudinal resolution of the radar device;
correspondingly, the process of adjusting the first scene set according to the working parameters to obtain the second scene set includes:
determining the coordinate range of the point cloud data according to the detection angle;
determining the density of the point cloud data at each position using the lateral resolution and the longitudinal resolution;
adjusting the point cloud data in the first scene set based on the coordinate range and the density to obtain the second scene set.
As an optional embodiment, the basic database includes a basic road point cloud component database, a basic detection object point cloud component database, and a basic peripheral environment point cloud component database.
As an optional embodiment, the radar detection model determination system further includes:
a first judgment module, configured to judge whether the type of the current point cloud component in the current environment is any basic type in the basic database, and to trigger the first processing module in response to a judgment result of no;
a first processing module, configured to generate all point cloud components under the type using digital twin technology and add all generated point cloud components to the basic database.
As an optional embodiment, the process of training the radar detection model based on the virtual scene set and deploying the radar detection model on the target vehicle includes:
dividing the virtual scene set into a virtual scene training set and a virtual scene test set;
training the radar detection model with the virtual scene training set, and determining the first detection accuracy of the radar detection model with the virtual scene test set;
in response to the first detection accuracy being greater than or equal to the first preset value, deploying the radar detection model on the target vehicle.
As an optional embodiment, the radar detection model determination system further includes:
a sixth determination module, configured to determine, when the first detection accuracy is less than the first preset value, the scene to be adjusted and the target point cloud component to be adjusted corresponding to the scene to be adjusted;
the first building module 3 is further configured to adjust the layout parameters of the target point cloud component to be adjusted and then repeat the step of building a virtual scene set according to the target point cloud component.
As an optional embodiment, the radar detection model determination system further includes:
a second building module, configured to build a real scene test set based on the point cloud data of the current environment collected by the radar device;
a test module, configured to determine the second detection accuracy of the radar detection model with the real scene test set;
correspondingly, the process of deploying the radar detection model on the target vehicle includes:
when the second detection accuracy is greater than or equal to the second preset value, deploying the radar detection model on the target vehicle.
As an optional embodiment, after the second detection accuracy of the radar detection model is determined with the real scene test set, the radar detection model determination system further includes:
a second judgment module, further configured to judge, when the second detection accuracy is less than the second preset value, whether a scene is missing and/or a component is missing, to trigger the first processing module in response to a judgment result that a component is missing, and to trigger the second processing module in response to a judgment result that a scene is missing;
the first processing module is further configured to determine the type of the new point cloud component in the real scene test set, generate all point cloud components under the type using digital twin technology, and add all generated point cloud components to the basic database;
a second processing module, configured to determine the scene to be adjusted and the target point cloud component to be adjusted corresponding to the scene to be adjusted, adjust the layout parameters of the target point cloud component to be adjusted, and then repeat the step of building a virtual scene set according to the target point cloud component.
In another aspect, the present application further provides an electronic device, as shown in Figure 5, including:
a memory 10, configured to store computer-readable instructions;
one or more processors 20, configured to implement, when executing the computer-readable instructions, the steps of the radar detection model determination method described in any one of the above embodiments.
Specifically, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. When the processor executes the computer-readable instructions stored in the memory, the following steps can be implemented: collecting point cloud data of the current environment through a radar device configured on a collection vehicle, and determining the type of the current point cloud component in the current environment based on the point cloud data; determining a target point cloud component from a basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology and the target point cloud component is any point cloud component; building a virtual scene set according to the target point cloud component, training a radar detection model based on the virtual scene set, and deploying the radar detection model on a target vehicle.
It can be seen that in this embodiment, digital twin technology is used to perform point cloud modeling of the point cloud components in the real environment and to build the basic database, and, guided by evaluation metrics, missing scenes and missing point cloud components are rapidly supplemented and the model is iteratively trained. This solves the problem of the high cost of creating and maintaining real road-collected data sets in the process of building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop. Simulation scenes corresponding to the current environment are flexibly built from point cloud components, which, on the one hand, solves the problem of strong scenario limitations of the data sets in the process of building radar detection models and, on the other hand, avoids data privacy issues and solves the problem of poor data openness, benefiting open academic research and promoting breakthroughs in related technical bottlenecks.
On the basis of the above embodiment, the electronic device further includes:
an input interface, connected to the processor and configured to obtain externally imported computer-readable instructions, parameters, and commands, which are saved to the memory under the control of the processor. The input interface may be connected to an input device to receive parameters or commands manually entered by a user. The input device may be a touch layer covering a display screen, or keys, a trackball, or a touchpad provided on the terminal housing;
a display unit, connected to the processor and configured to display data sent by the processor. The display unit may be a liquid crystal display screen or an electronic ink display screen, etc.;
a network port, connected to the processor and configured to establish communication connections with external terminal devices. The communication technology used for the communication connection may be wired or wireless, such as Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (WiFi), Bluetooth, Bluetooth Low Energy, or IEEE 802.11s-based communication technologies.
In another aspect, the present application further provides a non-volatile readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the steps of the radar detection model determination method described in any one of the above embodiments are implemented.
Specifically, the readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. Computer-readable instructions are stored on the storage medium; when the computer-readable instructions are executed by the processor, the following steps are implemented: collecting point cloud data of the current environment through a radar device configured on a collection vehicle, and determining the type of the current point cloud component in the current environment based on the point cloud data; determining a target point cloud component from a basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology and the target point cloud component is any point cloud component; building a virtual scene set according to the target point cloud component, training a radar detection model based on the virtual scene set, and deploying the radar detection model on a target vehicle.
It can be seen that in this embodiment, digital twin technology is used to perform point cloud modeling of the point cloud components in the real environment and to build the basic database, and, guided by evaluation metrics, missing scenes and missing point cloud components are rapidly supplemented and the model is iteratively trained. This solves the problem of the high cost of creating and maintaining real road-collected data sets in the process of building radar detection models, removes the manual data annotation step, and improves the algorithm iteration efficiency of the data closed loop. Simulation scenes corresponding to the current environment are flexibly built from point cloud components, which, on the one hand, solves the problem of strong scenario limitations of the data sets in the process of building radar detection models and, on the other hand, avoids data privacy issues and solves the problem of poor data openness, benefiting open academic research and promoting breakthroughs in related technical bottlenecks.
It should also be noted that in this specification, relational terms such as first and second are only used to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application will not be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

  1. A radar detection model determination method, characterized by including:
    collecting point cloud data of the current environment through a radar device configured on a collection vehicle, and determining the type of the current point cloud component in the current environment based on the point cloud data;
    determining a target point cloud component from a basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology, and the target point cloud component is any of the point cloud components; and
    building a virtual scene set according to the target point cloud component, training a radar detection model based on the virtual scene set, and deploying the radar detection model on a target vehicle.
  2. The radar detection model determination method according to claim 1, characterized in that the method further includes:
    determining working parameters of the radar device;
    correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
    building a first scene set based on the target point cloud component; and
    adjusting the first scene set according to the working parameters to obtain a second scene set, and taking the second scene set as the virtual scene set.
  3. The radar detection model determination method according to claim 1, characterized in that the basic database further includes:
    noise transformation matrices under each noise type generated using digital twin technology.
  4. The radar detection model determination method according to claim 3, characterized in that, after determining the type of the current point cloud component in the current environment based on the point cloud data, the method further includes:
    determining weather information of the current environment;
    determining the current noise type based on the weather information; and
    determining a target noise transformation matrix from all the noise transformation matrices in the basic database according to the current noise type;
    correspondingly, the process of building a virtual scene set according to the target point cloud component includes:
    building the virtual scene set according to the target point cloud component and the target noise transformation matrix.
  5. The radar detection model determination method according to claim 4, characterized in that the method further includes:
    determining working parameters of the radar device;
    correspondingly, the process of building the virtual scene set according to the target point cloud component and the target noise transformation matrix includes:
    building a first scene set based on the target point cloud component;
    adjusting the first scene set according to the working parameters to obtain a second scene set; and
    randomly adding the target noise transformation matrix or an identity matrix to each scene in the second scene set to obtain a third scene set, and taking the third scene set as the virtual scene set.
  6. The radar detection model determination method according to claim 2 or 5, characterized in that the working parameters include the detection angle, lateral resolution, and longitudinal resolution of the radar device;
    correspondingly, the process of adjusting the first scene set according to the working parameters to obtain the second scene set includes:
    determining the coordinate range of the point cloud data according to the detection angle;
    determining the density of the point cloud data at each position using the lateral resolution and the longitudinal resolution; and
    adjusting the point cloud data in the first scene set based on the coordinate range and the density to obtain the second scene set.
  7. The radar detection model determination method according to claim 1, characterized in that the basic database includes a basic road point cloud component database, a basic detection object point cloud component database, and a basic peripheral environment point cloud component database.
  8. The radar detection model determination method according to claim 1, characterized in that, after determining the type of the current point cloud component in the current environment based on the point cloud data, the method further includes:
    judging whether the type of the current point cloud component in the current environment is any of the basic types in the basic database; and
    in response to a judgment result of no, generating all point cloud components under the type using digital twin technology, and adding all generated point cloud components to the basic database.
  9. The radar detection model determination method according to claim 1, characterized in that the process of training a radar detection model based on the virtual scene set and deploying the radar detection model on a target vehicle includes:
    dividing the virtual scene set into a virtual scene training set and a virtual scene test set;
    training the radar detection model with the virtual scene training set, and determining a first detection accuracy of the radar detection model with the virtual scene test set; and
    in response to the first detection accuracy being greater than or equal to a first preset value, deploying the radar detection model on the target vehicle.
  10. The radar detection model determination method according to claim 9, characterized in that, after determining the first detection accuracy of the radar detection model with the virtual scene test set, the method further includes:
    in response to the first detection accuracy being less than the first preset value, determining a scene to be adjusted and a target point cloud component to be adjusted corresponding to the scene to be adjusted; and
    adjusting layout parameters of the target point cloud component to be adjusted, and then repeating the step of building a virtual scene set according to the target point cloud component.
  11. The radar detection model determination method according to claim 9, characterized in that the method further includes:
    building a real scene test set based on the point cloud data of the current environment collected by the radar device; and
    determining a second detection accuracy of the radar detection model with the real scene test set;
    correspondingly, the process of deploying the radar detection model on the target vehicle includes:
    when the second detection accuracy is greater than or equal to a second preset value, deploying the radar detection model on the target vehicle.
  12. The radar detection model determination method according to claim 11, characterized in that, after determining the second detection accuracy of the radar detection model with the real scene test set, the method further includes:
    when the second detection accuracy is less than the second preset value, judging whether a scene is missing and/or a component is missing; and
    in response to a judgment result that a scene is missing, determining a scene to be adjusted and a target point cloud component to be adjusted corresponding to the scene to be adjusted, adjusting the layout parameters of the target point cloud component to be adjusted, and then repeating the step of building a virtual scene set according to the target point cloud component; or, in response to a judgment result that a component is missing, determining the type of the new point cloud component in the real scene test set, generating all point cloud components under the type using digital twin technology, and adding all generated point cloud components to the basic database.
  13. The radar detection model determination method according to claim 7, characterized in that the current point cloud component includes a road component, a detection object component, and a peripheral environment point cloud component;
    determining the type of the current point cloud component in the current environment based on the point cloud data includes:
    counting, based on the point cloud data, the types of road components, detection object components, and peripheral environment point cloud components in the current environment.
  14. The radar detection model determination method according to claim 13, characterized in that, before determining the target point cloud component from the basic database based on the type, the method further includes:
    judging, based on the following formula, whether the constructed basic database can meet the needs of the current environment, where the basic database can meet the needs of the current environment when the following formula holds:
    Rneed ⊆ R ∧ Oneed ⊆ O ∧ Eneed ⊆ E
    where Rneed denotes the types of road components, Oneed denotes the types of detection object components, Eneed denotes the types of peripheral environment point cloud components, R denotes the basic road point cloud component database, O denotes the basic detection object point cloud component database, and E denotes the basic peripheral environment point cloud component database.
  15. The radar detection model determination method according to claim 2, characterized in that building a first scene set based on the target point cloud component includes:
    flexibly combining the target point cloud components to build the first scene set.
  16. The radar detection model determination method according to claim 4, characterized in that, before building the virtual scene set according to the target point cloud component and the target noise transformation matrix, the method further includes:
    judging, based on the following formula, whether the constructed basic database can meet the needs of the current environment, where the basic database can meet the needs of the current environment when the following formula holds:
    Rneed ⊆ R ∧ Oneed ⊆ O ∧ Eneed ⊆ E ∧ Tneed ⊆ T
    where Rneed denotes the types of road components, Oneed denotes the types of detection object components, Eneed denotes the types of peripheral environment point cloud components, Tneed denotes the current noise type, R denotes the basic road point cloud component database, O denotes the basic detection object point cloud component database, E denotes the basic peripheral environment point cloud component database, and T denotes the noise transformation matrix database.
  17. The radar detection model determination method according to claim 5, characterized in that the working parameters include the detection angle, lateral resolution, and longitudinal resolution; adjusting the first scene set according to the working parameters to obtain the second scene set includes:
    determining the coordinate range of the point cloud data based on the detection angle of the radar device, and screening out from the first scene set the point cloud data within the coordinate range; and
    determining the density of the point cloud data at each position in the environment based on the lateral resolution and longitudinal resolution of the radar device, and adjusting, according to the density at each position, the density of the point cloud data screened out from the first scene set to obtain the second scene set.
  18. A radar detection model determination system, characterized by including:
    a first determination module, configured to collect point cloud data of the current environment through a radar device configured on a collection vehicle and determine the type of the current point cloud component in the current environment based on the point cloud data;
    a second determination module, configured to determine a target point cloud component from a basic database based on the type, where the basic database includes point cloud components under each basic type generated using digital twin technology and the target point cloud component is any of the point cloud components;
    a first building module, configured to build a virtual scene set according to the target point cloud component and train a radar detection model based on the virtual scene set; and
    a deployment module, configured to deploy the radar detection model on a target vehicle.
  19. An electronic device, characterized by including:
    a memory, configured to store computer-readable instructions; and
    one or more processors, configured to implement, when executing the computer-readable instructions, the steps of the radar detection model determination method according to any one of claims 1-17.
  20. A non-volatile readable storage medium, characterized in that computer-readable instructions are stored on the readable storage medium, and, when the computer-readable instructions are executed by a processor, the steps of the radar detection model determination method according to any one of claims 1-17 are implemented.
PCT/CN2023/071958 2022-08-25 2023-01-12 雷达检测模型确定方法、系统、电子设备及可读存储介质 WO2024040864A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211022746.3A CN115098079B (zh) 2022-08-25 2022-08-25 雷达检测模型确定方法、系统、电子设备及可读存储介质
CN202211022746.3 2022-08-25

Publications (1)

Publication Number Publication Date
WO2024040864A1 true WO2024040864A1 (zh) 2024-02-29

Family

ID=83300446

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071958 WO2024040864A1 (zh) 2022-08-25 2023-01-12 雷达检测模型确定方法、系统、电子设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN115098079B (zh)
WO (1) WO2024040864A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098079B (zh) * 2022-08-25 2023-01-24 苏州浪潮智能科技有限公司 雷达检测模型确定方法、系统、电子设备及可读存储介质
CN115906282B (zh) * 2022-11-14 2024-05-24 昆山适途模型科技有限公司 一种基于整车仿真的汽车模拟方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10408939B1 (en) * 2019-01-31 2019-09-10 StradVision, Inc. Learning method and learning device for integrating image acquired by camera and point-cloud map acquired by radar or LiDAR corresponding to image at each of convolution stages in neural network and testing method and testing device using the same
CN111353417A (zh) * 2020-02-26 2020-06-30 北京三快在线科技有限公司 一种目标检测的方法及装置
CN113111692A (zh) * 2020-01-13 2021-07-13 北京地平线机器人技术研发有限公司 目标检测方法、装置、计算机可读存储介质及电子设备
CN115098079A (zh) * 2022-08-25 2022-09-23 苏州浪潮智能科技有限公司 雷达检测模型确定方法、系统、电子设备及可读存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9760788B2 (en) * 2014-10-30 2017-09-12 Kofax, Inc. Mobile document detection and orientation based on reference object characteristics

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10408939B1 (en) * 2019-01-31 2019-09-10 StradVision, Inc. Learning method and learning device for integrating image acquired by camera and point-cloud map acquired by radar or LiDAR corresponding to image at each of convolution stages in neural network and testing method and testing device using the same
CN113111692A (zh) * 2020-01-13 2021-07-13 北京地平线机器人技术研发有限公司 目标检测方法、装置、计算机可读存储介质及电子设备
CN111353417A (zh) * 2020-02-26 2020-06-30 北京三快在线科技有限公司 一种目标检测的方法及装置
CN115098079A (zh) * 2022-08-25 2022-09-23 苏州浪潮智能科技有限公司 雷达检测模型确定方法、系统、电子设备及可读存储介质

Also Published As

Publication number Publication date
CN115098079B (zh) 2023-01-24
CN115098079A (zh) 2022-09-23

Similar Documents

Publication Publication Date Title
WO2024040864A1 (zh) 雷达检测模型确定方法、系统、电子设备及可读存储介质
US11783590B2 (en) Method, apparatus, device and medium for classifying driving scenario data
CN108805348B (zh) 一种交叉口信号配时控制优化的方法和装置
KR102160990B1 (ko) 객체 기반의 3d 도시 모델링 방법 및 이를 구현하는 서버, 그리고 이를 이용하는 시스템
CN111797001A (zh) 一种基于SCANeR的自动驾驶仿真测试模型的构建方法
CN106198049A (zh) 真实车辆在环测试系统和方法
CN110716529A (zh) 一种自动驾驶测试用例自动生成方法和装置
CN110189517B (zh) 一种面向车联网隐私保护研究的仿真实验平台
US11842549B2 (en) Method and system for muck truck management in smart city based on internet of things
WO2024016877A1 (zh) 一种面向车路协同的路测感知仿真系统
CN114863706B (zh) 一种面向高速公路的车路协同自动驾驶仿真测试系统及方法
CN113935442A (zh) 汽车自动驾驶功能测试道路的分类方法、设备和存储介质
CN115392015A (zh) 一种基于数字孪生的自动驾驶推演系统和推演方法
JP2023095812A (ja) 車載データ処理方法、装置、電子デバイス、記憶媒体、及びプログラム
CN115857685A (zh) 一种感知算法数据闭环方法及相关装置
CN114356931A (zh) 数据处理方法、装置、存储介质、处理器及电子装置
Chen et al. Workflow for generating 3D urban models from open city data for performance-based urban design
Che et al. An open vehicle-in-the-loop test method for autonomous vehicle
CN113049267A (zh) 一种交通环境融合感知在环vthil传感器物理建模方法
Chen et al. A 5G Cloud Platform and Machine Learning-Based Mobile Automatic Recognition of Transportation Infrastructure Objects
Tarko et al. Guaranteed lidar-aided multi-object tracking at road intersections
CN117079142B (zh) 无人机自动巡检的反注意生成对抗道路中心线提取方法
CN114384940B (zh) 一种应用于民用无人机的嵌入式识别模型获得方法和系统
Li et al. Integrating Adaptive Lighting Database with SHRP 2 Naturalistic Driving Study Data
Box et al. Signal control using vehicle localization probe data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23855961

Country of ref document: EP

Kind code of ref document: A1