WO2022016859A1 - Self-driving simulation rendering method, apparatus, device and readable medium - Google Patents

Self-driving simulation rendering method, apparatus, device and readable medium

Info

Publication number
WO2022016859A1
WO2022016859A1 (PCT/CN2021/076877)
Authority
WO
WIPO (PCT)
Prior art keywords
thread
scene
frame
clipping
tree model
Prior art date
Application number
PCT/CN2021/076877
Other languages
English (en)
French (fr)
Inventor
张雨
龚湛
Original Assignee
苏州浪潮智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司
Priority to US18/005,940, published as US20230351685A1
Publication of WO2022016859A1


Classifications

    • G06T17/005: Tree description, e.g. octree, quadtree
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F30/15: Vehicle, aircraft or watercraft design
    • G06F30/20: Design optimisation, verification or simulation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T15/005: General purpose rendering architectures
    • G06T15/10: Geometric effects
    • G06F2111/10: Numerical modelling
    • G06T2200/16: Indexing scheme for image data processing or generation, in general, involving adaptation to the client's capabilities
    • G06T2210/22: Cropping
    • G06T2210/61: Scene description

Definitions

  • The present invention relates to the technical field of self-driving and, in particular, to a self-driving simulation rendering method, apparatus, device, and readable medium.
  • The self-driving algorithm model is the "brain" of an entire self-driving car and is crucial to a self-driving vehicle. It controls how the vehicle perceives its environment and how it receives and processes data in real time; the system uses these data to make decisions, feed real-time commands back to the chassis execution system, and minimize risk.
  • Simulation testing is the foundational technology for technical verification and for supporting system training, testing, and validation.
  • Self-driving simulation software provides functions such as road generation, scene definition, traffic-flow simulation, control simulation, and sensor simulation.
  • The simulation workflow mainly consists of first creating static scenes (roads, traffic signs, and so on), then creating dynamic traffic flow, obtaining simulated images or video through sensor simulation, and finally transmitting the simulation data to the device under test.
  • When performing sensor simulation, existing self-driving simulation software is mainly divided into several parts: first, generating a simulated environment model from the scene file; second, generating dynamic traffic flow; third, specifying the sensor configuration; and fourth, rendering the scene to obtain images.
  • During scene rendering, the logical relationships of the objects in the current simulated world are obtained; the information of objects within a certain distance around the ego vehicle is then obtained from the vehicle's position, and those objects are rendered; finally, based on information such as the position and orientation of a sensor on the vehicle, the region corresponding to that sensor is determined and the rendered image of that region is captured.
  • The purpose of the embodiments of the present invention is to propose a self-driving simulation rendering method, apparatus, device, and readable medium that model the scene with a tree model and efficiently clip the effective region, ensuring that everything rendered is valid information and reducing wasted resources; at the same time, scene simulation rendering is split into two stages, and a multi-threaded parallel mode is designed around the resources each stage consumes, which speeds up simulation rendering, benefits the real-time performance of the simulation, and effectively improves resource utilization.
  • A self-driving simulation rendering method includes the following steps: loading a scene file, performing scene modeling based on a tree model, and building a static spatial index; configuring dynamic traffic flow to generate a dynamic scene, and configuring the position and orientation of a sensor; and creating a first thread and a second thread, causing the first thread to clip each frame of the dynamic scene based on the tree model, and causing the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.
  • Creating the first thread and the second thread includes: creating corresponding viewports according to the number of sensors, and creating a first thread and a second thread for each viewport; and causing the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.
  • Causing the first thread to clip each frame of the dynamic scene based on the tree model includes: the first thread clipping the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clipping the next frame.
  • The second thread rendering and outputting each frame clipped by the first thread based on the position and orientation of the sensor includes: the second thread rendering the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output; once rendering of the current frame is complete, judging whether the first thread has finished clipping the next frame; and, if it has, causing the second thread to render and output the next frame.
  • Causing the first thread to clip each frame of the dynamic scene based on the tree model includes: judging whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judging its relationship with the clipping region; and, if the region of interest lies within the clipping region, constructing the 3D scene and outputting the clipping result.
  • The method further includes: if the region of interest intersects the clipping region, performing clipping filtering on the tree model, then reconstructing the 3D scene model and outputting the clipping result.
  • Building the static spatial index includes: dividing the scene into four first-level nodes, east, west, south, and north, according to direction; and dividing it into a number of second-level nodes according to the objects in the scene.
  • A self-driving simulation rendering apparatus includes: a modeling module configured to load a scene file, perform scene modeling based on a tree model, and build a static spatial index; a collection module configured to configure dynamic traffic flow to generate a dynamic scene and to configure the position and orientation of a sensor; and a clipping and rendering module configured to create a first thread and a second thread, cause the first thread to clip each frame of the dynamic scene based on the tree model, and cause the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.
  • The clipping and rendering module is further configured to: create corresponding viewports according to the number of sensors, and create a first thread and a second thread for each viewport; and cause the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.
  • The clipping and rendering module is further configured such that: the first thread clips the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clips the next frame.
  • The clipping and rendering module is further configured such that: the second thread renders the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output, based on the position and orientation of the sensor; once rendering of the current frame is complete, it is judged whether the first thread has finished clipping the next frame; and, if so, the second thread renders and outputs the next frame.
  • The clipping and rendering module is further configured to: judge whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judge its relationship with the clipping region; if the region of interest lies within the clipping region, construct the 3D scene and output the clipping result; if the region of interest intersects the clipping region, perform clipping filtering on the tree model, then reconstruct the 3D scene model and output the clipping result.
  • The modeling module is further configured to: divide the scene into four first-level nodes, east, west, south, and north, according to direction; and divide it into a number of second-level nodes according to the objects in the scene.
  • A computer device includes: at least one processor; and a memory storing computer instructions executable on the processor, the instructions implementing the steps of the above method when executed by the processor.
  • A computer-readable storage medium stores a computer program that implements the above method steps when executed by a processor.
  • The present invention has the following beneficial technical effects: the scene is modeled with a tree model and the effective region is clipped efficiently, ensuring that everything rendered is valid information and reducing wasted resources; at the same time, scene simulation rendering is split into two stages, and a multi-threaded parallel mode is designed around the resources each stage consumes, which speeds up simulation rendering, benefits the real-time performance of the simulation, and effectively improves resource utilization.
  • FIG. 1 is a schematic diagram of an embodiment of the self-driving simulation rendering method provided by the present invention;
  • FIG. 2 is a schematic diagram of an embodiment of the self-driving simulation rendering apparatus provided by the present invention;
  • FIG. 3 is a schematic diagram of an embodiment of the computer device provided by the present invention;
  • FIG. 4 is a schematic diagram of an embodiment of the computer-readable storage medium provided by the present invention.
  • FIG. 1 shows a schematic diagram of an embodiment of the self-driving simulation rendering method provided by the present invention; the embodiment includes the steps summarized above.
  • Simulation rendering is divided into two stages; a multi-threaded parallel mode is designed according to the resource consumption of the two stages, and resources are allocated differently for the single-viewport and multi-viewport cases, so that, while simulation quality is guaranteed, resources are allocated more reasonably, effectively raising the maximum simulation frame rate and maximizing resource use.
  • The first stage, corresponding to the first thread, consists mainly of logical operations and consumes CPU (Central Processing Unit) resources, while the second stage, corresponding to the second thread, requires not only logical operations but also picture rendering and therefore consumes both CPU and GPU resources; the processing time of the first stage is thus much shorter than that of the second stage.
  • The two threads run on different CPU cores. Because the first stage takes little time, the scene update for the next frame can be processed in advance, and the parts to render confirmed, while waiting for the second stage to finish rendering; this shortens the rendering time of each frame and improves resource utilization. An illustrative sketch of this hand-off follows.
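  • The following minimal sketch runs the clipping thread one frame ahead of the rendering thread through a single-slot buffer. It is an assumption-laden sketch, not the patented implementation: ClippedScene, ClipFrame, and RenderFrame are hypothetical stand-ins, since the publication describes the split into stages but not its internal API.

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <optional>
#include <thread>

// Hypothetical stand-ins for the two stages (the publication does not name them).
struct ClippedScene { int frame_id; };
ClippedScene ClipFrame(int frame_id) { return ClippedScene{frame_id}; }  // stage 1: CPU-bound tree clipping
void RenderFrame(const ClippedScene& s) { std::printf("rendered frame %d\n", s.frame_id); }  // stage 2: CPU+GPU

int main() {
    constexpr int kFrames = 100;
    std::mutex m;
    std::condition_variable cv;
    std::optional<ClippedScene> slot;  // single-slot hand-off between the two stages
    bool done = false;

    // First thread: clips frame n+1 while the second thread renders frame n.
    std::thread clipper([&] {
        for (int f = 0; f < kFrames; ++f) {
            ClippedScene s = ClipFrame(f);  // cheap stage, runs ahead
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !slot.has_value(); });
            slot = std::move(s);
            cv.notify_all();
        }
        std::lock_guard<std::mutex> lk(m);
        done = true;
        cv.notify_all();
    });

    // Second thread: renders each clipped frame as soon as it is handed off.
    std::thread renderer([&] {
        while (true) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return slot.has_value() || done; });
            if (!slot.has_value() && done) break;
            ClippedScene s = std::move(*slot);
            slot.reset();
            cv.notify_all();
            lk.unlock();
            RenderFrame(s);  // expensive stage overlaps the next clip
        }
    });

    clipper.join();
    renderer.join();
}
```

  • Because the slot holds at most one clipped frame, the first thread can run at most one frame ahead, matching the described behavior of preparing the next frame's scene update while the current frame renders.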
  • Creating the first thread and the second thread includes: creating corresponding viewports according to the number of sensors, and creating a first thread and a second thread for each viewport; and causing the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.
  • Multi-viewport simulation is generally used to test a vehicle with multiple installed sensors; one viewport represents one sensor, and multi-viewport simulation is the more common case in simulation testing.
  • Multi-viewport single-threaded mode works like the single-viewport case, processing the two stages of each viewport in sequence. A per-viewport parallel variant is sketched below.
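  • The multi-viewport multi-threaded mode can be sketched as one clip/render pipeline per sensor viewport, released together by a barrier so that every viewport's first thread starts clipping the first frame at the same time. RunViewportPipeline is a hypothetical placeholder (for example, the two-thread pipeline sketched earlier); the block needs C++20 for std::barrier.

```cpp
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

// Placeholder for one viewport's two-stage clip/render pipeline (assumed).
void RunViewportPipeline(int viewport, int num_frames) {
    std::printf("viewport %d: simulating %d frames\n", viewport, num_frames);
}

// One viewport per sensor; the barrier releases all first threads together.
void RunAllViewports(int num_sensors, int num_frames) {
    std::barrier<> sync(num_sensors);
    std::vector<std::thread> workers;
    for (int v = 0; v < num_sensors; ++v) {
        workers.emplace_back([&sync, v, num_frames] {
            sync.arrive_and_wait();  // all viewports begin clipping frame 1 at once
            RunViewportPipeline(v, num_frames);
        });
    }
    for (auto& w : workers) w.join();
}

int main() { RunAllViewports(2, 100); }  // e.g. the dual-viewport test below
```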
  • As an example configuration, the server is an Inspur NF5280M5; the CPU is a Gold 6130 CPU @ 2.10 GHz; the graphics cards are 4x 1080 Ti with 11 GB of video memory per card; and the software environment is the Ubuntu 18.04.4 LTS operating system.
  • The method of the present invention is used to run a rendering test on a figure-eight loop driving scene.
  • In single-threaded mode, the measured frame rate is 40-50 Hz with 83% GPU utilization; in multi-threaded mode, the measured frame rate is 50-60 Hz with 97% GPU utilization.
  • Causing the first thread to clip each frame of the dynamic scene based on the tree model includes: the first thread clipping the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clipping the next frame.
  • The second thread rendering and outputting each frame clipped by the first thread based on the position and orientation of the sensor includes: the second thread rendering the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output; once rendering of the current frame is complete, judging whether the first thread has finished clipping the next frame; and, if it has, causing the second thread to render and output the next frame.
  • The second thread uses OpenGL (Open Graphics Library) to render the determined 3D scene into a 2D picture for output, according to the position and orientation of the sensor.
  • Causing the first thread to clip each frame of the dynamic scene based on the tree model includes: judging whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judging its relationship with the clipping region; and, if the region of interest lies within the clipping region, constructing the 3D scene and outputting the clipping result.
  • The method further includes: if the region of interest intersects the clipping region, performing clipping filtering on the tree model, then reconstructing the 3D scene model and outputting the clipping result.
  • With the binary tree model, it is quickly confirmed which scenes, vehicles, and so on are included in the part to be rendered. The region corresponding to a sensor is obtained from the position and orientation of that sensor on the ego vehicle, that is, a certain part of the corresponding tree model; the objects to be rendered are obtained from this range and then mapped to the object information contained in the binary tree model. Scene clipping is based on the tree model: its data layering makes it easy to extract the data layers of interest, so only a given region needs to be processed and data outside the clipping region is excluded outright. The tree model is filtered and clipped level by level to obtain the clipping result, and the result is then reconstructed according to the tree model to obtain the scene located within the clipping region.
  • When obtaining simulated object information, filtering is first performed according to the sensor configuration, removing object information that does not need to be rendered and avoiding wasted resources. A sketch of this level-by-level filtering follows.
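  • The level-by-level filtering can be sketched as follows, assuming axis-aligned bounding regions on the tree nodes (the publication does not specify how node regions are represented): subtrees disjoint from the clipping region are excluded outright, fully contained subtrees are taken whole, and intersecting subtrees are recursed into.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// Hypothetical axis-aligned region and node layout (assumptions, not the
// publication's data structures).
struct AABB { float minx, miny, maxx, maxy; };

bool Disjoint(const AABB& a, const AABB& b) {
    return a.maxx < b.minx || b.maxx < a.minx || a.maxy < b.miny || b.maxy < a.miny;
}
bool Contains(const AABB& outer, const AABB& inner) {
    return outer.minx <= inner.minx && outer.miny <= inner.miny &&
           outer.maxx >= inner.maxx && outer.maxy >= inner.maxy;
}

struct SceneNode {
    AABB region;
    std::vector<int> object_ids;                        // objects stored at this node
    std::vector<std::unique_ptr<SceneNode>> children;
};

// Take every object in a subtree (used when it lies fully inside the clip region).
void CollectAll(const SceneNode& n, std::vector<int>& out) {
    out.insert(out.end(), n.object_ids.begin(), n.object_ids.end());
    for (const auto& c : n.children) CollectAll(*c, out);
}

// Level-by-level filtering: exclude disjoint subtrees, take contained ones
// whole, and recurse into intersecting ones.
void ClipTree(const SceneNode& n, const AABB& clip, std::vector<int>& out) {
    if (Disjoint(n.region, clip)) return;               // excluded outright
    if (Contains(clip, n.region)) { CollectAll(n, out); return; }
    out.insert(out.end(), n.object_ids.begin(), n.object_ids.end());
    for (const auto& c : n.children) ClipTree(*c, clip, out);
}

int main() {
    SceneNode root{{0, 0, 100, 100}, {}, {}};
    root.children.push_back(std::make_unique<SceneNode>(
        SceneNode{{0, 0, 50, 50}, {1, 2}, {}}));        // south-west block
    root.children.push_back(std::make_unique<SceneNode>(
        SceneNode{{50, 50, 100, 100}, {3}, {}}));       // north-east block
    std::vector<int> visible;
    ClipTree(root, AABB{40, 40, 60, 60}, visible);      // sensor-derived clip region
    for (int id : visible) std::printf("render object %d\n", id);
}
```

  • For brevity the sketch keeps an intersecting node's own objects wholesale; a fuller version would test each such object's bounds against the clipping region before keeping it.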
  • Building the static spatial index includes: dividing the scene into four first-level nodes, east, west, south, and north, according to direction; and dividing it into a number of second-level nodes according to the objects in the scene.
  • The information of every object in the simulation is specified according to the tree model, enabling efficient querying and pruning of scene object information.
  • The scene content is divided into multiple subordinate nodes according to direction and other information, and an efficient spatial index is then built on top of the tree model.
  • The tree model's data layering facilitates data extraction and hence dynamic clipping: the objects that need to be rendered can be clipped out quickly from the sensor information and the tree model, reducing unnecessary resource consumption and improving rendering efficiency. A minimal index sketch follows.
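  • The two-level static index can be sketched as a small nested map: four first-level nodes by compass direction, each holding second-level nodes keyed by object category. The category names here are illustrative assumptions; the publication states only that second-level nodes follow the scene's objects.

```cpp
#include <map>
#include <string>
#include <vector>

enum class Direction { East, West, South, North };

struct StaticIndex {
    // direction -> category (e.g. "road", "traffic_sign") -> object ids
    std::map<Direction, std::map<std::string, std::vector<int>>> nodes;

    void Insert(Direction d, const std::string& category, int object_id) {
        nodes[d][category].push_back(object_id);
    }

    // Query one branch of the tree, e.g. all traffic signs east of the origin.
    const std::vector<int>* Lookup(Direction d, const std::string& category) const {
        auto dit = nodes.find(d);
        if (dit == nodes.end()) return nullptr;
        auto cit = dit->second.find(category);
        return cit == dit->second.end() ? nullptr : &cit->second;
    }
};

int main() {
    StaticIndex index;
    index.Insert(Direction::East, "road", 1);
    index.Insert(Direction::East, "traffic_sign", 2);
    index.Insert(Direction::North, "road", 3);
    // Only the eastern traffic-sign branch is touched by this lookup.
    if (auto* ids = index.Lookup(Direction::East, "traffic_sign")) {
        (void)ids;  // object 2 is the only eastern traffic sign here
    }
}
```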
  • FIG. 2 shows a schematic diagram of an embodiment of the self-driving simulation rendering apparatus provided by the present invention.
  • The embodiment of the present invention includes the following modules: a modeling module S11 configured to load a scene file, perform scene modeling based on a tree model, and build a static spatial index; a collection module S12 configured to configure dynamic traffic flow to generate a dynamic scene and to configure the position and orientation of a sensor; and a clipping and rendering module S13 configured to create a first thread and a second thread, cause the first thread to clip each frame of the dynamic scene based on the tree model, and cause the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.
  • The clipping and rendering module S13 is further configured to: create corresponding viewports according to the number of sensors, and create a first thread and a second thread for each viewport; and cause the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.
  • The clipping and rendering module S13 is further configured such that: the first thread clips the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clips the next frame.
  • The clipping and rendering module S13 is further configured such that: the second thread renders the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output, based on the position and orientation of the sensor; once rendering of the current frame is complete, it is judged whether the first thread has finished clipping the next frame; and, if so, the second thread renders and outputs the next frame.
  • The clipping and rendering module S13 is further configured to: judge whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judge its relationship with the clipping region; if the region of interest lies within the clipping region, construct the 3D scene and output the clipping result; if the region of interest intersects the clipping region, perform clipping filtering on the tree model, then reconstruct the 3D scene model and output the clipping result.
  • The modeling module S11 is further configured to: divide the scene into four first-level nodes, east, west, south, and north, according to direction; and divide it into a number of second-level nodes according to the objects in the scene.
  • FIG. 3 shows a schematic diagram of an embodiment of the computer device provided by the present invention.
  • The embodiment of the present invention includes the following: at least one processor S21; and a memory S22, which stores computer instructions S23 executable on the processor; when executed by the processor, the instructions implement the steps of the above method.
  • FIG. 4 shows a schematic diagram of an embodiment of the computer-readable storage medium provided by the present invention.
  • The computer-readable storage medium S31 stores a computer program S32 that performs the above method when executed by a processor.
  • The program of the self-driving simulation rendering method can be stored in a computer-readable storage medium, and when executed, the program can include the processes of the embodiments of the foregoing methods.
  • The storage medium of the program may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
  • The methods disclosed according to the embodiments of the present invention may also be implemented as a computer program executed by a processor, and the computer program may be stored in a computer-readable storage medium.
  • When the computer program is executed by the processor, it performs the above-described functions defined in the methods disclosed in the embodiments of the present invention.
  • The above method steps and system units may also be implemented using a controller and a computer-readable storage medium storing a computer program that causes the controller to implement the functions of the above steps or units.
  • Functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code.
  • Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another.
  • A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • The computer-readable medium may include RAM, ROM, EEPROM (Electrically Erasable Programmable Read-Only Memory), CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. Also, any connection is properly termed a computer-readable medium.
  • Disks and discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray discs, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A self-driving simulation rendering method and apparatus, a computer device, and a readable storage medium. The method includes: loading a scene file, performing scene modeling based on a tree model, and building a static spatial index (S01); configuring dynamic traffic flow to generate a dynamic scene, and configuring the position and orientation of a sensor (S02); and creating a first thread and a second thread, causing the first thread to clip each frame of the dynamic scene based on the tree model, and causing the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor (S03). The scene is modeled with a tree model and the effective region is clipped efficiently, ensuring that everything rendered is valid information and reducing wasted resources; at the same time, scene simulation rendering is split into two stages, and a multi-threaded parallel mode is designed around the resources each stage consumes, which speeds up simulation rendering, benefits the real-time performance of the simulation, and effectively improves resource utilization.

Description

Self-driving simulation rendering method, apparatus, device and readable medium

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 23, 2020 under application number 202010717480.9 and entitled "Self-driving simulation rendering method, apparatus, device and readable medium", the entire contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the technical field of self-driving and, in particular, to a self-driving simulation rendering method, apparatus, device, and readable medium.

Background

The self-driving algorithm model is the "brain" of an entire self-driving car and is crucial to a self-driving vehicle. It controls how the vehicle perceives its environment and how it receives and processes data in real time; the system uses these data to make decisions, feed real-time commands back to the chassis execution system, and minimize risk.

For self-driving, simulation testing is the foundational technology for technical verification and for supporting system training, testing, and validation. Self-driving simulation software provides functions such as road generation, scene definition, traffic-flow simulation, control simulation, and sensor simulation. The simulation workflow mainly consists of first creating static scenes (roads, traffic signs, and so on), then creating dynamic traffic flow, obtaining simulated images or video through sensor simulation, and finally transmitting the simulation data to the device under test.

When performing sensor simulation, existing self-driving simulation software is mainly divided into several parts: first, generating a simulated environment model from the scene file; second, generating dynamic traffic flow; third, specifying the sensor configuration; and fourth, rendering the scene to obtain images.

During scene rendering, the logical relationships of the objects in the current simulated world are obtained; the information of objects within a certain distance around the ego vehicle is then obtained from the vehicle's position, and those objects are rendered; finally, based on information such as the position and orientation of a sensor on the vehicle, the region corresponding to that sensor is determined and the rendered image of that region is captured.

When rendering a simulation scene, the prior art obtains the information of all objects within a certain distance around the ego vehicle. The information needed by the sensor is included, but some of the information is not needed, which wastes resources. Moreover, multi-sensor simulation requires rendering multiple viewports at the same time, which is inefficient and leaves GPU (Graphics Processing Unit) resources underused.
Summary of the Invention

In view of this, the purpose of the embodiments of the present invention is to propose a self-driving simulation rendering method, apparatus, device, and readable medium that model the scene with a tree model and efficiently clip the effective region, ensuring that everything rendered is valid information and reducing wasted resources; at the same time, scene simulation rendering is split into two stages, and a multi-threaded parallel mode is designed around the resources each stage consumes, which speeds up simulation rendering, benefits the real-time performance of the simulation, and effectively improves resource utilization.

Based on the above purpose, one aspect of the embodiments of the present invention provides a self-driving simulation rendering method, including the following steps: loading a scene file, performing scene modeling based on a tree model, and building a static spatial index; configuring dynamic traffic flow to generate a dynamic scene, and configuring the position and orientation of a sensor; and creating a first thread and a second thread, causing the first thread to clip each frame of the dynamic scene based on the tree model, and causing the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.

In some embodiments, creating the first thread and the second thread includes: creating corresponding viewports according to the number of sensors, and creating a first thread and a second thread for each viewport; and causing the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.

In some embodiments, causing the first thread to clip each frame of the dynamic scene based on the tree model includes: the first thread clipping the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clipping the next frame.

In some embodiments, the second thread rendering and outputting each frame clipped by the first thread based on the position and orientation of the sensor includes: the second thread rendering the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output, based on the position and orientation of the sensor; once rendering of the current frame is complete, judging whether the first thread has finished clipping the next frame; and, if it has, causing the second thread to render and output the next frame.

In some embodiments, causing the first thread to clip each frame of the dynamic scene based on the tree model includes: judging whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judging its relationship with the clipping region; and, if the region of interest lies within the clipping region, constructing the 3D scene and outputting the clipping result.

In some embodiments, the method further includes: if the region of interest intersects the clipping region, performing clipping filtering on the tree model, then reconstructing the 3D scene model and outputting the clipping result.

In some embodiments, building the static spatial index includes: dividing the scene into four first-level nodes, east, west, south, and north, according to direction; and dividing it into a number of second-level nodes according to the objects in the scene.

Another aspect of the embodiments of the present invention provides a self-driving simulation rendering apparatus, including: a modeling module configured to load a scene file, perform scene modeling based on a tree model, and build a static spatial index; a collection module configured to configure dynamic traffic flow to generate a dynamic scene and to configure the position and orientation of a sensor; and a clipping and rendering module configured to create a first thread and a second thread, cause the first thread to clip each frame of the dynamic scene based on the tree model, and cause the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.

In some embodiments, the clipping and rendering module is further configured to: create corresponding viewports according to the number of sensors, and create a first thread and a second thread for each viewport; and cause the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.

In some embodiments, the clipping and rendering module is further configured such that: the first thread clips the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clips the next frame.

In some embodiments, the clipping and rendering module is further configured such that: the second thread renders the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output, based on the position and orientation of the sensor; once rendering of the current frame is complete, it is judged whether the first thread has finished clipping the next frame; and, if so, the second thread renders and outputs the next frame.

In some embodiments, the clipping and rendering module is further configured to: judge whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judge its relationship with the clipping region; if the region of interest lies within the clipping region, construct the 3D scene and output the clipping result; if the region of interest intersects the clipping region, perform clipping filtering on the tree model, then reconstruct the 3D scene model and output the clipping result.

In some embodiments, the modeling module is further configured to: divide the scene into four first-level nodes, east, west, south, and north, according to direction; and divide it into a number of second-level nodes according to the objects in the scene.

Yet another aspect of the embodiments of the present invention provides a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions implementing the steps of the above method when executed by the processor.

Yet another aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program that implements the above method steps when executed by a processor.

The present invention has the following beneficial technical effects: the scene is modeled with a tree model and the effective region is clipped efficiently, ensuring that everything rendered is valid information and reducing wasted resources; at the same time, scene simulation rendering is split into two stages, and a multi-threaded parallel mode is designed around the resources each stage consumes, which speeds up simulation rendering, benefits the real-time performance of the simulation, and effectively improves resource utilization.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other embodiments from these drawings without creative effort.

FIG. 1 is a schematic diagram of an embodiment of the self-driving simulation rendering method provided by the present invention;

FIG. 2 is a schematic diagram of an embodiment of the self-driving simulation rendering apparatus provided by the present invention;

FIG. 3 is a schematic diagram of an embodiment of the computer device provided by the present invention;

FIG. 4 is a schematic diagram of an embodiment of the computer-readable storage medium provided by the present invention.
Detailed Description

To make the purposes, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to specific embodiments and the accompanying drawings.

It should be noted that all uses of "first" and "second" in the embodiments of the present invention serve to distinguish two entities or parameters that share a name but are not identical. "First" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention; subsequent embodiments will not explain this again one by one.

Based on the above purpose, the first aspect of the embodiments of the present invention proposes embodiments of a self-driving simulation rendering method. FIG. 1 shows a schematic diagram of an embodiment of the self-driving simulation rendering method provided by the present invention. As shown in FIG. 1, the embodiment of the present invention includes the following steps:

S1. Load a scene file, perform scene modeling based on a tree model, and build a static spatial index;

S2. Configure dynamic traffic flow to generate a dynamic scene, and configure the position and orientation of a sensor; and

S3. Create a first thread and a second thread, cause the first thread to clip each frame of the dynamic scene based on the tree model, and cause the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.

In this embodiment, simulation rendering is divided into two stages, a multi-threaded parallel mode is designed according to the resource consumption of the two stages, and resources are allocated differently for the single-viewport and multi-viewport cases; on the premise of guaranteeing simulation quality, resources are allocated more reasonably, effectively raising the maximum simulation frame rate and maximizing resource use. The first stage, corresponding to the first thread, consists mainly of logical operations and consumes CPU (Central Processing Unit) resources, while the second stage, corresponding to the second thread, requires not only logical operations but also picture rendering and therefore consumes both CPU and GPU resources; the processing time of the first stage is thus much shorter than that of the second stage. The two threads run on different CPU cores. Because the first stage takes little time, the scene update for the next frame can be processed in advance, and the parts to render confirmed, while waiting for the second stage to finish rendering; this shortens the rendering time of each frame and improves resource utilization.

In some embodiments of the present invention, creating the first thread and the second thread includes: creating corresponding viewports according to the number of sensors, and creating a first thread and a second thread for each viewport; and causing the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.

In this embodiment, multi-viewport simulation is generally used to test a vehicle with multiple installed sensors; one viewport represents one sensor, and multi-viewport simulation is the more common case in simulation testing. Multi-viewport single-threaded mode works like the single-viewport case, processing the two stages of each viewport in sequence.

In this embodiment, the following configuration is taken as an example: the server is an Inspur NF5280M5; the CPU is a Gold 6130 CPU @ 2.10 GHz; the graphics cards are 4x 1080 Ti with 11 GB of video memory per card; and the software environment is the Ubuntu 18.04.4 LTS operating system. Based on existing self-driving simulation software, the method of the present invention is used to run a rendering test on a figure-eight loop driving scene; the test uses a dual-viewport configuration, and the simulation frame rate and resource consumption are measured separately in single-threaded and multi-threaded modes.

A scene file is loaded for scene modeling to establish the static scene; dynamic traffic flow is configured to generate the dynamic scene; the position, orientation, and other properties of the sensors are configured; and the sensors' simulated images are rendered. In single-threaded mode, the measured frame rate is 40-50 Hz with 83% GPU utilization; in multi-threaded mode, the measured frame rate is 50-60 Hz with 97% GPU utilization.
In some embodiments of the present invention, causing the first thread to clip each frame of the dynamic scene based on the tree model includes: the first thread clipping the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clipping the next frame.

In some embodiments of the present invention, the second thread rendering and outputting each frame clipped by the first thread based on the position and orientation of the sensor includes: the second thread rendering the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output, based on the position and orientation of the sensor; once rendering of the current frame is complete, judging whether the first thread has finished clipping the next frame; and, if it has, causing the second thread to render and output the next frame.

In this embodiment, the second thread uses OpenGL (Open Graphics Library) to render the determined 3D scene into a 2D picture for output, according to the position and orientation of the sensor.

In some embodiments of the present invention, causing the first thread to clip each frame of the dynamic scene based on the tree model includes: judging whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judging its relationship with the clipping region; and, if the region of interest lies within the clipping region, constructing the 3D scene and outputting the clipping result.

In some embodiments of the present invention, the method further includes: if the region of interest intersects the clipping region, performing clipping filtering on the tree model, then reconstructing the 3D scene model and outputting the clipping result.

In this embodiment, the binary tree model is used to quickly confirm which scenes, vehicles, and so on are included in the part to be rendered. The region corresponding to a sensor is obtained from the position and orientation of that sensor on the ego vehicle, that is, a certain part of the corresponding tree model; the objects to be rendered are obtained from this range and then mapped to the object information contained in the binary tree model. Scene clipping is based on the tree model: its data layering makes it easy to extract the data layers of interest, so only a given region needs to be processed and data outside the clipping region is excluded outright. The tree model is filtered and clipped level by level to obtain the clipping result, and the result is then reconstructed according to the tree model to obtain the scene located within the clipping region. When obtaining simulated object information, filtering is first performed according to the sensor configuration, removing object information that does not need to be rendered and avoiding wasted resources.

In some embodiments of the present invention, building the static spatial index includes: dividing the scene into four first-level nodes, east, west, south, and north, according to direction; and dividing it into a number of second-level nodes according to the objects in the scene.

In this embodiment, the information of every object in the simulation is specified according to the tree model, enabling efficient querying and pruning of scene object information. The scene content is divided into multiple subordinate nodes according to direction and other information, and an efficient spatial index is then built on top of the tree model. The tree model's data layering facilitates data extraction and hence dynamic clipping: the objects that need to be rendered can be clipped out quickly from the sensor information and the tree model, reducing unnecessary resource consumption and improving rendering efficiency.
It should be particularly noted that the steps in the embodiments of the self-driving simulation rendering method described above can be interleaved, replaced, added to, or deleted from one another; these reasonable permutations and combinations therefore also fall within the protection scope of the present invention, and the protection scope of the present invention should not be limited to the embodiments.

Based on the above purpose, the second aspect of the embodiments of the present invention proposes a self-driving simulation rendering apparatus. FIG. 2 shows a schematic diagram of an embodiment of the self-driving simulation rendering apparatus provided by the present invention. As shown in FIG. 2, the embodiment of the present invention includes the following modules: a modeling module S11 configured to load a scene file, perform scene modeling based on a tree model, and build a static spatial index; a collection module S12 configured to configure dynamic traffic flow to generate a dynamic scene and to configure the position and orientation of a sensor; and a clipping and rendering module S13 configured to create a first thread and a second thread, cause the first thread to clip each frame of the dynamic scene based on the tree model, and cause the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.

In some embodiments of the present invention, the clipping and rendering module S13 is further configured to: create corresponding viewports according to the number of sensors, and create a first thread and a second thread for each viewport; and cause the first threads of all viewports to begin clipping the first frame of the dynamic scene simultaneously.

In some embodiments of the present invention, the clipping and rendering module S13 is further configured such that: the first thread clips the current frame of the dynamic scene based on the tree model; and, once clipping of the current frame is complete, clips the next frame.

In some embodiments of the present invention, the clipping and rendering module S13 is further configured such that: the second thread renders the 3D scene obtained from the first thread's clipping of the current frame into a 2D picture for output, based on the position and orientation of the sensor; once rendering of the current frame is complete, it is judged whether the first thread has finished clipping the next frame; and, if so, the second thread renders and outputs the next frame.

In some embodiments of the present invention, the clipping and rendering module S13 is further configured to: judge whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering; if so, further judge its relationship with the clipping region; if the region of interest lies within the clipping region, construct the 3D scene and output the clipping result; if the region of interest intersects the clipping region, perform clipping filtering on the tree model, then reconstruct the 3D scene model and output the clipping result.

In some embodiments of the present invention, the modeling module S11 is further configured to: divide the scene into four first-level nodes, east, west, south, and north, according to direction; and divide it into a number of second-level nodes according to the objects in the scene.

Based on the above purpose, the third aspect of the embodiments of the present invention proposes a computer device. FIG. 3 shows a schematic diagram of an embodiment of the computer device provided by the present invention. As shown in FIG. 3, the embodiment of the present invention includes the following: at least one processor S21; and a memory S22, which stores computer instructions S23 executable on the processor; when executed by the processor, the instructions implement the steps of the above method.

The present invention also provides a computer-readable storage medium. FIG. 4 shows a schematic diagram of an embodiment of the computer-readable storage medium provided by the present invention. As shown in FIG. 4, the computer-readable storage medium S31 stores a computer program S32 that performs the above method when executed by a processor.

Finally, it should be noted that those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program. The program of the self-driving simulation rendering method can be stored in a computer-readable storage medium, and when executed, the program can include the processes of the embodiments of the above methods. The storage medium of the program may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM), among others. The embodiments of the above computer program can achieve effects that are the same as, or similar to, those of any of the corresponding method embodiments described above.

In addition, the methods disclosed according to the embodiments of the present invention may also be implemented as a computer program executed by a processor, which may be stored in a computer-readable storage medium. When this computer program is executed by the processor, it performs the above-described functions defined in the methods disclosed in the embodiments of the present invention.

Furthermore, the above method steps and system units may also be implemented using a controller and a computer-readable storage medium storing a computer program that causes the controller to implement the functions of the above steps or units.

Those skilled in the art will also understand that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends on the particular application and the design constraints imposed on the overall system. Those skilled in the art may implement the functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope disclosed by the embodiments of the present invention.

In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example and not limitation, such computer-readable media may include RAM, ROM, EEPROM (Electrically Erasable Programmable Read-Only Memory), CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or those wireless technologies are included in the definition of medium. As used herein, disks and discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray discs, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

The above are exemplary embodiments disclosed by the present invention, but it should be noted that various changes and modifications can be made without departing from the scope disclosed by the embodiments of the present invention as defined by the claims. The functions, steps, and/or actions of the method claims according to the disclosed embodiments described herein need not be performed in any particular order. In addition, although elements disclosed in the embodiments of the present invention may be described or claimed in the singular, they may also be understood as plural unless explicitly limited to the singular.

It should be understood that, as used herein, the singular form "a/an" is intended to include the plural form as well, unless the context clearly supports an exception. It should also be understood that "and/or" as used herein refers to any and all possible combinations of one or more of the associated listed items.

The serial numbers of the embodiments disclosed above are for description only and do not represent the merits of the embodiments.

Those of ordinary skill in the art can understand that all or part of the steps for implementing the above embodiments can be completed by hardware, or by instructing relevant hardware through a program; the program can be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope disclosed by the embodiments of the present invention (including the claims) is limited to these examples. Under the concept of the embodiments of the present invention, the technical features of the above embodiments or of different embodiments may also be combined, and many other variations of the different aspects of the embodiments of the present invention as described above exist, which are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, improvement, and the like made within the spirit and principles of the embodiments of the present invention shall be included within the protection scope of the embodiments of the present invention.

Claims (10)

  1. A self-driving simulation rendering method, characterized by comprising the following steps:
    loading a scene file, performing scene modeling based on a tree model, and building a static spatial index;
    configuring dynamic traffic flow to generate a dynamic scene, and configuring the position and orientation of a sensor; and
    creating a first thread and a second thread, causing the first thread to clip each frame of the dynamic scene based on the tree model, and causing the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.
  2. The self-driving simulation rendering method according to claim 1, characterized in that creating the first thread and the second thread comprises:
    creating corresponding viewports according to the number of sensors, and creating a first thread and a second thread for each of the viewports; and
    causing the first threads of all of the viewports to begin clipping the first frame of the dynamic scene simultaneously.
  3. The self-driving simulation rendering method according to claim 1, characterized in that causing the first thread to clip each frame of the dynamic scene based on the tree model comprises:
    the first thread clipping the current frame of the dynamic scene based on the tree model; and
    once clipping of the current frame is complete, clipping the next frame.
  4. The self-driving simulation rendering method according to claim 1, characterized in that the second thread rendering and outputting each frame clipped by the first thread based on the position and orientation of the sensor comprises:
    the second thread rendering, based on the position and orientation of the sensor, the 3D scene obtained by the first thread's clipping of the current frame into a 2D picture for output;
    once rendering of the current frame is complete, judging whether the first thread has completed clipping of the next frame; and
    if the first thread has completed clipping of the next frame, causing the second thread to render and output the next frame.
  5. The self-driving simulation rendering method according to claim 1, characterized in that causing the first thread to clip each frame of the dynamic scene based on the tree model comprises:
    judging whether a region of the dynamic scene corresponding to the tree model is a region of interest for rendering;
    if it is a region of interest, further judging its relationship with a clipping region; and
    if the region of interest lies within the clipping region, constructing a 3D scene and outputting the clipping result.
  6. The self-driving simulation rendering method according to claim 5, characterized by further comprising:
    if the region of interest intersects the clipping region, performing clipping filtering on the tree model; and
    reconstructing the 3D scene model and outputting the clipping result.
  7. The self-driving simulation rendering method according to claim 1, characterized in that building the static spatial index comprises:
    dividing the scene into four first-level nodes, east, west, south, and north, according to its direction; and
    dividing it into a number of second-level nodes according to the objects in the scene.
  8. A self-driving simulation rendering apparatus, characterized by comprising:
    a modeling module configured to load a scene file, perform scene modeling based on a tree model, and build a static spatial index;
    a collection module configured to configure dynamic traffic flow to generate a dynamic scene and to configure the position and orientation of a sensor; and
    a clipping and rendering module configured to create a first thread and a second thread, cause the first thread to clip each frame of the dynamic scene based on the tree model, and cause the second thread to render and output each frame clipped by the first thread based on the position and orientation of the sensor.
  9. A computer device, characterized by comprising:
    at least one processor; and
    a memory storing computer instructions executable on the processor, wherein the instructions, when executed by the processor, implement the steps of the method according to any one of claims 1-7.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
PCT/CN2021/076877 2020-07-23 2021-02-19 Self-driving simulation rendering method, apparatus, device and readable medium WO2022016859A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/005,940 US20230351685A1 (en) 2020-07-23 2021-02-19 Self-Driving Simulation Rendering Method and Apparatus, Device, and Readable Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010717480.9A 2020-07-23 2020-07-23 Self-driving simulation rendering method, apparatus, device and readable medium
CN202010717480.9 2020-07-23

Publications (1)

Publication Number Publication Date
WO2022016859A1 (zh)

Family

ID=72949806

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076877 2020-07-23 2021-02-19 Self-driving simulation rendering method, apparatus, device and readable medium WO2022016859A1 (zh)

Country Status (3)

Country Link
US (1) US20230351685A1 (zh)
CN (1) CN111862314B (zh)
WO (1) WO2022016859A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862314B (zh) 2020-07-23 2022-05-13 苏州浪潮智能科技有限公司 Self-driving simulation rendering method, apparatus, device and readable medium
CN113592992A (zh) 2021-08-09 2021-11-02 郑州捷安高科股份有限公司 Rendering method and apparatus for simulated driving of rail transit
WO2024087021A1 (zh) 2022-10-25 2024-05-02 西门子股份公司 Rendering method, system, electronic device and computer medium
CN117076143B (zh) 2023-10-18 2024-01-26 腾讯科技(深圳)有限公司 Method, apparatus, device and medium for processing equipment resources

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7015913B1 (en) * 2003-06-27 2006-03-21 Nvidia Corporation Method and apparatus for multithreaded processing of data in a programmable graphics processor
CN103914868A * 2013-12-20 2014-07-09 柳州腾龙煤电科技股份有限公司 Dynamic scheduling and real-time asynchronous loading method for massive model data in virtual reality
CN104102488A * 2014-07-18 2014-10-15 无锡梵天信息技术股份有限公司 Multi-thread parallelization based 3D engine system
CN110779730A * 2019-08-29 2020-02-11 浙江零跑科技有限公司 Test method for L3-level autonomous driving systems based on vehicle-in-the-loop virtual driving scenarios
CN111862314A (zh) 2020-07-23 2020-10-30 苏州浪潮智能科技有限公司 Self-driving simulation rendering method, apparatus, device and readable medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399729B (zh) 2013-06-28 2016-04-27 广州市动景计算机科技有限公司 HTML5 Canvas application processing method, apparatus and processor
CN105979243A (zh) 2015-12-01 2016-09-28 乐视致新电子科技(天津)有限公司 Processing method and apparatus for displaying stereoscopic images
CN108701164A (zh) 2017-08-25 2018-10-23 深圳市大疆创新科技有限公司 Method, apparatus, storage medium and device for obtaining flight simulation data
CN109101690B (zh) 2018-07-11 2023-05-02 深圳地平线机器人科技有限公司 Method and apparatus for rendering scenes in a vehicle autonomous driving simulator

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7015913B1 (en) * 2003-06-27 2006-03-21 Nvidia Corporation Method and apparatus for multithreaded processing of data in a programmable graphics processor
CN103914868A * 2013-12-20 2014-07-09 柳州腾龙煤电科技股份有限公司 Dynamic scheduling and real-time asynchronous loading method for massive model data in virtual reality
CN104102488A * 2014-07-18 2014-10-15 无锡梵天信息技术股份有限公司 Multi-thread parallelization based 3D engine system
CN110779730A * 2019-08-29 2020-02-11 浙江零跑科技有限公司 Test method for L3-level autonomous driving systems based on vehicle-in-the-loop virtual driving scenarios
CN111862314A (zh) 2020-07-23 2020-10-30 苏州浪潮智能科技有限公司 Self-driving simulation rendering method, apparatus, device and readable medium

Also Published As

Publication number Publication date
CN111862314B (zh) 2022-05-13
CN111862314A (zh) 2020-10-30
US20230351685A1 (en) 2023-11-02

Similar Documents

Publication Publication Date Title
WO2022016859A1 (zh) Self-driving simulation rendering method, apparatus, device and readable medium
JP6898534B2 (ja) System and method for reducing data storage in machine learning
CN105045663B (zh) Method and system for rapidly deploying virtual machines
CN109816762A (zh) Image rendering method and apparatus, electronic device, and storage medium
WO2021052169A1 (zh) Balanced processing method and apparatus for distributed data, computing terminal, and storage medium
CN104751507A (zh) Graphics content rendering method and apparatus
CN110096268A (zh) System and method for live secondary editing of executable programs of VR/AR/MR devices
CN114429528A (zh) Image processing method, apparatus, device, computer program, and storage medium
CN111382647A (zh) Picture processing method, apparatus, device, and storage medium
CN113018867A (zh) Special-effects file generation and playback method, electronic device, and storage medium
CN115482699A (zh) Virtual driving video teaching method, system, storage medium, and device
CN115439637A (zh) In-vehicle augmented reality rendering method, system, vehicle, and storage medium
JP7111873B2 (ja) Signal light identification method, apparatus, device, storage medium, and program
CN111640191B (zh) Screen-casting and screen-recording image capture and processing method based on an all-in-one VR headset
CN115937352A (zh) Mine scene simulation method, system, electronic device, and storage medium
CN104216951A (zh) Method for implementing augmented reality based on a mobile terminal, and mobile terminal
CN110609861A (zh) Vehicle identification method, apparatus, electronic device, and storage medium
CN114173059A (zh) Video editing system, method, and apparatus
CN113963204A (zh) Siamese-network target tracking system and method
CN114419018A (zh) Image sampling method, system, device, and medium
CN113505861A (zh) Image classification method and system based on meta-learning and memory networks
CN112395695A (zh) Method and system for building simulation scenes in real time
CN112363689A (zh) Visual editing method, device, and storage medium for Unity3D game animation data
CN116012532A (zh) Method and system for lightweighting real-scene 3D models
CN111666863B (zh) Video processing method, apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21845608; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21845608; Country of ref document: EP; Kind code of ref document: A1)