CN112734923A - Method, device and equipment for constructing automatic driving three-dimensional virtual scene and storage medium - Google Patents

Method, device and equipment for constructing automatic driving three-dimensional virtual scene and storage medium

Info

Publication number
CN112734923A
CN112734923A (application CN202110061838.1A)
Authority
CN
China
Prior art keywords
constructing, automatic driving, description language information, three-dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110061838.1A
Other languages
Chinese (zh)
Other versions
CN112734923B (en)
Inventor
肖猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202110061838.1A
Publication of CN112734923A
Application granted; publication of CN112734923B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 — Geographic models
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/003 — Navigation within 3D models or images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides a method, an apparatus, a device, and a storage medium for constructing an automatic driving three-dimensional virtual scene. The method comprises: acquiring description language information of a target scene; and constructing and rendering the target scene according to the description language information based on a preset automatic driving three-dimensional human-machine interaction engine. According to the embodiments of the present application, the efficiency of constructing three-dimensional virtual scenes can be improved.

Description

Method, device and equipment for constructing automatic driving three-dimensional virtual scene and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for constructing an automatic driving three-dimensional virtual scene, an electronic device, and a computer storage medium.
Background
With the rapid development of automatic driving technology, Human-Machine Interaction (HMI) has drawn wide attention in the automatic driving field.
At present, constructing an automatic driving three-dimensional virtual scene requires a large amount of software development work, so the construction efficiency of such scenes is low.
Therefore, how to improve the efficiency of three-dimensional virtual scene construction is a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for constructing an automatic driving three-dimensional virtual scene, an electronic device, and a computer storage medium, which can improve the efficiency of three-dimensional virtual scene construction.
In a first aspect, an embodiment of the present application provides a method for constructing an automatic driving three-dimensional virtual scene, including:
acquiring description language information of a target scene;
and constructing and rendering the target scene according to the description language information based on a preset automatic driving three-dimensional human-machine interaction engine.
Optionally, the target scene includes a plurality of environment elements, and each environment element is presented as a three-dimensional model.
Optionally, constructing and rendering the target scene according to the description language information based on the preset automatic driving three-dimensional human-machine interaction engine includes:
determining, based on the automatic driving three-dimensional human-machine interaction engine, environment elements corresponding to the description language information;
and constructing and rendering the target scene based on the environment elements.
Optionally, when the description language information is path planning display information, constructing and rendering the target scene according to the description language information based on the preset automatic driving three-dimensional human-machine interaction engine includes:
constructing and rendering, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information.
Optionally, after constructing and rendering the target scene, the method further includes:
receiving a display mode adjustment instruction;
and adjusting the display mode of the target scene according to the display mode adjustment instruction.
Optionally, the method further includes:
acquiring at least two real-time video sources;
and dynamically specifying the size and position of a display area based on the at least two real-time video sources.
Optionally, acquiring the description language information of the target scene includes:
collecting user voice information;
and recognizing the user voice information to obtain the description language information.
In a second aspect, an embodiment of the present application provides an apparatus for constructing an automatic driving three-dimensional virtual scene, including:
an acquisition module, configured to acquire description language information of a target scene;
and a construction and rendering module, configured to construct and render the target scene according to the description language information based on a preset automatic driving three-dimensional human-machine interaction engine.
Optionally, the target scene includes a plurality of environment elements, and each environment element is presented as a three-dimensional model.
Optionally, the construction and rendering module is configured to determine, based on the automatic driving three-dimensional human-machine interaction engine, the environment elements corresponding to the description language information, and to construct and render the target scene based on the environment elements.
Optionally, when the description language information is path planning display information, the construction and rendering module is configured to construct and render, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information.
Optionally, the construction and rendering module is further configured to receive a display mode adjustment instruction and to adjust the display mode of the target scene according to the instruction.
Optionally, the acquisition module is further configured to acquire at least two real-time video sources and to dynamically specify the size and position of a display area based on them.
Optionally, the acquisition module is configured to collect user voice information and to recognize it to obtain the description language information.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method of constructing an autonomous three-dimensional virtual scene as shown in the first aspect.
In a fourth aspect, the present application provides a computer storage medium, on which computer program instructions are stored, and when executed by a processor, the method for constructing a three-dimensional virtual scene is implemented, as shown in the first aspect.
The method, the device, the electronic equipment and the computer storage medium for constructing the three-dimensional virtual scene capable of automatically driving can improve the three-dimensional virtual scene constructing efficiency. According to the method for constructing the three-dimensional virtual scene for automatic driving, after the description language information of the target scene is obtained, a rendering target scene is constructed based on a preset automatic driving three-dimensional man-machine interaction engine and according to the description language information. Therefore, the method utilizes the preset automatic driving three-dimensional man-machine interaction engine to automatically construct the rendering target scene in real time according to the description language information, and the construction efficiency of the three-dimensional virtual scene is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a method for constructing an automatic driving three-dimensional virtual scene according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an automatic driving three-dimensional virtual scene constructing apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
To address the above problems in the prior art, embodiments of the present application provide a method and an apparatus for constructing an automatic driving three-dimensional virtual scene, an electronic device, and a computer storage medium. The method is described first below.
Fig. 1 shows a schematic flow chart of the method for constructing an automatic driving three-dimensional virtual scene according to an embodiment of the present application. As shown in Fig. 1, the method includes:
s101, obtaining description language information of a target scene.
In one embodiment, the target scene includes a plurality of environmental elements, each environmental element being rendered in a three-dimensional model. Environmental elements may include lane lines, surrounding vehicles (moving or disabled), parking spaces, pedestrians, riders, drivable areas, and so forth.
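As a purely illustrative sketch (the patent does not define a concrete data model, so every name below is an assumption), such an environment element might be represented as follows in Python:

    from dataclasses import dataclass
    from typing import Optional

    # Element kinds drawn from the examples above; illustrative only.
    ELEMENT_TYPES = {"lane_line", "vehicle", "parking_space",
                     "pedestrian", "rider", "drivable_area"}

    @dataclass
    class EnvironmentElement:
        """One scene element, to be presented as a three-dimensional model."""
        element_type: str                         # e.g. "vehicle"
        model_id: str                             # which 3D asset to instantiate
        pose: tuple = (0.0, 0.0, 0.0)             # x, y, heading in scene coordinates
        state: str = "normal"                     # e.g. "moving", "disabled"
        predicted_behavior: Optional[str] = None  # e.g. "cut_in"

        def __post_init__(self) -> None:
            if self.element_type not in ELEMENT_TYPES:
                raise ValueError(f"unknown element type: {self.element_type}")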
In one embodiment, acquiring the description language information of the target scene includes: collecting user voice information, and recognizing the user voice information to obtain the description language information. This embodiment can therefore support secondary development of User Interface (UI) interaction and multimodal interaction through voice.
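A minimal sketch of this voice path, assuming the third-party SpeechRecognition package and a Mandarin recognizer backend (neither is named in the patent):

    import speech_recognition as sr  # third-party "SpeechRecognition" package

    def capture_description_language() -> str:
        """Collect user speech and return the recognized text."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source)
        # Google Web Speech API backend, Mandarin assumed for illustration.
        return recognizer.recognize_google(audio, language="zh-CN")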
The scene can be defined by a description language: the layout of the whole scene, and the types, states, and predicted behaviors of all elements in it, can be described with a concise language mechanism.
For example, an HMI scenario for highway driving may be described as:
[Figure BDA0002902647220000051: the example scene description is shown as an image in the original publication.]
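Since the original example is only available as an image, the following is a hypothetical reconstruction, expressed as a Python literal; the actual syntax and field names of the description language are not disclosed in this text:

    # Hypothetical highway-driving scene description; all field names
    # and values are illustrative assumptions.
    highway_scene = {
        "layout": "highway_3_lanes",
        "ego": {"lane": 2, "speed_kph": 100},
        "elements": [
            {"type": "lane_line", "style": "dashed", "lanes": [1, 2, 3]},
            {"type": "vehicle", "id": "npc_1", "lane": 1,
             "state": "moving", "predicted_behavior": "cut_in"},
            {"type": "drivable_area", "bounds": "lane_1..lane_3"},
        ],
        "plan": {"display": "path", "waypoints": [[0, 0], [50, 0], [120, 3.5]]},
    }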
s102, constructing a rendering target scene based on a preset automatic driving three-dimensional man-machine interaction engine according to the description language information.
The preset automatic driving three-dimensional human-machine interaction engine can be implemented using Unity and Android technologies, and supports real-time data input with synchronous scene updating.
In one embodiment, constructing and rendering the target scene according to the description language information based on the preset automatic driving three-dimensional human-machine interaction engine includes: determining, based on the engine, the environment elements corresponding to the description language information, and constructing and rendering the target scene based on those elements. The environment elements can thus change synchronously with the description language information input in real time.
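A minimal sketch of this dispatch step, building on the EnvironmentElement sketch above (the registration mechanism and all names are assumptions; the actual engine is described only as Unity/Android based):

    from typing import Callable, Dict, List

    class HmiEngine:
        """Maps description-language entries to environment elements,
        then assembles the scene from the instantiated elements."""

        def __init__(self) -> None:
            self._builders: Dict[str, Callable[[dict], EnvironmentElement]] = {}

        def register(self, element_type: str,
                     builder: Callable[[dict], EnvironmentElement]) -> None:
            self._builders[element_type] = builder

        def build_scene(self, description: dict) -> List[EnvironmentElement]:
            # Determine the environment element corresponding to each
            # description entry, skipping kinds the engine does not know.
            elements: List[EnvironmentElement] = []
            for entry in description.get("elements", []):
                builder = self._builders.get(entry["type"])
                if builder is not None:
                    elements.append(builder(entry))
            return elements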
For example, the ego vehicle and other vehicles can travel smoothly according to real-time execution data. Moreover, in one embodiment, special rendering effects for roads, vehicles, and pedestrians in different states can be supported.
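The "real-time data in, synchronous scene update" behavior could look like the following loop over the engine sketched above; the queue-based feed is an assumption made purely for illustration:

    import queue

    def run_sync_update_loop(engine: "HmiEngine",
                             data_feed: "queue.Queue") -> None:
        """Rebuild the scene each time a new frame of real-time
        description/execution data arrives, keeping the rendered
        scene synchronized with the data source."""
        while True:
            frame = data_feed.get()    # blocks until new real-time data
            if frame is None:          # sentinel: stop the loop
                break
            engine.build_scene(frame)  # synchronous scene update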
In one embodiment, when the description language information is path planning display information, constructing and rendering the target scene includes: constructing and rendering, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information. This embodiment thus supports displaying the path planning aspect of typical automatic driving functions.
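A sketch of how path planning display information might be turned into a renderable planned path; linear interpolation between waypoints is an assumption, as the patent does not specify a curve model:

    from typing import List, Tuple

    def build_planned_path(display_info: dict,
                           samples_per_segment: int = 8) -> List[Tuple[float, float]]:
        """Expand planned waypoints into a dense polyline the engine can render."""
        waypoints = display_info["waypoints"]  # e.g. [[0, 0], [50, 0], [120, 3.5]]
        polyline: List[Tuple[float, float]] = []
        for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
            for i in range(samples_per_segment):
                t = i / samples_per_segment
                polyline.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        polyline.append(tuple(waypoints[-1]))
        return polyline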
In one embodiment, after the target scene is constructed and rendered, the method further includes: receiving a display mode adjustment instruction, and adjusting the display mode of the target scene according to that instruction. This embodiment can support both day and night display modes, with the parameters of the associated ambient atmosphere being adjustable.
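One way the day/night adjustment could be organized; the preset parameters below are illustrative assumptions, since the patent only states that such parameters are adjustable:

    # Illustrative ambient-atmosphere presets.
    DISPLAY_MODES = {
        "day":   {"ambient_light": 1.00, "skybox": "day",   "headlights": False},
        "night": {"ambient_light": 0.25, "skybox": "night", "headlights": True},
    }

    def apply_display_mode(scene_params: dict, instruction: str) -> dict:
        """Adjust the target scene's display mode per the received instruction."""
        preset = DISPLAY_MODES.get(instruction)
        if preset is None:
            raise ValueError(f"unsupported display mode: {instruction}")
        scene_params.update(preset)
        return scene_params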
In one embodiment, the method further includes: acquiring at least two real-time video sources, and dynamically specifying the size and position of a display area based on them. This embodiment can therefore import at least two real-time video sources and dynamically control where, and at what size, each is displayed.
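A sketch of dynamically placing the video display areas; the even horizontal split is just one illustrative policy, since the patent allows arbitrary sizes and positions:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class VideoPane:
        """Display region for one real-time video source (names assumed)."""
        source_uri: str
        x: int          # top-left corner, pixels
        y: int
        width: int
        height: int

    def layout_video_panes(sources: List[str],
                           screen_w: int, screen_h: int) -> List[VideoPane]:
        if len(sources) < 2:
            raise ValueError("at least two real-time video sources expected")
        pane_w = screen_w // len(sources)
        return [VideoPane(src, i * pane_w, 0, pane_w, screen_h)
                for i, src in enumerate(sources)]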
The method uses a preset automatic driving three-dimensional human-machine interaction engine to automatically construct and render the target scene in real time according to the description language information, which improves the construction efficiency of the three-dimensional virtual scene. Developers only need to describe the algorithm results using the description mechanism provided by the engine, which greatly reduces development difficulty and accelerates development. Moreover, scene description and engine rendering are decoupled and can be improved separately.
Fig. 2 is a schematic structural diagram of an apparatus for constructing an automatic driving three-dimensional virtual scene according to an embodiment of the present application. As shown in Fig. 2, the apparatus includes:
an acquisition module 201, configured to acquire description language information of a target scene;
and a construction and rendering module 202, configured to construct and render the target scene according to the description language information based on a preset automatic driving three-dimensional human-machine interaction engine.
In one embodiment, the target scene includes a plurality of environment elements, and each environment element is presented as a three-dimensional model.
In one embodiment, the construction and rendering module 202 is configured to determine, based on the automatic driving three-dimensional human-machine interaction engine, the environment elements corresponding to the description language information, and to construct and render the target scene based on the environment elements.
In one embodiment, when the description language information is path planning display information, the construction and rendering module 202 is configured to construct and render, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information.
In one embodiment, the construction and rendering module 202 is further configured to receive a display mode adjustment instruction and to adjust the display mode of the target scene according to the instruction.
In one embodiment, the acquisition module 201 is further configured to acquire at least two real-time video sources and to dynamically specify the size and position of a display area based on them.
In one embodiment, the acquisition module 201 is configured to collect user voice information and to recognize it to obtain the description language information.
Each module/unit in the apparatus shown in Fig. 2 implements a corresponding step in Fig. 1 and achieves the corresponding technical effect; for brevity, the details are not repeated here.
Fig. 3 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
The electronic device may comprise a processor 301 and a memory 302 in which computer program instructions are stored.
Specifically, the processor 301 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory 302 may include mass storage for data or instructions. By way of example and not limitation, the memory 302 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 302 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the electronic device. In particular embodiments, the memory 302 may be non-volatile solid-state memory.
In one example, the memory 302 may be a read-only memory (ROM), such as mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement any one of the above-described embodiments of the method for constructing an autonomous three-dimensional virtual scene.
In one example, the electronic device may also include a communication interface 303 and a bus 310. As shown in fig. 3, the processor 301, the memory 302, and the communication interface 303 are connected via a bus 310 to complete communication therebetween.
The communication interface 303 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiment of the present application.
The bus 310 includes hardware, software, or both that couple the components of the electronic device to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. The bus 310 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the present application, any suitable bus or interconnect is contemplated.
In addition, embodiments of the present application may be implemented by providing a computer storage medium having computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the above embodiments of the method for constructing an automatic driving three-dimensional virtual scene.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the structural block diagrams above may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, function cards, and the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical discs, hard disks, fiber-optic media, radio-frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments described in this application present some methods or systems as a series of steps or devices. However, the present application is not limited to the described order of steps: the steps may be performed in the order given in the embodiments, in a different order, or simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only the specific embodiments of the present application are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (10)

1. An automatic driving three-dimensional virtual scene construction method is characterized by comprising the following steps:
acquiring description language information of a target scene;
and constructing and rendering the target scene according to the description language information based on a preset automatic driving three-dimensional human-machine interaction engine.
2. The method of claim 1, wherein the target scene comprises a plurality of environment elements, each of which is presented as a three-dimensional model.
3. The method for constructing an automatic driving three-dimensional virtual scene according to claim 2, wherein the constructing and rendering the target scene according to the description language information based on the preset automatic driving three-dimensional human-machine interaction engine comprises:
determining, based on the automatic driving three-dimensional human-machine interaction engine, environment elements corresponding to the description language information;
and constructing and rendering the target scene based on the environment elements.
4. The method for constructing an automatic driving three-dimensional virtual scene according to claim 1, wherein, when the description language information is path planning display information, the constructing and rendering the target scene according to the description language information based on the preset automatic driving three-dimensional human-machine interaction engine comprises:
constructing and rendering, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information.
5. The method of claim 1, wherein after the constructing and rendering of the target scene, the method further comprises:
receiving a display mode adjusting instruction;
and adjusting the display mode of the target scene according to the display mode adjusting instruction.
6. The method of claim 1, further comprising:
acquiring at least two real-time video sources;
and dynamically specifying the size and position of a display area based on the at least two real-time video sources.
7. The method for constructing an automatic driving three-dimensional virtual scene according to any one of claims 1 to 6, wherein the acquiring of the description language information of the target scene comprises:
collecting user voice information;
and recognizing the user voice information to obtain the description language information.
8. An automatic driving three-dimensional virtual scene construction apparatus, characterized by comprising:
an acquisition module, configured to acquire description language information of a target scene;
and a construction and rendering module, configured to construct and render the target scene according to the description language information based on a preset automatic driving three-dimensional human-machine interaction engine.
9. An electronic device, characterized in that the electronic device comprises: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the automatic driving three-dimensional virtual scene construction method according to any one of claims 1 to 7.
10. A computer storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the automatic driving three-dimensional virtual scene construction method according to any one of claims 1 to 7.
CN202110061838.1A 2021-01-18 2021-01-18 Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium Active CN112734923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110061838.1A CN112734923B (en) 2021-01-18 2021-01-18 Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112734923A (en) 2021-04-30
CN112734923B CN112734923B (en) 2024-05-24

Family

ID=75592068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110061838.1A Active CN112734923B (en) 2021-01-18 2021-01-18 Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112734923B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040239670A1 (en) * 2003-05-29 2004-12-02 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
US20160378861A1 (en) * 2012-09-28 2016-12-29 Sri International Real-time human-machine collaboration using big data driven augmented reality technologies
US20170154468A1 (en) * 2015-12-01 2017-06-01 Le Holdings (Beijing) Co., Ltd. Method and electronic apparatus for constructing virtual reality scene model
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN109685904A (en) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 Virtual driving modeling method and system based on virtual reality
US20200074230A1 (en) * 2018-09-04 2020-03-05 Luminar Technologies, Inc. Automatically generating training data for a lidar using simulated vehicles in virtual space
US10665030B1 (en) * 2019-01-14 2020-05-26 Adobe Inc. Visualizing natural language through 3D scenes in augmented reality
CN110779730A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 L3-level automatic driving system testing method based on virtual driving scene vehicle on-ring
CN110795818A (en) * 2019-09-12 2020-02-14 腾讯科技(深圳)有限公司 Method and device for determining virtual test scene, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIAJIE LU et al., "A new framework for automatic 3D scene construction from text description", 2010 IEEE International Conference on Progress in Informatics and Computing, pages 964-968.
RENSHU GU et al., "Efficient Multi-Person Hierarchical 3D Pose Estimation for Autonomous Driving", 2019 IEEE Conference on Multimedia Information Processing and Retrieval, 31 December 2019, pages 163-168.
ZHANG Yanyong et al., "Multi-modal Fusion Based Perception and Computing for Autonomous Driving" (基于多模态融合的自动驾驶感知及计算), Journal of Computer Research and Development (计算机研究与发展), 1 September 2020, pages 1781-1799.
WANG Xianlong, "Research on Automatic Three-Dimensional Modeling of Traffic Scenes" (交通场景的自动三维建模技术研究), China Masters' Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 信息科技辑), pages 138-1444.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690286A (en) * 2022-10-19 2023-02-03 珠海云洲智能科技股份有限公司 Three-dimensional terrain generation method, terminal device and computer-readable storage medium
CN115690286B (en) * 2022-10-19 2023-08-29 珠海云洲智能科技股份有限公司 Three-dimensional terrain generation method, terminal device and computer readable storage medium

Also Published As

Publication number Publication date
CN112734923B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
CN108921200B (en) Method, apparatus, device and medium for classifying driving scene data
EP3627471B1 (en) Method and device for assisting in controlling automatic driving of vehicle, and system
CN113261035B (en) Trajectory prediction method and related equipment
CN112668153A (en) Method, device and equipment for generating automatic driving simulation scene
CN108646752B (en) Control method and device of automatic driving system
CN111046709B (en) Vehicle lane level positioning method and system, vehicle and storage medium
CN112985432B (en) Vehicle navigation method, device, electronic equipment and storage medium
CN111401255B (en) Method and device for identifying bifurcation junctions
CN112650224A (en) Method, device, equipment and storage medium for automatic driving simulation
CN111338232B (en) Automatic driving simulation method and device
CN113177993B (en) Method and system for generating high-precision map in simulation environment
EP3588007B1 (en) Information processing method and information processing device
CN112613469B (en) Target object motion control method and related equipment
CN112249035A (en) Automatic driving method, device and equipment based on general data flow architecture
CN111081044A (en) Automatic driving method, device, equipment and storage medium for automatic driving vehicle
CN116783462A (en) Performance test method of automatic driving system
CN112734923B (en) Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium
CN111024107A (en) Path determining method, device, equipment and storage medium
CN113409393B (en) Method and device for identifying traffic sign
CN112230632B (en) Method, apparatus, device and storage medium for automatic driving
US11694544B2 (en) Traffic safety control method and vehicle-mounted device
CN109656245B (en) Method and device for determining brake position
TWI762887B (en) Traffic safety control method, vehicle-mounted device and readable storage medium
KR20120071750A (en) Electronic apparatus, method and recording medium for providing lane-changing information
CN111857117B (en) GPS message decoder for decoding GPS messages during autopilot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant