CN110647860A - Information rendering method, device, equipment and medium - Google Patents

Information rendering method, device, equipment and medium Download PDF

Info

Publication number
CN110647860A
Authority
CN
China
Prior art keywords
information
rendering
road surface
surface area
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910936332.3A
Other languages
Chinese (zh)
Other versions
CN110647860B (en)
Inventor
李映辉
周志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910936332.3A priority Critical patent/CN110647860B/en
Publication of CN110647860A publication Critical patent/CN110647860A/en
Application granted granted Critical
Publication of CN110647860B publication Critical patent/CN110647860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Abstract

The embodiments of the present application disclose an information rendering method, apparatus, device, and medium, relating to the field of image processing and in particular to intelligent traffic technology. The specific implementation scheme is as follows: a real-scene road image acquired from an image collector is recognized, and a road surface area in the real-scene road image and the positions of target objects within that area are determined; driving assistance information is then rendered into the road surface area while avoiding the target object positions. In this way, the rendered driving assistance information is prevented from covering a preceding vehicle, a pedestrian, or road traffic sign information.

Description

Information rendering method, device, equipment and medium
Technical Field
The embodiment of the application relates to the field of image processing, in particular to an intelligent traffic technology. In particular, the embodiment of the application relates to an information rendering method, an information rendering device, information rendering equipment and an information rendering medium.
Background
AR-HUD is an augmented-reality head-up display technology that superimposes driving assistance information onto the driver's line-of-sight area in a way that is consistent with the actual traffic conditions. By means of AR-HUD technology, the driver's perception of the driving environment can be extended and enhanced.
An existing implementation scheme renders the driving assistance information to a pre-fixed area.
The disadvantage of this scheme is as follows:
the rendered driving assistance information may be overlaid onto a preceding vehicle, a pedestrian, or a road traffic sign (e.g., a guide arrow), making the driving assistance information and/or the road traffic sign information difficult to distinguish and potentially causing a traffic accident.
Disclosure of Invention
The embodiments of the present application provide an information rendering method, apparatus, device, and medium that prevent rendered driving assistance information from covering a preceding vehicle, a pedestrian, or a road traffic sign.
The embodiment of the application provides an information rendering method, which comprises the following steps:
identifying a real-scene road image acquired from an image collector, and determining a road surface area in the real-scene road image and a target object position included in the road surface area;
and rendering auxiliary driving information into the road surface area by avoiding the target object position.
The above embodiment has the following advantages or beneficial effects: by avoiding the target object position when rendering the driving assistance information into the road surface area, the information is prevented from occluding the target object, which improves driving safety.
In addition, the driving assistance information is rendered into the road surface area rather than other areas, so information in other areas is not occluded. Moreover, since the driver pays close attention to the road surface, rendering the driving assistance information there improves the efficiency with which the driver acquires it.
Further, the rendering auxiliary driving information into the road surface area avoiding the target object position includes:
rendering the driving assistance information to the road surface area;
and erasing rendering information at the position of the target object in the road surface area.
Accordingly, the above embodiment has the following advantages or beneficial effects: the driving assistance information is first rendered to the road surface area, and the rendering information at the target object position within that area is then erased, so that the final rendering avoids the target object position. This achieves the avoidance effect with little change to existing rendering code logic.
Further, the rendering the driving assistance information to the road surface area includes:
dividing the road surface area to generate at least two candidate sub-areas;
selecting a target sub-area associated with the auxiliary driving information from at least two candidate sub-areas according to the rendering priority of the auxiliary driving information and the attention of the candidate sub-areas;
rendering the auxiliary driving information to the target sub-area.
Accordingly, the above embodiment has the following advantages or beneficial effects: the target sub-area associated with the driving assistance information is selected from the at least two candidate sub-areas according to the rendering priority of the information and the attention paid to each candidate sub-area, so that information with a high rendering priority is rendered to a sub-area with high attention, improving the efficiency with which the driver acquires it.
Further, the erasing rendering information of the target object position in the road surface area includes:
removing the target object position from the road surface area, and taking the residual area as a target rendering area;
generating an image mask according to the target rendering area;
erasing rendering information at the target object location using the image mask.
Accordingly, the above embodiment has the following advantages or beneficial effects: the target object position is removed from the road surface area and the remaining area is taken as the target rendering area; an image mask is generated from the target rendering area; and the mask is used to erase the rendering information at the target object position, so that the rendering information does not cover the target object.
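Purely as an illustration of the mask-based erasing steps described above, the procedure can be sketched as follows. The array shapes, the RGBA rendering layer, and the bounding-box representation of target object positions are assumptions for this sketch, not details taken from the application:

```python
import numpy as np

def erase_at_target(rendered, road_mask, target_boxes):
    """Erase rendered driving-assistance pixels at target object positions.

    rendered:     H x W x 4 RGBA array holding the rendered information.
    road_mask:    H x W boolean array, True inside the road surface area.
    target_boxes: list of (x0, y0, x1, y1) boxes of detected target objects.
    """
    # Remove each target object position from the road surface area;
    # the remaining True pixels form the target rendering area.
    render_mask = road_mask.copy()
    for x0, y0, x1, y1 in target_boxes:
        render_mask[y0:y1, x0:x1] = False

    # The target rendering area acts as an image mask: rendering
    # information outside it (including at target positions) is erased.
    out = rendered.copy()
    out[~render_mask] = 0
    return out
```

With this sketch, any pixel that falls inside a target box is zeroed out, while rendering inside the rest of the road surface area is left untouched.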
An embodiment of the present application further provides an information rendering apparatus, including:
the image identification module is used for identifying the real-scene road image acquired from the image collector and determining a road surface area in the real-scene road image and a target object position included in the road surface area;
and the information rendering module is used for avoiding the position of the target object and rendering the auxiliary driving information into the road surface area.
Further, the information rendering module includes:
an information rendering unit configured to render the driving assistance information to the road surface area;
an information erasing unit configured to erase rendering information at a target object position in the road surface area.
Further, the information rendering unit is specifically configured to:
dividing the road surface area to generate at least two candidate sub-areas; selecting a target sub-area associated with the auxiliary driving information from at least two candidate sub-areas according to the rendering priority of the auxiliary driving information and the attention of the candidate sub-areas;
rendering the auxiliary driving information to the target sub-area.
Further, the information erasing unit is specifically configured to:
removing the target object position from the road surface area, and taking the residual area as a target rendering area;
generating an image mask according to the target rendering area;
erasing rendering information at the target object location using the image mask.
An embodiment of the present application further provides an electronic device, which includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments.
Embodiments of the present application also provide a non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any of the embodiments of the present application.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart of an information rendering method according to a first embodiment of the present application;
fig. 2 is a schematic view of an application scenario provided in the first embodiment of the present application;
FIG. 3 is a schematic diagram of a target rendering area provided by a first embodiment of the present application;
FIG. 4 is a flowchart of an information rendering method according to a second embodiment of the present application;
fig. 5 is a schematic structural diagram of an information rendering apparatus according to a third embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing an information rendering method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
First embodiment
Fig. 1 is a flowchart of an information rendering method according to a first embodiment of the present application. The present embodiment is applicable to the case where driving assistance information is rendered into a live-action road view, and typically to the case where the information is rendered using AR-HUD technology. Referring to fig. 2, in this case the driving assistance information is rendered into a target canvas 101; the target canvas 101 is projected onto a front windshield 103 of the vehicle by an optical engine 102, and the driver's eyes 104 see, through the front windshield 103, the visual effect of the driving assistance information superimposed on the real road, which enhances the driver's perception of the driving environment.
The information rendering method provided by the embodiment may be executed by an information rendering apparatus, which may be implemented in software and/or hardware. Referring to fig. 1, the information rendering method provided in this embodiment includes:
s110, identifying the real-scene road image acquired from the image collector, and determining a road surface area in the real-scene road image and a target object position included in the road surface area.
The image collector is arranged on the vehicle and used for collecting the live-action road image in front of the vehicle in running.
The live-action road image includes road surface live-action information of a road ahead of the vehicle.
The target object position refers to a position where a target object is located, and the target object includes at least one of a vehicle, a pedestrian, and a road traffic sign.
The present embodiment does not limit the method used to recognize the real-scene road image; any existing recognition method may be adopted to identify the road surface area in the real-scene road image and the target objects within it.
S120, avoiding the position of the target object, and rendering auxiliary driving information into the road surface area.
Wherein the driving assistance information is information that assists the driver in driving the vehicle.
Typically, the driving assistance information includes: at least one of navigation information, meter information, and guideline information.
Specifically, the navigation information includes: at least one of upcoming turn information, the distance between the current travel location and the destination, the projected time of arrival at the destination, and the distance between the current travel location and the location of a traffic-enforcement camera.
The meter information refers to information displayed on the vehicle's instrument panel.
Specifically, the meter information includes: at least one of vehicle speed, engine water temperature, and remaining fuel.
The guide line information is information for guiding a current driving lane and a driving direction.
Referring to fig. 3, specifically, rendering auxiliary driving information into the road surface area while avoiding the target object position includes:
removing the target object position 301 from the road surface area, and taking the residual area as a target rendering area 302;
rendering the auxiliary driving information to the target rendering area 302.
In order to realize the superposition display of the auxiliary driving information and the live-action road, the rendering the auxiliary driving information to the target rendering area comprises the following steps:
rendering the auxiliary driving information to a target area in a target canvas, wherein the target area refers to an area in the target canvas, which is associated with the target rendering area;
and putting the target canvas on a front windshield of a vehicle so that a driver observes the overlapping display effect of the auxiliary driving information and the real scene road.
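The superimposed display the driver observes can be modeled, for illustration only, as alpha blending of the rendered target canvas over the live-action road image. In the application itself the superposition happens optically, through the optical engine and the front windshield; the array layout and value ranges below are assumptions of this sketch:

```python
import numpy as np

def superimpose(canvas_rgba, scene_rgb):
    """Model the overlay of the target canvas on the live-action road.

    canvas_rgba: H x W x 4 float array in [0, 1], the rendered
                 driving-assistance layer with per-pixel opacity.
    scene_rgb:   H x W x 3 float array in [0, 1], the live-action image.
    """
    alpha = canvas_rgba[..., 3:4]  # per-pixel opacity of the canvas
    # Standard "over" compositing: canvas where opaque, scene elsewhere.
    return alpha * canvas_rgba[..., :3] + (1.0 - alpha) * scene_rgb
```

Where the canvas alpha is zero (for instance, at erased target object positions), the live-action scene shows through unchanged.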
According to the technical scheme of this embodiment, the driving assistance information is rendered into the road surface area while avoiding the target object position, so that the information does not occlude the target object, which improves driving safety.
Second embodiment
Fig. 4 is a flowchart of an information rendering method according to a second embodiment of the present application. The present embodiment is an alternative proposed on the basis of the above-described embodiments. Referring to fig. 4, the information rendering method provided in this embodiment includes:
s210, identifying the real-scene road image acquired from the image collector, and determining a road surface area in the real-scene road image and a target object position included in the road surface area.
And S220, rendering the auxiliary driving information to the road surface area.
Specifically, rendering the navigation information and the meter information in the driving assistance information to the road surface area includes:
dividing the road surface area to generate at least two candidate sub-areas;
selecting a target sub-area associated with the navigation information from at least two candidate sub-areas according to the rendering priority of the navigation information and the attention of the candidate sub-areas;
selecting a target sub-area associated with the meter information from at least two candidate sub-areas according to the rendering priority of the meter information and the attention of the candidate sub-areas;
rendering the navigation information to a target sub-area associated with the navigation information;
rendering the meter information to a target sub-area associated with the meter information.
Specifically, the rendering priority of the navigation information is determined according to the importance of the navigation information.
The importance of the navigation information can be preset according to actual needs.
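The priority-to-attention matching described above can be sketched as follows. The tuple representations, the numeric attention values, and the rule of pairing items and sub-areas in sorted order are illustrative assumptions, not details taken from the application:

```python
def assign_subareas(infos, subareas):
    """Pair driving-assistance items with candidate sub-areas so that
    items with higher rendering priority land in sub-areas with higher
    attention.

    infos:    list of (name, rendering_priority) tuples.
    subareas: list of (subarea_id, attention) tuples.
    """
    by_priority = sorted(infos, key=lambda item: item[1], reverse=True)
    by_attention = sorted(subareas, key=lambda area: area[1], reverse=True)
    # Zip the two sorted lists: the highest-priority item gets the
    # highest-attention sub-area, and so on down both rankings.
    return {name: area_id
            for (name, _), (area_id, _) in zip(by_priority, by_attention)}
```

For example, with navigation information given a higher priority than meter information, the navigation information is assigned to the sub-area the driver watches most.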
Specifically, dividing the road surface area includes:
if the road surface area has the lane line, dividing the road surface area according to the lane line;
if the road surface area has no lane line, the road surface area is divided according to the set width.
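A minimal sketch of this division step, assuming the road surface area is described by a horizontal pixel range and the lane lines by their x-positions; the default strip width is an illustrative value, not taken from the application:

```python
def divide_road_area(x_min, x_max, lane_line_xs=None, set_width=120):
    """Divide the road surface area [x_min, x_max) into candidate
    sub-areas, returned as (left, right) pixel ranges.

    If lane line x-positions are available, split at the lane lines;
    otherwise split into strips of a set width.
    """
    if lane_line_xs:
        cuts = [x_min] + sorted(lane_line_xs) + [x_max]
    else:
        cuts = list(range(x_min, x_max, set_width)) + [x_max]
    return [(left, right) for left, right in zip(cuts, cuts[1:]) if right > left]
```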
The method can realize the following advantages or beneficial effects: according to the rendering priority of the auxiliary driving information and the attention degree of the candidate subareas, the target subarea associated with the auxiliary driving information is selected from the at least two candidate subareas, so that the auxiliary driving information with high rendering priority is rendered to the candidate subarea with high attention degree, and the acquisition efficiency of the auxiliary driving information by a driver is improved.
To achieve perspective display of guide line information in driving assistance information, rendering the guide line information to a road surface area includes:
identifying the position of a target lane line in the live-action road image, wherein the target lane line is the lane line of the current driving lane;
determining a rendering position of guide line information according to the position of the target lane line;
rendering the three-dimensional model of the guideline to a rendering location of the guideline information.
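One way to sketch the second of these steps, assuming the target lane lines are given as row-aligned lists of image points: the midpoint of the left and right lane-line points at each sampled row yields the centerline positions at which the three-dimensional guide-line model could be placed. The point representation is an assumption of this sketch:

```python
def guide_line_positions(left_lane, right_lane):
    """Derive rendering positions for the guide line from the target
    lane lines of the current driving lane.

    left_lane, right_lane: equal-length lists of (x, y) image points,
    one pair per sampled row.
    """
    # Midpoint of the two lane lines at each row gives the lane centerline.
    return [((xl + xr) / 2.0, yl)
            for (xl, yl), (xr, _) in zip(left_lane, right_lane)]
```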
And S230, erasing rendering information at the position of the target object in the road surface area.
Specifically, the erasing rendering information of the target object position in the road surface area includes:
removing the target object position from the road surface area, and taking the residual area as a target rendering area;
generating an image mask according to the target rendering area;
erasing rendering information at the target object location using the image mask.
The technical scheme provided by this embodiment of the application can achieve the following technical effects:
the information is uniformly rendered into the road surface area, avoiding the safety problems caused by an obstructed view;
the driving assistance information is rendered into the road surface area while avoiding the target object position, preventing the information from overlapping and blending with target objects;
and the road surface area and the driving assistance information are graded by attention and priority, so that the information is displayed effectively.
Third embodiment
Fig. 5 is a schematic structural diagram of an information rendering apparatus according to a third embodiment of the present application. Referring to fig. 5, the information rendering apparatus 500 provided in the present embodiment includes: an image recognition module 501 and an information rendering module 502.
The image recognition module 501 is configured to recognize a real road image acquired from an image collector, and determine a road surface area in the real road image and a target object position included in the road surface area;
and an information rendering module 502, configured to render the driving assistance information into the road surface area while avoiding the target object position.
According to the technical scheme of this embodiment, the driving assistance information is rendered into the road surface area while avoiding the target object position, so that the information does not occlude the target object, which improves driving safety.
Further, the information rendering module includes:
an information rendering unit configured to render the driving assistance information to the road surface area;
an information erasing unit configured to erase rendering information at a target object position in the road surface area.
Further, the information rendering unit is specifically configured to:
dividing the road surface area to generate at least two candidate sub-areas; selecting a target sub-area associated with the auxiliary driving information from at least two candidate sub-areas according to the rendering priority of the auxiliary driving information and the attention of the candidate sub-areas;
rendering the auxiliary driving information to the target sub-area.
Further, the information erasing unit is specifically configured to:
removing the target object position from the road surface area, and taking the residual area as a target rendering area;
generating an image mask according to the target rendering area;
erasing rendering information at the target object location using the image mask.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 6, the electronic device is a block diagram of an electronic device according to an information rendering method of an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic apparatus includes: one or more processors 601, a memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the information rendering method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the information rendering method provided herein.
The memory 602, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the information rendering method in the embodiments of the present application (e.g., the image recognition module 501 and the information rendering module 502 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing, i.e., implements the information rendering method in the above method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the information rendering electronic device, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601, which may be connected to the information rendering electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the information rendering method may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the information-rendering electronic apparatus, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiments of the application, the driving assistance information is rendered into the road surface area while avoiding the target object position, so that the information does not occlude the target object, which improves driving safety.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An information rendering method, comprising:
identifying a real-scene road image acquired from an image capture device, and determining a road surface area in the real-scene road image and a target object position included in the road surface area;
and rendering driving assistance information into the road surface area while avoiding the target object position.
2. The method of claim 1, wherein the rendering driving assistance information into the road surface area while avoiding the target object position comprises:
rendering the driving assistance information to the road surface area;
and erasing rendering information at the target object position in the road surface area.
3. The method of claim 2, wherein rendering the driving assistance information to the road surface area comprises:
dividing the road surface area to generate at least two candidate sub-areas;
selecting a target sub-area associated with the driving assistance information from the at least two candidate sub-areas according to a rendering priority of the driving assistance information and an attention degree of each candidate sub-area;
rendering the driving assistance information to the target sub-area.
4. The method of claim 2, wherein the erasing rendering information at the target object position in the road surface area comprises:
removing the target object position from the road surface area, and taking the remaining area as a target rendering area;
generating an image mask according to the target rendering area;
erasing the rendering information at the target object position using the image mask.
5. An information rendering apparatus, characterized by comprising:
an image recognition module, configured to identify the real-scene road image acquired from an image capture device, and determine a road surface area in the real-scene road image and a target object position included in the road surface area;
and an information rendering module, configured to render driving assistance information into the road surface area while avoiding the target object position.
6. The apparatus of claim 5, wherein the information rendering module comprises:
an information rendering unit configured to render the driving assistance information to the road surface area;
an information erasing unit configured to erase rendering information at a target object position in the road surface area.
7. The apparatus according to claim 6, wherein the information rendering unit is specifically configured to:
dividing the road surface area to generate at least two candidate sub-areas;
selecting a target sub-area associated with the driving assistance information from the at least two candidate sub-areas according to a rendering priority of the driving assistance information and an attention degree of each candidate sub-area;
rendering the driving assistance information to the target sub-area.
8. The apparatus according to claim 6, wherein the information erasing unit is specifically configured to:
removing the target object position from the road surface area, and taking the remaining area as a target rendering area;
generating an image mask according to the target rendering area;
erasing the rendering information at the target object position using the image mask.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-4.
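Claims 3 and 7 above select a target sub-area by weighing the rendering priority of the assistance information against the attention of the candidate sub-areas. A minimal Python sketch of one plausible matching rule (route information to the sub-area whose attention score is closest to its priority) follows; the rule, the [0, 1] scoring scale, and all names are illustrative assumptions, not the claimed method.

```python
def pick_target_subarea(candidates, rendering_priority):
    """Select the candidate sub-area whose attention score best matches
    the rendering priority of the assistance information.

    candidates: list of (name, attention) pairs, attention in [0, 1].
    rendering_priority: float in [0, 1]; higher means more urgent
    information, which should land in a higher-attention sub-area.
    The closest-match rule and all names are assumptions for illustration.
    """
    # Rank sub-areas by the distance between their attention score and
    # the information's priority; the smallest distance wins.
    name, _ = min(candidates, key=lambda c: abs(c[1] - rendering_priority))
    return name
```

For example, with candidate sub-areas `[("near-lane", 0.9), ("mid-lane", 0.5), ("far-lane", 0.2)]`, high-priority information (priority 0.85) would be routed to `"near-lane"`, while routine information (priority 0.45) would land in `"mid-lane"`.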
CN201910936332.3A 2019-09-29 2019-09-29 Information rendering method, device, equipment and medium Active CN110647860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910936332.3A CN110647860B (en) 2019-09-29 2019-09-29 Information rendering method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN110647860A 2020-01-03
CN110647860B 2022-11-08

Family

ID=69011984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910936332.3A Active CN110647860B (en) 2019-09-29 2019-09-29 Information rendering method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110647860B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101207802A (en) * 2006-12-20 2008-06-25 爱信艾达株式会社 Driving support method and driving support apparatus
CN102735253A (en) * 2011-04-05 2012-10-17 现代自动车株式会社 Apparatus and method for displaying road guide information on windshield
CN105163972A (en) * 2013-09-13 2015-12-16 日立麦克赛尔株式会社 Information display system, and information display device
CN106796755A (en) * 2014-10-06 2017-05-31 丰田自动车株式会社 Strengthen the security system of road surface object on HUD
CN107406030A (en) * 2015-04-10 2017-11-28 日立麦克赛尔株式会社 Image projection apparatus
CN108515909A (en) * 2018-04-04 2018-09-11 京东方科技集团股份有限公司 A kind of automobile head-up-display system and its barrier prompt method
CN108657184A (en) * 2018-07-06 2018-10-16 京东方科技集团股份有限公司 Vehicle DAS (Driver Assistant System), method, apparatus and storage medium
WO2019103014A1 (en) * 2017-11-27 2019-05-31 株式会社小糸製作所 Head-up display device for vehicle


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI ZHUO: "Design and Research of an Automobile Driving Assistance System Based on AR-HUD", Journal of Wuhan University of Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553972A (en) * 2020-04-27 2020-08-18 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for rendering augmented reality data
CN111930279A (en) * 2020-07-24 2020-11-13 北京罗克维尔斯科技有限公司 Information display method and device suitable for vehicle, electronic equipment and storage medium
CN111930279B (en) * 2020-07-24 2021-08-24 北京罗克维尔斯科技有限公司 Information display method and device suitable for vehicle, electronic equipment and storage medium
CN112053280A (en) * 2020-09-04 2020-12-08 北京百度网讯科技有限公司 Panoramic map display method, device, equipment and storage medium
CN112053280B (en) * 2020-09-04 2024-04-12 北京百度网讯科技有限公司 Panoramic map display method, device, equipment and storage medium
CN113566842A (en) * 2021-07-26 2021-10-29 北京百度网讯科技有限公司 Data processing method, device, equipment and storage medium
CN113946729A (en) * 2021-10-14 2022-01-18 阿波罗智联(北京)科技有限公司 Data processing method and device for vehicle, electronic equipment and medium
CN113946729B (en) * 2021-10-14 2023-08-22 阿波罗智联(北京)科技有限公司 Data processing method and device for vehicle, electronic equipment and medium
CN113946395A (en) * 2021-10-15 2022-01-18 阿波罗智联(北京)科技有限公司 Vehicle-mounted machine data rendering method and device, electronic equipment and storage medium
CN114067120A (en) * 2022-01-17 2022-02-18 腾讯科技(深圳)有限公司 Augmented reality-based navigation paving method, device and computer readable medium
CN114840288A (en) * 2022-03-29 2022-08-02 北京旷视科技有限公司 Rendering method of distribution diagram, electronic device and storage medium

Also Published As

Publication number Publication date
CN110647860B (en) 2022-11-08

Similar Documents

Publication Publication Date Title
CN110647860B (en) Information rendering method, device, equipment and medium
CN111623795B (en) Live-action navigation icon display method, device, equipment and medium
CN111275983B (en) Vehicle tracking method, device, electronic equipment and computer-readable storage medium
CN110758403A (en) Control method, device, equipment and storage medium for automatic driving vehicle
US20180354509A1 (en) Augmented reality (ar) visualization of advanced driver-assistance system
CN111397611B (en) Path planning method and device and electronic equipment
JP7258938B2 (en) Method for marking intersection virtual lane, device for marking intersection virtual lane, electronic device, computer readable storage medium and computer program
CN111693062B (en) Method and device for navigation of roundabout route, electronic equipment and storage medium
CN112141102A (en) Cruise control method, cruise control device, cruise control apparatus, vehicle, and cruise control medium
CN110502018B (en) Method and device for determining vehicle safety area, electronic equipment and storage medium
CN111540010B (en) Road monitoring method and device, electronic equipment and storage medium
CN111292531A (en) Tracking method, device and equipment of traffic signal lamp and storage medium
CN111703371B (en) Traffic information display method and device, electronic equipment and storage medium
CN111361560B (en) Method and device for controlling vehicle running in automatic driving and electronic equipment
CN111693059B (en) Navigation method, device and equipment for roundabout and storage medium
CN112258873A (en) Method, apparatus, electronic device, and storage medium for controlling vehicle
CN113844463B (en) Vehicle control method and device based on automatic driving system and vehicle
CN111652112B (en) Lane flow direction identification method and device, electronic equipment and storage medium
CN113011298A (en) Truncated object sample generation method, target detection method, road side equipment and cloud control platform
CN111767844A (en) Method and apparatus for three-dimensional modeling
CN112748720A (en) Control method, device, equipment and storage medium for automatic driving vehicle
CN111447561B (en) Image processing system for vehicle
CN111252069B (en) Method and device for changing lane of vehicle
CN113124887A (en) Route information processing method, device, equipment and storage medium
JP2022172481A (en) Method and apparatus for positioning vehicle, vehicle, storage medium, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211021

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant