CN115113963A - Information display method and device, electronic equipment and storage medium


Info

Publication number
CN115113963A
CN115113963A
Authority
CN
China
Prior art keywords
information
target
navigation
area
interest
Prior art date
Legal status
Granted
Application number
CN202210763843.1A
Other languages
Chinese (zh)
Other versions
CN115113963B (en)
Inventor
刘珊珊
张昊
谢忠宇
陈垚霖
孙龙威
肖瑶
张坤
徐濛
季永志
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210763843.1A
Publication of CN115113963A
Application granted
Publication of CN115113963B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/451 Execution arrangements for user interfaces
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0485 Scrolling or panning
    • G06F 3/04883 Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gestures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Navigation (AREA)

Abstract

The present disclosure provides an information display method, an information display apparatus, an electronic device, and a storage medium, which relate to the technical field of artificial intelligence, and in particular, to the technical field of high-precision maps, computer vision, deep learning, big data, intelligent transportation, automatic driving, autonomous parking, cloud service, internet of vehicles, and intelligent cabins. The specific implementation scheme is as follows: acquiring interaction demand information in response to detecting that an interaction control of a navigation page is triggered; and displaying the virtual object and the response information on the navigation page according to a target display mode, wherein the target display mode is determined according to the interaction demand information and the response information, and the response information is generated according to the interaction demand information.

Description

Information display method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and in particular to the technical fields of high-precision maps, computer vision, deep learning, big data, intelligent transportation, automatic driving, autonomous parking, cloud services, internet of vehicles, and intelligent cabins. More particularly, the present disclosure relates to an information display method, an information display apparatus, an electronic device, and a storage medium.
Background
With the development of computer technology, the functions provided by electronic devices have become increasingly diverse; navigation is one example. Through a navigation page provided by an electronic device, a user can obtain road condition information, high-frequency electronic eye reminders, estimated time of arrival information, and the like.
Disclosure of Invention
The disclosure provides a method, an apparatus, an electronic device and a storage medium for information presentation.
According to an aspect of the present disclosure, there is provided an information display method, including: acquiring interaction demand information in response to detecting that an interaction control of a navigation page is triggered; and displaying a virtual object and response information on the navigation page according to a target display mode, wherein the target display mode is determined according to the interaction demand information and the response information, and the response information is generated according to the interaction demand information.
According to another aspect of the present disclosure, there is provided an information presentation apparatus including: the acquisition module is used for responding to the detection that the interaction control of the navigation page is triggered and acquiring interaction demand information; and a display module, configured to display a virtual object and response information on the navigation page according to a target display mode, where the target display mode is determined according to the interaction requirement information and the response information, and the response information is generated according to the interaction requirement information.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method as described in the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically illustrates an exemplary system architecture to which the information presentation method and apparatus may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an information presentation method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates an exemplary system architecture to which the information presentation method and apparatus may be applied, according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an example presentation interface of an information presentation method according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of an example presentation interface of an information presentation method according to another embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an example presentation interface of an information presentation method according to another embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an example presentation interface for a method of presenting information according to another embodiment of the present disclosure;
FIG. 8A schematically illustrates an example presentation interface diagram of an information presentation method according to another embodiment of the present disclosure;
FIG. 8B is a schematic diagram of an example presentation interface for a method of presenting information according to another embodiment of the present disclosure;
FIG. 8C is a schematic diagram of an example presentation interface for a method of presenting information according to another embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of an information presentation device according to an embodiment of the present disclosure; and
FIG. 10 schematically illustrates a block diagram of an electronic device suitable for implementing the information presentation method, according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
In the technical scheme of the disclosure, before the personal information of the user is acquired or collected, the authorization or the consent of the user is acquired.
Fig. 1 schematically shows an exemplary system architecture to which the information presentation method and apparatus may be applied, according to an embodiment of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied, provided to help those skilled in the art understand the technical content of the present disclosure; it does not mean that embodiments of the present disclosure may not be applied to other devices, systems, environments, or scenarios. For example, in another embodiment, an exemplary system architecture to which the information presentation method and apparatus may be applied may include a terminal device, and the terminal device may implement the information presentation method and apparatus provided in the embodiments of the present disclosure without interacting with a server.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
A user may use terminal devices 101, 102, 103 to interact with a server 105 over a network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a knowledge reading-type application, a web browser application, a search-type application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The terminal devices 101, 102, 103 may also be various intelligent driving devices that support positioning and navigation functions, including but not limited to intelligent cars, intelligent school buses, intelligent vans, and the like.
The terminal devices 101, 102, 103 may interact with the server 105 to receive or transmit positioning information and the like. The terminal devices 101, 102, 103 may have installed thereon various client applications that provide positioning and navigation functions, such as map applications and navigation applications (by way of example only).
The server 105 may be any type of server that provides various services. For example, the server 105 may be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that addresses the high management difficulty and weak service scalability of conventional physical hosts and VPS (Virtual Private Server) services. The server 105 may also be a server of a distributed system or a server that incorporates a blockchain.
It should be noted that the information presentation method provided by the embodiment of the present disclosure may be generally executed by the terminal device 101, 102, or 103. Correspondingly, the information display device provided by the embodiment of the disclosure can also be arranged in the terminal equipment 101, 102 or 103.
Alternatively, the information presentation method provided by the embodiment of the present disclosure may also be generally performed by the server 105. Accordingly, the information display device provided by the embodiment of the present disclosure may be generally disposed in the server 105. The information presentation method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Correspondingly, the information display device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that the sequence numbers of the respective operations in the following methods are merely used as representations of the operations for description, and should not be construed as representing the execution order of the respective operations. The method need not be performed in the exact order shown, unless explicitly stated.
Fig. 2 schematically shows a flow chart of an information presentation method according to an embodiment of the present disclosure.
As shown in FIG. 2, the information presentation method 200 may include operations S210-S220.
In operation S210, in response to detecting that the interaction control of the navigation page is triggered, interaction requirement information is acquired.
In operation S220, the virtual object and the response information are displayed on the navigation page according to the target display mode. The target display mode is determined according to the interaction demand information and the response information. The response information is generated according to the interaction demand information.
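To make these two operations concrete, the following is a minimal Python sketch of the S210-S220 flow. It is an illustration only, not the disclosed implementation; every name in it (acquire_interaction_requirement, generate_response, determine_display_mode, and so on) is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Response:
        text: str

    def acquire_interaction_requirement() -> str:
        # Placeholder: in practice the requirement comes from voice, text,
        # or a selected recommendation on the navigation page.
        return "avoid congestion"

    def generate_response(requirement: str) -> Response:
        # Response information is generated from the interaction requirement.
        return Response(text=f"Planning a route for: {requirement}")

    def determine_display_mode(requirement: str, response: Response) -> str:
        # The disclosure derives the mode from both inputs; this is a stand-in.
        return "default"

    def display_virtual_object(response: Response, mode: str) -> None:
        print(f"[{mode}] virtual object presents: {response.text}")

    def on_interaction_control_triggered() -> None:
        requirement = acquire_interaction_requirement()       # operation S210
        response = generate_response(requirement)
        mode = determine_display_mode(requirement, response)
        display_virtual_object(response, mode)                # operation S220

    on_interaction_control_triggered()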
According to an embodiment of the present disclosure, a navigation page may refer to a page providing a navigation function. The navigation page may include a top page area, a middle page area, and a bottom page area. The top page area may include an inducement panel area, that is, an area for displaying inducement information. The inducement information may include at least one of: steering information, driving distance, steering distance, road name, vehicle speed code group, and the like. The middle page area may include the navigation map area. The navigation map area may include at least one of: road element information, bubble element information, path guidance information, congestion duration, traffic light countdown, high-frequency electronic eye reminders, and the like. The bottom page area may include at least one of: estimated Time of Arrival (ETA) information, remaining mileage information, a road condition display control, a voice broadcast control, a voice interaction control, a road condition reporting control, a position sharing control, a full-view function control, an exit navigation function control, and the like. Further, the bottom page area may also include at least one of: a virtual object and auxiliary information.
According to an embodiment of the present disclosure, the inducement panel area may be hidden in response to detecting either a first sliding operation or a triggering operation on the inducement panel area, so that the top area of the navigation page appears transparent. The inducement panel area may be presented again in response to detecting a triggering operation on the vehicle speed code group. The triggering operation may include a click operation. The first sliding operation may include a left-slide operation.
According to embodiments of the present disclosure, the height of the bottom page area may change dynamically. The area of the bottom page region may be increased in response to a second sliding operation on the bottom page region, so that more information can be presented. The second sliding operation may include a slide-up operation. In addition, page scrolling may be supported within the bottom page area.
According to the embodiment of the disclosure, the positions of the information in the top page area and the bottom page area can be set according to actual business requirements, so as to reduce occlusion of the effective area of the middle page area.
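As an illustration of the gestures described above, the following Python sketch models the show/hide behavior of the inducement panel and the expansion of the bottom page area as a small state object. The class and gesture names are assumptions, not part of the disclosure.

    class NavigationPageState:
        """Hypothetical state object for the page regions described above."""

        def __init__(self) -> None:
            self.inducement_panel_visible = True
            self.bottom_area_expanded = False

        def on_gesture(self, gesture: str, target: str) -> None:
            # A left-slide or tap on the inducement panel hides it, leaving
            # the top of the page transparent.
            if target == "inducement_panel" and gesture in ("slide_left", "tap"):
                self.inducement_panel_visible = False
            # Tapping the vehicle speed code group restores the panel.
            elif target == "speed_code_group" and gesture == "tap":
                self.inducement_panel_visible = True
            # Sliding up enlarges the bottom page area so more information fits.
            elif target == "bottom_area" and gesture == "slide_up":
                self.bottom_area_expanded = True

    state = NavigationPageState()
    state.on_gesture("slide_left", "inducement_panel")
    assert not state.inducement_panel_visible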
According to embodiments of the present disclosure, an interaction control may refer to a control for interacting with a user. The interaction control may include at least one of: voice interaction controls, navigation interaction controls, configuration controls, and the like. The interaction requirement information may refer to requirement information of the user. The type of the interaction requirement information may include at least one of: information related to navigation requirements, information related to voice interaction requirements, and information related to recommendation requirements. For example, the information related to navigation requirements may include at least one of: information related to path planning and information related to point of interest queries, etc. The information related to the voice interaction requirement may include at least one of: information related to the infotainment broadcast and information related to the weather broadcast. The information related to recommendation needs may include information related to point of interest vicinity information recommendations.
According to embodiments of the present disclosure, a virtual object may refer to a virtual character having a digitized appearance; a virtual object may also be referred to as a digital human. The virtual object may have character traits, character behaviors, and character cognition. The character traits may include the appearance, gender, and personality of the character. The character behaviors may include language expression ability, expression change ability, and body movement expression ability. Character cognition means that the character is capable of recognizing the external environment and communicating and interacting with the user. In the disclosed embodiments, the virtual object has a persona and may have the ability to perform corresponding actions and to exhibit corresponding expressions and lip movements, among other capabilities.
According to the embodiment of the disclosure, the interaction demand information can be acquired under the condition that the interaction control of the navigation page is triggered. And generating response information according to the interaction demand information. And determining a target display mode according to the response information and the interaction demand information. Therefore, the virtual object can be controlled to display the response information according to the target display mode on the navigation page.
According to the embodiment of the disclosure, the response information is displayed by controlling the virtual object on the navigation page according to the target display mode, so that the response information is displayed by using the visualized virtual object in the navigation process.
The above is only an exemplary embodiment, but is not limited thereto, and other information presentation methods known in the art may be included as long as information can be presented.
The information presentation method 200 according to the embodiment of the present disclosure is further explained below with reference to fig. 3, fig. 4, fig. 5, fig. 6, fig. 7, fig. 8A, fig. 8B, and fig. 8C.
Fig. 3 schematically shows an exemplary system architecture to which the information presentation method and apparatus may be applied, according to another embodiment of the present disclosure.
As shown in FIG. 3, in 300, the presentation layer of the client may include a speech synthesis software package 301_1, a virtual object software package 301_2, and a map region rendering 301_3. The speech synthesis software package 301_1 can convert the user's text instructions into audio and broadcast the audio to the user, with a personified broadcasting capability. The virtual object software package 301_2 can control the expressions, actions, lip movements, and the like of the character through an instruction protocol, keeping the character consistent with the text. The map region rendering 301_3 can upgrade the product presentation form in navigation to improve overall consistency, thereby improving the efficiency and safety with which users acquire information.
The interaction layer of the client may include a voice software package 302_1 and a navigation software package 302_2. The voice software package 302_1 may perform voice wake-up, voice recognition, and full-duplex interaction; for example, the virtual object may speak at the same time as the user. In the case of user wake-up and continuous conversation, the voice software package 302_1 can convert the user's audio signal into a text instruction and send it to the voice service 304_1, supporting voice interruption and recognition rejection. In addition, the voice software package 302_1 can synchronously pass the results of the voice service 304_1 to the upper-layer product and call the speech synthesis software package 301_1, the virtual object software package 301_2, and the map region rendering 301_3 for presentation.
The navigation software package 302_2 can upload instructions: it can transmit voice instructions to the navigation service 304_2 in a specified protocol for service-layer computation, and it can also synchronously pass the results of the navigation service 304_2 to the upper-layer product and call the speech synthesis software package 301_1, the virtual object software package 301_2, and the map region rendering 301_3 for presentation.
The service side may include a voice service 304_1, a navigation service 304_2, and a retrieval service 304_3. The voice service 304_1 may be used to parse and understand the user's instructions and distribute them to the corresponding stub capability modules. The navigation service 304_2 may have capabilities such as route calculation, navigation, inducement script organization, and road condition queries. Further, a retrieval service 304_3 is embedded downstream of the navigation service 304_2; the retrieval service 304_3 can serve retrieval requests arising from navigation requirements, such as switching destinations and adding route points. For example, for a retrieval request to switch destinations, a retrieval result related to the current path information may be determined from the current path information and the destination.
The voice service 304_1 and the voice software package 302_1, and the navigation service 304_2 and the navigation software package 302_2 can be connected through a unified broadcast platform.
The stub capability module set 305 may distribute instructions to the corresponding services according to the addresses of the instructions. The stub capability module set 305 may call the navigation service 304_2, and part of the information queried by the navigation service 304_2, such as distance to the destination, remaining time, and congestion length, is taken as a subset of the capability set. In addition, the stub capability module set 305 can also include a joke subset 305_1 and a weather subset 305_2, among others.
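The routing role of the stub capability module set can be pictured with a short sketch. The dispatch table below is hypothetical; only the module names (navigation queries such as remaining distance and congestion length, jokes, weather) follow the description above.

    # Hypothetical navigation-query subset; the data is illustrative.
    def navigation_query(slot: str) -> str:
        data = {"remaining_distance": "90 km",
                "remaining_time": "3 h 29 min",
                "congestion_length": "240 m"}
        return data.get(slot, "unknown")

    CAPABILITIES = {
        "navigation": navigation_query,
        "joke": lambda slot: "Why did the GPS cross the road? To re-route.",
        "weather": lambda slot: "Sunny, 22 degrees",
    }

    def dispatch(address: str, slot: str = "") -> str:
        # The instruction's address selects the capability subset to call.
        handler = CAPABILITIES.get(address)
        return handler(slot) if handler else "unsupported instruction"

    print(dispatch("navigation", "remaining_time"))  # -> 3 h 29 min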
According to an embodiment of the present disclosure, operation S210 may include the following operations.
And in response to detecting that the interaction control of the navigation page is triggered, displaying at least one piece of recommendation information on the navigation page. And acquiring the interaction demand information in response to detecting a selection operation for the at least one piece of recommendation information.
According to the embodiment of the disclosure, a user can obtain at least one piece of recommendation information by clicking the interaction control of the navigation page and can select among the recommendation information to determine the interaction demand information. Alternatively, the interaction demand information may be input via the user's voice or text.
Fig. 4 schematically shows a presentation interface example diagram of an information presentation method according to an embodiment of the present disclosure.
As shown in fig. 4, in 400, a presentation interface may include an interface top panel 401, an interface middle panel 402, and an interface bottom panel 403.
The interface top panel 401 may be retracted by sliding left or clicking, rendering the top space of the interface transparent. With the interface top panel 401 retracted to the left or upward, it may be re-presented by clicking the vehicle speed code group component in the upper left corner of the interface top panel 401.
The height of the interface bottom panel 403 can change dynamically, and voice interaction can be initiated by clicking the virtual object 403_1 or by a voice wake-up command. Further, by sliding the interface bottom panel 403 up, more content can be displayed, with scrolling supported inside the interface bottom panel 403. In addition, the interface bottom panel 403 may also display auxiliary information 403_2, which may include a default entry control and a voice interaction control.
The mapping relationship between the scene and the actions and expressions of the virtual object 403_1 may be pre-established, so that the virtual object may be controlled to display the actions and expressions corresponding to the scene according to the mapping relationship.
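A pre-established scene-to-behavior mapping of this kind might look like the following sketch, where the scene keys and animation names are invented for illustration.

    # Hypothetical pre-established mapping from scenes to (action, expression).
    SCENE_BEHAVIOR = {
        "road_congestion": ("point_ahead", "concerned"),
        "road_accident": ("raise_hand", "alert"),
        "idle_chat": ("wave", "smile"),
    }

    def behavior_for_scene(scene: str) -> tuple:
        # Fall back to a neutral pose for unmapped scenes.
        return SCENE_BEHAVIOR.get(scene, ("idle", "neutral"))

    action, expression = behavior_for_scene("road_congestion")
    print(action, expression)  # -> point_ahead concerned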
By clicking the default entry control, the broadcast text can be displayed according to the navigation guidance and the emotional expression scene, and the virtual object 403_1 can present the corresponding actions and expressions for that scene.
The voice interaction state can be entered by voice wake-up, by clicking the voice interaction control, or by clicking the virtual object 403_1. After entering the voice interaction state, the avatar, actions, and expressions of the virtual object 403_1 may change. The right side panel of the interface bottom panel 403 may be raised to reveal recommendation instructions. After a user instruction is received, the matched recommendation instruction may be displayed in place of the generic recommendation instructions. After reception is finished, voice and touch-screen results can be output through semantic parsing. After the result output is completed or the exit control is clicked, the interface bottom panel 403 returns to its original size and continues to display the virtual object 403_1 and the auxiliary information 403_2.
According to an embodiment of the present disclosure, operation S220 may include the following operations.
And displaying the virtual object and the response information in a target area of the navigation page according to the target display mode. The position relation between the target area and the auxiliary area of the navigation page meets the preset position relation condition, and the target area and the auxiliary area are located in the navigation map area of the navigation page.
According to the embodiment of the present disclosure, the predetermined position relationship condition may be set by a person skilled in the art according to actual requirements, and is not limited herein. For example, the predetermined positional relationship condition may be that the target area and the auxiliary area are spaced apart by a predetermined distance or the like.
According to an embodiment of the present disclosure, the information presentation method may further include the following operations.
An auxiliary area is determined based on the expected auxiliary information. The expected assistance information includes road element information and bubble element information of the navigation map area. And determining the target area according to the auxiliary area, the navigation scene information and the size information of the virtual object. The navigation scenario information includes at least one of: navigation path information and road state information corresponding to the navigation path.
According to embodiments of the present disclosure, the expected auxiliary information may be used to determine the auxiliary area. The auxiliary area, the navigation scene information, and the size information of the virtual object may be used to determine the target area. The bubble element information may be used to prompt the user.
According to an embodiment of the present disclosure, the road element information may include lane information and indication line information. The lane information may include, for example, lane lines, lanes, and the like. The lane lines may include a start connection point, an end connection point, a center shape point, a width, a style, a color, a type, and the like. The indicator line information may include stop lines, zebra stripes, and separator strips, among others.
According to embodiments of the present disclosure, the navigation path information may be used to characterize a navigation path from a navigation start address to a navigation end address. The road state information corresponding to the navigation path may be used to characterize the road state of the navigation path that is traversed from the navigation start address to the navigation end address, and may include, for example, road smoothness and road congestion.
For example, if the navigation start address is M streets in M city and the navigation end address is N streets in N city, the navigation path information from the navigation start address to the navigation end address may include M streets-x streets-y streets-z streets-N streets.
According to an embodiment of the present disclosure, determining the target area according to the auxiliary area, the navigation scene information, and the size information of the virtual object may include the following operations.
And determining expected azimuth information according to the navigation scene information, the expected azimuth information including an expected presentation area and an expected presentation direction. And determining the target area according to the auxiliary area, the expected azimuth information and the size information of the virtual object.
According to the embodiment of the disclosure, the correspondence between navigation scene information and expected azimuth information can be preset, so that the expected azimuth information of the virtual object can be determined from the navigation scene information. For example, in a road congestion scene, the expected azimuth information of the virtual object may be set to the southwest direction; in a scene where the navigation destination address is a coffee shop, the expected azimuth information of the virtual object may be set to the due-east direction, and so on.
According to the embodiment of the present disclosure, the virtual object may be cut in advance, and the size information of the virtual object may be recorded. The cropping may be a rectangular cropping or an irregular cropping, and the size information may include the width and height of the virtual object.
According to the embodiment of the disclosure, when the virtual object and the response information are to be displayed in the target area of the navigation page during navigation, the auxiliary area can be determined according to the road element information and the bubble element information, and the target area can be determined according to the auxiliary area, the navigation path information, the road state information corresponding to the navigation path, and the size information of the virtual object, so that the target area avoids the auxiliary area.
According to the embodiment of the disclosure, since the target area is determined according to the auxiliary area, the navigation scene information, and the size information of the virtual object, and the auxiliary area is determined according to the road element information and the bubble element information, the target area can be coordinated with the existing map area without colliding with its content, which reduces comprehension cost and improves the user experience.
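As a geometric illustration of this avoidance constraint, the sketch below picks a target rectangle of the virtual object's size from candidate positions (ordered by the expected display direction) while keeping a predetermined gap from every auxiliary rectangle. The candidate-generation strategy and the gap threshold are assumptions, not the patented algorithm.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: float
        y: float
        w: float
        h: float

    def gap(a: Rect, b: Rect) -> float:
        # Distance between two axis-aligned rectangles (0 if they overlap).
        dx = max(b.x - (a.x + a.w), a.x - (b.x + b.w), 0.0)
        dy = max(b.y - (a.y + a.h), a.y - (b.y + b.h), 0.0)
        return (dx * dx + dy * dy) ** 0.5

    def pick_target_area(candidates, auxiliary, size, min_gap=16.0):
        # Candidates are anchor points ordered by the expected display
        # direction; the first placement clear of all auxiliary areas wins.
        for cx, cy in candidates:
            rect = Rect(cx, cy, *size)
            if all(gap(rect, aux) >= min_gap for aux in auxiliary):
                return rect
        return None  # no placement satisfies the positional condition

    aux = [Rect(100, 100, 80, 40)]  # e.g. a bubble element's bounding box
    target = pick_target_area([(200, 300), (40, 260)], aux, size=(96, 128))
    print(target)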
According to the embodiment of the disclosure, the interactive control comprises a voice interactive control, and the response information comprises voice broadcast information.
According to an embodiment of the present disclosure, operation S220 may include the following operations.
And, during an inter-cut period, displaying the virtual object and broadcasting the voice broadcast information on the navigation page according to the target display mode. An inter-cut period represents a period during which non-navigation broadcast information is allowed to be inserted in the navigation broadcasting process.
Operation S220 may further include the following operations according to an embodiment of the present disclosure.
And, during the navigation broadcasting period, displaying the virtual object and broadcasting the navigation broadcast information on the navigation page. The inter-cut period represents a target period in the navigation broadcasting process other than the navigation broadcasting period.
According to an embodiment of the present disclosure, the navigation broadcasting information may include information related to a navigation path from a navigation start address to a navigation end address. The non-navigation broadcast information may include information other than the navigation broadcast information.
According to the embodiment of the disclosure, the estimated time of arrival at the broadcast position of the next piece of navigation broadcast information can be estimated from information such as the distance between the user's position and that broadcast position, the user's speed, and the road conditions, and the inter-cut periods outside the navigation broadcasting periods can be determined according to this estimate.
According to the embodiment of the disclosure, the voice broadcast information may include a reminder class, a traffic signal recognition class, and a maneuver-point turning class. The correspondence between the different classes and priorities can be preset, and whether a piece of voice broadcast information may be inserted during an inter-cut period can be determined according to its priority.
According to the embodiment of the disclosure, since the navigation broadcast information is broadcast during the navigation broadcasting periods and the voice broadcast information is broadcast during the inter-cut periods, appropriate non-navigation broadcast information can be inserted between pieces of navigation broadcast information on the premise that the navigation broadcast information is broadcast normally, improving the user experience.
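The timing and priority logic can be sketched as follows; the priority values, the threshold, and the assumption that the inter-cut window equals the estimated time to the next broadcast position are all illustrative.

    # Hypothetical priority table for the three broadcast classes named above.
    PRIORITY = {"reminder": 1, "traffic_signal": 2, "maneuver_turn": 3}

    def seconds_to_next_broadcast(distance_m: float, speed_mps: float) -> float:
        # ETA to the broadcast position of the next navigation message.
        return distance_m / speed_mps if speed_mps > 0 else float("inf")

    def may_intercut(kind: str, duration_s: float, distance_m: float,
                     speed_mps: float, min_priority: int = 2) -> bool:
        # Inter-cut only if the class is important enough and the speech
        # fits inside the window before the next navigation broadcast.
        window = seconds_to_next_broadcast(distance_m, speed_mps)
        return PRIORITY.get(kind, 0) >= min_priority and duration_s < window

    # 345 m at 58 km/h leaves about 21 s, enough for an 8 s prompt.
    print(may_intercut("traffic_signal", 8.0, 345.0, 58 / 3.6))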
According to the embodiment of the disclosure, the target display mode is determined according to the interaction demand information, the response information and the configuration information.
According to an embodiment of the present disclosure, the configuration information may be acquired by: and in response to detecting that the configuration control of the navigation page is triggered, displaying the configuration page. The configuration page includes at least one of: spoken configuration items and emotional configuration items. In response to detecting a configuration operation for a configuration item, configuration information is obtained.
According to the embodiment of the disclosure, a user can perform spoken-language configuration and emotional configuration by clicking the configuration control of the navigation page. In the spoken-language configuration, spoken navigation broadcast information corresponding to written navigation broadcast information may be determined according to the configuration information. In the emotional configuration, different emotional templates can be configured in advance for the user to select; for example, the emotional templates may include a normal template, an enthusiastic template, and a cold template.
According to the embodiment of the disclosure, since the spoken-language configuration and the emotional configuration can be performed using the configuration control, different users can perform different configuration operations according to personal preference, so that the navigation broadcast information is adapted to each user, improving the user experience.
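A minimal sketch of applying the two configurations to a written broadcast text follows; the template wording and the replacement rules are invented examples, not the disclosed implementation.

    # Hypothetical emotional templates; the disclosure only names normal,
    # enthusiastic, and cold variants.
    EMOTION_TEMPLATES = {
        "normal": "{text}.",
        "enthusiastic": "{text}. Enjoy the drive!",
        "cold": "{text}",
    }

    def render_broadcast(written: str, spoken: bool, emotion: str) -> str:
        # Spoken-language configuration: swap formal wording for casual wording
        # (the replacement rules here are invented examples).
        text = written
        if spoken:
            text = text.replace("Proceed", "Go").replace("kilometres", "km")
        return EMOTION_TEMPLATES.get(emotion, "{text}").format(text=text)

    print(render_broadcast("Proceed 2 kilometres, then turn left",
                           spoken=True, emotion="enthusiastic"))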
According to an embodiment of the present disclosure, the information presentation method may further include the following operations.
And displaying the auxiliary interaction information on the navigation page in the process of displaying the virtual object and the response information on the navigation page according to the target display mode.
According to an embodiment of the present disclosure, the information presentation method may further include the following operations.
And displaying the predetermined page information in a predetermined display area of the navigation page. The predetermined page information includes at least one of: estimated time of arrival information, full-route road condition information, an enlarged intersection view, a full-view function control, an exit navigation function control, and a vehicle speed code group. The intersection between the predetermined display area and a target navigation map area is less than or equal to a predetermined area threshold, where the target navigation map area is a predetermined portion of the navigation map area of the navigation page.
According to the embodiment of the disclosure, the full-route road condition information can be used to characterize the road conditions of each road from the navigation start address to the navigation end address. The estimated time of arrival information may be used to characterize the estimated time of arriving at the navigation end address from the navigation start address. The enlarged intersection view can be used to present a partially enlarged view of a road intersection. The full-view function control may be used to provide a full-screen presentation function. The exit navigation function control may be used to provide a function of exiting the navigation interface. The vehicle speed code group may be used to characterize the current travel speed of the vehicle.
According to an embodiment of the present disclosure, the predetermined display area may include the top, middle, or bottom of the interface. For example, the estimated time of arrival information, the full-route road condition information, the full-view function control, and the exit navigation function control can be arranged at the bottom of the interface, and the enlarged intersection view and the vehicle speed code group can be arranged at the top of the interface.
According to the embodiment of the disclosure, the predetermined page information is displayed in the predetermined display area of the navigation page, reducing occlusion of the navigation map area and realizing a reasonable layout of the displayed information.
Fig. 5 schematically shows a presentation interface example diagram of an information presentation method according to another embodiment of the present disclosure.
As shown in fig. 5, in 500, in a scene of road congestion, the presentation interface may include a vehicle speed code group 501 at the top of the interface, a navigation map area 502 in the middle of the interface, and a bottom panel area 503 at the bottom of the interface.
The vehicle speed code group 501 can be retracted by sliding left or clicking, leaving the top space of the interface transparent. When the vehicle speed code group 501 is retracted to the left or upward, it can be displayed again by clicking it. The vehicle speed code group 501 may display the distance still to be traveled on the current road; for example, the current vehicle speed is 58 km/h, and the vehicle still needs to travel 345 meters on the current road.
The navigation map area 502 may include bubble element information 502_1, a virtual object 502_2, and road element information 502_3. The bottom panel area 503 may include response information 503_1, estimated time of arrival information 503_2, full-route road condition information 503_3, and a full-view function control 503_4. The full-route road condition information 503_3 may be "90 km, 3 h 29 min", and the estimated time of arrival information 503_2 may be "arrival at 11:30".
For example, in a road congestion scene, the bubble element information 502_1 may be a prompt such as "3-minute congestion 240 m ahead, about 3 queuing waits". The auxiliary area can be determined from the bubble element information 502_1 and the road element information 502_3.
The expected azimuth information of the virtual object 502_2 can be determined to be the southwest direction according to the navigation scene information, and the target area can be determined according to the auxiliary area, the expected azimuth information, and the size information of the virtual object 502_2.
The virtual object 502_2 and the response information 503_1 can be displayed in the target area of the navigation page according to the target display mode. The response information 503_1 may be "Congestion will worsen; plan a new route for you to avoid the congested road?", and the response information 503_1 may also provide an avoid-congestion control and a cancel control.
Fig. 6 schematically shows a presentation interface example diagram of an information presentation method according to another embodiment of the present disclosure.
As shown in fig. 6, in 600, in the scenario of a road accident, the presentation interface may include a vehicle speed code group 601 at the top of the interface, a navigation map area 602 in the middle of the interface, and a bottom panel area 603 at the bottom of the interface.
The vehicle speed code group 601 can be retracted by sliding left or clicking, leaving the top space of the interface transparent. When the vehicle speed code group 601 is retracted to the left or upward, it can be displayed again by clicking it. The vehicle speed code group 601 may display the distance still to be traveled on the current road; for example, the current vehicle speed is 58 km/h, and the vehicle still needs to travel 345 meters on the current road.
The navigation map area 602 may include a virtual object 602_2 and road element information 602_3. The bottom panel area 603 may include response information 603_1, estimated time of arrival information 603_2, full-route road condition information 603_3, and a full-view function control 603_4. The full-route road condition information 603_3 may be "90 km, 3 h 29 min", and the estimated time of arrival information 603_2 may be "arrival at 11:30".
For example, when a traffic accident occurs at position 602_1 on road A, the expected azimuth information of the virtual object 602_2 may be determined to be the northeast direction according to the navigation scene information, and the target area may be determined according to the auxiliary area, the expected azimuth information, and the size information of the virtual object 602_2.
The virtual object 602_2 and the response information 603_1 can be displayed in the target area of the navigation page according to the target display mode. The response information 603_1 may be "The road accident ahead has been avoided for you", and the response information 603_1 may also provide an avoid-congestion control and a cancel control.
Fig. 7 schematically shows a presentation interface example diagram of an information presentation method according to another embodiment of the present disclosure.
As shown in fig. 7, in 700, in a scene requiring an enlarged intersection view, the display interface may include a vehicle speed code group 701 and an enlarged intersection view 704 at the top of the interface, a navigation map area 702 in the middle of the interface, and a bottom panel area 703 at the bottom of the interface.
The vehicle speed code group 701 can be retracted by sliding left or clicking, leaving the top space of the interface transparent. When the vehicle speed code group 701 is retracted to the left or upward, it can be displayed again by clicking it. The vehicle speed code group 701 may indicate the distance still to be traveled on the current road; for example, the current vehicle speed is 58 km/h, and the vehicle still needs to travel 345 meters on the current road.
The navigation map area 702 may include road element information 702_1 and a virtual object 702_2. The bottom panel area 703 may include response information 703_1, estimated time of arrival information 703_2, full-route road condition information 703_3, and a full-view function control 703_4. The full-route road condition information 703_3 may be "90 km, 3 h 29 min", and the estimated time of arrival information 703_2 may be "arrival at 11:30".
For example, in a scene requiring an enlarged intersection view, it may be determined from the navigation scene information that the expected azimuth information of the virtual object 702_2 is the bottom of the interface, and the target area may be determined according to the auxiliary area, the expected azimuth information, and the size information of the virtual object 702_2.
The virtual object 702_2 and the response information 703_1 may be displayed in the target area of the navigation page according to the target display mode, and the response information 703_1 may be a prompt such as "Try saying: navigate to XX" or "go home".
According to an embodiment of the present disclosure, the information presentation method may further include the following operations.
And determining an initial interest point according to the interaction demand information. And determining a target avatar from the association relationship set according to the initial interest point and the point of interest relationship graph. The point of interest relationship graph includes at least two interest nodes and at least one edge, two interest nodes connected by an edge have a subordination relationship, an interest node represents an interest point, the association relationship set includes a plurality of association relationships, and an association relationship represents a relationship between a target interest point and an avatar. And determining a target action and a target expression according to the response information. And obtaining a target display mode according to the target avatar, the target action and the target expression.
According to an embodiment of the present disclosure, a Point of Interest (POI) may refer to an information point. The point of interest information may include at least one of: a point of interest identifier, a point of interest category, and point of interest position information. The point of interest relationship graph may refer to a graph for characterizing the association relationships between points of interest. The association between points of interest may include a subordination relationship, which may also be called a parent-child relationship. The point of interest relationship graph may include at least two interest nodes and at least one edge. The two interest nodes connected by an edge have a subordination relationship. An edge may be a directed edge: the interest node pointed to by the arrow of a directed edge is a child interest node, and the other interest node connected to the directed edge is a parent interest node. An association relationship may refer to an association relationship between a target point of interest and an avatar.
According to the embodiment of the disclosure, the target interest node corresponding to the initial interest point can be determined from the point of interest relationship graph. Since an interest node characterizes an interest point, the target interest point may be determined according to the target interest node corresponding to the initial interest point. The target avatar can then be determined from the association relationship set according to the target interest point.
According to the embodiment of the present disclosure, the target action and the target expression of the target avatar may be determined according to the response information. For example, the response information may be analyzed to obtain target scene information, and the target action and the target expression required by the target avatar when displaying the response information are determined according to the target scene information; a target lip movement may also be included. For example, the target action and the target expression corresponding to the target scene information may be determined according to the association relationships among scene information, actions, and expressions.
According to the embodiment of the disclosure, the target avatar, the target action and the target expression can be combined to obtain the target display mode.
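The combination step can be summarized in a few lines; the field names and the scene-to-behavior table are illustrative, not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class DisplayMode:
        avatar: str      # target avatar recalled via the POI relationship graph
        action: str      # target action for the response scene
        expression: str  # target expression for the response scene

    # Hypothetical scene-to-behavior table (see the earlier mapping sketch).
    SCENE_BEHAVIOR = {"road_congestion": ("point_ahead", "concerned")}

    def determine_target_display_mode(avatar: str, scene: str) -> DisplayMode:
        action, expression = SCENE_BEHAVIOR.get(scene, ("idle", "neutral"))
        return DisplayMode(avatar, action, expression)

    mode = determine_target_display_mode("garden_guide_avatar", "road_congestion")
    print(mode)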
According to the embodiment of the present disclosure, determining the target image from the association relationship set according to the initial interest point and the interest point relationship map may include the following operations.
A target point of interest is determined according to the initial point of interest and the point-of-interest relationship map. The avatar corresponding to the target point of interest is determined from the association relationship set, and that avatar is determined as the target avatar.
According to an embodiment of the present disclosure, the target interest node corresponding to the initial point of interest may be determined from the point-of-interest relationship map, and the target point of interest is determined according to the target interest node. A matching point of interest corresponding to the target point of interest is then determined from the association relationship set, and the avatar corresponding to the matching point of interest is determined as the target avatar.
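For illustration, a minimal Python sketch of this recall logic is given below. The dictionaries stand in for the point-of-interest relationship map (directed child-to-parent edges) and the association relationship set; all entries are assumptions, not data from the disclosure.

```python
# A minimal sketch of recalling the target avatar from an initial POI.

POI_PARENT = {
    "xx garden-east gate": "xx garden",  # the east gate is a sub-POI
}

POI_AVATAR = {
    "xx garden": "garden-themed avatar",
    "xx hospital": "doctor avatar",
}

def resolve_target_avatar(initial_poi: str) -> str | None:
    """Walk up the affiliation edges, then match the association set."""
    target_poi = initial_poi
    while target_poi in POI_PARENT:       # follow child -> parent edges
        target_poi = POI_PARENT[target_poi]
    return POI_AVATAR.get(target_poi)     # avatar for the target POI

print(resolve_target_avatar("xx garden-east gate"))  # garden-themed avatar
```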
According to an embodiment of the present disclosure, the point-of-interest relationship map is created according to the positional relationships and semantic relationships of at least two points of interest.
According to an embodiment of the present disclosure, points of interest having an affiliation relationship may be determined according to the positional relationships and semantic relationships of at least two points of interest, and the point-of-interest relationship map may be created according to the points of interest having the affiliation relationship.
For example, the point of interest "xx garden-east gate" is a sub-point of interest of the point of interest "xx garden". When the user performs route planning online, the initial point of interest is determined to be "xx garden-east gate" according to the interaction demand information. The target point of interest is determined to be "xx garden" according to the initial point of interest and the point-of-interest relationship map. The avatar corresponding to the target point of interest "xx garden" is determined from the association relationship set and is taken as the target avatar. The target avatar is thus an avatar related to "xx garden" rather than an avatar related to "gate", which realizes accurate avatar recall.
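A minimal sketch of creating such a map is given below, assuming affiliation is inferred from positional containment within a radius plus a semantic name-prefix match; both criteria are illustrative stand-ins for the positional and semantic relationships described above, not the method prescribed by the disclosure.

```python
# A minimal sketch of building the POI relationship map's directed
# child -> parent edges from positional and semantic relationships.

from dataclasses import dataclass

@dataclass
class Poi:
    name: str
    x: float
    y: float

def is_sub_poi(child: Poi, parent: Poi, radius: float = 500.0) -> bool:
    # Positional relationship: the child lies within a radius of the parent.
    near = (child.x - parent.x) ** 2 + (child.y - parent.y) ** 2 <= radius ** 2
    # Semantic relationship: the child's name extends the parent's name.
    semantic = child.name.startswith(parent.name + "-")
    return near and semantic

def build_poi_map(pois: list[Poi]) -> list[tuple[str, str]]:
    """Directed edges (child -> parent) of the POI relationship map."""
    return [(c.name, p.name) for c in pois for p in pois
            if c is not p and is_sub_poi(c, p)]

pois = [Poi("xx garden", 0, 0), Poi("xx garden-east gate", 120, 40)]
print(build_poi_map(pois))  # [('xx garden-east gate', 'xx garden')]
```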
According to an embodiment of the present disclosure, the information presentation method may further include the following operations.
Target point-of-interest information of each of at least one target point of interest is acquired to obtain at least one piece of target point-of-interest information. A target image matching each piece of target point-of-interest information is determined from an image library to obtain at least one target image. The association relationship set is then created according to the at least one target point of interest and the at least one target image.
According to an embodiment of the present disclosure, a target point of interest may include at least one of: a navigation end point, a navigation way point, and a key point. The target point-of-interest information may include at least one of: a target point-of-interest identifier, a target point-of-interest category, and target point-of-interest location information. Log information of points of interest may be acquired, and the key point information may be determined according to the log information.
According to an embodiment of the present disclosure, for each piece of target point-of-interest information among the at least one piece of target point-of-interest information, a target image matching that information may be determined from the image library using an image retrieval method, thereby obtaining at least one target image. The association relationship set is created according to the at least one target image and the target points of interest corresponding to the at least one piece of target point-of-interest information.
According to an embodiment of the present disclosure, creating the association relationship set according to the at least one target point of interest and the at least one target image may include the following operations.
The at least one target image is identified to obtain at least one object. For each object of the at least one object, semantic understanding is performed on the target image including the object to obtain a semantic understanding result. The semantic understanding result includes an avatar. The association relationship set is created according to the at least one target point of interest and the avatar corresponding to the at least one target point of interest.
According to an embodiment of the present disclosure, a target image may or may not include an object. The at least one target image may be identified to obtain at least one object. For example, the at least one target image may be processed using an image recognition model to obtain an image recognition result, and the image recognition result may include at least one object. After an object is obtained, semantic understanding may be performed on the target image including the object to obtain a semantic understanding result including an avatar. For example, the target image including the object may be processed using a semantic understanding model to obtain the semantic understanding result. Each target point of interest is then associated with its corresponding avatar to obtain an association relationship.
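For illustration, the offline construction of the association relationship set might be sketched in Python as follows. The functions retrieve_images, recognize_objects, and extract_avatar are hypothetical stand-ins for an image retrieval method, an image recognition model, and a semantic understanding model; none of them is an API from the disclosure.

```python
# A minimal sketch of building the association relationship set offline.

def retrieve_images(poi: str, image_library: list[str]) -> list[str]:
    # Toy retrieval: match on the POI name appearing in the file name.
    return [img for img in image_library if poi.replace(" ", "_") in img]

def recognize_objects(image: str) -> list[str]:
    # Toy recognition: a target image may or may not include an object.
    return ["person"] if "person" in image else []

def extract_avatar(image: str) -> str:
    # Toy semantic understanding: the result includes an avatar description.
    return f"avatar derived from {image}"

def build_association_set(target_pois: list[str], image_library: list[str]) -> dict:
    associations: dict[str, list[str]] = {}
    for poi in target_pois:
        for image in retrieve_images(poi, image_library):
            if not recognize_objects(image):
                continue  # skip target images without any object
            associations.setdefault(poi, []).append(extract_avatar(image))
    return associations

library = ["xx_hospital_person_01.jpg", "xx_hospital_building.jpg"]
print(build_association_set(["xx hospital"], library))
```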
According to an embodiment of the present disclosure, candidate association relationships are obtained through data mining and image information extraction, which helps guarantee the richness, diversity, and coverage of the virtual object's avatars.
According to embodiments of the present disclosure, avatars may be ranked according to navigation flow. For example, an avatar's priority may be raised according to the navigation flow it attracts. When the number of avatars corresponding to a target point of interest is plural, the avatar corresponding to the target point of interest may be determined according to the avatars' priorities.
For example, the avatar corresponding to the target point of interest "x hospital" is a doctor avatar wearing a doctor's gown, and the avatar corresponding to the target point of interest "xx clinic" is also a doctor avatar wearing a doctor's gown. The priority of the doctor avatar may therefore be raised according to the navigation flow, so that the required avatar of the virtual object can be obtained to guide the generation of the target display mode.
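A minimal sketch of this priority rule, with illustrative flow counts (not data from the disclosure), might be:

```python
from collections import Counter

# Navigation flow attracted by each avatar (illustrative counts), e.g. how
# often navigations end at points of interest associated with that avatar.
avatar_flow = Counter({
    "doctor avatar": 1200,   # shared by "x hospital" and "xx clinic"
    "barista avatar": 300,
})

def pick_avatar(candidates: list[str]) -> str:
    """When a target POI corresponds to several avatars, pick the one with
    the highest navigation-flow priority."""
    return max(candidates, key=lambda avatar: avatar_flow[avatar])

print(pick_avatar(["doctor avatar", "barista avatar"]))  # doctor avatar
```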
Fig. 8A schematically illustrates a presentation interface example diagram of an information presentation method according to another embodiment of the present disclosure.
As shown in fig. 8A, in a scenario 800A in which the interaction demand information indicates that the navigation end point is a coffee shop, the presentation interface may include a vehicle speed code group 801 at the top of the interface, a navigation map area 802 in the middle of the interface, and a bottom panel area 803 at the bottom of the interface.
The vehicle speed code group 801 can be slid leftward or clicked to collapse, leaving the top area of the interface transparent. When the vehicle speed code group 801 has been collapsed to the left or to the top, it can be displayed again by clicking on it. The vehicle speed code group 801 may display the distance still to be traveled on the current road. For example, the current vehicle speed is 58 km/h, and the distance still to be traveled by the vehicle on the current road is 345 meters.
The navigation map area 802 may include bubble element information 802_1, a virtual object 802_2, and road element information 802_3. The bottom panel area 803 may include response information 803_1, estimated arrival time information 803_2, full road condition information 803_3, and a full-view function control 803_4. The full road condition information 803_3 may be "90 km 3 h 29 min", and the estimated arrival time information 803_2 may be "11:30 arrival". The virtual object 802_2 may be adapted to the scene of the coffee shop; for example, as shown in fig. 8A, the virtual object 802_2 may display a coffee logo. Furthermore, the virtual object 802_2 may be adapted to the scene of the coffee shop by changing the avatar's features, behavior, and thoughts; for example, the virtual object 802_2 may have an avatar adapted to the scene of the coffee shop or perform corresponding actions, which is not limited by the embodiments of the present disclosure.
For example, in the scenario where the navigation end point is a coffee shop, the bubble element information 802_1 may be used to indicate the location of the "coffee shop", and the auxiliary area may be determined according to the bubble element information 802_1 and the road element information 802_3.
The expected orientation information of the virtual object 802_2 may be determined to be the due-east direction according to the navigation scene information, and the target area may be determined according to the auxiliary area, the expected orientation information, and the size information of the virtual object 802_2.
The virtual object 802_2 and the response information 803_1 may be displayed in the target area of the navigation page according to the target display mode. The response information 803_1 may be "Reached the end point: coffee shop. Would you like to quickly reserve a new item?", and the response information 803_1 may also provide a quick reservation control and a cancel control.
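For illustration, the placement logic of this scenario might be sketched as follows. Rectangles are (x, y, width, height) tuples in screen pixels, and the coordinates, the scan step, and the "east" heuristic are illustrative assumptions rather than values from the disclosure.

```python
# A minimal sketch: scan candidate positions from the expected side of the
# navigation map area and reject any that overlap the auxiliary area.

def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def find_target_area(map_area, auxiliary_areas, obj_size, expected_side="east"):
    """Return a placement rectangle for the virtual object, or None."""
    mx, my, mw, mh = map_area
    ow, oh = obj_size
    xs = (range(mx + mw - ow, mx - 1, -10) if expected_side == "east"
          else range(mx, mx + mw - ow + 1, 10))
    for x in xs:
        candidate = (x, my + (mh - oh) // 2, ow, oh)  # vertically centered
        if not any(rects_overlap(candidate, aux) for aux in auxiliary_areas):
            return candidate
    return None  # no placement respects the auxiliary areas

map_area = (0, 100, 1080, 1400)                    # navigation map area 802
aux = [(80, 300, 200, 120), (420, 900, 240, 160)]  # bubble 802_1, road 802_3
print(find_target_area(map_area, aux, obj_size=(220, 320)))  # (860, 640, 220, 320)
```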
Fig. 8B schematically illustrates a presentation interface example diagram of an information presentation method according to another embodiment of the present disclosure.
As shown in fig. 8B, in a scenario 800B in which the interaction demand information indicates that the navigation end point is a clothing store, the presentation interface may include a vehicle speed code group 804 at the top of the interface, a navigation map area 805 in the middle of the interface, and a bottom panel area 806 at the bottom of the interface.
The vehicle speed code group 804 can be slid leftward or clicked to collapse, leaving the top area of the interface transparent. When the vehicle speed code group 804 has been collapsed to the left or to the top, it can be displayed again by clicking on it. The vehicle speed code group 804 may display the distance still to be traveled on the current road. For example, the current vehicle speed is 58 km/h, and the distance still to be traveled by the vehicle on the current road is 345 meters.
The navigation map area 805 may include bubble element information 805_1, a virtual object 805_2, and road element information 805_3. The bottom panel area 806 may include response information 806_1, estimated arrival time information 806_2, full road condition information 806_3, and a full-view function control 806_4. The full road condition information 806_3 may be "90 km 3 h 29 min", and the estimated arrival time information 806_2 may be "11:30 arrival". The virtual object 805_2 may be adapted to the scene of the clothing store; for example, as shown in fig. 8B, the virtual object 805_2 may display a clothing logo. Furthermore, the virtual object 805_2 may also be adapted to the scene of the clothing store by changing the avatar's features, behavior, thoughts, and the like; for example, the virtual object 805_2 may have an avatar adapted to the scene of the clothing store or perform corresponding actions, which is not limited by the embodiments of the present disclosure.
For example, in the scenario where the navigation end point is a clothing store, the bubble element information 805_1 may be used to indicate the location of the "clothing store", and the auxiliary area may be determined according to the bubble element information 805_1 and the road element information 805_3.
The expected orientation information of the virtual object 805_2 may be determined to be the due-east direction according to the navigation scene information, and the target area may be determined according to the auxiliary area, the expected orientation information, and the size information of the virtual object 805_2.
The virtual object 805_2 and the response information 806_1 may be displayed in the target area of the navigation page according to the target display mode. The response information 806_1 may be "Reached the end point: clothing store. Would you like to see the latest apparel from the clothing stores you have favorited?", and the response information 806_1 may also provide a "recommend for me" control and a cancel control.
Fig. 8C schematically illustrates a presentation interface example diagram of an information presentation method according to another embodiment of the present disclosure.
As shown in fig. 8C, in a scenario 800C in which the interaction demand information indicates that the navigation end point is a dessert shop, the presentation interface may include a vehicle speed code group 807 at the top of the interface, a navigation map area 808 in the middle of the interface, and a bottom panel area 809 at the bottom of the interface.
The vehicle speed code group 807 can be slid leftward or clicked to collapse, leaving the top area of the interface transparent. When the vehicle speed code group 807 has been collapsed to the left or to the top, it can be displayed again by clicking on it. The vehicle speed code group 807 may display the distance still to be traveled on the current road. For example, the current vehicle speed is 58 km/h, and the distance still to be traveled by the vehicle on the current road is 345 meters.
The navigation map area 808 may include bubble element information 808_1, a virtual object 808_2, and road element information 808_3. The bottom panel area 809 may include response information 809_1, estimated arrival time information 809_2, full road condition information 809_3, and a full-view function control 809_4. The full road condition information 809_3 may be "90 km 3 h 29 min", and the estimated arrival time information 809_2 may be "11:30 arrival". The virtual object 808_2 may be adapted to the scene of the dessert shop; for example, as shown in fig. 8C, the virtual object 808_2 may display a dessert logo. Furthermore, the virtual object 808_2 may also be adapted to the scene of the dessert shop in other ways; for example, the virtual object 808_2 may have an avatar adapted to the scene of the dessert shop or perform corresponding actions, which is not limited by the embodiments of the present disclosure.
For example, in the scenario where the navigation end point is a dessert shop, the bubble element information 808_1 may be used to indicate the location of the "dessert shop", and the auxiliary area may be determined according to the bubble element information 808_1 and the road element information 808_3.
The expected orientation information of the virtual object 808_2 may be determined to be the due-east direction according to the navigation scene information, and the target area may be determined according to the auxiliary area, the expected orientation information, and the size information of the virtual object 808_2.
The virtual object 808_2 and the response information 809_1 may be displayed in the target area of the navigation page according to the target display mode. The response information 809_1 may be "Reached the end point: dessert shop. Would you like to quickly reserve a new dessert from the dessert shop you have favorited?", and the response information 809_1 may also provide a quick reservation control and a cancel control.
FIG. 9 schematically shows a block diagram of an information presentation apparatus according to an embodiment of the present disclosure.
As shown in fig. 9, the information presentation apparatus 900 may include a first obtaining module 910 and a first presentation module 920.
The first obtaining module 910 is configured to obtain the interaction demand information in response to detecting that the interaction control of the navigation page is triggered.
The first display module 920 is configured to display the virtual object and the response information on the navigation page according to the target display mode. The target display mode is determined according to the interaction demand information and the response information. The response information is generated according to the interaction demand information.
According to an embodiment of the present disclosure, the first display module 920 may include a first display unit.
The first display unit is configured to display the virtual object and the response information in a target area of the navigation page according to the target display mode. The positional relationship between the target area and an auxiliary area of the navigation page satisfies a predetermined positional relationship condition, and the target area and the auxiliary area are located in a navigation map area of the navigation page.
According to an embodiment of the present disclosure, the first display module 920 may further include a first determining unit and a second determining unit.
The first determining unit is configured to determine the auxiliary area according to expected auxiliary information. The expected auxiliary information includes road element information and bubble element information of the navigation map area.
The second determining unit is configured to determine the target area according to the auxiliary area, navigation scene information, and size information of the virtual object. The navigation scene information includes at least one of: navigation path information and road state information corresponding to the navigation path.
According to an embodiment of the present disclosure, the second determining unit may include a first determining subunit and a second determining subunit.
The first determining subunit is configured to determine expected orientation information according to the navigation scene information. The expected orientation information includes an expected presentation area and an expected presentation direction.
The second determining subunit is configured to determine the target area according to the auxiliary area, the expected orientation information, and the size information of the virtual object.
According to an embodiment of the present disclosure, the information presentation apparatus 900 may further include a first determining module, a second determining module, a third determining module, and a first acquisition module.
The first determining module is configured to determine an initial point of interest according to the interaction demand information.
The second determining module is configured to determine a target avatar from an association relationship set according to the initial point of interest and a point-of-interest relationship map. The point-of-interest relationship map includes at least two interest nodes and at least one edge; the two interest nodes connected by an edge have an affiliation relationship, and an interest node represents a point of interest. The association relationship set includes a plurality of association relationships, and an association relationship represents a relationship between a target point of interest and an avatar.
The third determining module is configured to determine a target action and a target expression according to the response information.
The first acquisition module is configured to obtain the target display mode according to the target avatar, the target action, and the target expression.
According to an embodiment of the present disclosure, the second determining module may include a third determining unit, a fourth determining unit, and a fifth determining unit.
The third determining unit is configured to determine a target point of interest according to the initial point of interest and the point-of-interest relationship map.
The fourth determining unit is configured to determine the avatar corresponding to the target point of interest from the association relationship set.
The fifth determining unit is configured to determine the avatar corresponding to the target point of interest as the target avatar.
According to an embodiment of the present disclosure, the point-of-interest relationship map is created according to the positional relationships and semantic relationships of at least two points of interest.
According to an embodiment of the present disclosure, the information presentation apparatus 900 may further include a second acquisition module, a fourth determining module, and a creating module.
The second acquisition module is configured to acquire target point-of-interest information of each of at least one target point of interest to obtain at least one piece of target point-of-interest information.
The fourth determining module is configured to determine, from an image library, a target image matching each piece of the at least one piece of target point-of-interest information, to obtain at least one target image.
The creating module is configured to create the association relationship set according to the at least one target point of interest and the at least one target image.
According to an embodiment of the present disclosure, the creation module may include an identification unit, a semantic understanding unit, and a creation unit.
The identification unit is configured to identify the at least one target image to obtain at least one object.
The semantic understanding unit is configured to, for each object of the at least one object, perform semantic understanding on the target image including the object to obtain a semantic understanding result. The semantic understanding result includes an avatar.
The creating unit is configured to create the association relationship set according to the at least one target point of interest and the avatar corresponding to the at least one target point of interest.
According to an embodiment of the present disclosure, the interaction control includes a voice interaction control, and the response information includes voice broadcast information.
According to an embodiment of the present disclosure, the first display module 920 may include a second display unit.
The second display unit is configured to display the virtual object and broadcast the voice broadcast information on the navigation page according to the target display mode during an interjectable period. The interjectable period represents a period during which non-navigation broadcast information is allowed to be interjected during navigation broadcasting.
According to an embodiment of the present disclosure, the first display module 920 may further include a third display unit.
The third display unit is configured to display the virtual object and broadcast navigation broadcast information on the navigation page during a navigation broadcasting period. The interjectable period represents a target period in the navigation broadcasting process other than the navigation broadcasting period.
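For illustration, gating non-navigation broadcasts to the interjectable period might be sketched as follows; the interval data and the time units are illustrative assumptions.

```python
# A minimal sketch: navigation broadcast intervals are (start, end) seconds
# on a shared clock; every instant outside them is interjectable.

def in_navigation_broadcast(t: float, nav_intervals) -> bool:
    return any(start <= t < end for start, end in nav_intervals)

def choose_broadcast(t: float, nav_intervals, response_audio: str, nav_audio: str) -> str:
    """Play navigation audio in its own period; interject responses elsewhere."""
    if in_navigation_broadcast(t, nav_intervals):
        return nav_audio       # navigation broadcasting period
    return response_audio      # interjectable period

nav_intervals = [(0.0, 5.0), (30.0, 38.0)]  # e.g. turn-prompt windows
print(choose_broadcast(12.0, nav_intervals, "voice response", "turn prompt"))
```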
According to the embodiment of the disclosure, the target display mode is determined according to the interaction demand information, the response information and the configuration information.
According to an embodiment of the present disclosure, the configuration information is acquired as follows: in response to detecting that a configuration control of the navigation page is triggered, a configuration page is displayed. The configuration page includes at least one of: a colloquialization configuration item and an emotionalization configuration item. In response to detecting a configuration operation for a configuration item, the configuration information is acquired.
According to an embodiment of the present disclosure, the information display apparatus 900 may further include a second display module.
The second display module is configured to display auxiliary interaction information on the navigation page while the virtual object and the response information are displayed on the navigation page according to the target display mode.
According to an embodiment of the present disclosure, the first obtaining module 910 may include a fourth display unit and an acquisition unit.
The fourth display unit is configured to display at least one piece of recommendation information on the navigation page in response to detecting that the interaction control of the navigation page is triggered.
The acquisition unit is configured to acquire the interaction demand information in response to detecting a selection operation for the at least one piece of recommendation information.
According to an embodiment of the present disclosure, the information presentation apparatus 900 may further include a third display module.
The third display module is configured to display predetermined page information in a predetermined display area of the navigation page. The predetermined page information includes at least one of: estimated arrival time information, full road condition information, an enlarged intersection image, a full-view function control, an exit-navigation function control, and a vehicle speed code group. The intersection area between the predetermined display area and a target navigation map area is less than or equal to a predetermined area threshold, and the target navigation map area is a predetermined portion of the navigation map area of the navigation page.
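For illustration, the intersection-area constraint might be checked as follows; the rectangles and the area threshold are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch: verify that the predetermined display area barely
# overlaps the target navigation map area. Rectangles are (x, y, w, h).

def intersection_area(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

predetermined_area = (0, 0, 1080, 120)   # top strip: vehicle speed code group
target_map_area = (0, 100, 1080, 1400)   # predetermined part of the map area
AREA_THRESHOLD = 25_000                  # pixels^2, illustrative

ok = intersection_area(predetermined_area, target_map_area) <= AREA_THRESHOLD
print(ok)  # True: intersection is 1080 * 20 = 21600 <= 25000
```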
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
According to an embodiment of the present disclosure, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the present disclosure.
According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform the method described in the present disclosure.
According to an embodiment of the disclosure, a computer program product comprising a computer program which, when executed by a processor, implements a method as described in the disclosure.
FIG. 10 schematically illustrates a block diagram of an electronic device suitable for implementing the information presentation method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the electronic device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. Various programs and data necessary for the operation of the electronic device 1000 can also be stored in the RAM 1003. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to one another by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
A number of components in the electronic device 1000 are connected to the I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the electronic device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1001 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 1001 performs the methods and processes described above, such as the information presentation method. For example, in some embodiments, the information presentation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the information presentation method described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the information presentation method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. An information display method, comprising:
acquiring interaction demand information in response to detecting that an interaction control of a navigation page is triggered; and
displaying a virtual object and response information on the navigation page according to a target display mode, wherein the target display mode is determined according to the interaction demand information and the response information, and the response information is generated according to the interaction demand information.
2. The method of claim 1, wherein the displaying the virtual object and the response information on the navigation page according to the target display mode comprises:
displaying the virtual object and the response information in a target area of the navigation page according to the target display mode, wherein a positional relationship between the target area and an auxiliary area of the navigation page satisfies a predetermined positional relationship condition, and the target area and the auxiliary area are located in a navigation map area of the navigation page.
3. The method of claim 2, further comprising:
determining the auxiliary area according to expected auxiliary information, wherein the expected auxiliary information comprises road element information and bubble element information of the navigation map area; and
determining the target area according to the auxiliary area, navigation scene information and size information of the virtual object, wherein the navigation scene information comprises at least one of the following: navigation path information and road state information corresponding to the navigation path.
4. The method of claim 3, wherein the determining the target area according to the auxiliary area, navigation scene information, and size information of the virtual object comprises:
determining expected orientation information according to the navigation scene information, wherein the expected orientation information comprises an expected presentation area and an expected presentation direction; and
determining the target area according to the auxiliary area, the expected orientation information and the size information of the virtual object.
5. The method of any of claims 1-4, further comprising:
determining an initial point of interest according to the interaction demand information;
determining a target avatar from an association relationship set according to the initial point of interest and a point-of-interest relationship map, wherein the point-of-interest relationship map comprises at least two interest nodes and at least one edge, the two interest nodes connected by an edge have an affiliation relationship, an interest node represents a point of interest, the association relationship set comprises a plurality of association relationships, and an association relationship represents a relationship between a target point of interest and an avatar;
determining a target action and a target expression according to the response information; and
obtaining the target display mode according to the target avatar, the target action and the target expression.
6. The method of claim 5, wherein the determining a target avatar from an association relationship set according to the initial point of interest and the point-of-interest relationship map comprises:
determining a target point of interest according to the initial point of interest and the point-of-interest relationship map;
determining an avatar corresponding to the target point of interest from the association relationship set; and
determining the avatar corresponding to the target point of interest as the target avatar.
7. The method according to claim 5 or 6, wherein the point-of-interest relationship map is created according to positional relationships and semantic relationships of at least two points of interest.
8. The method of any of claims 5-7, further comprising:
acquiring target point-of-interest information of each of at least one target point of interest to obtain at least one piece of target point-of-interest information;
determining, from an image library, a target image matching each piece of the at least one piece of target point-of-interest information to obtain at least one target image; and
creating the association relationship set according to the at least one target point of interest and the at least one target image.
9. The method of claim 8, wherein the creating the association relationship set according to the at least one target point of interest and the at least one target image comprises:
identifying the at least one target image to obtain at least one object;
performing, for each object of the at least one object, semantic understanding on a target image comprising the object to obtain a semantic understanding result, wherein the semantic understanding result comprises an avatar; and
creating the association relationship set according to the at least one target point of interest and the avatar corresponding to the at least one target point of interest.
10. The method of claim 1, wherein the interaction control comprises a voice interaction control, and the response information comprises voice broadcast information;
wherein the displaying the virtual object and the response information on the navigation page according to the target display mode comprises:
displaying the virtual object and broadcasting the voice broadcast information on the navigation page according to the target display mode during an interjectable period, wherein the interjectable period represents a period during which non-navigation broadcast information is allowed to be interjected during navigation broadcasting.
11. The method of claim 10, further comprising:
displaying the virtual object and broadcasting navigation broadcast information on the navigation page during a navigation broadcasting period, wherein the interjectable period represents a target period in the navigation broadcasting process other than the navigation broadcasting period.
12. The method of claim 1, wherein the target display mode is determined according to the interaction demand information, the response information, and configuration information;
wherein the configuration information is obtained by:
in response to detecting that a configuration control of the navigation page is triggered, presenting a configuration page, wherein the configuration page comprises at least one of: a colloquialization configuration item and an emotionalization configuration item; and
acquiring the configuration information in response to detecting a configuration operation for a configuration item.
13. The method of claim 1, further comprising:
displaying auxiliary interaction information on the navigation page while the virtual object and the response information are displayed on the navigation page according to the target display mode.
14. The method according to any one of claims 1 to 13, wherein the acquiring interaction demand information in response to detecting that the interaction control of the navigation page is triggered comprises:
in response to detecting that an interaction control of the navigation page is triggered, displaying at least one piece of recommendation information on the navigation page; and
acquiring the interaction demand information in response to detecting a selection operation for the at least one piece of recommendation information.
15. The method of any of claims 1-14, further comprising:
displaying predetermined page information in a predetermined display area of the navigation page, wherein the predetermined page information comprises at least one of the following: estimated arrival time information, full road condition information, an enlarged intersection image, a full-view function control, an exit-navigation function control and a vehicle speed code group, wherein an intersection area between the predetermined display area and a target navigation map area is less than or equal to a predetermined area threshold, and the target navigation map area is a predetermined portion of the navigation map area of the navigation page.
16. An information display apparatus, comprising:
a first acquisition module configured to acquire interaction demand information in response to detecting that an interaction control of a navigation page is triggered; and
a first display module configured to display a virtual object and response information on the navigation page according to a target display mode, wherein the target display mode is determined according to the interaction demand information and the response information, and the response information is generated according to the interaction demand information.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-15.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-15.
19. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 15.
CN202210763843.1A 2022-06-29 2022-06-29 Information display method and device, electronic equipment and storage medium Active CN115113963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210763843.1A CN115113963B (en) 2022-06-29 2022-06-29 Information display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115113963A true CN115113963A (en) 2022-09-27
CN115113963B CN115113963B (en) 2023-04-07

Family

ID=83330453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210763843.1A Active CN115113963B (en) 2022-06-29 2022-06-29 Information display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115113963B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030221183A1 (en) * 2002-05-24 2003-11-27 Petr Hejl Virtual friend with special features
CN104748738A (en) * 2013-12-31 2015-07-01 深圳先进技术研究院 Indoor positioning navigation method and system
CN105404629A (en) * 2014-09-12 2016-03-16 华为技术有限公司 Method and device for determining map interface
CN106546234A (en) * 2016-11-04 2017-03-29 江苏科技大学 Market touch navigation system
CN107045844A (en) * 2017-04-25 2017-08-15 张帆 A kind of landscape guide method based on augmented reality
CN107423445A (en) * 2017-08-10 2017-12-01 腾讯科技(深圳)有限公司 A kind of map data processing method, device and storage medium
CN107643084A (en) * 2016-07-21 2018-01-30 阿里巴巴集团控股有限公司 Data object information, real scene navigation method and device are provided
CN108200446A (en) * 2018-01-12 2018-06-22 北京蜜枝科技有限公司 Multimedia interactive system and method on the line of virtual image
CN108803615A (en) * 2018-07-03 2018-11-13 东南大学 A kind of visual human's circumstances not known navigation algorithm based on deeply study
CN111595349A (en) * 2020-06-28 2020-08-28 浙江商汤科技开发有限公司 Navigation method and device, electronic equipment and storage medium
CN111784271A (en) * 2019-04-04 2020-10-16 腾讯科技(深圳)有限公司 User guiding method, device, equipment and storage medium based on virtual object
CN112344932A (en) * 2019-08-09 2021-02-09 上海红星美凯龙悦家互联网科技有限公司 Indoor navigation method, device, equipment and storage medium
CN113566847A (en) * 2021-07-22 2021-10-29 北京百度网讯科技有限公司 Navigation calibration method and device, electronic equipment and computer readable medium
CN114608584A (en) * 2022-03-25 2022-06-10 深圳市商汤科技有限公司 Navigation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115113963B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
EP3268697B1 (en) Entity search along the route
KR101725886B1 (en) Navigation directions between automatically determined startin points and selected distinations
CN104200696B (en) The method for pushing and device of a kind of transport information
US10302442B2 (en) Transit incident reporting
US20160003637A1 (en) Route detection in a trip-oriented message data communications system
CN106643774B (en) Navigation route generation method and terminal
KR101886966B1 (en) Method for providing customized travel plan and server implementing the same
US20140358603A1 (en) Iterative public transit scoring
CN109631920B (en) Map application with improved navigation tool
CN111797184A (en) Information display method, device, equipment and medium
CN113160607A (en) Parking space navigation method and device, electronic equipment, storage medium and product
AU2017397651B2 (en) Providing navigation directions
CN115113963B (en) Information display method and device, electronic equipment and storage medium
US11042819B2 (en) Server, client, and information sharing system
CN106372095B (en) Electronic map display method and device and vehicle-mounted equipment
CN113761398B (en) Information recommendation method and device, electronic equipment and storage medium
JP2020166525A (en) Information providing device, information providing program, and information providing method
CN114428917A (en) Map-based information sharing method, map-based information sharing device, electronic equipment and medium
CN115033807A (en) Recommendation method, device and equipment for future departure and storage medium
CN114692968A (en) Number taking processing method and device and electronic equipment
CN114116929A (en) Navigation processing method and device, electronic equipment and storage medium
US20160255173A1 (en) Client, server, and information sharing system
CN113175940A (en) Data processing method, device, equipment and storage medium
CN112373519A (en) Subway line graphical interface based on dynamic display time of intelligent transparent display vehicle window
WO2012164333A1 (en) System and method to search, collect and present various geolocated information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Liu Panpan

Inventor after: Zhang Hao

Inventor after: Xie Zhongyu

Inventor after: Chen Yaolin

Inventor after: Sun Longwei

Inventor after: Xiao Yao

Inventor after: Zhang Kun

Inventor after: Xu Meng

Inventor after: Ji Yongzhi

Inventor before: Liu Shanshan

Inventor before: Zhang Hao

Inventor before: Xie Zhongyu

Inventor before: Chen Yaolin

Inventor before: Sun Longwei

Inventor before: Xiao Yao

Inventor before: Zhang Kun

Inventor before: Xu Meng

Inventor before: Ji Yongzhi

GR01 Patent grant