CN115544390A - POI display method and device - Google Patents

POI display method and device

Info

Publication number
CN115544390A
CN115544390A (application CN202110741983.4A)
Authority
CN
China
Prior art keywords
poi
display information
dimensional scene
dimensional
cross
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110741983.4A
Other languages
Chinese (zh)
Inventor
李威阳
闫国兴
唐忠伟
郑亚
冯艳妮
康一飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110741983.4A priority Critical patent/CN115544390A/en
Priority to PCT/CN2022/101799 priority patent/WO2023274205A1/en
Publication of CN115544390A publication Critical patent/CN115544390A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/20 Information retrieval of structured data, e.g. relational data
                        • G06F 16/29 Geographical information databases
                    • G06F 16/90 Details of database functions independent of the retrieved data types
                        • G06F 16/95 Retrieval from the web
                            • G06F 16/953 Querying, e.g. by the use of web search engines
                                • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
                                • G06F 16/9538 Presentation of query results
                            • G06F 16/957 Browsing optimisation, e.g. caching or content distillation
                                • G06F 16/9577 Optimising the visualization of content, e.g. distillation of HTML documents
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 19/00 Manipulating 3D models or images for computer graphics
                    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a POI display method and device, which can be applied to the field of navigation electronic maps, in particular to the construction of AR/VR three-dimensional scenes, and also to the construction of virtual three-dimensional scenes in games. In the method, a terminal device obtains first data corresponding to a first point of interest (POI) in a three-dimensional scene (for example, a three-dimensional scene constructed at the current moment) and generates first 3D display information based on the first data, where the first 3D display information includes 3D text and a 3D icon. This addresses the problem that existing POI content is not displayed stereoscopically and improves the user's sense of immersion. In addition, occluded 3D display information is adjusted according to the occlusion relationship of the first 3D display information corresponding to the first POI in the three-dimensional scene (when multiple pieces of 3D display information are displayed simultaneously, one or more of them may be occluded), which avoids the ambiguity that arises when a user sees only part of the 3D display information while browsing.

Description

POI display method and device
Technical Field
The application relates to the field of electronic maps, in particular to a POI display method and device.
Background
A point of interest (POI) is an important component of a navigation electronic map and generally refers to a landmark, building, scenic spot, or similar feature on the electronic map. POIs are used to mark places such as government departments, commercial institutions of various industries (gas stations, department stores, supermarkets, restaurants, hotels, convenience stores, hospitals, etc.), tourist attractions (parks, public toilets, etc.), historic sites, and transportation facilities (various stations, parking lots, speed cameras, speed-limit signs).
With the widespread application of augmented reality (AR) and virtual reality (VR), three-dimensional electronic maps are inevitably used, so methods for displaying and updating POIs in a three-dimensional scene are receiving more and more attention. Existing methods for displaying POIs in a three-dimensional scene mainly fall into two types: 1) two-dimensional tiled POIs, where the surrounding POI data in the current scene (including positions, names, and the like, which can be associated with buildings and other information in the scene) are obtained from the user's position, and the POI content (name, score, and other information) is then tiled on the device screen, as shown in the schematic application scene of fig. 1; 2) three-dimensional POI tags, where a three-dimensional content tag is fused into the scene according to the POI data near the user's position and can be adjusted with the user's viewing angle, as shown in sub-diagrams (a) and (b) of fig. 2.
In mode 1 above, the POI content is tiled within the screen area of the building, so it is easily occluded by other content in the scene, and the association between the POI and the entity (such as a building) to which it belongs is weak. In mode 2 above, the content of the three-dimensional POI tag is planar: the 3D POI information is expressed in a planar dimension, so the user's sense of immersion becomes weak when the tag is presented on the terminal device, and the continuity of the user's three-dimensional browsing is affected.
Disclosure of Invention
The embodiments of the application provide a POI display method and device, which generate 3D display information based on data corresponding to a POI, where the 3D display information includes at least one of 3D text, a 3D icon, and a background plate for the 3D text. This addresses the problem that existing POI content is not displayed stereoscopically and improves the user's sense of immersion. In addition, the 3D display information corresponding to an occluded POI is adjusted according to the occlusion relationship of the 3D display information corresponding to the POI (when multiple pieces of 3D display information are displayed simultaneously in a three-dimensional scene, one or more of them may be occluded), which avoids the ambiguity that arises when a user sees only part of the 3D display information while browsing.
Based on this, the embodiment of the present application provides the following technical solutions:
In a first aspect, an embodiment of the present application first provides a method for displaying a POI, which may be applied to the field of navigation electronic maps, specifically to the construction of AR/VR three-dimensional scenes, and may also be applied to the construction of virtual three-dimensional scenes in games; this is not limited herein. The method includes the following steps. First, the terminal device obtains data (which may be referred to as first data) corresponding to a first POI in a three-dimensional scene (e.g., a three-dimensional scene constructed at the current time and current location). The first data includes the target three-dimensional coordinates of the first POI in the constructed three-dimensional scene and an identifier of the first POI, where the first POI is any one POI in the constructed three-dimensional scene; for example, if the three-dimensional scene constructed by the terminal device at the current time and location contains 3 POIs, the first POI is any one of those 3 POIs. After acquiring the first data, the terminal device generates corresponding 3D display information (which may be referred to as first 3D display information) from it. The first 3D display information may include at least one of 3D text corresponding to the identifier, a 3D icon corresponding to the identifier, and a background plate for the 3D text; the display position of the first 3D display information in the three-dimensional scene is determined by the target three-dimensional coordinates of the first POI included in the first data. After generating the first 3D display information, the terminal device further generates a cross section of it (which may be referred to as a first cross section) and samples the first cross section to obtain at least one sampling point. The terminal device then determines whether a line between the camera location of the terminal device and each sampling point passes through a second cross section, where the second cross section is the cross section of second 3D display information corresponding to a second POI, and the second POI is any POI in the three-dimensional scene other than the first POI. If the terminal device determines that at least one line between its camera location and a sampling point passes through the second cross section, the first cross section is considered occluded by another cross section, and the terminal device needs to adjust the first 3D display information in the three-dimensional scene. For example, if there are 3 sampling points, there is one line between the camera location and each sampling point, giving 3 lines in total; if at least 1 of these 3 lines passes through the cross section of another POI in the three-dimensional scene, the first cross section is occluded.
In the above embodiments of the application, 3D display information is generated based on the data corresponding to the POI, where the 3D display information includes at least one of 3D text, a 3D icon, and a background plate for the 3D text. This addresses the problem that existing POI content is not displayed stereoscopically and improves the user's sense of immersion. In addition, the 3D display information corresponding to an occluded POI is adjusted according to the occlusion relationship of the 3D display information (when multiple pieces of 3D display information are displayed simultaneously in the three-dimensional scene, one or more of them may be occluded), which avoids the ambiguity that arises when a user sees only part of the 3D display information while browsing.
In a possible implementation of the first aspect, the process by which the terminal device generates the first 3D display information from the first data may specifically be as follows. First, the terminal device dynamically generates 3D text corresponding to the identifier based on a 3D text library, and generates a background plate for the 3D text according to the size of the text. The background plate refers to a background frame that can accommodate the 3D text; it may be a rectangular frame, an oval frame, or a frame of any other shape, which is not limited in this application. Dynamic generation means the 3D text can be generated anytime and anywhere as required, as distinguished from 3D text originally built into the terminal device. The 3D text is not limited to any language: it may be Chinese, such as a name hung on a building ("XX school", "XX hospital", "XX mall", "XX park"), or English or another language, which is not limited in this application. Then, the terminal device further generates a 3D icon corresponding to the identifier. 3D icons can be distinguished by service type: for example, a catering icon may be a "bowl + chopsticks" symbol, a tea icon a "coffee cup" symbol, and a shopping icon a "shopping bag" symbol. The specific representation of the 3D icon is not limited, as long as the user can identify the corresponding service type from it. Finally, the terminal device adjusts the relative positions of the generated 3D text, background plate, and 3D icon to obtain the first 3D display information.
The above embodiments of the present application address the insufficient three-dimensional continuity and weak immersion of POIs displayed in a three-dimensional scene by setting forth how to generate 3D display information including 3D text, a 3D icon, and a background plate, optimizing the display of POI content in the three-dimensional scene and improving the user's sense of immersion.
In a possible implementation of the first aspect, specific ways in which the terminal device generates the 3D icon corresponding to the identifier include, but are not limited to, the following. a. If the first data includes a classification code in addition to the target three-dimensional coordinates of the first POI in the constructed three-dimensional scene and the identifier of the first POI, the terminal device determines the classification code corresponding to the identifier and generates the 3D icon of the first POI according to a first mapping relationship between classification codes and icons. In the embodiment of the application, icons and classification codes correspond one-to-one; a classification code can be regarded as an index of an icon (the index rules are defined in advance), so the corresponding icon can be retrieved directly from the classification code and the corresponding 3D icon generated from it. Note that multiple types of 3D text may fall under one icon, i.e., different 3D text may share the same icon: for example, the 3D text for "store A" and "store B" differs, but both are shopping places and may therefore correspond to the same icon, with both generated 3D icons being 3D "shopping bag" symbols. b. If the first data includes only the target three-dimensional coordinates of the first POI in the constructed three-dimensional scene and the identifier of the first POI, the terminal device can generate the 3D icon of the first POI by extracting key features of the identifier and applying a second mapping relationship between key features and icons. This approach is similar to the previous one, except that the role of the classification code is played by the extracted key features; each type of key feature also corresponds one-to-one with an icon. For details, refer to way a above, which is not repeated here.
The above embodiments of the present application specifically describe several implementations of generating a 3D icon, offering flexibility and wide applicability.
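As an illustration of ways a and b above, the following sketch shows a classification-code lookup with a key-feature fallback. The mapping tables, codes, keywords, and function names are assumptions of this sketch, not values defined by the application.

```python
from typing import Optional

# Hypothetical mapping tables (the "first" and "second" mapping relationships).
CODE_TO_ICON = {
    "110000": "bowl_and_chopsticks",  # catering
    "120000": "coffee_cup",           # tea
    "130000": "shopping_bag",         # shopping
}
KEYWORD_TO_ICON = {
    "restaurant": "bowl_and_chopsticks",
    "cafe": "coffee_cup",
    "store": "shopping_bag",
}

def icon_for_poi(identifier: str, classification_code: Optional[str] = None) -> str:
    """Way a: look the icon up via the classification code, which acts as a
    predefined index. Way b: fall back to key features extracted from the
    identifier when no classification code is present in the first data."""
    if classification_code is not None:
        return CODE_TO_ICON.get(classification_code, "generic_pin")
    for keyword, icon in KEYWORD_TO_ICON.items():  # crude key-feature extraction
        if keyword in identifier.lower():
            return icon
    return "generic_pin"

# "Store A" and "Store B" differ as 3D text but share the shopping-bag icon.
assert icon_for_poi("Store A", "130000") == icon_for_poi("Store B", "130000")
```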
In a possible implementation of the first aspect, the terminal device may sample the first cross section randomly, with a preset number n of sampling points (e.g., 3, 5, or 8); or according to a preset rule, for example sampling once per given distance or per given area; or at special points of the first cross section, such as its corner points or center point. How the at least one sampling point is obtained is not limited in this application. In the embodiment of the present application, a typical sampling mode takes the 4 corner points of the first cross section as sampling points.
The above embodiments of the present application specifically describe a typical sampling method, which is readily realizable.
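A minimal sketch of the typical sampling mode, taking the 4 corner points of a rectangular cross section as sampling points. Parametrising the rectangle by its centre and two spanning unit vectors is an assumption of this sketch.

```python
import numpy as np

def sample_corner_points(center, right, up, width, height):
    """Return the 4 corner points of a rectangular cross section as sampling
    points; `center` is the section's centre in world coordinates, and
    `right`/`up` are unit vectors spanning the section plane."""
    return [center + sx * 0.5 * width * right + sy * 0.5 * height * up
            for sy in (-1.0, 1.0) for sx in (-1.0, 1.0)]

# Example: a 2 m x 0.5 m label cross section 10 m in front of the origin.
corners = sample_corner_points(np.array([0.0, 2.0, -10.0]),
                               np.array([1.0, 0.0, 0.0]),
                               np.array([0.0, 1.0, 0.0]),
                               width=2.0, height=0.5)
```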
In a possible implementation of the first aspect, besides determining whether the first 3D display information is occluded by checking whether a line between the camera location and a sampling point passes through another cross section, occlusion can also be determined in other ways, for example by checking whether the 4 corner points of the first cross section are visible (i.e., whether the line between any of the 4 corner points and the camera location penetrates another cross section). Specifically, when at least one line between the camera location of the terminal device and the 4 sampled corner points passes through the second cross section, the terminal device calculates the occluded area of the first cross section based on the line segments between the 4 corner points and target points located on those segments, and then determines the ratio of the occluded area to the total area of the first cross section. If this ratio reaches a preset threshold, the first 3D display information is considered too heavily occluded, and the terminal device adjusts it in the constructed three-dimensional scene.
The foregoing embodiment of the present application sets forth another way of determining whether the first 3D display information is occluded: first calculate the occluded area of the cross section corresponding to the 3D display information and its ratio to the area of the whole first cross section. If the ratio does not exceed a preset threshold, the occluded area is considered small, and the first 3D display information remains visible and can be retained as a whole; if the ratio exceeds the threshold, the occluded area is considered large, and retaining the information could cause ambiguity for the user, so the first 3D display information needs to be adjusted. The purpose of the adjustment is to preserve the overall integrity of 3D display information in the constructed three-dimensional scene and to avoid the ambiguity that arises when a user sees only the unoccluded portion of the POI content on screen.
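The ratio test could look as follows; the threshold value is a placeholder, since the application only requires a preset threshold, and the occluded area itself would be computed from the corner-segment construction described above.

```python
def too_heavily_occluded(occluded_area: float, total_area: float,
                         threshold: float = 0.3) -> bool:
    """Ratio test described above: the first 3D display information is
    adjusted only when the occluded fraction of its cross section reaches a
    preset threshold (0.3 is an assumed placeholder value)."""
    return occluded_area / total_area >= threshold

# A label with 10% of its cross section hidden is kept as a whole ...
assert not too_heavily_occluded(0.1, 1.0)
# ... while one with 40% hidden is adjusted.
assert too_heavily_occluded(0.4, 1.0)
```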
In a possible implementation of the first aspect, the ways in which the terminal device adjusts the first 3D display information include, but are not limited to: deleting the first 3D display information from the three-dimensional scene; shrinking it by a preset ratio; or moving its display position away from the display position of the second 3D display information. Specifically, if at least one line between the camera location of the terminal device and a sampling point passes through the second cross section, the adjustment may include, but is not limited to: a. deleting the first 3D display information from the constructed three-dimensional scene; b. shrinking the first 3D display information by a preset ratio (for example, to 80% or 60% of its original size, by a preset gradient) until no line between the camera location and a sampling point passes through the second cross section; c. moving the display position of the first 3D display information away from that of the second 3D display information until no line between the camera location and a sampling point passes through the second cross section. If instead the ratio between the occluded area of the first cross section and its total area reaches a preset threshold, the adjustment may include, but is not limited to: a. deleting the first 3D display information from the constructed three-dimensional scene; b. shrinking it by a preset ratio (e.g., to 80% or 60%, by a preset gradient) until the ratio of the occluded area to the total area no longer exceeds the threshold; c. moving its display position away from that of the second 3D display information until the ratio no longer exceeds the threshold. A sketch of these strategies is given after the next paragraph.
The above embodiments of the present application set forth several implementations for adjusting the first 3D display information in the constructed three-dimensional scene; they are simple, easy to operate, and flexible.
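A sketch of the three adjustment strategies applied in sequence (shrink, move away, delete). The data structure, step sizes, and gradient are assumptions of this sketch rather than values fixed by the application; `is_occluded` stands for whichever occlusion test is in use.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Label:                 # illustrative stand-in for 3D display information
    position: np.ndarray     # display position in the three-dimensional scene
    scale: float = 1.0

def adjust_label(label, occluder_position, is_occluded,
                 scale_step=0.8, min_scale=0.4, move_step=0.2, max_moves=10):
    """Try way b (shrink by a preset gradient, e.g. to 80% per step), then
    way c (move away from the occluding label's display position), and
    finally way a (delete) if the label is still occluded."""
    while is_occluded(label) and label.scale * scale_step >= min_scale:
        label.scale *= scale_step                      # way b: shrink
    for _ in range(max_moves):
        if not is_occluded(label):
            break
        away = label.position - occluder_position      # way c: move away
        label.position = label.position + move_step * away / np.linalg.norm(away)
    return None if is_occluded(label) else label       # way a: delete
```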
In a possible implementation of the first aspect, the target three-dimensional coordinates of the first POI in the three-dimensional scene are obtained by extending the initial three-dimensional coordinates of the first POI along the normal direction of the facade on which the first POI sits. The initial three-dimensional coordinates are the three-dimensional coordinates corresponding to a pixel-coordinate set obtained by forward intersection or ray tracing, where the pixel-coordinate set is the set of pixel coordinates of the first POI in a picture obtained through OCR.
In the above embodiment of the application, the initial three-dimensional coordinates obtained by forward intersection or ray tracing are extended along the normal direction of the POI's facade to obtain the target three-dimensional coordinates, which prevents the 3D display information from colliding with entities (such as buildings) in the constructed three-dimensional scene and improves the visual experience.
In a possible implementation of the first aspect, the three-dimensional scene is constructed by the terminal device based on acquired three-dimensional data for the current position. In some implementations of the present application, the construction process may be as follows. First, the terminal device sends its current position and orientation to the corresponding server (the orientation of the terminal device represents the orientation of the user carrying it). After receiving them, the server compares and matches them against the stored positions and orientations, and once a match is found, sends the corresponding three-dimensional scene data to the terminal device, which then constructs the three-dimensional scene for the current position and orientation from that data. Note that the three-dimensional scene data is used only for constructing the scene; in this embodiment of the application, the server also needs to obtain the data corresponding to each POI in the constructed scene (which may be referred to as POI data). As described above, the first POI corresponds to the first data, so the terminal device receives from the server not only the three-dimensional scene data but also the POI data corresponding to it, i.e., the first data.
The above embodiments of the present application state that the terminal device receives from the server the three-dimensional scene data (used to construct the three-dimensional scene) and the POI data corresponding to the current position and orientation, avoiding the data redundancy that would result from storing too much data on the terminal device.
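As a minimal illustration of the data exchanged in this implementation, the request and response might be modeled as below; all field names and types are assumptions of this sketch, not definitions from the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SceneRequest:
    """Terminal -> server: the current position and the orientation of the
    user carrying the terminal device."""
    position: Tuple[float, float, float]
    orientation_deg: float

@dataclass
class PoiData:
    """One entry of the POI data list; for the first POI this is the 'first
    data'. The classification code is optional, matching ways a and b of
    3D icon generation."""
    identifier: str
    target_xyz: Tuple[float, float, float]
    classification_code: Optional[str] = None

@dataclass
class SceneResponse:
    """Server -> terminal: the matched three-dimensional scene data together
    with the POI data list for that scene."""
    scene_data: bytes
    poi_list: List[PoiData] = field(default_factory=list)
```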
A second aspect of the embodiments of the present application provides a terminal device having the function of implementing the method of the first aspect or any one of its possible implementations. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the function.
A third aspect of the embodiments of the present application provides a terminal device that may include a memory, a processor, and a bus system, where the memory stores a program and the processor calls the program stored in the memory to execute the method of the first aspect or any one of its possible implementations.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores instructions that, when executed on a computer, enable the computer to perform the method of the first aspect or any one of the possible implementation manners of the first aspect.
A fifth aspect of embodiments of the present application provides a computer program, which, when run on a computer, causes the computer to perform the method of the first aspect or any one of the possible implementation manners of the first aspect.
A sixth aspect of the embodiments of the present application provides a chip that includes at least one processor and at least one interface circuit coupled to the processor. The at least one interface circuit performs transceiving functions, sends instructions to the at least one processor, and communicates with modules outside the chip; the at least one processor runs a computer program or instructions having the function of implementing the method of the first aspect or any one of its possible implementations. The function may be implemented by hardware, by software, or by a combination of hardware and software, and the hardware or software includes one or more modules corresponding to the function.
Drawings
Fig. 1 is a schematic diagram of an application scenario of tiled POIs;
FIG. 2 is a schematic diagram of an application scenario of a three-dimensional tag of a POI;
FIG. 3 is a schematic diagram of a system architecture provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for displaying a POI in a three-dimensional scene according to an embodiment of the present disclosure;
fig. 5 (a) is a schematic diagram of obtaining a 3D POI by multi-chip forward intersection according to an embodiment of the present disclosure;
fig. 5 (b) is a schematic diagram of obtaining a 3D POI by single-chip ray tracing according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of a process in which the terminal device generates 3D display information according to data corresponding to a POI according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating the occlusion of 3D display information according to an embodiment of the present application;
FIG. 8 is another schematic diagram illustrating the occlusion of 3D display information according to an embodiment of the present application;
fig. 9 is a schematic flowchart of generating POI data in advance according to an embodiment of the present disclosure;
fig. 10 is a schematic flowchart of generating 3D display information including 3D text, 3D icons and a background board according to an embodiment of the present application;
fig. 11 is a schematic flowchart of occlusion analysis and adjustment of 3D display information according to an embodiment of the present application;
fig. 12 is another schematic flowchart of occlusion analysis and adjustment of 3D display information according to an embodiment of the present disclosure;
fig. 13 is a graph comparing the results of a method of an embodiment of the present application with those of the prior art;
fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 15 is another schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The embodiments of the application provide a POI display method and device, which generate 3D display information based on data corresponding to a POI, where the 3D display information may include at least one of 3D text, a 3D icon, and a background plate for the 3D text. This addresses the problem that existing POI content is not displayed stereoscopically and improves the user's sense of immersion. In addition, the 3D display information corresponding to an occluded POI is adjusted according to the occlusion relationship of the 3D display information corresponding to the POI (when multiple pieces of 3D display information are displayed simultaneously in a three-dimensional scene, one or more of them may be occluded), which avoids the ambiguity that arises when a user sees only part of the 3D display information while browsing.
The terms "first", "second", and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances and merely describe how objects of the same nature are distinguished in the embodiments of the application. Furthermore, the terms "comprise", "include", and "have", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements but may include other elements not expressly listed or inherent to it.
In order to better understand the scheme of the embodiments of the present application, related terms and concepts that may be involved are first introduced below. It should be understood that these conceptual explanations may be constrained by the specific details of the embodiments, but this does not mean that the application is limited to those details; the specific details may vary from embodiment to embodiment and are not limited here.
(1) Point of interest (POI)
A POI is an important component of a navigation electronic map and generally refers to a landmark, building, scenic spot, or similar feature on the electronic map. POIs are used to mark places such as government departments, commercial institutions of various industries (gas stations, department stores, supermarkets, restaurants, hotels, convenience stores, hospitals, etc.), tourist attractions (parks, public toilets, etc.), historic sites, and traffic facilities (various stations, parking lots, speed cameras, speed-limit signs). The attributes of POI data generally include the POI's name, category, longitude, latitude, contact information, house configuration, and the like.
(2) Augmented Reality (AR)
AR is a technology that skillfully fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, applying computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world after simulation. The two kinds of information complement each other, realizing "enhancement" of the real world.
(3) Virtual Reality (VR)
VR is a computer simulation system that can create and let users experience a virtual world: it uses a computer to generate a simulated environment in which the user is immersed. Virtual reality technology combines electronic signals generated by computer technology with data from real life to produce phenomena that people can perceive. These phenomena may be real objects from reality or substances invisible to the naked eye, expressed through a three-dimensional model.
Embodiments of the present application are described below with reference to the accompanying drawings. As those skilled in the art will appreciate, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
First, before describing the embodiments of the present application, the implementation forms of the method in platform software and data resources are described, to make the following embodiments easier to understand. Referring to fig. 3, fig. 3 is a schematic diagram of a system architecture according to an embodiment of the present disclosure. Based on the system architecture shown in fig. 3, the product implementation form of the method for displaying a POI in a three-dimensional scene provided in the embodiment of the present application is software development kit (SDK) platform software that depends on AR/VR, together with three-dimensional scene data, deployed on terminal devices that support AR/VR applications. At run time, the program code of the method may run in the host memory of the terminal device, in GPU memory, or partly in the host memory and partly in GPU memory. The running program code loads from the server the three-dimensional scene data around the current position of the terminal device and a POI data list, where the POI data list contains the POI data (acquired in advance and stored on the server) present in the three-dimensional scene at the current position of the terminal device.
It should be noted here that the method for displaying a POI in a three-dimensional scene provided in this embodiment of the present application may be executed on a terminal device that supports AR/VR applications, and may also be executed on a terminal device that supports certain types of game applications (in which case the corresponding three-dimensional scene is that of the virtual world); this is not specifically limited, and fig. 3 is merely an example. For ease of illustration, the following embodiments take a terminal device supporting an AR/VR application as an example. In addition, the terminal device must support rendering and display of a three-dimensional scene and allow its current position to be acquired. Besides the hardware basis mentioned above, the method of the embodiment of the present application also requires the following software structure:
1) If the terminal device supports an AR/VR application, a related AR/VR software development kit (SDK) is needed to provide scene information, for example the current position of the terminal device and the orientation of the camera location.
2) The corresponding three-dimensional scene can be constructed from the three-dimensional scene data delivered by the server, for example building entities, roads, and public facilities.
3) A POI position service can be provided based on the POI data delivered by the server for the current position, i.e., surrounding POI data is returned according to the current position.
It should be noted that, in the embodiments of the present application, the terminal device may be a handheld device such as a mobile phone, tablet, small personal computer, or notebook computer; a wheeled mobile device such as a robot (e.g., a sweeping robot or robot attendant), an autonomous vehicle, or an assisted-driving vehicle; or an intelligent wearable device such as a dedicated AR or VR device. Any device with the hardware and software described above may serve as the terminal device in this application; its specific form is not limited.
Based on the system architecture described in the embodiment corresponding to fig. 3, a method for displaying a POI in a three-dimensional scene provided in the embodiments of the present application is described below. See fig. 4, which is a schematic flowchart of the method; it specifically includes the following steps:
401. The terminal device obtains first data corresponding to a first POI in a three-dimensional scene, where the first data includes the target three-dimensional coordinates of the first POI in the three-dimensional scene and an identifier of the first POI, and the first POI is any one POI in the three-dimensional scene.
First, the terminal device obtains data (which may be referred to as first data) corresponding to a first POI in a three-dimensional scene (e.g., a three-dimensional scene constructed at the current time and current location). The first data includes the target three-dimensional coordinates of the first POI in the constructed three-dimensional scene and an identifier of the first POI, where the first POI is any one POI in the constructed scene; for example, if the scene constructed by the terminal device at the current time and location contains 3 POIs, the first POI is any one of those 3 POIs.
It should be noted that, in this embodiment of the present application, the three-dimensional scene is constructed by the terminal device based on acquired three-dimensional data for the current position. In some embodiments, the construction process may be as follows: first, the terminal device sends its current position and orientation to the corresponding server (the orientation represents the orientation of the user carrying the terminal device); after receiving them, the server compares and matches them against the stored positions and orientations, and once a match is found, sends the corresponding three-dimensional scene data to the terminal device, which then constructs the three-dimensional scene for the current position and orientation from that data. Note that the three-dimensional scene data is used only for constructing the scene; in this embodiment of the application, the server also needs to obtain the data corresponding to each POI in the constructed scene (which may be referred to as POI data). As described above, the first POI corresponds to the first data, so the terminal device receives from the server not only the three-dimensional scene data but also the POI data corresponding to it, i.e., the first data.
It should be further noted that, in other embodiments of the present application, the terminal device may also obtain the three-dimensional scene data corresponding to its current position and orientation, together with the corresponding POI data, from its own storage (i.e., not received from another device), although this approach requires more storage space on the terminal device. For ease of explanation, the embodiments of the present application use the example in which both the three-dimensional scene data and the first data corresponding to the first POI are delivered by the server, and this is not repeated below.
As can be seen from the above, when the terminal device obtains from the server the three-dimensional scene data corresponding to its current position and orientation and the POI data in that scene, the POI data must be generated in advance and stored on the server, and the server must subsequently update and maintain both the three-dimensional scene data and the POI data. Accordingly, other embodiments of the present application illustrate how POI data is generated, as follows:
First, the set of pixel coordinates of a POI's identifier in a picture (e.g., a panoramic image acquired by an acquisition device at a certain position and orientation) is obtained through optical character recognition (OCR); a set is needed because the identifier spans multiple pixel coordinates. The identifier may take any form: it may be text (e.g., a name hung on a building, such as "xx school" or "xx hospital") in any language, Chinese, English, or otherwise, or it may be a trademark or logo, such as the Huawei logo or an xx school badge; neither is limited by this application. Next, the initial three-dimensional coordinates of the POI corresponding to the pixel-coordinate set are obtained by forward intersection or ray tracing; the initial coordinates generally represent the central position of the pixel-coordinate set and are the three-dimensional coordinates of the POI's identifier on the building facade. Then, the facade information of the 3D POI is obtained by spatial calculation. The facade of the 3D POI is essentially the facade of the corresponding building; since the three-dimensional scene in AR/VR is rendered, the building facade is generally rendered as a plane (i.e., without curvature). The calculation may use forward-intersection multipoint fitting, as shown in sub-diagram (a) of fig. 5 (where S1 and S2 are two different camera positions), or single-chip ray tracing, as shown in sub-diagram (b) of fig. 5. Finally, the initial three-dimensional coordinates of the POI are extended along the normal direction of the facade (i.e., pushed outwards by a preset distance, on the principle that the POI text should keep a certain distance from the building so as not to collide with it), adjusting the POI's coordinates to give the final target three-dimensional coordinates and effectively preventing the POI's display information from colliding with the building in the subsequently constructed three-dimensional scene. The target three-dimensional coordinates of the POIs in every three-dimensional scene within a preset range are collected in this way, and the coordinates are stored on the server together with the corresponding scene, ready to serve terminal devices. In addition, in other embodiments of the present application, the size of the POI may also be recorded and stored on the server.
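A sketch of the final position-extension step, assuming the facade normal is already known from the fitted facade plane; the offset value is a placeholder for the preset distance.

```python
import numpy as np

def extend_along_facade_normal(initial_xyz, facade_normal, offset=0.3):
    """Push the POI's initial 3D coordinates outwards along the normal of the
    building facade so the label keeps a distance from the building and does
    not collide with it. The 0.3 m offset is an assumed preset distance; the
    application only requires 'a certain distance'."""
    n = np.asarray(facade_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(initial_xyz, dtype=float) + offset * n

# A sign at (10, 3, 5) on a facade whose outward normal is +x moves to (10.3, 3, 5).
target_xyz = extend_along_facade_normal([10.0, 3.0, 5.0], [1.0, 0.0, 0.0])
```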
402. The terminal device generates first 3D display information from the first data, where the first 3D display information includes at least one of 3D text corresponding to the identifier, a 3D icon corresponding to the identifier, and a background plate for the 3D text, and the display position of the first 3D display information in the three-dimensional scene is determined by the target three-dimensional coordinates.
After acquiring the first data corresponding to the first POI in the three-dimensional scene, the terminal device further generates corresponding 3D display information (which may be referred to as first 3D display information) from the first data. The first 3D display information may include at least one of 3D text corresponding to the identifier, a 3D icon corresponding to the identifier, and a background plate for the 3D text, where the background plate is a background frame that can accommodate the 3D text and may be a rectangular frame, an oval frame, or a frame of any other shape. It should further be noted that the display position of the first 3D display information in the three-dimensional scene is determined by the target three-dimensional coordinates of the first POI included in the first data.
It should be noted that, in some embodiments of the present application, the process by which the terminal device generates the first 3D display information from the first data may specifically be as follows. First, the terminal device dynamically generates the 3D text corresponding to the identifier based on a 3D text library and generates a background plate covering the 3D text according to its size. Dynamic generation means the 3D text can be generated anytime and anywhere as required, as distinguished from 3D text originally built into the terminal device; the text is not limited to any language, and may be Chinese (e.g., a name hung on a building, such as "XX school", "XX hospital", "XX mall", "XX park"), English, or another language, which is not limited in this application. Then, the terminal device further generates the 3D icon corresponding to the identifier. 3D icons can be distinguished by service type: for example, a "bowl + chopsticks" symbol for catering, a "coffee cup" for tea, and a "shopping bag" for shopping. The specific representation of the 3D icon is not limited, as long as the user can identify the corresponding service type from it. Finally, the terminal device adjusts the relative positions of the generated 3D text, background plate, and 3D icon to obtain the first 3D display information.
It should be noted that, in some embodiments of the present application, the specific ways in which the terminal device generates the 3D icon corresponding to the identifier include, but are not limited to:
a. If the first data includes a classification code in addition to the target three-dimensional coordinates of the first POI in the constructed three-dimensional scene and the identifier of the first POI, the terminal device determines the classification code corresponding to the identifier and generates the 3D icon of the first POI according to a first mapping relationship between classification codes and icons. In the embodiment of the application, icons and classification codes correspond one-to-one; a classification code can be regarded as an index of an icon (the index rules are defined in advance), so the corresponding icon can be retrieved directly from the classification code and the corresponding 3D icon generated from it. Note that multiple types of 3D text may fall under one icon, i.e., different 3D text may share the same icon: for example, the 3D text for "store A" and "store B" differs, but both are shopping places and may therefore correspond to the same icon, with both generated 3D icons being 3D "shopping bag" symbols.
b. If the first data includes only the target three-dimensional coordinates of the first POI in the constructed three-dimensional scene and the identifier of the first POI, the terminal device can generate the 3D icon of the first POI by extracting key features of the identifier and applying a second mapping relationship between key features and icons. This approach is similar to the previous one, except that the role of the classification code is played by the extracted key features; each type of key feature also corresponds one-to-one with an icon. For details, refer to way a above, which is not repeated here.
To facilitate understanding of step 402, a specific example is given below. See fig. 6, which is a schematic flowchart of the terminal device generating 3D display information from the data corresponding to a POI according to an embodiment of the present disclosure. The process is as follows:
First, the terminal device dynamically generates the 3D text of the POI identifier (which may be a name or a logo; this is not specifically limited) using an existing 3D text library; this process may also be called 3D text initialization. It then initializes the corresponding 3D icon according to the classification code or key features of the POI; this process may also be called 3D icon initialization. Note that the background plate covering the 3D text may be generated directly from the text size once the 3D text is initialized, or after both the 3D text and the 3D icon have been initialized. Next, the display directions of the 3D text and 3D icon are determined from the direction of the camera location of the terminal device (i.e., the line of sight of the user carrying it): the normal directions of the 3D text and the icon model can be made consistent with the line of sight according to the camera orientation, or the display can be turned according to the 3D POI's own direction and position; this is not limited here. Finally, the relative positions of the generated 3D text, 3D icon, and background plate are adjusted, completing the design of the 3D display information corresponding to the POI. A minimal sketch of the billboard-style orientation rule follows.
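The sketch below implements the first orientation option (normal consistent with the line of sight); parametrising it by positions alone is an assumption of this sketch.

```python
import numpy as np

def label_normal_towards_camera(label_position, camera_position):
    """Simple billboard rule: return the unit normal that makes the 3D text
    and icon model face the camera location, i.e. consistent with the line of
    sight. (The application also allows turning the label according to the 3D
    POI's own direction and position instead.)"""
    view = np.asarray(camera_position, float) - np.asarray(label_position, float)
    return view / np.linalg.norm(view)
```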
It should be noted here that, in some embodiments of the present application, the process of generating the 3D display information from the data corresponding to the POI may also take into account the display position of the target three-dimensional coordinates in the constructed three-dimensional scene. As one example, to make POIs clearer, the 3D text and/or 3D icon of a distant POI may be made larger while those of a nearby POI are made smaller, with the enlargement/reduction ratio set according to the distance between the display position and the camera location. As another example, to blend better with the actual scene, the 3D text and/or 3D icon of a distant POI may be made smaller and those of a nearby POI larger, with the ratio likewise set according to the distance between the display position and the camera location. A sketch of both policies follows.
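Both distance-dependent scaling policies could be sketched as below; the reference distance, mode names, and constants are assumptions of this sketch.

```python
import numpy as np

def label_scale(label_position, camera_position,
                base_scale=1.0, ref_distance=10.0, mode="legible"):
    """Scale a POI label by its distance to the camera location. 'legible'
    enlarges distant labels so far POIs stay clear; 'blend' shrinks them so
    labels merge with the actual scene, matching the two examples above."""
    d = np.linalg.norm(np.asarray(label_position, float) -
                       np.asarray(camera_position, float))
    ratio = max(d / ref_distance, 1e-6)
    return base_scale * ratio if mode == "legible" else base_scale / ratio
```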
403. The terminal device generates a first cross section of the first 3D display information, and samples the first cross section to obtain at least one sampling point.
After generating the first 3D display information corresponding to the first POI, the terminal device further generates a cross section (which may be referred to as a first cross section) of the first 3D display information, and samples the first cross section to obtain at least one sampling point.
It should be noted that, in some embodiments of the present application, the terminal device may sample the first cross section randomly, with the number n of sampling points preset (e.g., 3, 5, or 8); or sample according to a preset rule, for example one sample per fixed distance or per fixed area; or sample special points of the first cross section, such as its corner points or center point. How the at least one sampling point is obtained is not specifically limited in this application. In the embodiments of the present application, a typical sampling manner is to take the 4 corner points of the first cross section as the sampling points.
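For the typical corner-point sampling, a minimal sketch follows; representing the cross section by its center, in-plane unit axes, and half extents is an assumption of this illustration.

```python
import numpy as np

def corner_samples(center, right, up, half_w, half_h):
    """Typical sampling manner above: the 4 corner points of the rectangular
    first cross section, given its center, in-plane unit axes (right, up),
    and half extents."""
    c = np.asarray(center, dtype=float)
    r = half_w * np.asarray(right, dtype=float)
    u = half_h * np.asarray(up, dtype=float)
    return [c + r + u, c + r - u, c - r - u, c - r + u]
```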
404. When at least one connecting line between the camera position of the terminal device and a sampling point passes through a second cross section, the terminal device adjusts the first 3D display information in the three-dimensional scene, where the second cross section is the cross section of the second 3D display information corresponding to a second POI, and the second POI is any POI in the three-dimensional scene other than the first POI.
After sampling the first cross section to obtain the at least one sampling point, the terminal device further determines whether a connecting line between its camera position and each sampling point passes through a second cross section, where the second cross section is the cross section of the second 3D display information corresponding to a second POI, and the second POI is any POI in the constructed three-dimensional scene other than the first POI. If the terminal device determines that at least one such connecting line passes through the second cross section, the first cross section is considered to be blocked by another cross section, and the terminal device needs to adjust the first 3D display information in the constructed three-dimensional scene. For example, with 3 sampling points there are 3 connecting lines, one between the camera position and each sampling point; if at least 1 of the 3 lines passes through the cross section of another POI, the first cross section is regarded as blocked.
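This occlusion test amounts to a segment-rectangle intersection per sampling point. The sketch below uses the same rectangle representation as the earlier sketch; the function names are invented for illustration.

```python
import numpy as np

def segment_hits_rect(p0, p1, center, normal, right, up, half_w, half_h):
    """True if the segment p0 -> p1 (camera position to sampling point) passes
    through the rectangular cross section described by its center, unit
    normal, in-plane unit axes (right, up) and half extents."""
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    c = np.asarray(center, dtype=float)
    d = p1 - p0
    denom = float(np.dot(normal, d))
    if abs(denom) < 1e-9:          # segment (nearly) parallel to the plane
        return False
    t = float(np.dot(normal, c - p0)) / denom
    if not 0.0 < t < 1.0:          # the plane hit lies outside the segment
        return False
    offset = p0 + t * d - c        # hit point relative to the rectangle center
    return (abs(float(np.dot(offset, right))) <= half_w and
            abs(float(np.dot(offset, up))) <= half_h)

def first_info_blocked(camera_pos, sampling_points, second_rect):
    """Judgment of step 404: the first 3D display information counts as
    blocked if at least one camera-to-sample line crosses the second cross
    section. second_rect is a dict whose keys match the rectangle parameters
    of segment_hits_rect."""
    return any(segment_hits_rect(camera_pos, s, **second_rect)
               for s in sampling_points)
```

With the 4 corner samples from the earlier sketch, first_info_blocked(camera_pos, samples, second_rect) reproduces the judgment of step 404.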
As an example, fig. 7 is a schematic diagram of blocked 3D display information. When the 3D display information corresponding to multiple POIs in a scene is displayed simultaneously, overlap may occur. For example, in fig. 7 the display content of "starbucks selection" (i.e., the second 3D display information) blocks the display content of "tea with intrinsic taste WE" (i.e., the first 3D display information); without adjustment, the user would see only part of "tea with intrinsic taste WE" on the screen of the terminal device, which may lead to ambiguity in understanding.
It should be noted that, in some embodiments of the present application, if at least one connecting line between the camera position of the terminal device and a sampling point passes through the second cross section, the adjustment manner of the terminal device may include, but is not limited to, the following (a sketch of these adjustments is given after the list):
a. The terminal device deletes the first 3D display information from the constructed three-dimensional scene.
b. The terminal device reduces the first 3D display information by a preset ratio until no connecting line between its camera position and a sampling point passes through the second cross section; for example, the information may be reduced to 80% or 60% of its original size according to a preset gradient.
c. The terminal device moves the display position of the first 3D display information away from the display position of the second 3D display information until no connecting line between its camera position and a sampling point passes through the second cross section.
In the embodiments of the present application, the purpose of the adjustment is to ensure the integrity of the 3D display information in the constructed three-dimensional scene and to avoid the ambiguity that arises when the user sees only part of the POI content on the screen.
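The following sketch wires the three manners a/b/c into one loop, reusing first_info_blocked from the earlier sketch; the label object and its attributes (scale, position, away_direction, sampling_points) are hypothetical stand-ins, not names from the embodiment.

```python
def adjust_first_info(label, camera_pos, second_rect,
                      strategy="shrink", gradient=0.8, step=0.2, max_iter=20):
    """One possible realization of adjustment manners a/b/c. `label` is a
    hypothetical object exposing scale, position, away_direction (a unit
    vector pointing away from the second 3D display information) and a
    sampling_points() method."""
    if strategy == "delete":                          # manner a: remove it
        return None
    for _ in range(max_iter):
        if not first_info_blocked(camera_pos, label.sampling_points(),
                                  second_rect):
            return label                              # no longer blocked
        if strategy == "shrink":                      # manner b: preset gradient
            label.scale *= gradient                   # e.g. 80%, then 64%, ...
        else:                                         # manner c: move away
            label.position = label.position + step * label.away_direction
    return label
```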
It should be noted that, in the embodiments of the present application, besides determining whether the first 3D display information is blocked based on whether a connecting line between the camera position and a sampling point passes through another cross section, the determination may also be made in other ways. For example, it may be based on whether the 4 corner points of the first cross section are visible (i.e., whether the connecting line between each of the 4 corner points and the camera position passes through another cross section). Specifically, when at least one connecting line between the camera position of the terminal device and the 4 sampled corner points passes through the second cross section, the terminal device calculates the blocked area of the first cross section based on the line segments between the 4 corner points and target points located on those segments, and then determines the ratio of the blocked area to the total area of the first cross section. If this ratio reaches a preset threshold, the first 3D display information is considered too heavily blocked, and the terminal device adjusts it in the constructed three-dimensional scene.
It should be noted that, in some embodiments of the present application, one implementation of calculating the blocked area of the first cross section based on the line segments between its 4 corner points and target points on those segments may be as shown in fig. 8. If two adjacent corner points are both visible, the line segment between them is visible. If one of the two adjacent corner points is visible and the other is not, the segment between them is partially blocked; in this case a target point may be taken on the segment by bisection (or another selection method, which is not limited here), the invisible corner point is updated to the selected target point, and the process is repeated until a critical point is determined on the segment (any point within an allowable error of the true critical point may be treated as the critical point described in this application, which is not limited here). Points (1) and (2) in fig. 8 are such determined critical points, the left side of fig. 8 being the blocked part and the right side the unblocked part. The terminal device then calculates the ratio of the blocked area to the area of the whole first cross section. If the ratio does not exceed a preset threshold, the blocked area is considered small and the first 3D display information remains visible as a whole and can be retained; if the ratio exceeds the preset threshold, the blocked area is considered large, continued retention might cause ambiguity for the user, and the first 3D display information needs to be adjusted.
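The bisection step can be sketched as follows; is_visible would be supplied by the occlusion test above (e.g. lambda p: not segment_hits_rect(camera_pos, p, **rect)), and the tolerance eps corresponds to the allowable error mentioned above.

```python
import numpy as np

def critical_point(visible_corner, hidden_corner, is_visible, eps=1e-3):
    """Bisection between a visible corner and an occluded corner of the first
    cross section: take the midpoint as the target point, update the invisible
    end with it, and repeat until the visibility boundary (the critical point)
    is located within the allowable error eps."""
    a = np.asarray(visible_corner, dtype=float)   # visible end of the segment
    b = np.asarray(hidden_corner, dtype=float)    # occluded end of the segment
    while float(np.linalg.norm(b - a)) > eps:
        m = 0.5 * (a + b)
        if is_visible(m):
            a = m        # boundary lies between m and the occluded end
        else:
            b = m        # boundary lies between the visible end and m
    return 0.5 * (a + b)
```

With the critical points of all partially blocked edges, the blocked part of the rectangle is delimited and its area ratio can be compared against the preset threshold.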
It should be further noted that, in some embodiments of the present application, if the ratio of the blocked area of the first cross section to the area of the first cross section reaches the preset threshold, the adjustment manner of the terminal device may include, but is not limited to:
a. The terminal device deletes the first 3D display information from the constructed three-dimensional scene.
b. The terminal device reduces the first 3D display information by a preset ratio until the ratio of the blocked area of the first cross section to the area of the first cross section no longer exceeds the preset threshold; for example, the information may be reduced to 80% or 60% of its original size according to a preset gradient.
c. The terminal device moves the display position of the first 3D display information away from the display position of the second 3D display information until the ratio of the blocked area of the first cross section to the area of the first cross section no longer exceeds the preset threshold.
In the embodiments of the present application, the purpose of the adjustment is to ensure the overall integrity of the 3D display information in the constructed three-dimensional scene and to avoid the ambiguity that arises when the user sees only the unblocked part of the POI content on the screen.
In the above embodiments of the present application, 3D display information including 3D text and 3D icons is generated from the data corresponding to a POI, which improves on the flat, non-stereoscopic display of existing POI content and enhances the user's sense of immersion. In addition, the 3D display information of a blocked POI is adjusted according to the blocking relationships among the 3D display information of the POIs (when multiple pieces of 3D display information are displayed simultaneously in the three-dimensional scene, one or more of them may be blocked), avoiding the ambiguity caused by the user seeing only part of the 3D display information while browsing.
To facilitate understanding of the process described in the embodiment corresponding to fig. 4, the following description takes the Unity software, an AR/VR SDK used as the three-dimensional rendering engine, as an example. Unity provides an encapsulated 3D rendering pipeline with functions such as collision detection and three-dimensional rendering, and additionally loads the three-dimensional scene data. The data of the POIs is generated according to the calculation process described in the embodiment corresponding to fig. 4, providing a location service and an application that displays POIs in the three-dimensional scene. To display a POI in the three-dimensional scene, the embodiment of the present application proceeds through the following specific implementation steps:
Step 1: generate the data corresponding to the POIs in a preliminary stage. The generation process is shown in fig. 9 and is not repeated here; this step is performed after the camera pose of the acquisition device has been obtained.
Step 2: generate the 3D display information including the 3D text, the 3D icon, and the background plate, and compute an adaptive layout of the POI content according to its display design. The generation process is shown in fig. 10 and is not repeated here. This step starts to execute when the user experiences 3D POIs on the terminal device, after the POI data extraction has been completed and the location service has been formed.
Step 3: blocking analysis and adjustment of the 3D display information. Based on the cross section of the generated 3D display information, whether the sampling points on the cross section are visible to the camera of the terminal device is judged in order to analyze blocking of the 3D display information, and the visibility of the 3D display information is decided by different blocking determination rules. In the embodiments of the present application, there are the following two specific determination rules:
Rule 1: as shown in fig. 11, if a sampling point on the cross section is invisible (blocking exists), the 3D display information is adjusted in the constructed three-dimensional scene, for example by removing the blocked 3D display information (the manner illustrated in fig. 11), reducing it in equal proportion, or moving it.
Rule 2: as shown in fig. 12, whether to adjust the 3D display information in the constructed three-dimensional scene is decided by the size of the blocked region of the cross section: if the blocked region exceeds a preset threshold, the information is adjusted; otherwise it is not. The adjustment manner illustrated in fig. 12 is removal.
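The two rules differ only in their trigger condition; a combined sketch, again reusing segment_hits_rect from the earlier sketch and treating blocked_fraction as a hypothetical callable that implements the area calculation of fig. 8, might look like this.

```python
def needs_adjustment(camera_pos, sampling_points, other_rects,
                     rule=1, threshold=0.3, blocked_fraction=None):
    """Rule 1: adjust as soon as any sampling point on the cross section is
    invisible. Rule 2: adjust only when the blocked fraction of the cross
    section exceeds a preset threshold (blocked_fraction is a hypothetical
    callable implementing the area calculation of fig. 8)."""
    any_blocked = any(
        any(segment_hits_rect(camera_pos, s, **r) for r in other_rects)
        for s in sampling_points)
    if rule == 1:
        return any_blocked
    return any_blocked and blocked_fraction() >= threshold
```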
To make the beneficial effects of the embodiments of the present application more intuitive, the technical effects are further compared below. Fig. 13 compares the result of the method in the embodiments of the present application with that of the prior art: sub-diagram (a) of fig. 13 is the display result of the method in the embodiments of the present application, and sub-diagram (b) is the display result of the prior art. As can be seen from fig. 13, in the present invention the 3D display information of a POI is fused into the three-dimensional scene using 3D text and 3D icons, providing a better browsing experience: the 3D display information fuses better with the labeled three-dimensional structural entity, the user can readily associate the two, and the POI content adapts to changes in distance and is adjusted along with the user's line of sight. Further, in AR/VR scenes the method in the embodiments of the present application has the following characteristics: 1) while the user is browsing, the amount of POI content in the current frame is kept from becoming excessive, and the POI content is displayed according to the distance to the user, the blocking relationships, and the weights of the POIs; 2) different display combinations of 3D text and three-dimensional symbols can be designed, and the characteristics and style of the three-dimensional structural entity can be expressed in a user-defined manner.
On the basis of the above embodiments, in order to better implement the above aspects of the embodiments of the present application, the following also provides related equipment for implementing the above aspects. Specifically referring to fig. 14, fig. 14 is a schematic diagram of a terminal device according to an embodiment of the present application, where the terminal device 1400 specifically includes: an obtaining module 1401, configured to obtain first data corresponding to a first POI in a three-dimensional scene, where the first data includes a target three-dimensional coordinate of the first POI in the three-dimensional scene and an identifier of the first POI, and the first POI is any one POI in the three-dimensional scene; a generating module 1402, configured to generate first 3D display information according to the first data, where the first 3D display information includes at least one of a 3D text corresponding to the identifier, a 3D icon corresponding to the identifier, and a background plate of the 3D text, and a display position of the first 3D display information in the three-dimensional scene is determined according to the target three-dimensional coordinate; a sampling module 1403, configured to generate a first cross section of the first 3D display information, and sample the first cross section to obtain at least one sampling point; an adjusting module 1404, configured to adjust the first 3D display information in the three-dimensional scene when at least one connection line between the camera location of the terminal device 1400 and the sampling point passes through a second cross section, where the second cross section is a cross section of second 3D display information corresponding to a second POI, and the second POI is any other POI different from the first POI in the three-dimensional scene.
In the above embodiment of the present application, the terminal device 1400 generates 3D display information from the data corresponding to a POI, where the 3D display information may include at least one of 3D text, a 3D icon, and a background plate of the 3D text, improving on the flat display of existing POI content and enhancing the user's sense of immersion; and the 3D display information of a blocked POI is adjusted according to the blocking relationships among the 3D display information of the POIs (when multiple pieces of 3D display information are displayed simultaneously in the three-dimensional scene, one or more of them may be blocked), avoiding the ambiguity caused by the user seeing only part of the 3D display information while browsing.
In a possible design, the generating module 1402 is specifically configured to: generate 3D text corresponding to the identifier based on a 3D text library, and generate a background plate of the 3D text according to the size of the 3D text; generate a 3D icon corresponding to the identifier according to the identifier; and adjust the relative positions of the 3D text, the background plate, and the 3D icon to obtain the first 3D display information.
In the foregoing embodiment of the present application, addressing the problem that POIs displayed in a three-dimensional scene lack stereoscopic continuity and convey a weak sense of immersion, how the generating module 1402 generates the 3D display information including the 3D text, the 3D icon, and the background plate is specifically set forth; this optimizes the display of POI content in the three-dimensional scene and improves the user's sense of immersion.
In a possible design, the generating module 1402 is specifically configured to: determine a classification code corresponding to the identifier according to the identifier, and generate the 3D icon according to a first mapping relationship between classification codes and icons, where the classification code is included in the first data; or extract key features of the identifier, and generate the 3D icon according to a second mapping relationship between key features and icons.
In the above embodiments of the present application, several implementations by which the generating module 1402 generates the 3D icon are specifically described; they are flexible and widely applicable.
In one possible design, the sampling module 1403 is specifically configured to: take the 4 corner points of the first cross section as the sampling points.
In the above embodiments of the present application, a typical sampling manner used by the sampling module 1403 is specifically described, which is readily implementable.
In one possible design, the terminal device 1400 further includes: a calculating module 1405, configured to calculate the blocked area of the first cross section based on the line segments between the 4 corner points and target points located on those segments, when at least one connecting line between the camera position of the terminal device 1400 and the 4 corner points passes through the second cross section; the adjusting module 1404 is further configured to adjust the first 3D display information in the three-dimensional scene when the ratio of the blocked area to the area of the first cross section reaches a preset threshold.
In the above embodiment of the present application, it is specifically stated that the calculating module 1405 further calculates the blocked area of the cross section corresponding to the 3D display information and the ratio of the blocked area to the area of the whole first cross section. If the ratio does not exceed a preset threshold, the blocked area is considered small and the first 3D display information remains visible as a whole and can be retained; if the ratio exceeds the preset threshold, the blocked area is considered large, continued retention might cause ambiguity for the user, and the first 3D display information needs to be adjusted. The purpose of the adjustment is to ensure the overall integrity of the 3D display information in the constructed three-dimensional scene and to avoid the ambiguity that arises when the user sees only the unblocked part of the POI content on the screen.
In one possible design, the adjusting module 1404 is specifically configured to: delete the first 3D display information from the three-dimensional scene; or reduce the first 3D display information by a preset ratio; or move the display position of the first 3D display information away from the display position of the second 3D display information. Specifically, if at least one connecting line between the camera position of the terminal device 1400 and a sampling point passes through the second cross section, the adjustment manner of the adjusting module 1404 may include, but is not limited to: a. deleting the first 3D display information from the constructed three-dimensional scene; b. reducing the first 3D display information by a preset ratio until no connecting line between the camera position of the terminal device 1400 and a sampling point passes through the second cross section, for example reducing it to 80% or 60% of its original size according to a preset gradient; c. moving the display position of the first 3D display information away from the display position of the second 3D display information until no connecting line between the camera position of the terminal device 1400 and a sampling point passes through the second cross section. If the ratio of the blocked area of the first cross section to the area of the first cross section reaches the preset threshold, the adjustment manner of the adjusting module 1404 may include, but is not limited to: a. deleting the first 3D display information from the constructed three-dimensional scene; b. reducing the first 3D display information by a preset ratio until the ratio of the blocked area of the first cross section to the area of the first cross section no longer exceeds the preset threshold, for example reducing it to 80% or 60% of its original size according to a preset gradient; c. moving the display position of the first 3D display information away from the display position of the second 3D display information until the ratio of the blocked area of the first cross section to the area of the first cross section no longer exceeds the preset threshold.
In the above embodiments of the present application, several implementation manners by which the adjusting module 1404 adjusts the first 3D display information in the constructed three-dimensional scene are specifically set forth; they are simple to apply and flexible.
In a possible design, the target three-dimensional coordinate of the first POI in the three-dimensional scene is obtained by extending an initial three-dimensional coordinate of the first POI along a normal direction of a vertical surface of the first POI, the initial three-dimensional coordinate is a three-dimensional coordinate corresponding to a pixel coordinate set obtained in a front intersection manner or a ray tracing manner, and the pixel coordinate set is a set of pixel coordinates of the first POI in a picture obtained through OCR.
In the above embodiment of the present application, the initial three-dimensional coordinate obtained by the front intersection method or the ray tracing method is extended along the normal direction of the vertical surface of the POI to obtain the target three-dimensional coordinate, which prevents the 3D display information from colliding with entities (such as buildings) in the constructed three-dimensional scene and improves the visual experience.
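A minimal sketch of this extension step follows; the offset distance and the function name are illustrative assumptions.

```python
import numpy as np

def target_coordinate(initial_xyz, facade_normal, offset=0.5):
    """Extend the initial three-dimensional coordinate along the normal of
    the POI's vertical surface so the 3D display information stands off the
    building facade (the offset distance is an illustrative assumption)."""
    n = np.asarray(facade_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(initial_xyz, dtype=float) + offset * n
```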
In one possible design, the obtaining module 1401 is specifically configured to: send the current position and orientation of the terminal device 1400 to a server, so that the server determines the three-dimensional scene data corresponding to the current position and the first data corresponding to the first POI; and receive the three-dimensional scene data and the first data sent by the server, and construct the three-dimensional scene based on the three-dimensional scene data.
In the above embodiment of the present application, it is stated that the obtaining module 1401 receives from the server the three-dimensional scene data (used to construct the three-dimensional scene) and the POI data corresponding to the current position and orientation, which avoids the data redundancy caused by storing too much data on the terminal device.
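As a rough illustration of this exchange only, a hedged sketch follows; the send_to_server callable and the field names are assumptions, not the embodiment's actual protocol.

```python
def fetch_scene_and_pois(send_to_server, current_position, orientation):
    """Hypothetical request/response shape for the exchange described above:
    the terminal reports its position and orientation and receives the
    matching three-dimensional scene data and POI data."""
    reply = send_to_server({"position": current_position,
                            "orientation": orientation})
    return reply["scene_data"], reply["poi_data"]
```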
It should be noted that the information interaction and execution processes between the modules/units in the terminal device 1400 are based on the same concept as the method embodiments of the present application; for specific details, refer to the descriptions in the foregoing method embodiments, which are not repeated here.
Referring to fig. 15, fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device 1500 may carry the terminal device 1400 described in the embodiment corresponding to fig. 14 and is configured to implement the functions of the terminal device 1400 in that embodiment. Specifically, the terminal device 1500 may vary considerably in configuration and performance, and may include one or more central processing units (CPUs) 1522, memory 1532, and one or more storage media 1530 (e.g., one or more mass storage devices) storing an application program 1542 or data 1544. The memory 1532 and the storage medium 1530 may be transient or persistent storage. The program stored in the storage medium 1530 may include one or more modules (not shown), each of which may include a series of instruction operations for the terminal device 1500. Further, the central processing unit 1522 may be configured to communicate with the storage medium 1530 and execute, on the terminal device 1500, the series of instruction operations in the storage medium 1530.
The terminal device 1500 may also include one or more power supplies 1526, one or more wired or wireless network interfaces 1550, one or more input/output interfaces 1558, and/or one or more operating systems 1541, such as Windows Server™, Mac OS X™, Unix™, Linux™, and FreeBSD™.
In this embodiment, the central processing unit 1522 is configured to execute the steps executed by the terminal device in the embodiment corresponding to fig. 4. For example, the central processing unit 1522 may be configured to: first, obtain data (which may be referred to as first data) corresponding to a first POI in a three-dimensional scene, where the first data includes the target three-dimensional coordinates of the first POI in the constructed three-dimensional scene and the identifier of the first POI, and the first POI is any POI in the constructed three-dimensional scene; for example, if the three-dimensional scene constructed by the terminal device at the current time and position includes 3 POIs, the first POI is any one of the 3. After the first data is obtained, corresponding 3D display information (which may be referred to as first 3D display information) is generated from the first data, where the first 3D display information may include at least one of the 3D text corresponding to the identifier, the 3D icon corresponding to the identifier, and the background plate of the 3D text; note that the display position of the first 3D display information in the three-dimensional scene is determined by the target three-dimensional coordinates of the first POI included in the first data. After the first 3D display information is generated, a cross section of it (which may be referred to as a first cross section) is generated, and the first cross section is sampled to obtain at least one sampling point. It is then determined whether a connecting line between the camera position of the terminal device and each sampling point passes through a second cross section, where the second cross section is the cross section of the second 3D display information corresponding to a second POI, and the second POI is any POI in the three-dimensional scene other than the first POI. If at least one such connecting line passes through the second cross section (for example, with 3 sampling points there are 3 connecting lines, one per sampling point; if at least 1 of them passes through the cross section of another POI), the first cross section is regarded as blocked by another cross section, and the first 3D display information needs to be adjusted in the three-dimensional scene.
It should be noted that the specific manner in which the central processing unit 1522 executes the above steps is based on the same concept as the method embodiment corresponding to fig. 4 of the present application and brings the same technical effects; for specific details, refer to the foregoing method embodiments, which are not repeated here.
It should be noted that the apparatus embodiments described above are merely schematic. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solutions of the embodiments. In addition, in the drawings of the apparatus embodiments provided in the present application, the connection relationships between modules indicate communication connections between them, which may be specifically implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application may be implemented by software plus necessary general-purpose hardware, or by special-purpose hardware including application-specific integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components, and the like. Generally, any function performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may be various, such as analog circuits, digital circuits, or dedicated circuits. For the present application, however, a software implementation is preferable in most cases. Based on such understanding, the technical solutions of the present application, or the portions contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc of a computer, and includes several instructions for causing a computer device (which may be a personal computer, a training device, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, training device, or data center to another website, computer, training device, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any usable medium that a computer can store, or a data storage device such as a training device or a data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.

Claims (20)

1. A method for displaying a POI, comprising:
a terminal device obtains first data corresponding to a first point of interest (POI) in a three-dimensional scene, wherein the first data comprises a target three-dimensional coordinate of the first POI in the three-dimensional scene and an identifier of the first POI, and the first POI is any one POI in the three-dimensional scene;
the terminal device generates first three-dimensional (3D) display information according to the first data, wherein the first 3D display information comprises at least one of 3D text corresponding to the identifier, a 3D icon corresponding to the identifier, and a background plate of the 3D text, and a display position of the first 3D display information in the three-dimensional scene is determined according to the target three-dimensional coordinate;
the terminal device generates a first cross section of the first 3D display information, and samples the first cross section to obtain at least one sampling point; and
when at least one connecting line between a camera position of the terminal device and the sampling point passes through a second cross section, the terminal device adjusts the first 3D display information in the three-dimensional scene, wherein the second cross section is a cross section of second 3D display information corresponding to a second POI, and the second POI is any POI in the three-dimensional scene other than the first POI.
2. The method of claim 1, wherein the terminal device generating first three-dimensional (3D) display information according to the first data comprises:
the terminal device generates 3D text corresponding to the identifier based on a 3D text library, and generates a background plate of the 3D text according to the size of the 3D text;
the terminal device generates a 3D icon corresponding to the identifier according to the identifier; and
the terminal device adjusts the relative positions of the 3D text, the background plate, and the 3D icon to obtain the first 3D display information.
3. The method of claim 2, wherein the terminal device generating the 3D icon corresponding to the identifier according to the identifier comprises:
the terminal device determines a classification code corresponding to the identifier according to the identifier, and generates the 3D icon according to a first mapping relationship between classification codes and icons, wherein the classification code is included in the first data;
or,
the terminal device extracts key features of the identifier, and generates the 3D icon according to a second mapping relationship between key features and icons.
4. The method according to any one of claims 1-3, wherein the sampling the first cross section to obtain at least one sampling point comprises:
taking the 4 corner points of the first cross section as the sampling points.
5. The method of claim 4, further comprising:
when at least one connecting line between the camera position of the terminal device and the 4 corner points passes through the second cross section, the terminal device calculates a blocked area of the first cross section based on a line segment between the 4 corner points and a target point located on the line segment; and
when a ratio of the blocked area to the area of the first cross section reaches a preset threshold, the terminal device adjusts the first 3D display information in the three-dimensional scene.
6. The method according to any one of claims 1-5, wherein the terminal device adjusting the first 3D display information in the three-dimensional scene comprises:
the terminal device deletes the first 3D display information from the three-dimensional scene;
or,
the terminal device reduces the first 3D display information by a preset ratio;
or,
the terminal device moves the display position of the first 3D display information away from the display position of the second 3D display information.
7. The method according to any one of claims 1-6, wherein the target three-dimensional coordinate of the first POI in the three-dimensional scene is obtained by extending an initial three-dimensional coordinate of the first POI along a normal direction of a vertical surface of the first POI, the initial three-dimensional coordinate is a three-dimensional coordinate corresponding to a pixel coordinate set obtained by a front intersection method or a ray tracing method, and the pixel coordinate set is a set of pixel coordinates of the first POI in a picture obtained through optical character recognition (OCR).
8. The method according to any one of claims 1-7, wherein the terminal device obtaining first data corresponding to a first point of interest (POI) in a three-dimensional scene comprises:
the terminal device sends its current position and orientation to a server, so that the server determines three-dimensional scene data corresponding to the current position and the first data corresponding to the first POI; and
the terminal device receives the three-dimensional scene data and the first data sent by the server, and constructs the three-dimensional scene based on the three-dimensional scene data.
9. A terminal device, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring first data corresponding to a first point of interest (POI) in a three-dimensional scene, the first data comprises a target three-dimensional coordinate of the first POI in the three-dimensional scene and an identifier of the first POI, and the first POI is any one POI in the three-dimensional scene;
a generating module, configured to generate first three-dimensional (3D) display information according to the first data, where the first 3D display information includes at least one of a 3D text corresponding to the identifier, a 3D icon corresponding to the identifier, and a background plate of the 3D text, and a display position of the first 3D display information in the three-dimensional scene is determined according to the target three-dimensional coordinate;
the sampling module is used for generating a first cross section of the first 3D display information and sampling the first cross section to obtain at least one sampling point;
an adjusting module, configured to adjust the first 3D display information in the three-dimensional scene when at least one connection line between the camera location of the terminal device and the sampling point passes through a second cross section, where the second cross section is a cross section of second 3D display information corresponding to a second POI, and the second POI is any other POI different from the first POI in the three-dimensional scene.
10. The device according to claim 9, wherein the generating module is specifically configured to:
generate 3D text corresponding to the identifier based on a 3D text library, and generate a background plate of the 3D text according to the size of the 3D text;
generate a 3D icon corresponding to the identifier according to the identifier; and
adjust the relative positions of the 3D text, the background plate, and the 3D icon to obtain the first 3D display information.
11. The device according to claim 10, wherein the generating module is specifically configured to:
determine a classification code corresponding to the identifier according to the identifier, and generate the 3D icon according to a first mapping relationship between classification codes and icons, wherein the classification code is included in the first data;
or,
extract key features of the identifier, and generate the 3D icon according to a second mapping relationship between key features and icons.
12. The device according to any one of claims 9-11, wherein the sampling module is specifically configured to:
take the 4 corner points of the first cross section as the sampling points.
13. The device according to claim 12, further comprising:
a calculating module, configured to calculate a blocked area of the first cross section based on a line segment between the 4 corner points and a target point located on the line segment, when at least one connecting line between the camera position of the terminal device and the 4 corner points passes through the second cross section;
wherein the adjusting module is further configured to adjust the first 3D display information in the three-dimensional scene when a ratio of the blocked area to the area of the first cross section reaches a preset threshold.
14. The device according to any one of claims 9-13, wherein the adjusting module is specifically configured to:
delete the first 3D display information from the three-dimensional scene;
or,
reduce the first 3D display information by a preset ratio;
or,
move the display position of the first 3D display information away from the display position of the second 3D display information.
15. The device according to any one of claims 9-14, wherein the target three-dimensional coordinate of the first POI in the three-dimensional scene is obtained by extending an initial three-dimensional coordinate of the first POI along a normal direction of a vertical surface of the first POI, the initial three-dimensional coordinate is a three-dimensional coordinate corresponding to a pixel coordinate set obtained by a front intersection method or a ray tracing method, and the pixel coordinate set is a set of pixel coordinates of the first POI in a picture obtained through optical character recognition (OCR).
16. The device according to any one of claims 9-15, wherein the obtaining module is specifically configured to:
send the current position and orientation of the terminal device to a server, so that the server determines three-dimensional scene data corresponding to the current position and the first data corresponding to the first POI; and
receive the three-dimensional scene data and the first data sent by the server, and construct the three-dimensional scene based on the three-dimensional scene data.
17. A terminal device comprising a processor and a memory, said processor being coupled to said memory,
the memory is used for storing programs;
the processor configured to execute the program in the memory to cause the terminal device to perform the method of any one of claims 1-8.
18. A computer-readable storage medium comprising a program which, when run on a computer, causes the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-8.
20. A chip comprising a processor and a data interface, the processor reading instructions stored on a memory through the data interface to perform the method of any one of claims 1-8.
CN202110741983.4A 2021-06-30 2021-06-30 POI display method and device Pending CN115544390A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110741983.4A CN115544390A (en) 2021-06-30 2021-06-30 POI display method and device
PCT/CN2022/101799 WO2023274205A1 (en) 2021-06-30 2022-06-28 Poi display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110741983.4A CN115544390A (en) 2021-06-30 2021-06-30 POI display method and device

Publications (1)

Publication Number Publication Date
CN115544390A (en) 2022-12-30

Family

ID=84690374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110741983.4A Pending CN115544390A (en) 2021-06-30 2021-06-30 POI display method and device

Country Status (2)

Country Link
CN (1) CN115544390A (en)
WO (1) WO2023274205A1 (en)


Also Published As

Publication number Publication date
WO2023274205A1 (en) 2023-01-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination