CN112857391A - Route display processing method and device based on AR - Google Patents


Info

Publication number
CN112857391A
CN112857391A
Authority
CN
China
Prior art keywords
live-action image
destination
user
route
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110068710.8A
Other languages
Chinese (zh)
Inventor
叶芊
陈思
朱家慧
洪仕闵
金华静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202110068710.8A
Publication of CN112857391A
Legal status: Pending (Current)

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 - Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C 21/34 - Route searching; Route guidance
    • G01C 21/36 - Input/output arrangements for on-board computers
    • G01C 21/3626 - Details of the output of route guidance instructions
    • G01C 21/3644 - Landmark guidance, e.g. using POIs or conspicuous other objects
    • G01C 21/3667 - Display of a road map
    • G01C 21/3673 - Labelling using text of road map data items, e.g. road names, POI names
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Abstract

Embodiments of this specification provide an AR (augmented reality) based route display processing method and apparatus. The AR-based route display processing method comprises: performing AR scanning on an AR entity object configured at a preset position point, recognizing the scanned object image, and submitting the recognized position code to a server; receiving and displaying a destination identifier list returned by the server for the position code; submitting the user destination identifier selected by the user from the destination identifier list to the server; and receiving and displaying an AR live-action image obtained by the server rendering the live-action view of the preset position point together with a target guidance route.

Description

Route display processing method and device based on AR
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a route display processing method and apparatus based on AR.
Background
With the development of internet and navigation technology, many applications on mobile terminals (navigators, mobile phones, tablet computers, etc.) can provide navigation services. For example, when a user drives a vehicle, these applications can calculate a reasonable driving route from the user's current position and destination and navigate accordingly, which greatly eases travel. AR navigation, now common, fuses virtual and real information: computer-generated virtual content such as text and images is simulated and overlaid onto the real world, thereby augmenting it.
Disclosure of Invention
One or more embodiments of the present specification provide an AR-based route display processing method, applied to a user terminal, comprising: performing AR scanning on an AR entity object configured at a preset position point, recognizing the scanned object image, and submitting the recognized position code to a server; receiving and displaying a destination identifier list returned by the server for the position code; submitting the user destination identifier selected by the user from the destination identifier list to the server; and receiving and displaying an AR live-action image obtained by the server rendering the live-action view of the preset position point together with a target guidance route.
One or more embodiments of the present specification provide an AR-based route guidance processing method, applied to a server, comprising: receiving a position code submitted by a user terminal after it performs AR scanning and recognition on an AR entity object configured at a preset position point; determining a destination identifier list corresponding to the preset position point based on the position code, sending the destination identifier list to the user terminal, and receiving the user destination identifier submitted by the user terminal; determining a live-action view of the preset position point, and determining a target guidance route according to the preset position point and the user destination identifier; and rendering based on the live-action view and the target guidance route, and sending the rendered AR live-action image to the user terminal.
One or more embodiments of the present specification provide an AR-based route display processing apparatus, running in a user terminal, comprising: a position submission module configured to perform AR scanning on the AR entity object configured at the preset position point, recognize the scanned object image, and submit the recognized position code to a server; a list display module configured to receive and display a destination identifier list returned by the server for the position code; a destination submission module configured to submit the user destination identifier selected by the user from the destination identifier list to the server; and an image display module configured to receive and display an AR live-action image obtained by the server rendering the live-action view of the preset position point together with a target guidance route.
One or more embodiments of the present specification provide an AR-based route guidance processing apparatus, running on a server, comprising: a position receiving module configured to receive a position code submitted by a user terminal after it performs AR scanning and recognition on an AR entity object configured at a preset position point; a list sending module configured to determine a destination identifier list corresponding to the preset position point based on the position code, send the destination identifier list to the user terminal, and receive the user destination identifier submitted by the user terminal; a route determination module configured to determine a live-action view of the preset position point and determine a target guidance route according to the preset position point and the user destination identifier; and an image rendering module configured to render based on the live-action view and the target guidance route, and send the rendered AR live-action image to the user terminal.
One or more embodiments of the present specification provide an AR-based route display processing device, applied to a user terminal, comprising: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to: perform AR scanning on the AR entity object configured at the preset position point, recognize the scanned object image, and submit the recognized position code to a server; receive and display a destination identifier list returned by the server for the position code; submit the user destination identifier selected by the user from the destination identifier list to the server; and receive and display an AR live-action image obtained by the server rendering the live-action view of the preset position point together with a target guidance route.
One or more embodiments of the present specification provide an AR-based route guidance processing device, applied to a server, comprising: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to: receive a position code submitted by a user terminal after it performs AR scanning and recognition on an AR entity object configured at a preset position point; determine a destination identifier list corresponding to the preset position point based on the position code, send the destination identifier list to the user terminal, and receive the user destination identifier submitted by the user terminal; determine a live-action view of the preset position point, and determine a target guidance route according to the preset position point and the user destination identifier; and render based on the live-action view and the target guidance route, and send the rendered AR live-action image to the user terminal.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed, implement the following: performing AR scanning on the AR entity object configured at the preset position point, recognizing the scanned object image, and submitting the recognized position code to a server; receiving and displaying a destination identifier list returned by the server for the position code; submitting the user destination identifier selected by the user from the destination identifier list to the server; and receiving and displaying an AR live-action image obtained by the server rendering the live-action view of the preset position point together with a target guidance route.
One or more embodiments of the present specification provide another storage medium storing computer-executable instructions that, when executed, implement the following: receiving a position code submitted by a user terminal after it performs AR scanning and recognition on an AR entity object configured at a preset position point; determining a destination identifier list corresponding to the preset position point based on the position code, sending the destination identifier list to the user terminal, and receiving the user destination identifier submitted by the user terminal; determining a live-action view of the preset position point, and determining a target guidance route according to the preset position point and the user destination identifier; and rendering based on the live-action view and the target guidance route, and sending the rendered AR live-action image to the user terminal.
Drawings
To more clearly illustrate the technical solutions of one or more embodiments of the present specification or of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some of the embodiments in this specification; other drawings can be derived from them by those skilled in the art without inventive effort.
Fig. 1 is a processing flowchart of an AR-based route display processing method according to one or more embodiments of the present specification;
Fig. 2 is a schematic diagram of an AR entity object according to one or more embodiments of the present specification;
Fig. 3 is a schematic diagram of a destination recommendation page according to one or more embodiments of the present specification;
Fig. 4 is a schematic diagram of an AR live-action image according to one or more embodiments of the present specification;
Fig. 5 is a processing flowchart of an AR-based route display processing method applied to a subway station scene according to one or more embodiments of the present specification;
Fig. 6 is a processing flowchart of an AR-based route guidance processing method according to one or more embodiments of the present specification;
Fig. 7 is a processing flowchart of an AR-based route guidance processing method applied to a subway station scene according to one or more embodiments of the present specification;
Fig. 8 is a schematic diagram of an AR-based route display processing apparatus according to one or more embodiments of the present specification;
Fig. 9 is a schematic diagram of an AR-based route guidance processing apparatus according to one or more embodiments of the present specification;
Fig. 10 is a structural schematic diagram of an AR-based route display processing device according to one or more embodiments of the present specification;
Fig. 11 is a structural schematic diagram of an AR-based route guidance processing device according to one or more embodiments of the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in one or more embodiments of this specification, those solutions are described below clearly and completely with reference to the drawings in one or more embodiments of this specification. Obviously, the described embodiments are only some of the embodiments in this specification, not all of them. All other embodiments derived by a person skilled in the art from one or more embodiments of this specification without inventive effort shall fall within the scope of protection of this document.
An embodiment of a route display processing method based on AR provided in this specification:
referring to fig. 1, which shows a processing flow chart of an AR-based route presentation processing method provided in this embodiment, referring to fig. 2, which shows an AR entity object schematic diagram provided in this embodiment, referring to fig. 3, which shows a destination recommendation page schematic diagram provided in this embodiment, referring to fig. 4, which shows an AR live-action image schematic diagram provided in this embodiment; referring to fig. 5, it shows a processing flow chart of an AR-based route presentation processing method applied to a subway station scene according to this embodiment.
An execution main body of the route display processing method based on the AR provided by the embodiment is a user terminal, an execution main body of an embodiment of the route guide processing method based on the AR provided by the present specification is a server, and the route display processing method based on the AR provided by the embodiment is applied to the user terminal and the route guide processing method based on the AR provided by the embodiment is matched with each other in an execution process, so that the corresponding contents of the embodiment of the method are read.
Referring to fig. 1, the route display processing method based on AR provided in this embodiment is applied to a user terminal, and specifically includes the following steps S102 to S108.
Step S102: perform AR scanning on the AR entity object configured at the preset position point, recognize the scanned object image, and submit the recognized position code to a server.
In practical applications, conventional navigation depends on GPS positioning signals and, in some cases, on the data network. For example, navigation may lag in areas with weak GPS signals, forcing the user to look for a spot with stronger reception, or the route recommended to the user may deviate from the user's actual position; both degrade the user experience.
The AR-based route display processing method provided in this embodiment offers the user clear and accurate indoor and outdoor information and route guidance through pre-shot live-action views combined with augmented reality. Specifically, recommended destination information is determined from the position information obtained after the user terminal performs AR scanning on the AR entity object configured at a preset position point. Once the user selects a destination through the user terminal, the AR live-action image and other display content are determined from the user destination information and displayed to the user through the user terminal. This provides the user with AR navigation, improves the effectiveness of guiding foot traffic, saves navigation time, and thereby improves the user experience.
The preset position point (i.e. the preset anchor point) in this embodiment is the specific position where the AR entity object is configured, that is, the shooting position of the live-action view. Note that the preset position point includes an indoor position point chosen at a suitable indoor location; preset position points may also be set up in special environments such as subway stations, high-speed rail stations, and shopping malls. The AR entity object is the material that marks the position of the preset position point; it may be, for example, an identification sticker carrying a recognition pattern posted in a subway station, as shown in fig. 2. Posting AR entity objects in this way keeps costs low. The position code of the AR entity object is the code of the position corresponding to the preset position point.
In a specific implementation, the user terminal performs AR scanning on the AR entity object at the preset position point, recognizes the scanned object image, and submits the recognized position code to the server. The server generates a query request from the position code and sends it to a third-party server; the third-party server queries the destination identifier list corresponding to the position code based on the query request and returns it to the server; the server receives the destination identifier list and sends it to the user terminal.
For example, in a subway station, the AR material posted at AR scanning anchor point 1 is scanned and the position code is recognized from the scan and sent to the server. On receiving the position code, the server generates a query request from it and sends the request to the third-party server; the third-party server queries the destination identifier list corresponding to anchor point 1 according to the query request and returns it to the server, which forwards the list to the user terminal.
In addition, during AR scanning, the user terminal may perform AR scanning on the AR entity object to obtain scan data and send the scan data to the server, which derives the position code from the scan data; the scan data may include the scanned object image. After receiving the scan data, the server checks whether the scan data is associated with a prestored identification object. If so, the data is processed based on the associated identification object; if not, the scan data submitted by the user terminal has no associated identification object, and the server sends a scan-failure warning to the user terminal.
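As a rough illustration of this validation step, the following Python sketch checks submitted scan data against prestored identification objects and resolves the position code; all names and the matching mechanism are hypothetical assumptions, not taken from the patent:

```python
# Hypothetical sketch: match the scanned image against prestored
# identification objects; return the position code on success, or None so the
# caller can send the user terminal a scan-failure warning.
from typing import Optional

# Assumed prestored mapping from a decoded marker id to the anchor's position code.
PRESTORED_MARKERS = {
    "marker-anchor-1": "POS-0001",
    "marker-anchor-2": "POS-0002",
}

def decode_marker(scan_image: bytes) -> str:
    # Stand-in for the image-recognition step; a real system would decode the
    # recognition pattern on the identification sticker.
    return "marker-anchor-1"

def resolve_position_code(scan_image: bytes) -> Optional[str]:
    marker_id = decode_marker(scan_image)
    return PRESTORED_MARKERS.get(marker_id)  # None means no associated object
```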
Step S104: receive and display the destination identifier list returned by the server for the position code.
The destination identifier list is a list of recommended destination identifiers corresponding to the position code, determined from the position code.
In a specific implementation, the user terminal receives the destination identifier list that the server determines from the position code and sends down. To improve the user's perception of the destination identifiers, and thereby the user experience, the destination identifiers are displayed by category. In an optional implementation provided by this embodiment, the destination identifier list is received and displayed as follows:
receiving the destination identifier list;
classifying the destination identifiers contained in the destination identifier list according to preset classification conditions;
generating and displaying a destination recommendation page based on the classification result.
Specifically, the destination identifier list sent by the server is received and classified by destination type; after the user switches categories, the content of the selected category is displayed first and the remaining content is collapsed.
As shown in fig. 3, the destination identifier list determined and sent by the server for the position code of anchor point 1 in the subway station contains 12 destination identifiers: toilet, service center, convenience store, barrier-free elevator, No. 4 exhibition hall, No. 5 exhibition hall, west entrance hall, spectator sign-in point, exhibitor sign-in point, exhibition business district, public transportation hub, and ticket purchase and top-up. To make the displayed content more intuitive, the destinations are divided into two categories: in-station services and exhibition information. The toilet, service center, convenience store, and barrier-free elevator fall under in-station services; the No. 4 exhibition hall, No. 5 exhibition hall, west entrance hall, spectator sign-in point, exhibitor sign-in point, exhibition business district, public transportation hub, and ticket purchase and top-up fall under exhibition information. The classification result is turned into a destination recommendation page for display; after the user selects a category, the corresponding destination identifiers are displayed.
Alternatively, the server may perform the classification: it receives the destination identifier list sent by the third-party server, classifies the destination identifiers according to the preset classification conditions, generates a destination recommendation page based on the classification result, and sends the page to the user terminal. A minimal sketch of this classification step follows.
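The following Python sketch illustrates the classification described above, using the fig. 3 categories; the function and category names are illustrative assumptions, not part of the patent:

```python
# Group destination identifiers into the two fig. 3 categories before
# building the destination recommendation page. The preset classification
# condition is modeled as a simple membership test.
from collections import defaultdict

IN_STATION_SERVICES = {"toilet", "service center", "convenience store",
                       "barrier-free elevator"}

def classify_destinations(destination_ids: list[str]) -> dict[str, list[str]]:
    page: dict[str, list[str]] = defaultdict(list)
    for dest in destination_ids:
        category = ("in-station services" if dest in IN_STATION_SERVICES
                    else "exhibition information")
        page[category].append(dest)
    return dict(page)

# The recommendation page then shows one category at a time and collapses the rest.
```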
Step S106: submit the user destination identifier selected by the user from the destination identifier list to the server.
The user destination identifier is the identifier of the destination the user selects on the destination recommendation page displayed by the user terminal.
In a specific implementation, after the user scans the AR entity object at the preset position point, the terminal jumps to a sub-application hosted by the application program and displays the destination recommendation page, i.e. a destination selection page, on which the user can select the destination to visit. The user terminal submits the destination identifier selected by the user to the server. The server generates a query request with the preset position point as the starting point and the user destination identifier as the end point and sends it to the third-party server, then receives the route data packet returned by the third-party server and determines the target guidance route from the route data packet.
For example, if the user at anchor point 1 selects the public transportation hub as the destination, the identifier of the public transportation hub is submitted to the server. The server generates a query request with anchor point 1 as the starting point and the public transportation hub as the end point and sends it to the third-party server; the third-party server queries the route data packet from anchor point 1 to the public transportation hub and sends it to the server, which receives the route data packet.
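The route query described above can be pictured with the following Python sketch; the endpoint URL and payload fields are assumptions made for illustration, not an API defined by the patent:

```python
# Build a route query with the preset position point as start and the selected
# destination as end, forward it to the third-party server, and read back the
# route data packet (line geometry, arrows, distance, and so on).
import json
from urllib import request

def query_route(position_code: str, destination_id: str,
                third_party_url: str = "https://example.invalid/route") -> dict:
    payload = json.dumps({"start": position_code,
                          "end": destination_id}).encode("utf-8")
    req = request.Request(third_party_url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)  # the route data packet
```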
Step S108: receive and display the AR live-action image obtained by the server rendering the live-action view of the preset position point together with the target guidance route.
The live-action view includes a 360-degree live-action view shot at the preset position point; it may also include live-action views shot at other angles. The target guidance route is a marker line with a direction indicator, determined with the preset position point as the starting point and the position corresponding to the user destination identifier as the end point, as shown at 407 in fig. 4; optionally, it is determined based on the preset position point and the user destination identifier. To make the target guidance route more accurate, it is rendered into the live-action view according to the determined route data. In an optional implementation provided by this embodiment, the user terminal receives and displays the AR live-action image obtained by the server rendering the live-action view together with the target guidance route determined from the route data packet; the target guidance route is a marker line whose direction indicator points from the preset position point toward the user destination identifier.
The AR live-action image is the live-action image rendered from the live-action view and the marker line, and it can display the view in the direction the user terminal is currently facing. The AR live-action image is rendered with at least one of the following: a recommended destination identifier and an identifier of a recommended position point. In addition, so that the user has a more intuitive perception of the destination throughout the journey, in an optional implementation provided by this embodiment the terminal receives and displays the AR live-action image rendered with a destination display area corresponding to the user destination identifier; as shown in fig. 4, 401 is the destination display area. The display content of the destination display area includes at least one of: the user destination identifier, specific information about the user destination, and the distance to the user destination. Other prompt information may also be rendered into the AR live-action image, and the display content may include further guidance information, such as the nearest exit.
To improve the user's perception of the AR live-action image in all directions, an optional implementation provided by this embodiment receives and displays the AR live-action image as follows: first, the AR live-action image is received; then the display area of the AR live-action image is adjusted based on the direction-change data from the user terminal's gyroscope, and the adjusted image is displayed. In a specific implementation, the current direction of the user terminal is determined using the gyroscope plug-in and compass plug-in embedded in the terminal, the AR live-action image for that direction is loaded, and the user is reminded to turn around when the forward direction of the terminal is opposite to the direction of the target guidance route.
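A minimal Python sketch of this direction handling, assuming the 360-degree view is indexed by compass heading; the function names and the tolerance value are illustrative assumptions:

```python
# Select the visible window of the panorama from the terminal heading, and
# decide when to remind the user to turn around because the terminal's
# forward direction opposes the target guidance route.
def visible_window(heading_deg: float, fov_deg: float = 90.0) -> tuple[float, float]:
    # Return the [start, end) heading range of the 360-degree view to display.
    start = (heading_deg - fov_deg / 2) % 360
    return start, (start + fov_deg) % 360

def needs_turn_reminder(heading_deg: float, route_bearing_deg: float,
                        tolerance_deg: float = 60.0) -> bool:
    # True when the terminal faces roughly opposite the guide route.
    diff = abs((heading_deg - route_bearing_deg + 180.0) % 360.0 - 180.0)
    return diff > 180.0 - tolerance_deg

assert needs_turn_reminder(0.0, 180.0) and not needs_turn_reminder(0.0, 10.0)
```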
In practical applications, the user may change the destination or develop other needs en route. To let the user change the destination as needed while the AR live-action image is displayed, and to further improve the user's perception of the environment, an optional implementation provided by this embodiment receives and displays the AR live-action image as follows (a sketch of the interaction follows this passage):
receiving and displaying the AR live-action image with the corresponding recommended destination identifiers rendered in their corresponding directions;
submitting a destination change request to the server if a recommended destination identifier is detected to be triggered;
receiving and displaying the recommended AR live-action image obtained by the server rendering the live-action view together with the recommended guidance route.
In a specific implementation, the other destination identifiers planned for recommendation are marked in the AR live-action image; visually, a recommended destination identifier must be weaker than the user destination identifier or the next preset position point. Recommended destination identifiers are pre-stored in the live-action view and displayed in their corresponding directions and positions, gradually appearing or fading as the direction of the user terminal changes. The user can switch destinations by tapping a recommended destination identifier shown in the AR live-action image: when the user taps any recommended destination identifier, the distance to that recommended point and a destination-switch reminder are displayed; when the user confirms via the reminder, a destination change request is submitted to the server. Based on the request, the server renders the live-action view together with a recommended guidance route determined with the preset position point as the starting point and the recommended destination identifier as the end point, and sends the rendered recommended AR live-action image to the user terminal, which receives it and displays it according to the terminal's direction. If the recommended destinations include the user destination, that recommended destination is not displayed.
As shown in fig. 4, a user performs AR scanning on the AR entity object at anchor point 1 in a subway station and selects the barrier-free elevator as the destination. The preset data resource packet 1, with anchor point 1 as the starting point and the barrier-free elevator as the end point, contains four recommended destinations: a toilet, a ticket office, a bus stop, and an exhibition area. The identifiers of these four recommended destinations are therefore marked at the corresponding directions and positions in the AR live-action image shown to the user. When the user triggers the toilet's destination identifier 402, the distance to the toilet and a switch reminder are displayed; when the user confirms, a destination change request is submitted to the server. Based on the change request, the server queries the third-party server for the data resource packet with anchor point 1 as the starting point and the toilet as the end point, which contains the distance, the destination, its specific information, the route, the arrows marked along the route, and so on. The server renders this data resource packet together with the live-action view at anchor point 1 and sends the rendered recommended AR live-action image to the user terminal, which determines the displayed content according to the gyroscope direction.
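The tap-to-switch interaction above can be condensed into a small Python sketch; the data shape and function are hypothetical stand-ins for the patent's destination change request:

```python
# When a recommended destination identifier is tapped, show its distance and a
# switch reminder; on confirmation, produce the destination change request to
# submit to the server.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecommendedDestination:
    identifier: str
    distance_m: float

def on_recommended_tapped(start_code: str, dest: RecommendedDestination,
                          confirmed: bool) -> Optional[dict]:
    print(f"{dest.identifier}: {dest.distance_m:.0f} m away. Switch destination?")
    if confirmed:
        return {"type": "destination_change", "start": start_code,
                "end": dest.identifier}
    return None
```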
In practical applications, because of building occlusion or other factors, the live-action view shot at a single preset position point cannot cover the whole area. To avoid showing the user an incomplete target guidance route through the live-action view of a single preset position point, which would degrade the route guidance experience, corners are also used as preset position points for shooting live-action views. While the AR live-action image is displayed to the user, the identifier of the AR material at the corner is shown; when the user submits a switch instruction, the position point is switched to the corner and the AR live-action image of that position point is loaded. Specifically, in an optional implementation provided by this embodiment, the AR live-action image is received and displayed as follows:
receiving and displaying the AR live-action image rendered with a first identifier of the AR entity object at a first position point and a first switch control; the first position point is a preset position point whose distance from the target guidance route is smaller than a preset threshold;
submitting a live-action change request to the server if the first switch control is detected to be triggered;
receiving and displaying the first AR live-action image returned by the server; the first AR live-action image is obtained by the server rendering the live-action view of the first position point together with a first guidance route determined based on the first position point and the user destination identifier.
In a specific implementation, when the user travels along the target guidance route and passes other preset position points, the identifier of the preset position point nearest to the current position point needs to be shown in the AR live-action image, and the user can switch to the next AR live-action image by triggering the first switch control (i.e. the forward control). Specifically, when the first switch control is triggered, a live-action change request is sent to the server, and the first AR live-action image rendered by the server based on that request is received. After switching, the display content of the destination display area in the AR live-action image does not change; the information of the preset position point at departure, that is, the position point at the time of the AR scan, is retained.
For example, a user performs AR scanning on the AR entity object at anchor point 1 in a subway station and selects the barrier-free elevator as the destination. The guidance route with the position of anchor point 1 as the starting point and the barrier-free elevator as the end point passes anchor point 2, so to display the route to the barrier-free elevator more intuitively, the material identifier of anchor point 2 is shown to the user, and the user can switch to the AR live-action image corresponding to anchor point 2 by triggering the forward switch control.
In practical applications, the journey from the preset position point to the position corresponding to the user destination identifier may pass through one or more intermediate position points, i.e. relay position points along the path from the preset position point to the user destination. So that the user can move on to the intermediate position points in time, and to improve the user's perception of the guidance route, in an optional implementation provided by this embodiment, when the target guidance route passes an intermediate position point, the intermediate identifier of the AR entity object at the intermediate position point and the first switch control are rendered into the AR live-action image (a compact model of the switching behavior is sketched after the example below);
submitting a first switch instruction to the server if the first switch control is detected to be triggered;
receiving and displaying the intermediate AR live-action image returned by the server based on the first switch instruction, the intermediate AR live-action image being obtained by rendering the live-action view of the intermediate position point together with a first guidance route determined with the intermediate position point as the starting point and the user destination identifier as the end point.
To prevent the AR live-action image from being changed by user error or accidental operation, which would hurt the navigation experience, in an optional implementation provided by this embodiment, the first switch control and a second switch control are rendered into the intermediate AR live-action image;
submitting a second switch instruction to the server if the second switch control is detected to be triggered;
receiving and displaying the AR live-action image obtained by the server rendering the live-action view of the intermediate position point together with a second guidance route determined based on the intermediate position point and a second position point.
In a specific implementation, if the target guidance route contains several intermediate position points, a text prompt needs to be rendered into the AR live-action image, reminding the user to switch to view the AR live-action images of the intermediate position points. Specifically, when the user terminal displays the starting AR live-action image, a prompt pointing to the next intermediate position point is needed and the first switch control is configured; tapping it switches to the intermediate AR live-action image. When the user terminal displays an intermediate AR live-action image, a prompt pointing to the next intermediate position point or the terminal position point is needed, and both the first switch control and the second switch control (i.e. the back control) are configured: tapping the first switches to the next intermediate or terminal AR live-action image, and tapping the second switches back to the previous intermediate or starting AR live-action image.
As shown in fig. 4, a user performs AR scanning on the AR entity object at anchor point 1 in a subway station and selects the barrier-free elevator as the destination. The target guidance route, with anchor point 1 as the starting point and the barrier-free elevator as the end point, passes three anchor points: anchor point 1, anchor point 3, and anchor point 4. When the starting AR live-action image of anchor point 1 is displayed, the AR material identifier 403 of anchor point 3 and the forward control 404 are shown at the corresponding position; if the user triggers the forward control 404 on the starting AR live-action image page, a forward switch instruction is submitted to the server and the intermediate AR live-action image of anchor point 3 returned for that instruction is received. In the intermediate AR live-action image, the AR material identifier of anchor point 4, the forward control 404, and the back control 405 are shown at the corresponding positions: if the user triggers the forward control 404 on the intermediate AR live-action image page, a forward switch instruction is submitted to the server and the terminal AR live-action image of anchor point 4 is received; if the user triggers the back control 405, a back switch instruction is submitted and the starting AR live-action image of anchor point 1 is received. The back control 405 is also displayed in the terminal AR live-action image; when the user triggers it on the terminal AR live-action image page, a back switch instruction is submitted to the server and the intermediate AR live-action image of anchor point 3 returned for that instruction is received.
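As the promised model of this forward/back behavior, the following Python sketch steps through the ordered anchor points of the fig. 4 example; the class and its methods are illustrative, not defined by the patent:

```python
# The target guidance route passes an ordered list of anchor points; the
# forward control advances to the next point's AR live-action image and the
# back control returns to the previous one, clamped at the route's ends.
class AnchorSwitcher:
    def __init__(self, anchors: list[str]):
        self.anchors = anchors
        self.index = 0  # start at the scanned anchor

    def forward(self) -> str:
        self.index = min(self.index + 1, len(self.anchors) - 1)
        return self.anchors[self.index]

    def back(self) -> str:
        self.index = max(self.index - 1, 0)
        return self.anchors[self.index]

route = AnchorSwitcher(["anchor-1", "anchor-3", "anchor-4"])
assert route.forward() == "anchor-3"   # forward control on the starting image
assert route.back() == "anchor-1"      # back control on the intermediate image
```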
Note that, to prevent a multitude of recommended destinations in the starting and intermediate AR live-action images from irritating the user and reducing the user experience, in an optional implementation provided by this embodiment, when the AR live-action image is the starting AR live-action image, the image received and displayed is rendered with the recommended destination identifiers; when it is an intermediate AR live-action image, the image received and displayed is rendered without them.
For example, the recommended destination identifiers are rendered into the starting AR live-action image, such as showing the four recommended destinations (a toilet, a ticket office, a bus stop, and an exhibition hall) at their corresponding positions; after switching to an intermediate or terminal AR live-action image, only the content of the destination display area, the route, and the arrow information are displayed, and the recommended destination information is not.
In addition, when receiving the AR live-action image, the terminal also receives at least one of the following from the server: the intermediate AR live-action image, the recommended AR live-action image, and the first AR live-action image. Thus, when a recommended destination identifier rendered in the AR live-action image is triggered, the recommended AR live-action image obtained by rendering the live-action view together with the recommended guidance route is displayed; when a first switch control rendered in the AR live-action image is triggered, the first AR live-action image of the first position point is displayed. The first AR live-action image is obtained by rendering the live-action view of the first position point together with a first guidance route determined based on the first position point and the user destination identifier; the first position point is a preset position point whose distance from the target guidance route is smaller than a preset threshold.
It should also be noted that, as shown in fig. 4, a close-navigation control 406 is provided at the bottom of the AR live-action image; after the user taps it, the terminal jumps to a new page that provides an AR scan entry control and an exit control.
The following further describes the AR-based route display processing method provided in this embodiment, taking its application in a subway station scene as an example, with reference to fig. 5. Referring to fig. 5, the AR-based route display processing method applied to the subway station scene specifically includes steps S502 to S516.
Step S502: perform AR scanning on the AR material configured at a preset anchor point in the subway station.
Step S504: recognize the scanned object image and submit the recognized position code to the server.
Step S506: receive the destination identifier list sent by the server.
Step S508: classify the destination identifiers contained in the destination identifier list.
Step S510: generate and display a destination recommendation page based on the classification result.
Step S512: submit the user destination identifier selected by the user through the destination recommendation page to the server.
Step S514: receive the AR live-action image obtained by the server rendering the live-action view of the preset anchor point together with the target guidance route.
Step S516: adjust and display the AR live-action image according to the direction-change data from the gyroscope.
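Read end to end, steps S502 to S516 amount to a single client routine. The following Python sketch strings the steps together, with every name a hypothetical stand-in (classify_destinations is the classification sketch shown earlier):

```python
# One pass through the subway-station flow: scan, recognize, fetch and
# classify destinations, submit the user's choice, then display the rendered
# AR live-action image, adjusting by gyroscope heading.
def subway_navigation_flow(scan, recognize, server, choose_destination, display):
    object_image = scan()                                   # S502
    position_code = recognize(object_image)                 # S504
    dest_list = server.destination_list(position_code)      # S506
    page = classify_destinations(dest_list)                 # S508-S510
    destination = choose_destination(page)                  # S512
    ar_image = server.ar_image(position_code, destination)  # S514
    display(ar_image)                                       # S516 (gyroscope-adjusted)
```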
To sum up, the AR-based route display processing method provided in this embodiment performs AR scanning on the AR entity object configured at a preset position point, recognizes the scanned object image, and submits the recognized position code to the server; it then receives and displays the destination identifier list determined and returned by the server for the position code, submits the user destination identifier selected from the destination identifier list to the server, and finally receives and displays the AR live-action image obtained by the server rendering the live-action view of the preset position point together with the target guidance route. Pre-stored information is thus used to provide AR navigation to the user terminal, so the user can perceive route information promptly and accurately even in areas with weak GPS signals. This improves convenience for the user, saves costs, and to a certain extent improves the effectiveness of foot-traffic management.
An embodiment of a route guidance processing method based on an AR provided in this specification:
referring to fig. 6, which shows a flowchart of a processing method of an AR-based route guidance processing method provided by the present embodiment, and referring to fig. 7, which shows a flowchart of a processing method of an AR-based route guidance processing method applied to a subway station scene provided by the present embodiment.
Referring to fig. 6, the route guidance processing method based on AR according to the present embodiment is applied to a server, and specifically includes the following steps S602 to S608.
Step S602: receive the position code submitted by the user terminal after it performs AR scanning and recognition on the AR entity object configured at the preset position point.
The AR-based route guidance processing method provided in this embodiment offers the user clear and accurate indoor and outdoor information and route guidance through pre-shot live-action views combined with augmented reality. Specifically, the server receives the position information submitted by the user terminal after the terminal performs AR scanning on the AR entity object configured at a preset position point, and determines the recommended destination information from the third-party server according to that position information. When the user selects a destination through the user terminal, the server determines, from the third-party server and according to the user destination information, the AR live-action image and other display content to show the user, which are then displayed through the user terminal. This provides the user with AR navigation, improves the effectiveness of guiding foot traffic, saves navigation time, and thereby improves the user experience.
The preset position point (i.e. the preset anchor point) in this embodiment is the specific position where the AR entity object is configured, that is, the shooting position of the live-action view. Note that the preset position point includes an indoor position point chosen at a suitable indoor location; preset position points may also be set up in special environments such as subway stations, high-speed rail stations, and shopping malls. The AR entity object is the material that marks the position of the preset position point; it may be, for example, an identification sticker carrying a recognition pattern posted in a subway station, as shown in fig. 2. Posting AR entity objects in this way keeps costs low. The position code of the AR entity object is the code of the position corresponding to the preset position point.
In a specific implementation, the user terminal performs AR scanning on the AR entity object at the preset position point, recognizes the scanned object image, and submits the recognized position code; the server receives the position code that the user terminal obtains and submits after recognizing the object image from its AR scan of the AR entity object.
For example, the user terminal performs AR scanning on the AR material posted at anchor point 1 in a subway station, and the server receives the position code obtained and submitted after the user terminal recognizes the scanned object image.
In addition, the server may instead receive scan data that the user terminal obtains and submits from AR scanning the AR entity object, and derive the position code from the scan data; the scan data may include the scanned object image. After receiving the scan data, the server checks whether the scan data is associated with a prestored identification object. If so, the data is processed based on the associated identification object; if not, the scan data submitted by the user terminal has no associated identification object, and a scan-failure warning is sent to the user terminal.
Step S604: determine, based on the position code, the destination identifier list corresponding to the preset position point, send it to the user terminal, and receive the user destination identifier submitted by the user terminal.
The destination identifier list is a list of recommended destination identifiers corresponding to the position code, determined from the position code; the user destination identifier is the identifier of the destination the user selects on the destination recommendation page displayed by the user terminal.
In a specific implementation, the destination identifier list is established and stored by the third-party server. To make this data accessible and let the user terminal perceive the destination identifier list, in an optional implementation provided by this embodiment, determining the destination identifier list corresponding to the preset position point based on the position code and sending it to the user terminal is performed as follows:
generating a query request based on the position code and sending it to the third-party server; the third-party server queries the destination identifier list corresponding to the position code based on the query request and returns it to the server;
receiving the destination identifier list returned by the third-party server and sending it to the user terminal.
For example, a query request is generated from the position code of anchor point 1 in the subway station and sent to the third-party server. The third-party server finds that the destination identifier list corresponding to anchor point 1 contains 12 destination identifiers: toilet, service center, convenience store, barrier-free elevator, No. 4 exhibition hall, No. 5 exhibition hall, west entrance hall, spectator sign-in point, exhibitor sign-in point, exhibition business district, public transportation hub, and ticket purchase and top-up. It sends these 12 destination identifiers to the server as a list, and on receiving the list the server sends it to the user terminal.
When the user terminal receives the destination identifier list, the destination identifiers are displayed by category, classified by destination type, to improve the user's perception of them and thereby the user experience; after the user switches categories, the content of the selected category is displayed first and the remaining content is collapsed. Note that the server may also classify the destination identifiers in the list: specifically, it receives the destination identifier list sent by the third-party server, classifies the destination identifiers according to the preset classification conditions, then generates a destination recommendation page based on the classification result and sends it to the user terminal.
In a specific implementation, after the user scans the AR entity object at the preset position point, the terminal jumps to a sub-application hosted by the application program and displays the destination recommendation page, i.e. a destination selection page, on which the user can select the destination to visit.
Step S606, determining a live-action map of the preset position point, and determining a target guiding route according to the preset position point and the user destination identifier.
The live-action picture comprises a 360-degree live-action picture shot at the preset position point; in addition, live-action pictures taken at other angles may also be included. In an optional implementation manner provided by this embodiment, the live-action map is determined in the following manner: firstly, generating a live-action picture query request based on the position code and sending the live-action picture query request to a third-party server; and then receiving the live-action image returned by the third-party server based on the live-action image query request.
The target guide route is determined based on the preset position point and the user destination identifier, and is a mark line with a direction identifier, which is determined by taking the preset position point as a starting point and taking a position corresponding to the user destination identifier as an end point; the target guiding route is an identification line with a direction identification, which is determined by taking the preset position point as a starting point and taking a position corresponding to the user destination identification as an end point. In an optional implementation manner provided by this embodiment, the target guidance route is determined specifically by using the following method: firstly, taking the preset position point as a starting point, taking the user destination identification as a terminal point to generate a query request and sending the query request to a third-party server; then receiving a route data packet of the target guide route returned by the third-party server; finally, the target guidance route is determined based on the route data packet.
Following the above example, the user selects the public transport hub as the destination. On receiving the position code of the preset position point and the user destination identifier from the user terminal, the server generates a route query request with anchor point 1 as the starting point and the public transport hub as the end point, generates a live-action image query request based on the position code of anchor point 1, and sends both to the third-party server. The third-party server returns the queried route data packet and live-action image to the server, and the server determines the target guide route from the route data packet. Alternatively, the query request may be generated from the anchor number of anchor point 1 and sent to the third-party server, which returns the corresponding live-action image.
It should be noted that determining the live-action map and determining the target guide route may both be performed by querying the third-party server through a query request. For example, a query request taking the preset position point as the starting point and the position corresponding to the user destination identifier as the end point is generated and sent to the third-party server; the live-action image of the preset position point and the route data packet returned by the third-party server are received, and the target guide route is determined from the route data packet. As another example, a query request is generated from the position number of the preset position point and the user destination identifier and sent to the third-party server; the returned live-action image and route data packet of the preset position point are received, and the target guide route is determined from the route data packet.
In addition, the information of the preset position points is stored in the third-party server in the form of multiple data resource packets: for each preset position point, the corresponding destination identifiers and at least one data resource packet per destination identifier are stored, and if an intermediate position point exists, an intermediate data resource packet is stored as well. A data resource packet includes at least one of the following items: destination description, distance, specific information, navigation line, arrows, and recommended destination identifiers. When querying the third-party server for the destination identifier list corresponding to a preset position point, all information corresponding to that preset position point returned by the third-party server may be received, and the corresponding target guide route is then determined from the user destination identifier submitted by the user terminal.
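The packet layout described above can be captured by a simple data structure; the field names below are illustrative only:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DataResourcePacket:
    """One packet per (preset position point, destination identifier) pair;
    an extra intermediate packet is stored for each intermediate position point."""
    destination_description: str
    distance_m: float                            # distance to the destination
    specific_info: str                           # e.g. opening hours, floor
    navigation_line: List[Tuple[float, float]]   # points of the identification line
    arrows: List[Tuple[float, float]]            # direction identifiers on the line
    recommended_destinations: List[str] = field(default_factory=list)
```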
Step S608, performing rendering processing based on the live-action image and the target guidance route, and sending an AR live-action image obtained by rendering to the user terminal.
The AR live-action image is obtained by rendering the identification line onto the live-action image, and can display the image in the direction corresponding to the orientation of the user terminal. At least one of the following is rendered in the AR live-action image: a recommended destination identifier and an identifier of a recommended position point. To give the user a more intuitive perception of the destination throughout the journey, in an optional implementation manner provided by this embodiment, the AR live-action image rendered with a destination display area corresponding to the user destination identifier is sent to the user terminal, where the display content of the destination display area includes at least one of the following items: the user destination identifier, specific information of the user destination, and the distance to the user destination. It should be noted that other prompt information may also be rendered in the AR live-action image, and the display content may further include other guidance information, such as the nearest exit.
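The following minimal sketch shows how the identification line and the destination display area might be composited onto the live-action image; the use of the Pillow imaging library, and all coordinates and text, are assumptions of this illustration rather than part of the disclosure:

```python
from PIL import Image, ImageDraw  # assumes the Pillow imaging library

def render_ar_live_action(live_action: Image.Image, route_points, display_text: str):
    """Overlay the target guide route (identification line) and the destination
    display area onto the live-action image to produce the AR live-action image."""
    frame = live_action.copy()
    draw = ImageDraw.Draw(frame)
    draw.line(route_points, fill=(0, 200, 255), width=8)   # identification line
    draw.rectangle((20, 20, 380, 90), fill=(0, 0, 0))      # destination display area
    draw.text((30, 40), display_text, fill=(255, 255, 255))
    return frame

panorama = Image.new("RGB", (2048, 1024))  # stand-in for a shot 360-degree image
ar_image = render_ar_live_action(
    panorama, [(1024, 900), (1100, 700), (1400, 600)],
    "Barrier-free elevator - 120 m - nearest exit B")
```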
In practical applications, the user may change the destination or develop other needs while moving forward. To allow the user to change the destination as needed while the AR live-action image is displayed, and thereby further improve the user's perception of the environment, in an optional implementation manner provided by this embodiment, the rendered AR live-action image is sent to the user terminal through the following steps (a sketch of this flow follows the steps below):
sending the AR live-action image rendered in the corresponding direction with the corresponding recommended destination identifier to the user terminal;
receiving a destination change request submitted by the user terminal under the condition that the recommended destination identifier is triggered;
performing rendering processing based on the live-action image and the recommended guide route, and sending the rendered recommended AR live-action image to the user terminal.
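A sketch of this change flow, reusing the hypothetical query helpers from the earlier sketches:

```python
def on_destination_change(position_code: str, recommended_id: str):
    """Server-side handling of a destination change request submitted after the
    user triggers a recommended destination identifier."""
    # Recommended guide route: preset position point as the starting point,
    # recommended destination identifier as the end point.
    recommended_route = query_target_guide_route(position_code, recommended_id)
    live_action = query_live_action_image(position_code)
    # Rendering as in render_ar_live_action(); the result is the recommended
    # AR live-action image sent back to the user terminal.
    return live_action, recommended_route
```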
In a specific implementation, other destination identifiers to be recommended are marked in the AR live-action image, with the recommended destination identifiers visually de-emphasized relative to the user destination identifier or the next preset position point. A recommended destination identifier is pre-stored in the live-action image and displayed in the corresponding direction and position, gradually appearing or disappearing as the orientation of the user terminal changes. The user can switch destinations by tapping a recommended destination identifier displayed in the AR live-action image: when the user taps any recommended destination identifier, the distance to the recommended point and a destination switching prompt are displayed, and when the user submits a confirmation instruction based on the switching prompt, the server receives a destination change request submitted by the user terminal. Based on the destination change request, the live-action image and a recommended guide route, determined with the preset position point as the starting point and the recommended destination identifier as the end point, are rendered, and the rendered recommended AR live-action image is sent to the user terminal, which receives and displays it according to its orientation. When the recommended destinations include the user destination, that recommended destination is not displayed.
For example, the user performs AR scanning on the AR entity object at anchor point 1 in the subway station with the user terminal and selects the barrier-free elevator as the destination. The preset data resource packet 1, which takes anchor point 1 as the starting point and the barrier-free elevator as the end point, contains four recommended destinations: toilet, ticket office, bus station, and exhibition hall, so the identifiers of these four recommended destinations are marked at the corresponding directions and positions in the AR live-action image displayed to the user. When the user triggers the destination identifier of the toilet, a destination change request submitted by the user terminal is received. Based on the change request, the data resource packet taking anchor point 1 as the starting point and the toilet as the end point is queried from the third-party server; it contains the distance, specific information, the route, the arrows marked on the route, and so on. Rendering is performed from this data resource packet and the live-action image at anchor point 1, the rendered recommended AR live-action image is sent to the user terminal, and the user terminal determines the displayed content of the recommended AR live-action image according to the gyroscope orientation.
In practical applications, due to building occlusion or other factors, the live-action image shot at a single preset position point may not cover the whole area. To avoid presenting the target guide route incompletely through the live-action image of a single preset position point, which would give the user a poor route guidance experience, in an optional implementation manner provided by this embodiment, a first identifier of the AR entity object at a first position point and a first switching control are rendered in the AR live-action image; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold;
receiving a live-action picture change request submitted by the user terminal under the condition that a first switching control is triggered;
and rendering, based on the live-action image change request, the live-action image of the first position point and a first guide route determined from the first position point and the user destination identifier, and sending the rendered first AR live-action image to the user terminal.
In a specific implementation, when the user proceeds along the target guide route and passes other preset position points, the identifier of the preset position point closest to the current position needs to be rendered in the AR live-action image, and the user can switch to the next AR live-action image by triggering the first switching control (that is, the forward control). Specifically, when the first switching control is triggered, the live-action image change request sent by the user terminal is received, and the first AR live-action image is rendered based on that request and sent to the user terminal. After switching, the display content of the destination display area in the AR live-action image does not change: the information of the preset position point at departure, that is, the preset position point at AR-scanning time, is retained.
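A sketch of the switch, again with the hypothetical helpers from the earlier sketches; note that the departure-time display area is passed through unchanged:

```python
def on_live_action_change(first_point_code: str, user_destination_id: str,
                          departure_display_area: dict):
    """Produce the first AR live-action image after the first switching
    (forward) control is triggered."""
    live_action = query_live_action_image(first_point_code)
    # First guide route: first position point as start, user destination as end.
    first_route = query_target_guide_route(first_point_code, user_destination_id)
    # The destination display area keeps the information of the preset position
    # point recorded at AR-scanning time; it is not recomputed here.
    return live_action, first_route, departure_display_area
```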
For example, the user performs AR scanning on the AR entity object at anchor point 1 in the subway station with the user terminal and selects the barrier-free elevator as the destination. The guide route taking the position of anchor point 1 as the starting point and the barrier-free elevator as the end point passes anchor point 2; to display the guide route to the barrier-free elevator more intuitively, the AR material identifier of anchor point 2 is displayed to the user, and the user can switch to the AR live-action image corresponding to anchor point 2 by triggering the forward switching control.
In practical applications, one or more intermediate position points may need to be passed on the way from the preset position point to the position corresponding to the user destination identifier; an intermediate position point is a relay position point on the user's path from the preset position point to the user destination. To let the user update to the intermediate position points in time and improve the user's perception of the guide route, in an optional implementation manner provided by this embodiment, when the target guide route passes an intermediate position point, the intermediate identifier of the AR entity object at the intermediate position point and the first switching control are rendered in the AR live-action image;
receiving a first switching instruction submitted by the user terminal under the condition that the first switching control is triggered;
and rendering, based on the first switching instruction, the live-action image of the intermediate position point and a first guide route determined with the intermediate position point as the starting point and the user destination identifier as the end point, and sending the rendered intermediate AR live-action image to the user terminal.
To prevent the AR live-action image from being switched through user error or misoperation, which would impair the user experience of AR live-action navigation, in an optional implementation manner provided by this embodiment, the first switching control and the second switching control are rendered in the intermediate AR live-action image;
receiving a second switching instruction submitted by the user terminal under the condition that a second switching control is detected to be triggered;
and rendering, based on the second switching instruction, the live-action image of the intermediate position point and a second guide route determined from the intermediate position point and a second position point, and sending the rendered AR live-action image to the user terminal.
In a specific implementation, if the target guide route contains multiple intermediate position points, a text prompt needs to be rendered in the AR live-action image to remind the user to switch to the AR live-action images of the intermediate position points. Specifically, when the user terminal displays the starting AR live-action image, a prompt guiding the user to the next intermediate position point is required and the first switching control is configured; clicking it switches to the intermediate AR live-action image. When the user terminal displays an intermediate AR live-action image, a prompt guiding the user to the next intermediate position point or the terminal position point is required, and both the first switching control and the second switching control (that is, the backward control) are configured: clicking the first switching control switches to the next intermediate AR live-action image or the terminal AR live-action image, and clicking the second switching control switches back to the previous intermediate AR live-action image or the starting AR live-action image.
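This ordered switching behaviour amounts to moving an index over the sequence of position points on the target guide route; a minimal sketch (the structure is assumed, not prescribed by the disclosure):

```python
class GuideRouteSwitcher:
    """Forward/back switching across the starting, intermediate, and
    terminal AR live-action images of one target guide route."""

    def __init__(self, anchor_points):
        self.anchor_points = anchor_points  # e.g. ["anchor 1", "anchor 3", "anchor 4"]
        self.index = 0                      # 0 = starting AR live-action image

    def controls(self):
        """Which switching controls to render at the current image."""
        return {
            "first_switching_control": self.index < len(self.anchor_points) - 1,
            "second_switching_control": self.index > 0,
        }

    def forward(self):
        """First switching control triggered: advance one position point."""
        if self.index < len(self.anchor_points) - 1:
            self.index += 1
        return self.anchor_points[self.index]

    def back(self):
        """Second switching control triggered: return one position point."""
        if self.index > 0:
            self.index -= 1
        return self.anchor_points[self.index]

switcher = GuideRouteSwitcher(["anchor 1", "anchor 3", "anchor 4"])
print(switcher.forward())   # "anchor 3": the intermediate AR live-action image
print(switcher.controls())  # both controls rendered at the intermediate image
```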
For example, the user performs AR scanning on the AR entity object at anchor point 1 in the subway station with the user terminal and selects the barrier-free elevator as the destination, and the target guide route taking anchor point 1 as the starting point and the barrier-free elevator as the end point is determined to pass three anchor points: anchor point 1, anchor point 3, and anchor point 4. When the starting AR live-action image of anchor point 1 is displayed to the user, the AR material identifier of anchor point 3 and a forward control are displayed at the corresponding position; when the user triggers the forward control on the starting AR live-action image display page, a forward switching instruction submitted by the user terminal is received, and the intermediate AR live-action image of anchor point 3 determined from that instruction is sent to the user terminal. In the intermediate AR live-action image, the AR material identifier of anchor point 4, a forward control, and a backward control are displayed at the corresponding positions; when the user triggers the forward control on the intermediate AR live-action image display page, a forward switching instruction is received and the terminal AR live-action image of anchor point 4 determined from it is sent to the user terminal, while triggering the backward control on that page causes a backward switching instruction to be received and the starting AR live-action image of anchor point 1 to be sent. The backward control is displayed in the terminal AR live-action image; when the user triggers it on the terminal AR live-action image display page, a backward switching instruction is received and the intermediate AR live-action image of anchor point 3 determined from it is sent to the user terminal.
It should be noted that, to avoid the recommended destinations in both the starting AR live-action image and the intermediate AR live-action images overwhelming the user and degrading the user experience, in an optional implementation manner provided by this embodiment, when the AR live-action image is the starting AR live-action image, the AR live-action image rendered with the recommended destination identifiers is sent to the user terminal; when the AR live-action image is an intermediate AR live-action image, the AR live-action image without the rendered recommended destination identifiers is sent to the user terminal.
Following the above example, the recommended destination identifiers are rendered in the starting AR live-action image, where the four recommended destinations of toilet, ticket office, bus station, and exhibition hall are displayed at the corresponding positions; once the view is switched to an intermediate or terminal AR live-action image, only the display content of the destination display area, the route, and the arrow information are displayed, and the recommended destination information is not.
The following describes, with reference to fig. 7, the AR-based route display processing method provided in this embodiment, taking its application in a subway station scene as an example. Referring to fig. 7, the AR-based route display processing method applied to the subway station scene specifically includes steps S702 to S714 (a consolidated sketch of the flow follows the steps).
Step S702, receiving the position code of the preset anchor point submitted by the user terminal.
Step S704, querying a destination identification list from the third-party server according to the position code.
Step S706, sending the destination identification list to the user terminal.
Step S708, receiving the user destination identifier submitted by the user terminal.
Step S710, using the preset anchor point as a starting point and the user destination identifier as an ending point, querying the live-action map and the route data packet from the third-party server.
Step S712, determining a target guide route according to the route data packet.
Step S714, performing rendering processing on the live-action image and the target guidance route, and sending the AR live-action image obtained by rendering to the user terminal.
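A consolidated sketch of steps S702 to S714, with `terminal`, `third_party`, and `render` as hypothetical collaborators standing in for the interactions described above:

```python
def ar_route_display_flow(terminal, third_party, render):
    """End-to-end server-side flow of fig. 7 (all collaborators hypothetical)."""
    position_code = terminal.receive_position_code()                   # S702
    destination_list = third_party.query_destinations(position_code)   # S704
    terminal.send(destination_list)                                    # S706
    user_destination = terminal.receive_destination()                  # S708
    live_action, route_packet = third_party.query_live_action_and_route(
        start=position_code, end=user_destination)                     # S710
    target_route = route_packet["navigation_line"]                     # S712
    terminal.send(render(live_action, target_route))                   # S714
```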
To sum up, in the AR-based route display processing method provided in this embodiment, a position code submitted by the user terminal after AR scanning and identification of the AR entity object configured at a preset position point is first received; a destination identifier list corresponding to the preset position point is then determined based on the position code and sent to the user terminal, and the user destination identifier submitted by the user terminal is received; next, the live-action map of the preset position point is determined, and the target guide route is determined from the preset position point and the user destination identifier; finally, rendering is performed based on the live-action image and the target guide route, and the rendered AR live-action image is sent to the user terminal. AR navigation is thereby provided to users in areas with weak GPS signals, the user experience of AR navigation is improved, and cost is saved by shooting the live-action images in advance.
The embodiment of the AR-based route display processing apparatus provided in the present specification is as follows:
in the foregoing embodiment, an AR-based route display processing method applied to a user terminal is provided, and correspondingly, an AR-based route display processing apparatus operating in the user terminal is also provided, which is described below with reference to the accompanying drawings.
Referring to fig. 8, a schematic diagram of an AR-based route display processing apparatus provided in this embodiment is shown.
Since the device embodiments correspond to the method embodiments, the description is relatively simple, and the relevant portions only need to refer to the corresponding description of the method embodiments provided above. The device embodiments described below are merely illustrative.
The present embodiment provides an AR-based route display processing apparatus, operating in a user terminal, including:
a location submission module 802 configured to perform AR scanning on an AR entity object configured at a preset location point, identify an object image obtained by the scanning, and submit a location code obtained by the identification to a server;
a list presentation module 804 configured to receive and present a destination identification list returned by the server for the position code;
a destination submission module 806 configured to submit the user destination identification selected by the user through the destination identification list to the server;
an image display module 808, configured to receive and display an AR live-action image obtained by rendering the live-action map of the preset location point and the target guiding route by the server.
Optionally, the target guidance route is determined based on the preset location point and the user destination identifier; the target guiding route is an identification line with a direction identification, wherein the identification line takes the preset position point as a starting point and the user destination identification as an end point.
Optionally, the image display module 808 includes:
a data packet determination submodule configured to receive and display an AR (augmented reality) live-action image obtained by the server rendering the live-action image and a target guide route determined from the route data packet of the target guide route; the target guide route is an identification line with a direction identifier pointing from the preset position point to the user destination identifier.
Optionally, the image display module 808 includes:
a display area display submodule configured to receive and display the AR live-action image rendered with a destination display area corresponding to the user destination identifier; wherein the display content of the destination display area comprises at least one of the following items: the user destination identifier, specific information of the user destination, and the distance to the user destination.
Optionally, the image display module 808 includes:
an image receiving sub-module configured to receive the AR live-action image;
and the direction determining submodule is configured to adjust the display area of the AR live-action image based on direction change data of a gyroscope of the user terminal and display the adjusted AR live-action image.
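A direction-determining step of this kind can be sketched as mapping the gyroscope heading to a horizontal window into an equirectangular 360-degree live-action image; the mapping below is an illustrative assumption:

```python
def viewport_from_heading(panorama_width: int, view_width: int,
                          heading_deg: float) -> tuple:
    """Map the terminal's heading (0-360 degrees) to the horizontal pixel
    window of the 360-degree live-action image to display."""
    center = (heading_deg % 360.0) / 360.0 * panorama_width
    left = int(center - view_width / 2) % panorama_width
    return left, (left + view_width) % panorama_width  # window may wrap around

# e.g. an 8192-px panorama viewed through a 1080-px window while facing 90 degrees:
print(viewport_from_heading(8192, 1080, 90.0))  # -> (1508, 2588)
```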
Optionally, the list displaying module 804 includes:
a list receiving submodule configured to receive the destination identification list;
the classification submodule is configured to classify the destination identifiers contained in the destination identifier list according to preset classification conditions;
and the recommendation page generation sub-module is configured to generate and display a destination recommendation page based on the classification result of the classification processing.
Optionally, the image display module 808 includes:
a recommendation identifier presentation sub-module configured to receive and present the AR live-action image rendered in the corresponding direction with the corresponding recommendation destination identifier;
a destination change request submitting submodule configured to submit a destination change request to the server if it is detected that the recommended destination identifier is triggered;
and the recommended image receiving submodule is configured to receive and display a recommended AR live-action image obtained by rendering the live-action image and the recommended guide route by the server.
Optionally, the image display module 808 includes:
a position point identifier display submodule configured to receive and display the AR live-action image rendered with a first identifier of the AR entity object at the first position point and a first switching control; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold;
an image change request submission sub-module configured to submit a live-action change request to the server if it is detected that the first switching control is triggered;
the first image receiving submodule is configured to receive and display a first AR live-action image returned by the server; the first AR live-action image is obtained by rendering, by the server, the live-action map of the first position point and a first guide route determined based on the first position point and the user destination identifier.
Optionally, the target guidance route passes through the intermediate location point; rendering an intermediate identifier of the AR entity object of the intermediate position point and a first switching control in the AR live-action image;
submitting a first switching instruction to the server if it is detected that a first switching control is triggered;
and receiving and displaying an intermediate AR live-action image returned by the server based on the first switching instruction, the intermediate AR live-action image being obtained by rendering the live-action image of the intermediate position point and a first guide route determined with the intermediate position point as the starting point and the user destination identifier as the end point.
Optionally, the first switching control and the second switching control are rendered in the intermediate AR live-action image;
submitting a second switching instruction to the server under the condition that the second switching control is detected to be triggered;
and receiving and displaying an AR (augmented reality) live-action image obtained by rendering the live-action image of the intermediate position point and a second guide route determined based on the intermediate position point and a second position point by the server.
Optionally, the image display module 808 includes:
a starting image receiving submodule configured to receive and display an AR live-action image rendered with a recommended destination identifier, when the AR live-action image is a starting AR live-action image;
and the intermediate image receiving submodule is configured to receive and display the AR live-action image without the rendered recommended destination identifier when the AR live-action image is an intermediate AR live-action image.
Optionally, the AR-based route display processing apparatus further includes:
a recommended image presentation module configured to present a recommended AR live-action image obtained by rendering the live-action image and the recommended guidance route, in a case where a recommended destination identifier rendered in the AR live-action image is triggered;
a first image presentation module configured to present a first AR live-action image of a first position point if a first switching control rendered in the AR live-action image is triggered; the first AR live-action image is obtained by rendering the live-action image of the first position point and a first guide route determined based on the first position point and the user destination identifier; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold.
An embodiment of an AR-based route guidance processing apparatus provided in this specification is as follows:
in the above embodiments, an AR-based route guidance processing method applied to a server is provided, and correspondingly, an AR-based route guidance processing apparatus operating on the server is also provided, which is described below with reference to the accompanying drawings.
Referring to fig. 9, a schematic diagram of an AR-based route guidance processing apparatus according to the present embodiment is shown.
Since the device embodiments correspond to the method embodiments, the description is relatively simple, and the relevant portions only need to refer to the corresponding description of the method embodiments provided above. The device embodiments described below are merely illustrative.
The present embodiment provides an AR-based route guidance processing apparatus, running in a server, including:
a location receiving module 902 configured to receive a location code submitted by a user terminal after AR scanning and identification are performed on an AR entity object configured at a preset location point;
a list sending module 904, configured to determine a destination identifier list corresponding to the preset location point based on the location code, send the destination identifier list to the user terminal, and receive a user destination identifier submitted by the user terminal;
a route determination module 906 configured to determine a live-action map of the preset location point and determine a target guide route according to the preset location point and the user destination identifier;
an image rendering module 908 configured to perform rendering processing based on the live-action map and the target guidance route, and send an AR live-action image obtained by rendering to the user terminal.
Optionally, the position receiving module 902 is specifically configured to receive a position code obtained and submitted after the user terminal identifies an object image obtained by AR scanning of the AR entity object.
Optionally, the list sending module 904 includes:
the request generation submodule is configured to generate a query request based on the position code and send the query request to a third-party server; the third-party server inquires a destination identification list corresponding to the position code based on the inquiry request and returns the destination identification list to the server;
and the list receiving submodule is configured to receive the destination identification list returned by the third-party server and send the destination identification list to the user terminal.
Optionally, the route determining module 906 includes:
the live-action picture query request sending sub-module is configured to generate a live-action picture query request based on the position code and send the live-action picture query request to a third-party server;
and the live-action receiving submodule is configured to receive the live-action returned by the third-party server based on the live-action query request.
Optionally, the route determining module 906 includes:
the query request sending sub-module is configured to generate a query request by taking the preset position point as a starting point and the user destination identification as an end point and send the query request to a third-party server;
a data packet receiving submodule configured to receive a route data packet of the target guidance route returned by the third-party server;
a route generation submodule configured to determine the target guidance route based on the route data packet.
Optionally, a first identifier of the AR entity object at the first position point and the first switching control are rendered in the AR live-action image; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold;
receiving a live-action picture change request submitted by the user terminal under the condition that a first switching control is triggered;
and rendering, based on the live-action image change request, the live-action image of the first position point and a first guide route determined from the first position point and the user destination identifier, and sending the rendered first AR live-action image to the user terminal.
Optionally, the target guidance route passes through the intermediate location point;
rendering an intermediate identifier of an AR entity object of an intermediate position point and a first switching control in the AR live-action image;
receiving a first switching instruction submitted by the user terminal under the condition that the first switching control is triggered;
and rendering, based on the first switching instruction, the live-action image of the intermediate position point and a first guide route determined with the intermediate position point as the starting point and the user destination identifier as the end point, and sending the rendered intermediate AR live-action image to the user terminal.
Optionally, the first switching control and the second switching control are rendered in the intermediate AR live-action image;
receiving a second switching instruction submitted by the user terminal under the condition that a second switching control is detected to be triggered;
and rendering, based on the second switching instruction, the live-action image of the intermediate position point and a second guide route determined from the intermediate position point and a second position point, and sending the rendered AR live-action image to the user terminal.
Optionally, the image rendering module 908 includes:
a starting image rendering submodule configured to send an AR live-action image rendered with a recommended destination identifier to the user terminal, when the AR live-action image is a starting AR live-action image;
and the intermediate image rendering submodule is configured to send the AR live-action image without the rendered recommended destination identifier to the user terminal when the AR live-action image is an intermediate AR live-action image.
Optionally, the image rendering module 908 includes:
a recommendation identifier rendering submodule configured to send the AR live-action image rendered in the corresponding direction with the corresponding recommendation destination identifier to the user terminal;
a destination change submodule configured to receive a destination change request submitted by the user terminal in a case where the recommended destination identifier is triggered;
and the recommended image rendering submodule is configured to perform rendering processing based on the live-action picture and the recommended guide route, and send the rendered recommended AR live-action picture to the user terminal.
Optionally, the image rendering module 908 includes:
a display area rendering submodule configured to send the AR live-action image rendered with a destination display area corresponding to the user destination identifier to the user terminal; wherein the display content of the destination display area comprises at least one of the following items: the user destination identifier, specific information of the user destination, and the distance to the user destination.
The embodiment of the route display processing device based on the AR provided by the present specification is as follows:
corresponding to the above-described route display processing method based on AR, based on the same technical concept, one or more embodiments of the present specification further provide an AR-based route display processing device, where the AR-based route display processing device is configured to execute the above-described route display processing method based on AR, and fig. 10 is a schematic structural diagram of an AR-based route display processing device provided in one or more embodiments of the present specification.
As shown in fig. 10, the AR-based route display processing device may vary considerably depending on configuration or performance, and may include one or more processors 1001 and a memory 1002, and one or more stored applications or data may be stored in the memory 1002. The memory 1002 may be transient storage or persistent storage. The application stored in the memory 1002 may include one or more modules (not shown), and each module may include a series of computer-executable instructions in the AR-based route display processing device. Still further, the processor 1001 may be configured to communicate with the memory 1002 to execute, on the AR-based route display processing device, a series of computer-executable instructions in the memory 1002. The AR-based route display processing device may also include one or more power supplies 1003, one or more wired or wireless network interfaces 1004, one or more input-output interfaces 1005, one or more keyboards 1006, and the like.
In one specific embodiment, the AR-based route display processing device includes a memory and one or more programs, wherein the one or more programs are stored in the memory and may include one or more modules, each module may include a series of computer-executable instructions for the AR-based route display processing device, and execution of the one or more programs by one or more processors includes computer-executable instructions for:
performing AR scanning on an AR entity object configured at a preset position point, identifying an object image obtained by scanning, and submitting a position code obtained by identification to a server;
receiving and displaying a destination identification list returned by the server aiming at the position code;
submitting a user destination identification selected by a user through the destination identification list to the server;
and receiving and displaying an AR live-action image obtained by rendering the live-action image of the preset position point and the target guide route by the server.
Optionally, the receiving and displaying the AR live-action image obtained by rendering the live-action image of the preset position point and the target guiding route by the server includes:
receiving and displaying the AR live-action image rendered with the corresponding recommended destination identifier in the corresponding direction;
submitting a destination change request to the server if it is detected that the recommended destination identifier is triggered;
and receiving and displaying a recommended AR live-action image obtained by rendering the live-action image and the recommended guide route by the server.
Optionally, the receiving and displaying the AR live-action image obtained by rendering the live-action image of the preset position point and the target guiding route by the server includes:
receiving and displaying the AR live-action image rendered with a first identifier of the AR entity object at a first position point and a first switching control; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold;
submitting a live-action change request to the server if it is detected that the first switching control is triggered;
receiving and displaying a first AR live-action image returned by the server; the first AR live-action image is obtained by rendering, by the server, a live-action map of the first location point and a first guide route determined based on the first location point and the user destination identification.
Optionally, the target guidance route passes through the intermediate location point; rendering an intermediate identifier of the AR entity object of the intermediate position point and a first switching control in the AR live-action image;
submitting a first switching instruction to the server if it is detected that a first switching control is triggered;
and receiving and displaying an intermediate AR live-action image returned by the server based on the first switching instruction, the intermediate AR live-action image being obtained by rendering the live-action image of the intermediate position point and a first guide route determined with the intermediate position point as the starting point and the user destination identifier as the end point.
Optionally, the first switching control and the second switching control are rendered in the intermediate AR live-action image;
submitting a second switching instruction to the server under the condition that the second switching control is detected to be triggered;
and receiving and displaying an AR (augmented reality) live-action image obtained by rendering the live-action image of the intermediate position point and a second guide route determined based on the intermediate position point and a second position point by the server.
An embodiment of an AR-based route guidance processing device provided in this specification is as follows:
corresponding to the above-described AR-based route guidance processing method, based on the same technical concept, one or more embodiments of the present specification further provide an AR-based route guidance processing device for executing the above-described AR-based route guidance processing method, and fig. 11 is a schematic structural diagram of the AR-based route guidance processing device provided in one or more embodiments of the present specification.
As shown in fig. 11, the AR-based route guidance processing device may vary considerably depending on configuration or performance, and may include one or more processors 1101 and a memory 1102, and one or more stored applications or data may be stored in the memory 1102. The memory 1102 may be transient storage or persistent storage. The application stored in the memory 1102 may include one or more modules (not shown), and each module may include a series of computer-executable instructions in the AR-based route guidance processing device. Still further, the processor 1101 may be configured to communicate with the memory 1102 to execute, on the AR-based route guidance processing device, a series of computer-executable instructions in the memory 1102. The AR-based route guidance processing device may also include one or more power supplies 1103, one or more wired or wireless network interfaces 1104, one or more input-output interfaces 1105, one or more keyboards 1106, and the like.
In one particular embodiment, an AR-based route guidance processing apparatus includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the AR-based route guidance processing apparatus, and execution of the one or more programs by one or more processors includes computer-executable instructions for:
receiving a position code submitted by a user terminal after AR scanning and identification are carried out on an AR entity object configured on a preset position point;
based on the position code, determining that a destination identification list corresponding to the preset position point is sent to the user terminal, and receiving a user destination identification submitted by the user terminal;
determining a live-action picture of the preset position point, and determining a target guide route according to the preset position point and the user destination identification;
rendering processing is carried out based on the live-action image and the target guide route, and an AR live-action image obtained through rendering is sent to the user terminal.
Optionally, a first identifier of the AR entity object at the first position point and the first switching control are rendered in the AR live-action image; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold;
receiving a live-action picture change request submitted by the user terminal under the condition that a first switching control is triggered;
and rendering, based on the live-action image change request, the live-action image of the first position point and a first guide route determined from the first position point and the user destination identifier, and sending the rendered first AR live-action image to the user terminal.
Optionally, the target guidance route passes through the intermediate location point;
rendering an intermediate identifier of an AR entity object of an intermediate position point and a first switching control in the AR live-action image;
receiving a first switching instruction submitted by the user terminal under the condition that the first switching control is triggered;
and rendering, based on the first switching instruction, the live-action image of the intermediate position point and a first guide route determined with the intermediate position point as the starting point and the user destination identifier as the end point, and sending the rendered intermediate AR live-action image to the user terminal.
Optionally, the first switching control and the second switching control are rendered in the intermediate AR live-action image;
receiving a second switching instruction submitted by the user terminal under the condition that a second switching control is detected to be triggered;
and rendering, based on the second switching instruction, the live-action image of the intermediate position point and a second guide route determined from the intermediate position point and a second position point, and sending the rendered AR live-action image to the user terminal.
Optionally, the sending the AR live-action image obtained by rendering to the user terminal includes:
sending the AR live-action image rendered with the destination display area corresponding to the user destination identifier to the user terminal;
wherein the display content of the destination display area comprises at least one of the following items: the user destination identifier, specific information of the user destination, and the distance to the user destination.
An embodiment of a storage medium provided in this specification is as follows:
on the basis of the same technical concept, one or more embodiments of the present specification further provide a storage medium corresponding to the above-described AR-based route display processing method.
The storage medium provided in this embodiment is used to store computer-executable instructions, and when executed, the computer-executable instructions implement the following processes:
performing AR scanning on an AR entity object configured at a preset position point, identifying an object image obtained by scanning, and submitting a position code obtained by identification to a server;
receiving and displaying a destination identification list returned by the server aiming at the position code;
submitting a user destination identification selected by a user through the destination identification list to the server;
and receiving and displaying an AR live-action image obtained by rendering the live-action image of the preset position point and the target guide route by the server.
Optionally, the receiving and displaying the AR live-action image obtained by rendering the live-action image of the preset position point and the target guiding route by the server includes:
receiving and displaying the AR live-action image rendered with the corresponding recommended destination identifier in the corresponding direction;
submitting a destination change request to the server if it is detected that the recommended destination identifier is triggered;
and receiving and displaying a recommended AR live-action image obtained by rendering the live-action image and the recommended guide route by the server.
Optionally, the receiving and displaying the AR live-action image obtained by rendering the live-action image of the preset position point and the target guiding route by the server includes:
receiving and displaying the AR live-action image rendered with a first identifier of the AR entity object at a first position point and a first switching control; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold;
submitting a live-action change request to the server if it is detected that the first switching control is triggered;
receiving and displaying a first AR live-action image returned by the server; the first AR live-action image is obtained by rendering, by the server, a live-action map of the first location point and a first guide route determined based on the first location point and the user destination identification.
Optionally, the target guidance route passes through the intermediate location point; rendering an intermediate identifier of the AR entity object of the intermediate position point and a first switching control in the AR live-action image;
submitting a first switching instruction to the server if it is detected that a first switching control is triggered;
and receiving and displaying an intermediate AR live-action image returned by the server based on the first switching instruction, the intermediate AR live-action image being obtained by rendering the live-action image of the intermediate position point and a first guide route determined with the intermediate position point as the starting point and the user destination identifier as the end point.
Optionally, the first switching control and the second switching control are rendered in the intermediate AR live-action image;
submitting a second switching instruction to the server under the condition that the second switching control is detected to be triggered;
and receiving and displaying an AR (augmented reality) live-action image obtained by rendering the live-action image of the intermediate position point and a second guide route determined based on the intermediate position point and a second position point by the server.
Another storage medium embodiment provided in this specification is as follows:
in correspondence to the above-described AR-based route guidance processing method, based on the same technical idea, one or more embodiments of the present specification further provide a storage medium.
The storage medium provided in this embodiment is used to store computer-executable instructions, and when executed, the computer-executable instructions implement the following processes:
receiving a position code submitted by a user terminal after AR scanning and identification are carried out on an AR entity object configured on a preset position point;
based on the position code, determining that a destination identification list corresponding to the preset position point is sent to the user terminal, and receiving a user destination identification submitted by the user terminal;
determining a live-action picture of the preset position point, and determining a target guide route according to the preset position point and the user destination identification;
rendering processing is carried out based on the live-action image and the target guide route, and an AR live-action image obtained through rendering is sent to the user terminal.
Optionally, a first identifier of the AR entity object at the first position point and the first switching control are rendered in the AR live-action image; the first position point is a preset position point whose distance from the target guide route is smaller than a preset threshold;
receiving a live-action picture change request submitted by the user terminal under the condition that a first switching control is triggered;
and rendering, based on the live-action image change request, the live-action image of the first position point and a first guide route determined from the first position point and the user destination identifier, and sending the rendered first AR live-action image to the user terminal.
Optionally, the target guidance route passes through the intermediate location point;
rendering an intermediate identifier of an AR entity object of an intermediate position point and a first switching control in the AR live-action image;
receiving a first switching instruction submitted by the user terminal under the condition that the first switching control is triggered;
and rendering, based on the first switching instruction, the live-action image of the intermediate position point and a first guide route determined with the intermediate position point as the starting point and the user destination identifier as the end point, and sending the rendered intermediate AR live-action image to the user terminal.
Optionally, the first switching control and the second switching control are rendered in the intermediate AR live-action image;
receiving a second switching instruction submitted by the user terminal under the condition that a second switching control is detected to be triggered;
and rendering, based on the second switching instruction, the live-action image of the intermediate position point and a second guide route determined from the intermediate position point and a second position point, and sending the rendered AR live-action image to the user terminal.
Optionally, the sending the AR live-action image obtained by rendering to the user terminal includes:
sending the AR live-action image rendered with the destination display area corresponding to the user destination identifier to the user terminal;
wherein the display content of the destination display area comprises at least one of the following items: the user destination identifier, specific information of the user destination, and the distance to the user destination.
It should be noted that the storage medium embodiments in this specification and the corresponding method embodiments in this specification are based on the same inventive concept, so for the specific implementation of these embodiments, reference may be made to the implementation of the foregoing corresponding methods; repeated details are not described here.
Specific embodiments of the present specification have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement in a technology could clearly be distinguished as an improvement in hardware (for example, an improvement in circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement in a method flow). With the development of technology, however, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer integrates a digital system onto a single PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained merely by slightly logically programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Or even the means for performing the functions may be regarded as being both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in multiple software and/or hardware when implementing the embodiments of the present description.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, a Random Access Memory (RAM), and/or a non-volatile memory, such as a Read Only Memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above descriptions are merely examples of the present specification and are not intended to limit the present specification. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present specification shall be included within the scope of the claims of the present specification.

Claims (29)

1. A route display processing method based on AR is applied to a user terminal and comprises the following steps:
performing AR scanning on an AR entity object configured at a preset position point, identifying an object image obtained by scanning, and submitting a position code obtained by identification to a server;
receiving and displaying a destination identification list returned by the server aiming at the position code;
submitting a user destination identification selected by a user through the destination identification list to the server;
and receiving and displaying an AR live-action image obtained by rendering the live-action image of the preset position point and the target guide route by the server.
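For illustration, the four client-side steps of claim 1 can be sketched in Python as follows; the transport, endpoint paths, and every function name are assumptions, and the collaborators are injected as callables so the flow runs without a real server.

    def client_flow(scanned_image, recognize, post, choose):
        # recognize(image) -> position code; post(path, payload) -> server response;
        # choose(identifier list) -> user destination identifier. All hypothetical.
        # 1. AR-scan the entity object, identify the object image, obtain the position code.
        position_code = recognize(scanned_image)
        # 2. Submit the position code; receive the destination identification list.
        destinations = post("/position", {"code": position_code})
        # 3. Submit the user destination identification selected from the displayed list.
        user_destination = choose(destinations)
        # 4. Receive the AR live-action image rendered with the target guide route.
        return post("/route", {"code": position_code, "destination": user_destination})

    # Stub collaborators so the flow can be exercised end to end.
    fake_post = lambda path, payload: (["Coffee Shop A", "Cinema"] if path == "/position"
                                       else b"<rendered AR live-action image>")
    image = client_flow(b"scan-bytes", lambda img: "P-017", fake_post, lambda ds: ds[0])
    print(image)  # b'<rendered AR live-action image>'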
2. The AR-based route presentation processing method according to claim 1, the target guidance route being determined based on the preset location point and the user destination identification;
the target guiding route is an identification line with a direction identification, wherein the identification line takes the preset position point as a starting point and the user destination identification as an end point.
3. The AR-based route presentation processing method according to claim 1, wherein the receiving and presenting an AR live-action image obtained by rendering a live-action image of the preset location point and a target guidance route by the server includes:
receiving and displaying an AR live-action image obtained by rendering the live-action image and a target guide route determined according to a route data packet of the target guide route by the server;
the target guiding route is an identification line with a direction identification, and the direction identification points to the user destination identification from the preset position point.
4. The AR-based route presentation processing method according to claim 1, wherein the receiving and presenting an AR live-action image obtained by rendering a live-action image of the preset location point and a target guidance route by the server includes:
receiving and displaying the AR live-action image rendered with a destination display area corresponding to the user destination identifier;
wherein the display content of the destination display area comprises at least one of the following items: the user destination identifier, detailed information of the user destination corresponding to the user destination identifier, and the distance to the user destination.
5. The AR-based route presentation processing method according to claim 1, wherein the receiving and presenting an AR live-action image obtained by rendering a live-action image of the preset location point and a target guidance route by the server includes:
receiving the AR live-action image;
and adjusting the display area of the AR live-action image based on the direction change data of the gyroscope of the user terminal, and displaying the adjusted AR live-action image.
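For illustration, one way a terminal might turn gyroscope direction-change data into an adjusted display area is the following Python sketch; the linear degrees-per-pixel mapping and all names are assumptions.

    def adjust_display_area(yaw_deg, image_width, viewport_width, degrees_per_pixel=0.1):
        # Map a yaw change (in degrees) to a horizontal window into the wider
        # rendered AR live-action image, clamped to the image bounds.
        center = image_width // 2 + int(yaw_deg / degrees_per_pixel)
        left = max(0, min(image_width - viewport_width, center - viewport_width // 2))
        return left, left + viewport_width

    # Turning the terminal 15 degrees to the right slides the viewport rightward.
    print(adjust_display_area(15.0, image_width=4000, viewport_width=1080))  # (1610, 2690)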
6. The AR-based route presentation processing method according to claim 1, said receiving and presenting a list of destination identifications returned by said server for said location code, comprising:
receiving the destination identification list;
classifying the destination identifiers contained in the destination identifier list according to preset classification conditions;
and generating and displaying a destination recommendation page based on the classification result of the classification processing.
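For illustration, the classification step of claim 6 can be sketched in Python as a grouping pass; the category table below stands in for the preset classification conditions and is purely hypothetical.

    from collections import defaultdict

    def classify_destinations(destinations, category_of):
        # Group destination identifiers by the preset classification condition.
        groups = defaultdict(list)
        for dest in destinations:
            groups[category_of(dest)].append(dest)
        return dict(groups)

    CATEGORIES = {"Coffee Shop A": "food", "Restroom 2F": "facility", "Cinema": "entertainment"}
    page = classify_destinations(CATEGORIES.keys(), CATEGORIES.get)
    for category, items in page.items():
        print(category, items)  # one section of the destination recommendation page each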
7. The AR-based route presentation processing method according to claim 1, wherein the receiving and presenting an AR live-action image obtained by rendering a live-action image of the preset location point and a target guidance route by the server includes:
receiving and displaying the AR live-action image rendered with the corresponding recommended destination identifier in the corresponding direction;
submitting a destination change request to the server if it is detected that the recommended destination identifier is triggered;
and receiving and displaying a recommended AR live-action image obtained by rendering the live-action image and the recommended guide route by the server.
8. The AR-based route presentation processing method according to claim 1, wherein the receiving and presenting an AR live-action image obtained by rendering a live-action image of the preset location point and a target guidance route by the server includes:
receiving and displaying the AR live-action image rendered with a first identifier of an AR entity object of a first position point and a first switching control; the first position point is a preset position point, and the distance between the first position point and the target guide route is smaller than a preset threshold value;
submitting a live-action change request to the server if it is detected that the first switching control is triggered;
receiving and displaying a first AR live-action image returned by the server; the first AR live-action image is obtained by rendering, by the server, a live-action map of the first location point and a first guide route determined based on the first location point and the user destination identification.
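For illustration, the condition in claim 8 that a first position point lie within a preset threshold distance of the target guide route reduces to a point-to-polyline distance; the coordinates and threshold in this Python sketch are hypothetical.

    import math

    def distance_to_route(point, route):
        # Minimum distance from a position point to the guide route (a polyline).
        def seg_dist(p, a, b):
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return math.hypot(px - ax, py - ay)
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))
        return min(seg_dist(point, a, b) for a, b in zip(route, route[1:]))

    route = [(0.0, 0.0), (3.5, 0.0), (3.5, 8.0)]
    THRESHOLD_M = 2.0  # hypothetical preset threshold value
    print(distance_to_route((3.0, 1.0), route) < THRESHOLD_M)  # True: render the first identifier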
9. The AR-based route presentation processing method of claim 1, further comprising:
submitting a first switching instruction to the server if it is detected that a first switching control is triggered;
receiving and displaying an intermediate AR live-action image returned by the server based on the first switching instruction, wherein the intermediate AR live-action image is obtained by rendering a live-action image of an intermediate position point and a first guide route determined by taking the intermediate position point as a starting point and the user destination identification as an end point;
wherein the target guidance route passes through the intermediate position point; and an intermediate identifier of the AR entity object of the intermediate position point and the first switching control are rendered in the AR live-action image.
10. The AR-based route presentation processing method of claim 9, further comprising:
submitting a second switching instruction to the server under the condition that the second switching control is detected to be triggered;
receiving and displaying an AR (augmented reality) live-action image obtained by rendering the live-action image of the intermediate position point and a second guide route determined based on the intermediate position point and a second position point by the server;
wherein the first toggle control and the second toggle control are rendered in the intermediate AR live view image.
11. The AR-based route presentation processing method according to claim 1, wherein the receiving and presenting an AR live-action image obtained by rendering a live-action image of the preset location point and a target guidance route by the server includes:
receiving and displaying an AR live-action image rendered with a recommended destination identifier under the condition that the AR live-action image is an initial AR live-action image;
and receiving and displaying an AR live-action image in which no recommended destination identifier is rendered under the condition that the AR live-action image is an intermediate AR live-action image.
12. The AR-based route presentation processing method according to claim 1, after the step of receiving and presenting the AR live-action image obtained by rendering the live-action image of the preset location point and the target guidance route by the server is executed, further comprising:
displaying a recommended AR live-action image obtained by rendering the live-action image and the recommended guide route under the condition that a recommended destination identifier rendered in the AR live-action image is triggered;
displaying a first AR live-action image of a first location point under the condition that a first switching control rendered in the AR live-action image is triggered; the first AR live-action image is obtained by rendering a live-action image of the first location point and a first guide route determined based on the first location point and the user destination identifier;
the first position point is a preset position point, and the distance between the first position point and the target guide route is smaller than a preset threshold value.
13. An AR-based route guidance processing method is applied to a server and comprises the following steps:
receiving a position code submitted by a user terminal after AR scanning and identification are carried out on an AR entity object configured on a preset position point;
based on the position code, determining a destination identification list corresponding to the preset position point, sending the destination identification list to the user terminal, and receiving a user destination identification submitted by the user terminal;
determining a live-action picture of the preset position point, and determining a target guide route according to the preset position point and the user destination identification;
rendering processing is carried out based on the live-action image and the target guide route, and an AR live-action image obtained through rendering is sent to the user terminal.
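For illustration, the server-side steps of claim 13 are mirrored in the following Python sketch, with the storage, route-planning, and rendering back ends injected as plain callables; every name here is an assumption.

    class RouteGuidanceServer:
        # Minimal sketch; back ends are injected rather than prescribed.
        def __init__(self, destinations_by_code, live_action_by_code, plan_route, render):
            self.destinations_by_code = destinations_by_code  # position code -> identifier list
            self.live_action_by_code = live_action_by_code    # position code -> live-action image
            self.plan_route = plan_route                      # (position code, destination) -> guide route
            self.render = render                              # (live-action image, route) -> AR image

        def on_position_code(self, code):
            # Determine and return the destination identification list for the point.
            return self.destinations_by_code[code]

        def on_destination_selected(self, code, destination_id):
            # Determine the live-action image and target guide route, render, and return.
            live_action = self.live_action_by_code[code]
            route = self.plan_route(code, destination_id)
            return self.render(live_action, route)

    server = RouteGuidanceServer(
        destinations_by_code={"P-017": ["Coffee Shop A", "Cinema"]},
        live_action_by_code={"P-017": "<live-action image P-017>"},
        plan_route=lambda code, dest: "route(%s -> %s)" % (code, dest),
        render=lambda img, route: "AR[%s + %s]" % (img, route),
    )
    print(server.on_position_code("P-017"))
    print(server.on_destination_selected("P-017", "Coffee Shop A"))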
14. The AR-based route guidance processing method according to claim 13, wherein the receiving of the location code submitted by the user terminal after AR scanning and recognition of the AR entity object configured by the preset location point comprises:
and receiving a position code which is obtained and submitted after the user terminal identifies an object image obtained by AR scanning of the AR entity object.
15. The AR-based route guidance processing method according to claim 13, wherein the determining, based on the position code, a destination identification list corresponding to the preset position point and sending the list to the user terminal comprises:
generating a query request based on the position code and sending the query request to a third-party server, wherein the third-party server queries, based on the query request, a destination identification list corresponding to the position code and returns the destination identification list to the server;
and receiving the destination identification list returned by the third-party server and sending the destination identification list to the user terminal.
16. The AR-based route guidance processing method according to claim 13, said determining a live-action map of the preset position point, comprising:
generating a live-action picture query request based on the position code and sending the live-action picture query request to a third-party server;
and receiving the live-action image returned by the third-party server based on the live-action image query request.
17. The AR-based route guidance processing method according to claim 13, said determining a target guidance route from the preset location point and the user destination identification, comprising:
generating a query request by taking the preset position point as a starting point and the user destination identifier as an end point, and sending the query request to a third-party server;
receiving a route data packet of the target guiding route returned by the third-party server;
determining the target guidance route based on the route data packet.
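For illustration, the round trip of claim 17 might look as follows in Python, assuming, purely for the example, that the route data packet is a JSON object carrying an ordered waypoint list.

    import json

    def query_target_route(position_code, destination_id, send):
        # Build the start/end query and decode the returned route data packet.
        request = {"start": position_code, "end": destination_id}
        packet = send(json.dumps(request))  # third-party server round trip (injected)
        return [tuple(p) for p in json.loads(packet)["waypoints"]]

    # Stub third-party server returning a two-segment target guide route.
    fake_send = lambda req: json.dumps({"waypoints": [[0.0, 0.0], [3.5, 0.0], [3.5, 8.0]]})
    print(query_target_route("P-017", "Coffee Shop A", fake_send))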
18. The AR-based route guidance processing method according to claim 13, further comprising:
receiving a live-action picture change request submitted by the user terminal under the condition that a first switching control is triggered;
rendering the live-action image of the first position point and the first guide route determined based on the first position point and the user destination identifier based on the live-action image changing request, and sending a first AR live-action image obtained by rendering to the user terminal;
wherein the AR live-action image is rendered with a first identifier of the AR entity object of the first position point and the first switching control; the first position point is a preset position point, and the distance between the first position point and the target guide route is smaller than a preset threshold value.
19. The AR-based route guidance processing method according to claim 13, further comprising:
rendering an intermediate identifier of an AR entity object of an intermediate position point and a first switching control in the AR live-action image;
receiving a first switching instruction submitted by the user terminal under the condition that the first switching control is triggered;
based on the first switching instruction, rendering the live-action image of the intermediate position point and a first guide route determined by taking the intermediate position point as a starting point and the user destination identifier as an end point, and sending an intermediate AR live-action image obtained by rendering to the user terminal;
wherein the target guidance route passes through the intermediate position point.
20. The AR-based route guidance processing method according to claim 19, further comprising:
receiving a second switching instruction submitted by the user terminal under the condition that a second switching control is detected to be triggered;
based on the second switching instruction, rendering the live-action image of the intermediate position point and a second guide route determined based on the intermediate position point and a second position point, and sending the AR live-action image obtained by rendering to the user terminal;
wherein the first toggle control and the second toggle control are rendered in the intermediate AR live view image.
21. The AR-based route guidance processing method according to claim 13, wherein the sending of the AR live-action image obtained by rendering to the user terminal includes:
sending the AR live-action image rendered with the recommended destination identifier to the user terminal under the condition that the AR live-action image is the initial AR live-action image;
and sending an AR live-action image in which no recommended destination identifier is rendered to the user terminal under the condition that the AR live-action image is an intermediate AR live-action image.
22. The AR-based route guidance processing method according to claim 13, wherein the sending of the AR live-action image obtained by rendering to the user terminal includes:
sending the AR live-action image rendered in the corresponding direction with the corresponding recommended destination identifier to the user terminal;
receiving a destination change request submitted by the user terminal under the condition that the recommended destination identifier is triggered;
rendering processing is carried out based on the live-action picture and the recommended guide route, and the recommended AR live-action picture obtained through rendering is sent to the user terminal.
23. The AR-based route guidance processing method according to claim 13, wherein the sending of the AR live-action image obtained by rendering to the user terminal includes:
sending the AR live-action image rendered with the destination display area corresponding to the user destination identifier to the user terminal;
wherein the display content of the destination display area comprises at least one of the following items: the user destination identifier, detailed information of the user destination corresponding to the user destination identifier, and the distance to the user destination.
24. An AR-based route display processing device, which runs on a user terminal, comprises:
the position submitting module is configured to perform AR scanning on an AR entity object configured at a preset position point, identify an object image obtained by scanning and submit a position code obtained by identification to a server;
the list display module is configured to receive and display a destination identification list returned by the server for the position codes;
a destination submitting module configured to submit a user destination identifier selected by a user through the destination identifier list to the server;
and the image display module is configured to receive and display the AR live-action image obtained by rendering the live-action image of the preset position point and the target guide route by the server.
25. An AR-based route guidance processing device, operating on a server, comprising:
the position receiving module is configured to receive a position code submitted by the user terminal after AR scanning and identification are carried out on an AR entity object configured by a preset position point;
the list sending module is configured to determine a destination identification list corresponding to the preset position point based on the position code, send the destination identification list to the user terminal and receive a user destination identification submitted by the user terminal;
a route determination module configured to determine a live-action map of the preset location point and determine a target guide route according to the preset location point and the user destination identifier;
and the image rendering module is configured to perform rendering processing based on the live-action image and the target guide route, and send the AR live-action image obtained through rendering to the user terminal.
26. An AR-based route display processing device applied to a user terminal comprises:
a processor; and,
a memory configured to store computer-executable instructions that, when executed, cause the processor to:
performing AR scanning on an AR entity object configured at a preset position point, identifying an object image obtained by scanning, and submitting a position code obtained by identification to a server;
receiving and displaying a destination identification list returned by the server aiming at the position code;
submitting a user destination identification selected by a user through the destination identification list to the server;
and receiving and displaying an AR live-action image obtained by rendering the live-action image of the preset position point and the target guide route by the server.
27. An AR-based route guidance processing device applied to a server, comprising:
a processor; and,
a memory configured to store computer-executable instructions that, when executed, cause the processor to:
receiving a position code submitted by a user terminal after AR scanning and identification are carried out on an AR entity object configured on a preset position point;
based on the position code, determining a destination identification list corresponding to the preset position point, sending the destination identification list to the user terminal, and receiving a user destination identification submitted by the user terminal;
determining a live-action picture of the preset position point, and determining a target guide route according to the preset position point and the user destination identification;
rendering processing is carried out based on the live-action image and the target guide route, and an AR live-action image obtained through rendering is sent to the user terminal.
28. A storage medium storing computer-executable instructions that when executed implement the following:
performing AR scanning on an AR entity object configured at a preset position point, identifying an object image obtained by scanning, and submitting a position code obtained by identification to a server;
receiving and displaying a destination identification list returned by the server aiming at the position code;
submitting a user destination identification selected by a user through the destination identification list to the server;
and receiving and displaying an AR live-action image obtained by rendering the live-action image of the preset position point and the target guide route by the server.
29. A storage medium storing computer-executable instructions that when executed implement the following:
receiving a position code submitted by a user terminal after AR scanning and identification are carried out on an AR entity object configured on a preset position point;
based on the position code, determining a destination identification list corresponding to the preset position point, sending the destination identification list to the user terminal, and receiving a user destination identification submitted by the user terminal;
determining a live-action picture of the preset position point, and determining a target guide route according to the preset position point and the user destination identification;
rendering processing is carried out based on the live-action image and the target guide route, and an AR live-action image obtained through rendering is sent to the user terminal.
CN202110068710.8A 2021-01-19 2021-01-19 Route display processing method and device based on AR Pending CN112857391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110068710.8A CN112857391A (en) 2021-01-19 2021-01-19 Route display processing method and device based on AR

Publications (1)

Publication Number Publication Date
CN112857391A (en) 2021-05-28

Family

ID=76007259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110068710.8A Pending CN112857391A (en) 2021-01-19 2021-01-19 Route display processing method and device based on AR

Country Status (1)

Country Link
CN (1) CN112857391A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101641568A (en) * 2007-01-10 2010-02-03 通腾科技股份有限公司 Improved search function for portable navigation device
CN105431708A (en) * 2013-06-17 2016-03-23 索尼公司 Image processing device, image processing method, and program
US20150319335A1 (en) * 2014-04-30 2015-11-05 Hiroyuki Baba Image processing apparatus, image processing method, and recording medium storing an image processing program
CN104596499A (en) * 2014-06-27 2015-05-06 腾讯科技(深圳)有限公司 Method, apparatus and system for navigation through image acquiring
CN108697934A (en) * 2016-04-29 2018-10-23 奥瑞斯玛有限公司 Guidance information related with target image
CN106338291A (en) * 2016-09-28 2017-01-18 珠海市魅族科技有限公司 Information display method and device
CN108663060A (en) * 2017-04-01 2018-10-16 北京搜狗科技发展有限公司 It is a kind of navigation processing method and device, a kind of for the device handled that navigate
CN107782314A (en) * 2017-10-24 2018-03-09 张志奇 A kind of augmented reality indoor positioning air navigation aid based on barcode scanning
US10467518B1 (en) * 2018-11-28 2019-11-05 Walgreen Co. System and method for generating digital content within an augmented reality environment
CN112202894A (en) * 2020-09-30 2021-01-08 支付宝(杭州)信息技术有限公司 Information acquisition method and device and data processing method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113483781A (en) * 2021-06-02 2021-10-08 深圳市御嘉鑫科技股份有限公司 Intelligent multidimensional stereo space GPS navigation system and method
CN113538703A (en) * 2021-06-30 2021-10-22 北京市商汤科技开发有限公司 Data display method and device, computer equipment and storage medium
CN113834495A (en) * 2021-08-20 2021-12-24 阿里巴巴新加坡控股有限公司 Route generation method and device
CN114646320A (en) * 2022-02-09 2022-06-21 江苏泽景汽车电子股份有限公司 Path guiding method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN112857391A (en) Route display processing method and device based on AR
KR101740435B1 (en) Mobile terminal and Method for managing object related information thererof
TW201828233A (en) Method for displaying service object and processing map data, client and server
US20110161856A1 (en) Directional animation for communications
US20100115459A1 (en) Method, apparatus and computer program product for providing expedited navigation
KR20180072833A (en) Gallery of messages with a shared interest
KR20170046675A (en) Providing in-navigation search results that reduce route disruption
CN103226729A (en) Virtual reality-based reservation method and system
CN104135716A (en) Push method and system of interest point information
US20160065529A1 (en) Display control device, display control method, and program
US20110055204A1 (en) Method and apparatus for content tagging in portable terminal
CN107656961B (en) Information display method and device
US20220043164A1 (en) Positioning method, electronic device and storage medium
CN111666029A (en) Vehicle-mounted machine map operation method, device, equipment and readable storage medium
CN113532456A (en) Method and device for generating navigation route
CN111609863A (en) Navigation information generation method and device, electronic equipment and storage medium
US20150058462A1 (en) Content delivery system with content navigation mechanism and method of operation thereof
WO2024037285A1 (en) Place service processing method and apparatus
CN112468970A (en) Campus navigation method based on augmented reality technology
KR101141303B1 (en) Method of advertising based on location search service and system peforming the same
CN113190365B (en) Information processing method and device and electronic equipment
CN110660313A (en) Information presentation method and device
CN114841604A (en) Cooperative task processing method, device and equipment and computer readable storage medium
CN113535285A (en) Interface display method, device, equipment and storage medium
CN114066547A (en) Resource display method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination