CN114237438A - Map data processing method, device, terminal and medium - Google Patents

Map data processing method, device, terminal and medium

Info

Publication number
CN114237438A
CN114237438A
Authority
CN
China
Prior art keywords
function
dimensional map
map model
event
terminal
Prior art date
Legal status
Pending
Application number
CN202111528378.5A
Other languages
Chinese (zh)
Inventor
于越
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Abstract

The application relates to a map data processing method, device, terminal and medium. According to the method and the device, the OSG engine is ported through a target rendering tool class, so that a three-dimensional map model based on the OSG engine can be displayed on a mobile device such as a terminal, and a user can perform interactive operations on the displayed three-dimensional map model. Based on the user's interactive operation, the terminal obtains the screen touch event corresponding to the interactive operation through a screen event obtaining function included in the target rendering tool class, determines the event type of the screen touch event through an event type judging function included in the target rendering tool class, and updates the displayed three-dimensional map model through the touch event processing function corresponding to that event type, so that the displayed three-dimensional map model can be enlarged, reduced, rotated, moved, and so on.

Description

Map data processing method, device, terminal and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a terminal, and a medium for processing map data.
Background
As a graphic language for recording geographic information, the map not only makes travel more convenient but has also driven the development of related technologies such as computer graphics, three-dimensional simulation and virtual reality. These technologies have in turn injected new vitality into the traditional two-dimensional electronic map, giving rise to three-dimensional electronic maps with a more lifelike simulation effect. Through an intuitive, real-scene style of presentation, a three-dimensional electronic map can vividly and faithfully show the user's surroundings and provide functions such as map query and travel navigation. Three-dimensional electronic maps have thus become an important direction in the development of electronic maps.
Disclosure of Invention
The application provides a method, a device, a terminal and a medium for processing map data, so as to advance the development of three-dimensional electronic maps in the related art.
According to a first aspect of an embodiment of the present application, there is provided a method for processing map data, which is applied to a terminal, the method including:
displaying a three-dimensional map model based on an open source three-dimensional rendering OSG engine through a target rendering tool class;
responding to interactive operation on the three-dimensional map model, and acquiring a screen touch event corresponding to the interactive operation through a screen event acquisition function included in the target rendering tool class;
determining the event type of the screen touch event through an event type judgment function included in the target rendering tool class;
and updating the displayed three-dimensional map model through a touch event processing function corresponding to the event type.
In one embodiment of the present application, displaying, by a target rendering tool class, a three-dimensional map model based on an open source three-dimensional rendering OSG engine includes:
starting a drawing thread of a target rendering tool class;
initializing an OSG engine through a view initialization function in a rendering method subclass of a target rendering tool class;
loading the three-dimensional map model based on the OSG through a resource loading function of the target rendering tool class;
rendering the three-dimensional map model through an effect rendering function in a rendering method subclass of the target rendering tool class;
and displaying the rendered three-dimensional map model.
In an embodiment of the application, a terminal uses an Android operating system, a target rendering tool class is a GLSurfaceView class, a screen event obtaining function is an onTouchEvent function, an event type judging function is a changeMode function, a touch event processing function is a preformMode function, a drawing thread is a GLThread, a rendering method subclass is a Render subclass, a view initializing function is an onSurfaceChanged function, a resource loading function is a loadObject function, and an effect rendering function is an onDrawFrame function.
In one embodiment of the present application, displaying, by a target rendering tool class, a three-dimensional map model based on an open source three-dimensional rendering OSG engine includes:
acquiring first position information of a terminal;
and displaying the three-dimensional map model based on the open source three-dimensional rendering OSG engine through the target rendering tool class under the condition that the first position information meets the target condition.
In one embodiment of the application, the updating of the displayed three-dimensional map model comprises at least one of:
magnifying the displayed three-dimensional map model;
zooming out the displayed three-dimensional map model;
rotating the displayed three-dimensional map model;
and moving the displayed three-dimensional map model.
In one embodiment of the present application, the method further comprises:
calling back a register listener function to an interface event processing class in an OSG function through a click event listener function included in a target rendering tool class;
and responding to the click operation on the three-dimensional map model monitored by the register listener function, and acquiring a click event corresponding to the click operation through the event transparent transmission function.
In an embodiment of the present application, the click event monitoring function is a setOnModelClickListener function, the interface event processing class is a GUIEventHandler class, the registration listener function is an onModelClickListener function, and the event transparent transmission function is an onModelClicked function.
In an embodiment of the present application, after obtaining, through an event transparent transfer function, a click event corresponding to the click operation, the method further includes:
and playing the video data corresponding to the clicked object.
In one embodiment of the present application, the method further comprises:
and acquiring the number of queuing people corresponding to the clicked object, and displaying the acquired number of queuing people, wherein the number of queuing people is determined based on the video data corresponding to the clicked object.
In one embodiment of the present application, the process of determining the number of people in line comprises:
acquiring video data corresponding to a clicked object;
acquiring a video picture corresponding to the current time in the video data;
determining the number of faces included in the video picture, and taking the number of faces as the number of people in queue.
In one embodiment of the present application, the method further comprises:
displaying a first function option on the three-dimensional map model;
responding to the triggering operation of the first function option, and determining a first time length based on the first position information of the terminal, second position information of each candidate object and a preset moving speed;
determining a second time length based on a preset time period, the number of queuing people corresponding to each candidate object in two adjacent preset time periods, a preset passing time length and the first time length;
determining a target object based on the first duration and the second duration;
and displaying a route from the position of the terminal to the position of the target object on the three-dimensional map model.
In an embodiment of the present application, determining the first time length based on the first position information of the terminal, the second position information of each candidate object, and the preset moving speed includes:
determining the distance between the terminal and each candidate object based on the first position information and each second position information;
and determining the first time length based on the distance between the terminal and each candidate object and the preset moving speed.
In an embodiment of the application, the determining the second duration based on the preset time period, the number of queuing people of each candidate object in two adjacent preset time periods, the preset passing duration and the first duration includes:
determining the change rate of the number of queued people based on the preset time period and the number of queued people respectively corresponding to each candidate object in two adjacent preset time periods;
and determining a second time length based on the number of queued people of each candidate object in the later of the two adjacent preset time periods, the change rate of the number of queued people, the preset passing time length and the first time length.
In one embodiment of the present application, determining the target object based on the first duration and the second duration includes:
and determining the candidate object with the minimum sum of the first duration and the second duration as the target object.
In one embodiment of the present application, the method further comprises:
displaying a second function option on the three-dimensional map model;
responding to the triggering operation of the second function option, and displaying a first function inlet;
responding to the triggering operation of the first function entrance, and displaying a first information filling interface, wherein the first information filling interface is used for providing a function of reporting the information of the lost object;
and issuing the lost object information in response to the submission operation on the first information filling interface.
In one embodiment of the present application, the method further comprises:
and in response to receiving the missing object clues fed back based on the missing object information, displaying the positions of the missing objects corresponding to the missing object clues on the three-dimensional map model.
In one embodiment of the present application, the method further comprises:
in response to receiving a missing object clue fed back based on the missing object information, a route from a position where the terminal is located to a position of the missing object is displayed on the three-dimensional map model.
In one embodiment of the application, after displaying the second function option on the three-dimensional map model, the method further comprises:
responding to the triggering operation of the second function option, and displaying a second function inlet;
responding to the triggering operation of the second function entrance, and displaying a second information filling interface, wherein the second information filling interface is used for providing a function of uploading a clue of the lost object;
and issuing the missing object clue in response to the submitting operation on the second information filling interface.
According to a second aspect of embodiments of the present application, there is provided a processing apparatus for map data, the apparatus including:
the display unit is used for displaying a three-dimensional map model based on the open source three-dimensional rendering OSG engine through the target rendering tool class;
the acquisition unit is used for responding to the interactive operation on the three-dimensional map model and acquiring a screen touch event corresponding to the interactive operation through a screen event acquisition function included in the target rendering tool class;
the determining unit is used for determining the event type of the screen touch event through an event type judging function included in the target rendering tool class;
and the updating unit is used for updating the displayed three-dimensional map model through the touch event processing function corresponding to the event type.
In an embodiment of the present application, the display unit, when configured to display, through the target rendering tool class, a three-dimensional map model based on an open source three-dimensional rendering OSG engine, is configured to:
starting a drawing thread of a target rendering tool class;
initializing an OSG engine through a view initialization function in a rendering method subclass of a target rendering tool class;
loading the three-dimensional map model based on the OSG through a resource loading function of the target rendering tool class;
rendering the three-dimensional map model through an effect rendering function in a rendering method subclass of the target rendering tool class;
and displaying the rendered three-dimensional map model.
In an embodiment of the application, a terminal uses an Android operating system, a target rendering tool class is a GLSurfaceView class, a screen event obtaining function is an onTouchEvent function, an event type judging function is a changeMode function, a touch event processing function is a preformMode function, a drawing thread is a GLThread, a rendering method subclass is a Render subclass, a view initializing function is an onSurfaceChanged function, a resource loading function is a loadObject function, and an effect rendering function is an onDrawFrame function.
In an embodiment of the present application, the display unit, when configured to display, through the target rendering tool class, a three-dimensional map model based on an open source three-dimensional rendering OSG engine, is configured to:
acquiring first position information of a terminal;
and displaying the three-dimensional map model based on the open source three-dimensional rendering OSG engine through the target rendering tool class under the condition that the first position information meets the target condition.
In an embodiment of the application, the updating unit, when used for updating the displayed three-dimensional map model, is used for at least one of:
magnifying the displayed three-dimensional map model;
zooming out the displayed three-dimensional map model;
rotating the displayed three-dimensional map model;
and moving the displayed three-dimensional map model.
In one embodiment of the present application, the apparatus further comprises:
the call-back unit is used for calling back the register listener function to the interface event processing class in the OSG function through the click event listener function included in the target rendering tool class;
the obtaining unit is further configured to respond to that the click operation on the three-dimensional map model is monitored through the registration listener function, and obtain a click event corresponding to the click operation through the event transparent transmission function.
In an embodiment of the present application, the click event monitoring function is a setOnModelClickListener function, the interface event processing class is a GUIEventHandler class, the registration listener function is an onModelClickListener function, and the event transparent transmission function is an onModelClicked function.
In one embodiment of the present application, the apparatus further comprises:
and the playing unit is used for playing the video data corresponding to the clicked object.
In an embodiment of the application, the obtaining unit is further configured to obtain the number of queued people corresponding to the clicked object;
the display unit is also used for displaying the obtained queuing number;
wherein the number of people queued is determined based on the video data corresponding to the clicked object.
In one embodiment of the present application, the process of determining the number of people in line comprises:
acquiring video data corresponding to a clicked object;
acquiring a video picture corresponding to the current time in the video data;
determining the number of faces included in the video picture, and taking the number of faces as the number of people in queue.
In one embodiment of the application, the display unit is further configured to display a first function option on the three-dimensional map model;
the determining unit is further used for responding to the triggering operation of the first function option, and determining a first time length based on the first position information of the terminal, the second position information of each candidate object and a preset moving speed;
the determining unit is further used for determining a second time length based on the preset time period, the number of queuing people of each candidate object in two adjacent preset time periods, the preset passing time length and the first time length;
the determining unit is further used for determining the target object based on the first time length and the second time length;
the display unit is also used for displaying a route from the position where the terminal is located to the position where the target object is located on the three-dimensional map model.
In an embodiment of the present application, the determining unit, when configured to determine the first time duration based on the first position information of the terminal, the second position information of each candidate object, and the preset moving speed, is configured to:
determining the distance between the terminal and each candidate object based on the first position information and each second position information;
and determining the first time length based on the distance between the terminal and each candidate object and the preset moving speed.
In an embodiment of the application, the determining unit, when configured to determine the second time length based on the preset time period, the number of queuing people corresponding to each candidate object in two adjacent preset time periods, the preset passing time length and the first time length, is configured to:
determining the change rate of the number of queued people based on the preset time period and the number of queued people respectively corresponding to each candidate object in two adjacent preset time periods;
and determining a second time length based on the number of queued people of each candidate object in the later of the two adjacent preset time periods, the change rate of the number of queued people, the preset passing time length and the first time length.
In one embodiment of the present application, the determining unit, when configured to determine the target object based on the first duration and the second duration, is configured to:
and determining the candidate object with the minimum sum of the first duration and the second duration as the target object.
In one embodiment of the application, the display unit is further configured to display a second function option on the three-dimensional map model;
the display unit is also used for responding to the triggering operation of the second function option and displaying the first function entrance;
the display unit is also used for responding to the triggering operation of the first function entrance and displaying a first information filling interface, and the first information filling interface is used for providing a function of reporting the lost object information;
the device also includes:
and the issuing unit is used for issuing the lost object information in response to the submission operation on the first information filling interface.
In an embodiment of the application, the display unit is further configured to, in response to receiving a missing object cue fed back based on the missing object information, display a position of the missing object corresponding to the missing object cue on the three-dimensional map model.
In one embodiment of the application, the display unit is further configured to display a route from the location where the terminal is located to the location of the missing object on the three-dimensional map model in response to receiving a clue of the missing object fed back based on the information of the missing object.
In an embodiment of the application, the display unit is further configured to display a second function entry in response to a triggering operation of the second function option;
the display unit is also used for responding to the triggering operation of the second function entrance and displaying a second information filling interface, and the second information filling interface is used for providing the function of uploading the clue of the lost object;
the issuing unit is further configured to issue the missing object clue in response to the submitting operation on the second information filling interface.
According to a third aspect of embodiments of the present application, a terminal is provided, where the terminal includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the operations performed by the map data processing method provided in the first aspect or any one of the embodiments of the first aspect.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having a program stored thereon, where the program, when executed by a processor, implements the operations performed by the map data processing method provided in the first aspect or any one of the embodiments of the first aspect.
According to a fifth aspect of embodiments of the present application, there is provided a computer program product, which includes a computer program that, when executed by a processor, implements the operations performed by the map data processing method provided in the first aspect or any one of the embodiments of the first aspect.
According to the method and the device, the OSG engine is ported through the target rendering tool class, so that a three-dimensional map model based on the OSG engine can be displayed on a mobile device such as a terminal, and a user can perform interactive operations on the displayed three-dimensional map model. Based on the user's interactive operation, the terminal obtains the screen touch event corresponding to the interactive operation through the screen event obtaining function included in the target rendering tool class, determines the event type of the screen touch event through the event type judging function included in the target rendering tool class, and updates the displayed three-dimensional map model through the touch event processing function corresponding to that event type, so that the displayed three-dimensional map model can be enlarged, reduced, rotated, moved, and so on.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of an implementation environment of a method for processing map data according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for processing map data according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating a process of displaying a three-dimensional map model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a display interface of a three-dimensional map model provided according to an embodiment of the application;
fig. 5 is a flowchart of a face detection process according to an embodiment of the present application;
FIG. 6 is a flow chart of an interactive function based on a three-dimensional map model according to an embodiment of the present application;
FIG. 7 is a flowchart of a processing device for map data according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal provided in an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The application provides a map data processing method which can be used for displaying a three-dimensional map model based on an open source three-dimensional rendering engine (OSG) and providing a series of human-computer interaction functions based on the three-dimensional map model, including a live view function, a three-dimensional navigation function, an information reporting function and the like. In an exemplary scenario, when a user visits a scenic spot, the data processing method provided in the present application may be used to view the number of people in line for each item in the scenic spot, or intelligently plan a path to an item with the shortest waiting time (including the time for walking to reach the item and the time for queuing), or report lost object information, a lost object clue, and the like, so as to help the user better visit the scenic spot.
The map data processing method may be executed by a terminal, such as a terminal using an Android operating system (Android). Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment of a method for processing map data according to an embodiment of the present application, where the implementation environment includes: a display apparatus 101, an image pickup apparatus 102, a server 103, and a terminal 104.
The display device 101 may be a display screen, the camera device 102 may be a camera, the server 103 may be one server, a plurality of servers, a server cluster, a cloud computing platform, or the like, and the terminal 104 may be a smart phone, a tablet computer, a smart watch, or the like. The display device 101 and the server 103, and the camera device 102 and the server 103, may communicate in a wired or wireless manner, and the terminal 104 and the server 103 may communicate in a wireless manner.
Taking the application of the map data processing method provided by the present application to scenic spots as an example, the entrances of each item in the scenic spot may be provided with the camera device 102, so that the video data at the entrances of each item is collected in real time by the camera device 102, and the camera device 102 sends the collected video data to the server 103.
The server 103 processes the received video data to determine the number of people queuing for each item, and sends the determined number of queuing people to the display device 101, which displays it. The display device 101 may be arranged at the entrance of each item in the scenic spot, so that a user can see the current number of people in line at each item entrance and learn the queuing situation of that item in time. The above description takes the case where the display device 101 only displays the number of people in line for its own item as an example; alternatively, the display device may simultaneously display the numbers of people in line for multiple items, so that the queue lengths of multiple items in the scenic spot can be seen at each item entrance and the tour order can be planned according to the queuing situation of each item.
Further, the server 103 may also acquire scenic spot announcements, weather information, and the like, and send the acquired announcements and weather information to the display apparatus 101 so that they are displayed through the display apparatus 101. The scenic spot announcement may include lost object information (such as lost article information, lost person information, etc.), suggested tour time, tour notices, and so on.
The terminal 104 may be installed with a target application, a user triggers an icon corresponding to the target application on a visual interface of the terminal 104, and the terminal 104 may respond to a triggering operation of the user to run the target application, so that the three-dimensional map model based on the OSG engine is displayed through the target application, and a plurality of function options are provided on an interface for displaying the three-dimensional map model, so that a plurality of human-computer interaction functions (including a live view function, an intelligent navigation function, a search function, and the like) are provided for the user through the plurality of function options.
The above is merely an exemplary application scenario introduction, and does not constitute a limitation on the application scenario of the present application, and in more possible implementation manners, the map data processing method provided by the present application may also be applied to other types of scenarios, and the present application does not limit a specific application scenario.
Having introduced the application scenarios and implementation environments of the present application, the following describes a method for processing map data provided by the present application in conjunction with several alternative embodiments.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for processing map data according to an embodiment of the present application, and the method is applied to a terminal, and includes:
step 201, displaying a three-dimensional map model based on an open source three-dimensional rendering OSG engine through a target rendering tool class.
The OSG engine is a cross-platform graphics development toolkit built on the Open Graphics Library (OpenGL) and designed for the development of high-performance graphics applications such as scientific computing visualization. The OSG engine provides an object-oriented framework on top of OpenGL that frees developers from implementing and optimizing low-level graphics calls, provides many additional utilities for the rapid development of graphics applications, and is suitable for three-dimensional modeling on mobile terminals with limited performance.
OpenGL is a cross-platform graphics application programming interface (API) that specifies a standard software interface to three-dimensional (3D) graphics processing hardware, but for reasons of performance and portability it is relatively cumbersome to use directly on mobile devices such as terminals. To facilitate the use of OpenGL on the terminal, the subset OpenGL ES (OpenGL for Embedded Systems) was created, which provides the terminal with a cross-platform and fully functional 3D graphics library API.
OpenGL ES commonly comes in two versions, OpenGL ES 1.0 and OpenGL ES 2.0, and the application can adopt OpenGL ES 2.0 to provide a cross-platform, fully functional 3D graphics library API. In other possible implementations, OpenGL ES 1.0 may also be used; the application does not limit which version is specifically adopted.
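For illustration, the following Java snippet shows how the version choice described above is typically expressed on an Android GLSurfaceView; GLSurfaceView and setEGLContextClientVersion are standard Android APIs, while the helper class name is made up for this sketch.

```java
import android.content.Context;
import android.opengl.GLSurfaceView;

// Minimal sketch: requesting an OpenGL ES 2.0 rendering context for a GLSurfaceView.
// Passing 1 instead of 2 would request an OpenGL ES 1.x context.
final class EsVersionExample {
    static GLSurfaceView createSurface(Context context) {
        GLSurfaceView view = new GLSurfaceView(context);
        view.setEGLContextClientVersion(2); // OpenGL ES 2.0, as adopted in this application
        return view;
    }
}
```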
Step 202, responding to the interactive operation on the three-dimensional map model, and acquiring a screen touch event corresponding to the interactive operation through a screen event acquisition function included in the target rendering tool class.
It should be noted that, when a user touches the screen, a screen touch event corresponding to the interactive operation of the touch screen is generated, and the terminal can obtain the screen touch event corresponding to the interactive operation of the user through the screen event obtaining function.
Step 203, determining the event type of the screen touch event through the event type judgment function included in the target rendering tool class.
The event type of the screen touch event may include a single click event, a double click event, a single finger touch event, a double finger touch event, a slide event, and the like, and optionally, the screen touch event may further include other types of events, which is not limited in this application.
It should be noted that the screen touch events of different event types correspond to different processing logics, and the event type of the screen touch event is determined, so that a corresponding event processing function can be determined according to the determined event type.
And step 204, updating the displayed three-dimensional map model through a touch event processing function corresponding to the event type.
Optionally, when updating the display of the three-dimensional map model, the displayed three-dimensional map model may be zoomed in, zoomed out, rotated, moved, and so on.
According to the method and the device, the OSG engine is ported through the target rendering tool class, so that a three-dimensional map model based on the OSG engine can be displayed on a mobile device such as a terminal, and a user can perform interactive operations on the displayed three-dimensional map model. Based on the user's interactive operation, the terminal obtains the screen touch event corresponding to the interactive operation through the screen event obtaining function included in the target rendering tool class, determines the event type of the screen touch event through the event type judging function included in the target rendering tool class, and updates the displayed three-dimensional map model through the touch event processing function corresponding to that event type, so that the displayed three-dimensional map model can be enlarged, reduced, rotated, moved, and so on.
In addition, the display of the three-dimensional map model is more consistent with actual road conditions, so that a user can more accurately determine the position of the user, and the accuracy of the subsequent navigation process is further improved.
Having described the basic principles of the present application, various non-limiting embodiments of the present application are described in detail below.
The terminal related to in the application can be a terminal using the Android operating system, and using the OSG engine on the Android operating system can be realized by relying on the GLSurfaceView class, so the target rendering tool class may be the GLSurfaceView class.
The GLSurfaceView class is a display class provided by the Android system and is used here to present the rendering effect of the OSG scene on the Android operating system. Since the use of the OSG engine on the Android operating system is realized through the GLSurfaceView class, in some embodiments a custom GLSurfaceView class may be defined before rendering the three-dimensional map model.
In addition, at least one subclass can be defined in the GLSurfaceView class, so that the initialization of the OSG engine and the model rendering are completed through the defined subclass, and the display of the three-dimensional map model can then be realized. The at least one subclass may include a rendering method subclass, and the rendering method subclass may be a GLSurfaceView.Renderer subclass.
In summary, for step 201, displaying the three-dimensional map model based on the open source three-dimensional rendering OSG engine through the target rendering tool class may include the following steps:
step 2011, start the drawing thread of the target rendering tool class.
The drawing thread can be a GLThread. The GLThread is the drawing thread built into GLSurfaceView; it can run alongside the main thread without blocking it and is usually used to perform the drawing work of OpenGL.
Step 2012, the OSG engine is initialized through the view initialization function in the rendering method subclass of the target rendering tool class.
The rendering method subclass is also called the Render subclass, and the view initialization function may be an onSurfaceChanged function.
In one possible implementation, the onSurfaceChanged function may initialize the OSG engine by calling the corresponding function of the OSG engine. The functions of the OSG engine may be stored in the dynamic library corresponding to the OSG engine.
And 2013, loading the three-dimensional map model based on the OSG through a resource loading function of the target rendering tool class.
The resource loading function may be a loadObject function. With the loadObject function, the loading of the three-dimensional map model can be performed in synchronization with drawing once the GLThread has been started.
Step 2014, rendering the three-dimensional map model through the effect rendering function in the rendering method sub-class of the target rendering tool class.
Wherein the effect rendering function may be an onDrawFrame function. When the three-dimensional map model is rendered through the onDrawFrame function, the function of the OSG engine can be called through the onDrawFrame function to render the three-dimensional map model.
And step 2015, displaying the rendered three-dimensional map model.
The processes in steps 2011 to 2015 can be seen in fig. 3, which is a flowchart of the display process of the three-dimensional map model according to an embodiment of the present application. The Android application instantiates the custom GLSurfaceView class and starts the GLThread of the custom GLSurfaceView instance; the onSurfaceChanged function of the Render subclass instance is then called back to initialize the OSG engine, the three-dimensional map model is loaded through the loadObject function, and the onDrawFrame function of the Render subclass instance is called back to render the model, after which the rendered three-dimensional map model can be displayed.
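As a concrete illustration of this flow, the following Java sketch outlines how a custom GLSurfaceView and its Renderer subclass might drive the OSG engine through native calls. The class name, native method names, dynamic library name and model path are assumptions made for this sketch and are not taken from the application; only GLSurfaceView, GLSurfaceView.Renderer, onSurfaceChanged and onDrawFrame are standard Android APIs.

```java
import android.content.Context;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Hypothetical custom GLSurfaceView whose Renderer subclass calls into the OSG engine via JNI.
public class OsgMapSurfaceView extends GLSurfaceView {

    static {
        System.loadLibrary("osgnative"); // assumed name of the OSG dynamic library
    }

    public OsgMapSurfaceView(Context context) {
        super(context);
        setEGLContextClientVersion(2);   // OpenGL ES 2.0
        setRenderer(new Render());       // setting the renderer starts the GLThread
    }

    // Rendering method subclass (the "Render subclass" in the text).
    private class Render implements GLSurfaceView.Renderer {
        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            // nothing to do in this sketch
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            nativeInitOsg(width, height);           // view initialization: set up the OSG viewer
            loadObject("models/scenic_area.osgb");  // resource loading: load the 3D map model
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            nativeDrawFrame();                      // effect rendering: let OSG draw one frame
        }
    }

    // Runs on the GLThread because it is invoked from the Renderer callbacks.
    private void loadObject(String path) {
        nativeLoadModel(path);
    }

    // Assumed native functions implemented in the OSG dynamic library.
    private native void nativeInitOsg(int width, int height);
    private native void nativeLoadModel(String path);
    private native void nativeDrawFrame();
}
```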
The above process introduces how the OSG engine is ported, through the GLSurfaceView class, to a terminal using the Android operating system, so that such a terminal can display the three-dimensional map model based on the OSG engine. Both the onSurfaceChanged function and the onDrawFrame function can perform the corresponding processing by calling functions in the dynamic library corresponding to the OSG engine.
It should be noted that the dynamic library corresponding to the OSG engine may be generated in advance. On the Android operating system, the OSG engine itself exists as a static library. In a possible implementation, the Native Development Kit (NDK) is used together with the external build tool CMake to compile the source code of the OSG engine according to the characteristics of the Android operating system on the basis of a CMakeLists build script. The Android operating system can call the static library of the OSG engine through the Java Native Interface (JNI), which bridges the native layer code (C++ code) and the application layer code (Java code). By compiling the JNI native layer code together with the OSG static library, a dynamic library usable by the Android operating system is obtained, so that the onSurfaceChanged function and the onDrawFrame function can call the OSG functions in the dynamic library.
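On the Java side, the integration described above typically amounts to loading the compiled dynamic library and declaring the native entry points that the Renderer callbacks use. The library name and method names below are illustrative assumptions, not names given in the application.

```java
// Minimal sketch of the Java-side JNI binding, assuming the JNI C++ glue and the OSG static
// library have been compiled into a dynamic library (here called "libosgnative.so").
public final class OsgNative {

    static {
        System.loadLibrary("osgnative"); // loads libosgnative.so packaged with the app
    }

    private OsgNative() {
    }

    // The C++ implementations of these methods forward to the OSG functions in the dynamic library.
    public static native void init(int width, int height); // called from onSurfaceChanged
    public static native void loadModel(String path);      // called from the resource loading step
    public static native void drawFrame();                 // called from onDrawFrame
}
```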
It should be noted that step 201 and its corresponding steps 2011 to 2015 only relate to the process of rendering and displaying the three-dimensional map model. In a possible implementation, the three-dimensional map model may be rendered and displayed through step 201 and its corresponding steps only when the position of the user meets a certain condition.
That is, in some embodiments, before step 201, the following process may be further included:
acquiring first position information of a terminal, and displaying a three-dimensional map model based on an open source three-dimensional rendering OSG engine through a GLSurfaceView class under the condition that the first position information meets a target condition.
The terminal may have a positioning function, so that the terminal can obtain its first location information through the positioning function as the first location information of the user and thereby determine whether the first location information satisfies the target condition; if it does, step 201 is then executed, and the three-dimensional map model based on the open source three-dimensional rendering OSG engine is displayed through the target rendering tool class.
Alternatively, the target condition may be that the first position information is within a preset range, or the target condition may also be other types of conditions, which is not limited in this application.
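As an illustrative reading of this condition, the sketch below checks whether the terminal's location falls inside a preset rectangular area (for instance, the scenic spot) before the three-dimensional map model is displayed. The bounds, the class name and the rectangle-based interpretation of the target condition are assumptions for this sketch only.

```java
import android.location.Location;

// Minimal sketch: the "target condition" is taken here to be that the terminal's first
// position information lies within a preset latitude/longitude range (placeholder values).
final class TargetConditionExample {
    private static final double MIN_LAT = 39.900, MAX_LAT = 39.920;
    private static final double MIN_LNG = 116.390, MAX_LNG = 116.420;

    static boolean meetsTargetCondition(Location firstPosition) {
        double lat = firstPosition.getLatitude();
        double lng = firstPosition.getLongitude();
        return lat >= MIN_LAT && lat <= MAX_LAT && lng >= MIN_LNG && lng <= MAX_LNG;
    }
    // Only when this returns true does the terminal go on to display the OSG-based
    // three-dimensional map model (step 201).
}
```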
After the three-dimensional map model is displayed through step 201, the user may perform interactive operations such as single-finger rotation, two-finger zoom-in, two-finger zoom-out and three-finger movement on the three-dimensional map model to interact with it.
Optionally, the screen event obtaining function may be an onTouchEvent function, the event type determining function may be a changeMode function, and the touch event processing function may be a preformMode function.
The onTouchEvent function can intercept touch events occurring on the screen, so that the changeMode function can judge the event type of the screen touch event based on the screen touch event intercepted by the onTouchEvent function.
It should be noted that screen touch events of different event types correspond to different processing logic, and the different processing logic corresponds to the code executed for different values of the preformMode function; that is, different event types correspond to different values of the preformMode function. After the event type of the screen touch event is determined, the response to the interactive operation can be realized by executing the code corresponding to the preformMode value for that event type.
Optionally, when the displayed three-dimensional map model is updated based on the interactive operation of the user, at least one of the following updating methods may be included:
magnifying the displayed three-dimensional map model;
zooming out the displayed three-dimensional map model;
rotating the displayed three-dimensional map model;
and moving the displayed three-dimensional map model.
For example, after the changeMode function determines that the user's touch event is a single-finger rotation event, the displayed three-dimensional map model may be rotated through the preformMode function.
For example, after the changeMode function determines that the user's touch event is a two-finger zoom-in event, the displayed three-dimensional map model may be enlarged through the preformMode function.
For example, after the changeMode function determines that the user's touch event is a two-finger zoom-out event, the displayed three-dimensional map model may be reduced through the preformMode function.
For another example, after the changeMode function determines that the user's touch event is a three-finger movement event, the displayed three-dimensional map model may be moved through the preformMode function.
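The following Java sketch illustrates the dispatch just described: the screen event obtaining function intercepts the touch event, an event type judging function classifies it, and a touch event processing function applies the matching update. The mode constants, the simplified finger-count classification and the method named performMode (standing in for the preformMode function in the text) are assumptions of this sketch.

```java
import android.view.MotionEvent;

// Hypothetical dispatch of screen touch events to updates of the displayed 3D map model.
public class TouchDispatchExample {
    private static final int MODE_NONE = 0, MODE_ROTATE = 1, MODE_ZOOM = 2, MODE_MOVE = 3;
    private int mode = MODE_NONE;

    // Screen event obtaining function: intercepts touch events occurring on the screen.
    public boolean onTouchEvent(MotionEvent event) {
        changeMode(event);   // event type judging function
        performMode(event);  // touch event processing function
        return true;
    }

    // Simplified classification by finger count (single-finger rotate, two-finger zoom, etc.).
    private void changeMode(MotionEvent event) {
        switch (event.getPointerCount()) {
            case 1:  mode = MODE_ROTATE; break;
            case 2:  mode = MODE_ZOOM;   break;
            default: mode = MODE_MOVE;   break;
        }
    }

    // Each mode value corresponds to a different update of the displayed model.
    private void performMode(MotionEvent event) {
        switch (mode) {
            case MODE_ROTATE: /* rotate the displayed three-dimensional map model */ break;
            case MODE_ZOOM:   /* enlarge or reduce it depending on the pinch direction */ break;
            case MODE_MOVE:   /* move (translate) it */ break;
            default: break;
        }
    }
}
```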
It should be noted that the terminal may provide a plurality of human-computer interaction functions based on the three-dimensional map model in addition to displaying the three-dimensional map model, for example, the terminal may display at least one function option on the three-dimensional map model, so that the user may use the corresponding function by triggering the at least one function option.
Referring to fig. 4, fig. 4 is a schematic view of a display interface of a three-dimensional map model provided according to an embodiment of the present application, and as shown in fig. 4, a terminal provides three function options for a user, including a first function option (i.e., the three-dimensional navigation function option in fig. 4), a second function option (i.e., the information reporting function option in fig. 4), and a third function option (i.e., the scenery spot live function option in fig. 4), so that a plurality of services are provided for the user through the three function options.
In a possible implementation manner, after the terminal displays the three-dimensional map model, the terminal automatically triggers a third function option, and the third function option can provide a live viewing function for viewing a real-time queuing situation for a user, so that the user can view the queuing situation of a clicked object by clicking any object in the three-dimensional map model (i.e., any place in the three-dimensional map model, any tour item in the three-dimensional map model, and the like).
In a possible implementation manner, when a click event performed by a user on the three-dimensional map model is acquired, a register listener function may be called back to an interface event processing class in the OSG function through a click event monitoring function included in the target rendering tool class, and then, in response to the click operation on the three-dimensional map model being monitored through the register listener function, a click event corresponding to the click operation is acquired through the event transparent transfer function.
The click event monitoring function can be a setOnModelClickListener function, the interface event processing class can be a GUIEventHandler class, the registration listener function can be an onModelClickListener function, and the event transparent transmission function can be an onModelClicked function.
Accordingly, when the registered listener function is called back to the interface event processing class in the OSG functions through the click event listening function, this can be done by using the setOnModelClickListener function together with the handle() callback function of the GUIEventHandler class in the OSG functions.
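A rough Java-side sketch of this callback chain is given below. The interface and method names mirror the roles described above (they are reconstructions, not names confirmed by the application), and the actual pick test is assumed to live in the native GUIEventHandler, which only calls back into Java here.

```java
// Hypothetical Java side of the model click callback chain.
public class ModelClickExample {

    // Event pass-through: receives the click event for the clicked object.
    public interface OnModelClickListener {
        void onModelClicked(String objectName);
    }

    private OnModelClickListener listener;

    // Click event listening function: registers the listener that the native handler calls back.
    public void setOnModelClickListener(OnModelClickListener l) {
        this.listener = l;
        nativeRegisterClickHandler(); // assumed: installs a GUIEventHandler in the OSG scene
    }

    // Invoked from native code when the GUIEventHandler's handle() callback detects a click
    // on an object of the three-dimensional map model.
    private void dispatchModelClicked(String objectName) {
        if (listener != null) {
            listener.onModelClicked(objectName);
        }
    }

    private native void nativeRegisterClickHandler();
}
```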
After the click event is obtained, the clicked object can be determined based on the obtained click event, so that the video data corresponding to the clicked object is obtained, and then the video data corresponding to the clicked object is played, so that a user can know the real-time queuing condition of each item in the scenic spot by watching the played video data.
In a possible implementation manner, when playing video data corresponding to the clicked object, the terminal may display a dialog box on the three-dimensional map model, and then play the video data in the dialog box. Optionally, jumping from a display interface of the three-dimensional map model to a video playing interface can be performed, and then playing of video data is performed in the video playing interface.
The terminal can be connected with the camera device in a wireless manner, so that the terminal and the camera device can communicate through wireless communication. Therefore, when the video data corresponding to the clicked object is acquired, the terminal can directly obtain the video data collected by the camera device. Optionally, the camera device can also upload the collected video data to the server, so that the terminal obtains from the server the video data uploaded by the camera device; the application does not limit which of these manners is specifically adopted to acquire the video data.
It should be noted that, when the image capturing apparatus transmits video data to the terminal, or when the image capturing apparatus transmits video data to the server, the video data may be transmitted through a Real Time Streaming Protocol (RTSP), so that Real-Time transmission of the video data may be implemented.
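For illustration, the snippet below plays such an RTSP stream for the clicked object, for example inside the dialog box mentioned later. It assumes that the player component on the terminal can consume the RTSP stream directly; the stream URL is a placeholder.

```java
import android.media.MediaPlayer;
import android.view.SurfaceHolder;
import java.io.IOException;

// Minimal sketch: play the camera's RTSP stream for the clicked object on a given surface.
final class RtspPlaybackExample {
    static MediaPlayer play(SurfaceHolder holder, String rtspUrl) throws IOException {
        MediaPlayer player = new MediaPlayer();
        player.setDisplay(holder);                    // render into the dialog's surface
        player.setDataSource(rtspUrl);                // e.g. "rtsp://camera-host/stream1" (placeholder)
        player.setOnPreparedListener(MediaPlayer::start);
        player.prepareAsync();                        // prepare without blocking the UI thread
        return player;
    }
}
```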
By providing the live viewing function, the user can know the queuing condition of each project in the scenic spot in advance, the information amount of the map data processing process is increased, and the user can plan the tour process of the user as soon as possible.
Optionally, the terminal may further obtain the number of queuing people corresponding to the clicked object, and display the obtained number of queuing people, so as to increase the amount of information in the interaction process, so that the user can more intuitively know the queuing condition of each item.
Wherein the number of people queued is determined based on the video data corresponding to the clicked object, the process of determining the number of people queued may be performed by the server, and the process of determining the number of people queued may include the steps of:
step one, video data corresponding to a clicked object is obtained.
For the specific implementation of step one, reference may be made to the process of obtaining the video data corresponding to the clicked object described above, which is not repeated here.
And step two, acquiring a video picture corresponding to the current time in the video data.
The current time is also the time when the user clicks the object.
In a possible implementation manner, a video picture in the acquired video data may be intercepted by an FFmpeg library, so as to obtain a corresponding video picture in the video data at the current time.
And step three, determining the number of the faces included in the video picture, and taking the determined number of the faces as the number of people in line.
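A server-side sketch of these three steps is shown below. It makes two assumptions not stated in the application: the frame at the current time is grabbed by invoking the ffmpeg command-line tool (standing in for the FFmpeg library mentioned below), and countFaces() is a placeholder for the face detection engine, whose internals are not reproduced here.

```java
import java.io.IOException;
import java.nio.file.Path;

// Hypothetical server-side routine: grab the current video picture and count the faces in it.
final class QueueCountExample {

    static int queueCountFor(String rtspUrl, Path frameFile) throws IOException, InterruptedException {
        // Step two: capture the video picture corresponding to the current time.
        Process p = new ProcessBuilder(
                "ffmpeg", "-rtsp_transport", "tcp", "-i", rtspUrl,
                "-frames:v", "1", "-y", frameFile.toString())
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("frame capture failed for " + rtspUrl);
        }
        // Step three: the number of detected faces is taken as the number of people in line.
        return countFaces(frameFile);
    }

    // Placeholder for the face detection module; plug in a real face detection engine here.
    private static int countFaces(Path frameFile) {
        throw new UnsupportedOperationException("face detection engine not wired up in this sketch");
    }
}
```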
In a possible implementation manner, the number of faces included in the video picture may be determined by the face detection module (SeetaFace Detection) included in the SeetaFace face recognition engine.
The face detection module is implemented with a face detection method that combines a classical cascade structure and multilayer neural networks, and may include a funnel-structured cascade (FuSt) and a multilayer perceptron (MLP) cascade structure. The FuSt cascade structure may be composed of multiple fast LAB cascade classifiers, and the MLP cascade structure may include two layers: the first layer is a set of MLP cascades based on Speeded Up Robust Features (SURF), and the second layer is a unified MLP cascade structure that processes the features output by the multiple MLP cascades of the previous layer.
When the number of faces included in a video picture is determined by the face detection module, the following process can be implemented:
The obtained video picture is input into the multiple fast LAB cascade classifiers of the FuSt cascade structure to obtain a plurality of LAB features; the LAB features are input into the multiple MLPs of the first-layer MLP cascade structure to obtain a plurality of SURF features; and the SURF features are input into the unified MLP cascade structure of the second layer, which outputs the number of faces included in the video picture.
The above is only an exemplary way of determining the number of faces included in the video picture, and in more possible implementation ways, other implementation ways may also be adopted to determine the number of faces included in the video picture, and the specific way adopted in the present application is not limited.
It should be noted that after the number of faces is determined, the server can send the determined number of faces to the terminal, so that the terminal can obtain the number of people in line corresponding to the clicked object, and further display the obtained number of people in line.
The above process is described by taking the number of queuing people at the time when the user clicks the object as an example, in more possible implementation manners, after the server determines the number of faces included in the video picture at the current time, the server may further acquire the video picture from the video data at intervals of a first preset time length, and further determine the number of faces included in the acquired video picture based on the acquired video picture. The first preset duration is any duration, and the specific value of the first preset duration is not limited in the application.
It should be noted that, in the above process, the user clicks any object on the three-dimensional map model displayed by the terminal, and the server acquires and identifies the video picture based on the clicked object. In more possible implementation manners, after acquiring the video data sent by each image pickup device, the server may acquire a video picture from the received video data every second preset time duration, determine the number of faces included in the acquired video picture, and send the determined number of faces to the corresponding display device, so that the display device displays the determined number of faces as the number of people in line. In this way, the user can view the current number of people in line on the display device located at each tour item entrance.
Referring to fig. 5, fig. 5 is a flowchart of a face detection process according to an embodiment of the present application. After video data is acquired, it is determined whether the current time meets the screenshot period, that is, whether the time interval between the current time and the previous face detection is the second preset time duration. When the current time meets the screenshot period, the FFmpeg library is used to capture a screenshot of the video data to obtain a video frame, the SeetaFace face recognition engine is used to perform face detection on the obtained video frame to obtain the number of faces included in the video frame, and the determined number of faces is sent to the display device, so that the display device displays the received number of faces as the number of people in line. The process shown in fig. 5 is only a flow-based description, and the specific implementation process may refer to the foregoing embodiments, which is not described herein again.
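A minimal sketch of the periodic flow in fig. 5 is given below, assuming the server runs a scheduled task whose period equals the screenshot period; grabFrame, countFaces and sendToDisplays are hypothetical placeholders for the FFmpeg screenshot, the SeetaFace-based detection and the push to the display devices:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class QueueCountJob {

    /** Starts the periodic screenshot-and-detection loop of fig. 5. */
    public static void start(String videoSource, long screenshotPeriodSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                String framePath = "/tmp/frame.jpg";     // hypothetical output path
                grabFrame(videoSource, framePath);       // FFmpeg screenshot (see the sketch above)
                int faces = countFaces(framePath);       // SeetaFace detection (placeholder)
                sendToDisplays(faces);                   // shown as the number of people in line
            } catch (Exception e) {
                e.printStackTrace();                     // keep the periodic task alive
            }
        }, 0, screenshotPeriodSeconds, TimeUnit.SECONDS);
    }

    private static void grabFrame(String source, String out) throws Exception { /* FFmpeg call */ }
    private static int countFaces(String framePath) { /* SeetaFace wrapper */ return 0; }
    private static void sendToDisplays(int faceCount) { /* push to display devices */ }
}
```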
Optionally, the server may further send the number of faces corresponding to each object to each display device, so that the user can see the number of people in line for each tour item in the scenic spot on any display device.
The second preset duration may be any duration, and the second preset duration may be the same as or different from the first preset duration.
Optionally, the user may use the service corresponding to the first function option and the service corresponding to the second function option in addition to the service corresponding to the third function option, and the following describes the services corresponding to the first function option and the second function option, respectively.
First, a service corresponding to a first function option is introduced.
The user can use the function corresponding to the first function option by triggering the first function option, and the first function option can provide the user with a three-dimensional navigation function that intelligently plans a path to the project with the shortest total arrival and waiting time (including the time for walking to reach the project and the queuing time).
In one possible implementation, after the user triggers the first function option, the three-dimensional navigation function may be provided to the user through the following steps.
Step one, responding to the triggering operation of the first function option, and determining a first time length based on first position information of the terminal, second position information of each candidate object and a preset moving speed.
In a possible implementation manner, the distance between the terminal and each candidate object is determined based on the first position information and each second position information, and the first time length is further determined based on the distance between the terminal and each candidate object and the preset moving speed.
The first location information and each of the second location information may be represented in the form of Global Positioning System (GPS) coordinates, and when determining the distance between the terminal and any candidate object based on the first location information and any one of the second location information, the following equations (1) to (3) may be used:

d = R*c (1)

c = 2*arcsin(sqrt(sin²(a/2) + cos(ωA)*cos(ωB)*sin²(b/2))) (2)

a = ωA - ωB, b = φA - φB (3)

wherein d represents the distance between the terminal and the candidate object, R represents the earth radius (typically 6378137 meters), φA represents the arc value converted from the longitude coordinate of the point A, φB represents the arc value converted from the longitude coordinate of the point B, ωA is the arc value converted from the latitude coordinate of the point A, and ωB is the arc value converted from the latitude coordinate of the point B.
It should be noted that the GPS coordinates (including longitude and latitude) used in the above calculation formulas are in decimal form, whereas GPS coordinates are generally expressed in degree/minute/second form, so the degree/minute/second GPS coordinates are first converted into decimal form. For example, GPS coordinates of (north latitude 39°54'27", east longitude 116°23'17") can be represented as (39.5427, 116.2317).
After acquiring the GPS coordinates in decimal form, the GPS coordinates may be converted into corresponding arc values by the following equation (4):

φ = N*π/180, ω = N*π/180 (4)

wherein φ represents the arc value obtained by converting the longitude coordinate, ω represents the arc value obtained by converting the latitude coordinate, and N represents the GPS coordinate to be converted (the longitude coordinate or the latitude coordinate).
After determining the distance between the terminal and each candidate object, the first duration may be determined by the following formula (5):
t1=d/v (5)
wherein t1 represents the first time length, d represents the distance between the terminal and any candidate object, v represents the preset moving speed, and v generally takes a value of 1.1 meters per second (m/s) to 1.5 m/s; for example, v can take a value of 1.2 m/s.
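The following sketch implements equations (1) to (5) under the assumption that equations (2) and (3) take the standard great-circle (haversine) form, with the GPS coordinates supplied in decimal degrees and converted to arc values per equation (4); class and method names are illustrative only:

```java
// Sketch of equations (1)–(5): great-circle distance between the terminal
// and a candidate object, and the first time length t1 = d / v.
public class TravelTime {
    private static final double EARTH_RADIUS_M = 6378137.0; // R in equation (1)

    /** Distance d between point A (latA, lonA) and point B (latB, lonB), in meters. */
    public static double distance(double latA, double lonA, double latB, double lonB) {
        double wA = Math.toRadians(latA);   // ωA, per equation (4)
        double wB = Math.toRadians(latB);   // ωB
        double pA = Math.toRadians(lonA);   // φA
        double pB = Math.toRadians(lonB);   // φB
        double h = Math.pow(Math.sin((wA - wB) / 2), 2)
                 + Math.cos(wA) * Math.cos(wB) * Math.pow(Math.sin((pA - pB) / 2), 2);
        double c = 2 * Math.asin(Math.sqrt(h));   // equations (2)–(3), haversine form
        return EARTH_RADIUS_M * c;                // equation (1): d = R * c
    }

    /** First time length t1 = d / v, equation (5); v is about 1.1–1.5 m/s. */
    public static double firstDuration(double distanceMeters, double speedMps) {
        return distanceMeters / speedMps;
    }
}
```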
And step two, determining a second time length based on the preset time periods, the number of queuing people, the preset passing time length and the first time length, wherein the queuing people number corresponds to each candidate object in two adjacent preset time periods.
In one possible implementation mode, the change rate of the number of the queued people is determined based on a preset time period and the number of the queued people respectively corresponding to each candidate object in two adjacent preset time periods; and determining the second time length based on the number of the queued people, the change rate of the number of the queued people, the preset passing time length and the first time length of each candidate object in the later preset time period of the two adjacent preset time periods.
When the change rate of the number of people in the queue is determined, the method can be realized by the following formula (6):
S = (n_next - n_pre)/T (6)

wherein S represents the rate of change of the number of queuing people, n_next represents the number of queuing people of the candidate object in the later preset time period of the two adjacent preset time periods, n_pre represents the number of queuing people of the candidate object in the earlier preset time period of the two adjacent preset time periods, and T represents the preset time period (which may be the first preset time length, the second preset time length, or another preset time length).
After determining the rate of change of the number of people in the queue, the second duration can be determined by the following formula (7):
t2 = (n_next + S*t1)*tj (7)

wherein t2 represents the second time length, n_next represents the number of queuing people of the candidate object in the later preset time period of the two adjacent preset time periods, S represents the rate of change of the number of queuing people, t1 represents the first time length, and tj represents the preset passage time length (that is, the preset ticket checking time length).
And step three, determining the target object based on the first duration and the second duration.
In one possible implementation manner, the candidate object with the smallest sum of the first duration and the second duration is determined as the target object.
That is, after the first duration and the second duration are determined through the first step and the second step, the sum of the first duration and the second duration may be determined, so as to obtain the time required for entering each candidate object, and further, the candidate object with the minimum sum, that is, the candidate object with the shortest required entering time, is determined as the target object.
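A minimal sketch of steps two and three is given below; the Candidate holder and its field names are hypothetical and merely group, for each candidate object, the first time length t1 and the queue lengths in two adjacent preset time periods:

```java
// Sketch of equations (6)–(7) and the selection of the target object
// as the candidate with the smallest t1 + t2.
import java.util.List;

public class TargetSelector {
    public static class Candidate {
        double t1;   // first time length (walking time), seconds
        int nPre;    // queue length in the earlier preset time period
        int nNext;   // queue length in the later preset time period
    }

    /** Equation (6): rate of change of the number of queuing people. */
    static double changeRate(int nPre, int nNext, double periodSeconds) {
        return (nNext - nPre) / periodSeconds;
    }

    /** Equation (7): second time length t2 = (nNext + S * t1) * tj. */
    static double secondDuration(int nNext, double s, double t1, double ticketCheckSeconds) {
        return (nNext + s * t1) * ticketCheckSeconds;
    }

    /** Step three: the candidate with the smallest t1 + t2 is the target object. */
    public static Candidate pickTarget(List<Candidate> candidates,
                                       double periodSeconds, double ticketCheckSeconds) {
        Candidate best = null;
        double bestTotal = Double.MAX_VALUE;
        for (Candidate c : candidates) {
            double s = changeRate(c.nPre, c.nNext, periodSeconds);
            double total = c.t1 + secondDuration(c.nNext, s, c.t1, ticketCheckSeconds);
            if (total < bestTotal) { bestTotal = total; best = c; }
        }
        return best;
    }
}
```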
And fourthly, displaying a route from the position where the terminal is located to the position where the target object is located on the three-dimensional map model.
In a possible implementation manner, after the target object is determined through the third step, the position where the target object is located and the position where the user is located (that is, the position where the terminal is located) may be marked in the three-dimensional map model in a dotting manner. Since the roads included in the three-dimensional map model are preset, after the position of the target object and the position of the user are determined in the three-dimensional map model, a route from the position where the terminal is located to the position where the target object is located can be determined from the preset roads, and the determined route is displayed in a manner of adding a layer.
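The embodiment does not name a particular routing algorithm over the preset roads; as one common choice, a Dijkstra shortest-path search over a road graph could be used, as sketched below with a hypothetical adjacency-map representation of the preset roads:

```java
// Sketch only: shortest route over preset roads with Dijkstra's algorithm.
import java.util.*;

public class RoadRouter {
    /** edges.get(u) maps each neighbour v of node u to the road length from u to v. */
    public static List<Integer> shortestRoute(Map<Integer, Map<Integer, Double>> edges,
                                              int start, int goal) {
        Map<Integer, Double> dist = new HashMap<>();
        Map<Integer, Integer> prev = new HashMap<>();
        PriorityQueue<double[]> queue =
                new PriorityQueue<>(Comparator.comparingDouble(a -> a[0]));
        dist.put(start, 0.0);
        queue.add(new double[]{0.0, start});
        while (!queue.isEmpty()) {
            double[] top = queue.poll();
            int u = (int) top[1];
            if (u == goal) break;
            if (top[0] > dist.getOrDefault(u, Double.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<Integer, Double> e : edges.getOrDefault(u, Map.of()).entrySet()) {
                double alt = top[0] + e.getValue();
                if (alt < dist.getOrDefault(e.getKey(), Double.MAX_VALUE)) {
                    dist.put(e.getKey(), alt);
                    prev.put(e.getKey(), u);
                    queue.add(new double[]{alt, e.getKey()});
                }
            }
        }
        // Rebuild the node sequence from the goal back to the start.
        LinkedList<Integer> route = new LinkedList<>();
        for (Integer at = goal; at != null; at = prev.get(at)) route.addFirst(at);
        return route.isEmpty() || route.getFirst() != start ? List.of() : route;
    }
}
```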
It should be noted that, when a route is displayed on the three-dimensional map model, a user may still perform, through the interactive operation involved in the above embodiments, zooming in, zooming out, rotating, moving, and the like on the three-dimensional map model, and specific processes may refer to the above embodiments, and are not described herein again.
When the route is displayed, the functions of zooming in, zooming out, rotating and moving the displayed three-dimensional map model are still provided for the user, so that the user can adjust the display of the three-dimensional map model as needed and conveniently view the displayed route.
In addition, when the route is displayed on the three-dimensional map model, the user may still click on the object included in the three-dimensional map model, so as to view video data, the number of people queued, and the like corresponding to the clicked object.
The scenic spot live function is provided for the user when the route is displayed, so that the user can check the queuing condition of each project while using the three-dimensional navigation function, and further adjust the own tour sequence in real time according to the checked actual condition.
The service corresponding to the second function option is described below.
The user can use the function corresponding to the second function option by triggering the second function option, and the second function option can provide an information reporting function for reporting the information of the lost object for the user and can also provide an information reporting function for reporting a clue of the lost object for the user.
In a possible implementation manner, after the user triggers the second function option, an information reporting function may be provided for the user through the following steps. The information reporting function may include a function for uploading lost object information, so that when a user loses an article, or a companion or a child of the user loses the article, the function for uploading the lost object information may be used to report related information (such as an article image, an article description, and the like) of the lost article, or related information (such as a person image, a person description, and the like) of the lost companion or the child. In addition, the information reporting function may further include a function for uploading a clue of the lost object, so that the user can provide a clue for the user reporting the information of the lost object through the function for uploading the clue of the lost object when the user sees the lost article or the lost person reported by other users.
In a possible implementation manner, after the user triggers the second function option, an information reporting function for uploading the lost object information may be provided for the user through the following steps.
Step one, responding to the triggering operation of the second function option, and displaying a first function inlet.
The first function entry may be a button, for example, a lost object information reporting button, and optionally, the first function entry may also be another type of control, which is not limited in this application.
And step two, responding to the triggering operation of the first function entrance, and displaying a first information filling interface, wherein the first information filling interface is used for providing a function of reporting the lost object information.
Optionally, the first information filling-in interface may include an image upload entry and an edit box, so that the user can upload the image of the lost item or the image of the lost person through the image upload entry and fill in the description about the lost item or the description about the lost person through the edit box.
And step three, responding to the submission operation on the first information filling interface, and issuing the lost object information.
Optionally, the first information filling interface may include a submission control, the submission control may be triggered after the user fills in the lost object information, and the terminal may respond to the triggering operation of the submission control and send the lost object information uploaded by the user to the server, so that the lost object information is published through the server.
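A minimal sketch of the submission step is shown below, assuming the server exposes an HTTP endpoint for lost-object reports; the URL and JSON field names are hypothetical, since the embodiment does not specify the terminal-server protocol, and on Android this call would be made off the main thread:

```java
// Sketch only: send the filled-in lost object information to the server,
// which then publishes it to the display devices and other terminals.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LostObjectReporter {
    public static int submit(String description, String imageUrl) throws Exception {
        URL url = new URL("https://example.com/api/lost-objects"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        String body = "{\"description\":\"" + description + "\",\"image\":\"" + imageUrl + "\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // the server then publishes the report
    }
}
```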
When the lost object information is released through the server, the server can send the lost object information to each display device so as to display the lost object information through each display device, and therefore visitors in the scenic spot can see the lost object information at each project entrance.
The display device may be internally or externally connected with an audio component, such as a speaker, a loudspeaker, etc., so as to broadcast the lost object information reported by the user through the audio component.
In addition, the server can also send the lost object information to the terminal running the target application program, so that tourists in the scenic spot can check the lost object information uploaded by other users through the terminal carried by the tourists.
If other users see the lost objects or lost people corresponding to the released lost object information while visiting the scenic spot, the other users can also upload the clues of the lost objects through the portable terminals.
When uploading a missing object clue, the method can be realized by the following steps:
and step one, responding to the triggering operation of the second function option, and displaying a second function inlet.
The second function entry may be a button, for example, a missing object clue reporting button, and optionally, the second function entry may also be another type of control, which is not limited in this application.
And step two, responding to the triggering operation of the second function entrance, and displaying a second information filling interface, wherein the second information filling interface is used for providing the function of uploading the clue of the lost object.
Optionally, the second information filling interface may include an image uploading entry and a position reporting entry, so that the user may upload an image of a suspected lost object (including an image of a suspected lost article and an image of a suspected lost person) viewed by the user through the image uploading entry, and report a position where the user views the suspected lost object through the position reporting entry.
And step three, responding to the submission operation on the second information filling interface, and issuing a clue of the lost object.
Optionally, the second information filling interface may include a submission control, the submission control may be triggered after the user fills in the missing object hint, and the terminal may respond to the triggering operation of the submission control and send the missing object hint uploaded by the user to the server, so that the missing object hint is issued by the server.
When the missing object clue is issued through the server, the server can send the missing object clue to each display device, so that the missing object clue is displayed through each display device, and a user reporting the missing object information can see clues provided by other users at each project entrance.
In addition, the server can also send the clue of the lost object to the terminal of the user object reporting the information of the lost object, so that the user can check clues provided by other users through the terminal carried by the user.
In one possible implementation manner, in response to receiving a missing object clue fed back based on the missing object information, the terminal displays a missing object position corresponding to the missing object clue on the three-dimensional map model and/or displays a route from the position where the terminal is located to the missing object position.
Optionally, when receiving a missing object clue fed back based on the missing object information, the terminal may prompt the user first, and then display a missing object position corresponding to the missing object clue on the three-dimensional map model based on the operation of the user, and/or display a route from the position where the terminal is located to the missing object position.
That is, when receiving a missing object clue fed back based on the missing object information, the terminal may display prompt information through a dialog box, where the prompt information is used to prompt the user that another user has fed back a missing object clue. The terminal may display a clue viewing control in the dialog box that displays the prompt information, so that the user can trigger the clue viewing control to open a clue viewing interface, on which the missing object clue provided by the other user is displayed for the user to view.
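As an illustrative sketch only, such a prompt could be shown with the standard Android AlertDialog; showClueInterface is a hypothetical method standing in for opening the clue viewing interface:

```java
// Sketch only: prompt the user that a missing object clue has been received.
import android.app.AlertDialog;
import android.content.Context;

public class CluePrompt {
    public static void show(Context context) {
        new AlertDialog.Builder(context)
                .setMessage("Another user has provided a clue about your lost object.")
                .setPositiveButton("View clue", (dialog, which) -> showClueInterface(context))
                .setNegativeButton("Later", null)
                .show();
    }

    private static void showClueInterface(Context context) {
        // Hypothetical: open the clue viewing interface (e.g., start an Activity).
    }
}
```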
In a case where the user determines that the missing object clue uploaded by another user may correspond to the user's own lost article, or to a lost companion or child, the user can view the missing object position uploaded by the other user and plan a route to the missing object position.
In one possible implementation, a location viewing control and a route planning control may be provided on the clue viewing interface, so that the user can view the lost object location uploaded by other users through the location viewing control, and plan a route to the lost object location through the route planning control.
If the user triggers the position viewing control, the terminal can respond to the triggering operation of the user on the position viewing control, and the position of the lost object corresponding to the clue of the lost object is displayed on the three-dimensional map model.
If the user triggers the route planning control, the terminal can respond to the triggering operation of the user on the route planning control, and a route from the position where the terminal is located to the position of the lost object is displayed on the three-dimensional map model. The implementation process of displaying the route may refer to the above embodiments, and is not described herein again.
It should be noted that, when the position of the lost object is displayed on the three-dimensional map model or the route reaching the position of the lost object is displayed, the user may still perform the operations of zooming in, zooming out, rotating, moving, and the like on the three-dimensional map model through the interactive operations involved in the above embodiments, and the specific processes may refer to the above embodiments, and are not described herein again.
The above process of displaying a three-dimensional map model and providing interactive functions based on the displayed three-dimensional map model can be referred to fig. 6. Fig. 6 is a process diagram of the interactive functions based on a three-dimensional map model provided according to an embodiment of the present application. After the position information of the terminal is acquired, the acquired position information is used as the current position information of the tourist to determine whether the current position information meets the target condition, that is, whether the tourist is within the scenic spot range. When the tourist is within the scenic spot range, the three-dimensional map model of the scenic spot is loaded, and based on the loaded three-dimensional map model, a three-dimensional navigation function of determining the shortest route and drawing a three-dimensional navigation route map, a live viewing function of viewing real-time video streams of the scenic spot, and an information reporting function of reporting lost object information and lost object clues are provided. For the description of these functions, reference may be made to the above embodiments, which are not described herein again.
An embodiment of the present application further provides a map data processing apparatus. Referring to fig. 7, fig. 7 is a schematic structural diagram of a map data processing apparatus provided according to an embodiment of the present application, the apparatus including:
a display unit 701, configured to display, through the target rendering tool class, a three-dimensional map model based on an open source three-dimensional rendering OSG engine;
an obtaining unit 702, configured to, in response to an interactive operation on a three-dimensional map model, obtain a screen touch event corresponding to the interactive operation through a screen event obtaining function included in a target rendering tool class;
a determining unit 703, configured to determine an event type of the screen touch event according to an event type determining function included in the target rendering tool class;
and an updating unit 704, configured to update the displayed three-dimensional map model through a touch event processing function corresponding to the event type.
In some embodiments, the display unit 701, when configured to display, through the object rendering tool class, a three-dimensional map model based on an open source three-dimensional rendering OSG engine, is configured to:
starting a drawing thread of a target rendering tool class;
initializing an OSG engine through a view initialization function in a rendering method subclass of a target rendering tool class;
loading the three-dimensional map model based on the OSG through a resource loading function of the target rendering tool class;
rendering the three-dimensional map model through an effect rendering function in a rendering method subclass of the target rendering tool class;
and displaying the rendered three-dimensional map model.
In some embodiments, the terminal uses an Android operating system, the target rendering tool class is a GLSurfaceView class, the screen event obtaining function is an onTouchEvent function, the event type determining function is a changeMode function, the touch event processing function is a preformMode function, the drawing thread is a GLThread, the rendering method subclass is a Render subclass, the view initializing function is an onSurfaceChanged function, the resource loading function is a loadObject function, and the effect rendering function is an onDrawFrame function.
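A minimal sketch of this structure is given below, assuming the Android GLSurfaceView listed above; changeMode and preformMode follow the names used in this embodiment, but their bodies here are simple placeholders rather than the actual implementation:

```java
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class MapSurfaceView extends GLSurfaceView {

    public MapSurfaceView(Context context) {
        super(context);
        setRenderer(new MapRenderer());
        setRenderMode(RENDERMODE_WHEN_DIRTY); // redraw only when requestRender() is called
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {   // screen event obtaining function
        int mode = changeMode(event);                  // event type judging function
        preformMode(mode, event);                      // touch event processing function
        requestRender();                               // refresh the displayed model
        return true;
    }

    private int changeMode(MotionEvent event) {
        // Placeholder: classify the touch as zoom, rotate or move, e.g. by pointer count.
        return event.getPointerCount() >= 2 ? 1 : 0;
    }

    private void preformMode(int mode, MotionEvent event) {
        // Placeholder: update the displayed three-dimensional map model accordingly.
    }

    /** Render subclass: in this embodiment, onSurfaceChanged initializes the OSG view,
        loadObject loads the model, and onDrawFrame renders it (placeholders here). */
    private static class MapRenderer implements GLSurfaceView.Renderer {
        @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
        @Override public void onSurfaceChanged(GL10 gl, int width, int height) { }
        @Override public void onDrawFrame(GL10 gl) { }
    }
}
```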
In some embodiments, the display unit 701, when configured to display, through the object rendering tool class, a three-dimensional map model based on an open source three-dimensional rendering OSG engine, is configured to:
acquiring first position information of a terminal;
and displaying the three-dimensional map model based on the open source three-dimensional rendering OSG engine through the target rendering tool class under the condition that the first position information meets the target condition.
In some embodiments, the updating unit 704, when being configured to update the displayed three-dimensional map model, is configured to either:
magnifying the displayed three-dimensional map model;
zooming out the displayed three-dimensional map model;
rotating the displayed three-dimensional map model;
and moving the displayed three-dimensional map model.
In some embodiments, the apparatus further comprises:
the call-back unit is used for calling back the register listener function from the interface event processing class in the OSG function through the click event listener function included in the target rendering tool class;
the obtaining unit 702 is further configured to, in response to monitoring the click operation on the three-dimensional map model through the registered listener function, obtain, through the event transparent transmission function, a click event corresponding to the click operation.
In some embodiments, the click event listener function is a setoncomodelclick listener function, the interface event handling class is a GUIEventHandler class, the registration listener function is an oncomodelclick listener function, and the event pass-through function is an oncomodelclicked function.
In some embodiments, the apparatus further comprises:
and the playing unit is used for playing the video data corresponding to the clicked object.
In some embodiments, the obtaining unit 702 is further configured to obtain the number of people in line corresponding to the clicked object;
the display unit 701 is further configured to display the obtained number of people in line;
wherein the number of people queued is determined based on the video data corresponding to the clicked object.
In some embodiments, the determination of the number of people in line comprises:
acquiring video data corresponding to a clicked object;
acquiring a video picture corresponding to the current time in the video data;
determining the number of faces included in the video picture, and taking the number of faces as the number of people in queue.
In some embodiments, the display unit 701 is further configured to display a first function option on the three-dimensional map model;
the determining unit 703 is further configured to determine, in response to a triggering operation on the first function option, a first time length based on the first position information of the terminal, the second position information of each candidate object, and the preset moving speed;
the determining unit 703 is further configured to determine a second duration based on the preset time period, the number of queuing people, the preset passing duration and the first duration, where each candidate object corresponds to two adjacent preset time periods respectively;
the determining unit 703 is further configured to determine a target object based on the first duration and the second duration;
the display unit 701 is further configured to display a route from the position where the terminal is located to the position where the target object is located on the three-dimensional map model.
In some embodiments, the determining unit 703, when configured to determine the first time duration based on the first location information of the terminal, the second location information of each candidate object, and the preset moving speed, is configured to:
determining the distance between the terminal and each candidate object based on the first position information and each second position information;
and determining the first time length based on the distance between the terminal and each candidate object and the preset moving speed.
In some embodiments, the determining unit 703, when configured to determine the second duration based on the preset time period, the number of people in the queue respectively corresponding to each candidate object in two adjacent preset time periods, the preset passing duration, and the first duration, is configured to:
determining the change rate of the number of queued people based on the preset time period and the number of queued people respectively corresponding to each candidate object in two adjacent preset time periods;
and determining a second time length based on the number of the queued people, the change rate of the number of the queued people, the preset passing time length and the first time length of each candidate object in a later preset time period of two adjacent preset time periods.
In some embodiments, the determining unit 703, when configured to determine the target object based on the first duration and the second duration, is configured to:
and determining the candidate object with the minimum sum of the first duration and the second duration as the target object.
In some embodiments, the display unit 701 is further configured to display a second function option on the three-dimensional map model;
the display unit 701 is further configured to display a first function entry in response to a trigger operation on the second function option;
the display unit 701 is further configured to display a first information filling interface in response to a trigger operation on the first function entry, where the first information filling interface is configured to provide a function of reporting information of a lost object;
the device also includes:
and the release unit is used for responding to the submission operation on the first information filling interface and releasing the lost object information.
In some embodiments, the display unit 701 is further configured to, in response to receiving a missing object cue fed back based on the missing object information, display a position of the missing object corresponding to the missing object cue on the three-dimensional map model.
In some embodiments, the display unit 701 is further configured to display, on the three-dimensional map model, a route from the location where the terminal is located to the location of the lost object in response to receiving a clue of the lost object fed back based on the information of the lost object.
In some embodiments, the display unit 701 is further configured to display a second function entry in response to a triggering operation of the second function option;
the display unit 701 is further configured to display a second information filling interface in response to a triggering operation on the second function entry, where the second information filling interface is configured to provide a function of uploading a clue of a lost object;
the issuing unit is further configured to issue a thread of the missing object in response to the submitting operation at the second information filling interface.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The present application further provides a terminal, referring to fig. 8, where fig. 8 is a schematic structural diagram of a terminal provided according to an embodiment of the present application. As shown in fig. 8, the terminal includes a processor 810, a memory 820 and a network interface 830, the memory 820 is used for storing computer program codes executable on the processor 810, the processor 810 is used for implementing a map data processing method provided by any embodiment of the present application when executing the computer program codes, and the network interface 830 is used for implementing an input and output function. In more possible implementations, the terminal may further include other hardware, which is not limited in this application.
The present application also provides a computer-readable storage medium, which may take various forms in different examples: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., a compact disk, a DVD, etc.), a similar storage medium, or a combination thereof. In particular, the computer readable medium may also be paper or another suitable medium on which the program is printed. The computer readable storage medium stores a computer program, and the computer program is executed by a processor to implement the map data processing method provided in any embodiment of the present application.
The present application further provides a computer program product comprising a computer program, which when executed by a processor implements the method for processing map data provided in any of the embodiments of the present application.
In this application, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (21)

1. A processing method of map data is applied to a terminal, and the method comprises the following steps:
displaying a three-dimensional map model based on an open source three-dimensional rendering OSG engine through a target rendering tool class;
responding to interactive operation on the three-dimensional map model, and acquiring a screen touch event corresponding to the interactive operation through a screen event acquisition function included in the target rendering tool class;
determining the event type of the screen touch event through an event type judgment function included in the target rendering tool class;
and updating the displayed three-dimensional map model through a touch event processing function corresponding to the event type.
2. The method of claim 1, wherein displaying the three-dimensional map model based on the open source three-dimensional rendering OSG engine through the object rendering tool class comprises:
starting a drawing thread of the target rendering tool class;
initializing an OSG engine through a view initialization function in the rendering method subclass of the target rendering tool class;
loading the three-dimensional map model based on the OSG through the resource loading function of the target rendering tool class;
rendering the three-dimensional map model through an effect rendering function in the rendering method subclass of the target rendering tool class;
and displaying the rendered three-dimensional map model.
3. The method according to claim 1 or 2, wherein the terminal uses an Android operating system, the target rendering tool class is a GLSurfaceView class, the screen event obtaining function is an onTouchEvent function, the event type determining function is a changeMode function, the touch event processing function is a preformMode function, the drawing thread is a GLThread, the rendering method subclass is a Render subclass, the view initializing function is an onSurfaceChanged function, the resource loading function is a loadObject function, and the effect rendering function is an onDrawFrame function.
4. The method of claim 1, wherein displaying the three-dimensional map model based on the open source three-dimensional rendering OSG engine through the object rendering tool class comprises:
acquiring first position information of the terminal;
and displaying a three-dimensional map model based on an open source three-dimensional rendering OSG engine through the target rendering tool class under the condition that the first position information meets a target condition.
5. The method of claim 1, wherein the updating the displayed three-dimensional map model comprises at least one of:
magnifying the displayed three-dimensional map model;
zooming out the displayed three-dimensional map model;
rotating the displayed three-dimensional map model;
and moving the displayed three-dimensional map model.
6. The method of claim 1, further comprising:
calling back a register listener function to an interface event processing class in an OSG function through a click event listener function included in the target rendering tool class;
and responding to the click operation on the three-dimensional map model monitored by the register listener function, and acquiring a click event corresponding to the click operation by an event transparent transmission function.
7. The method as claimed in claim 6, wherein the click event listener function is a setoncomodelclick listener function, the interface event handling class is a GUIEventHandler class, the registration listener function is an oncomodelclick listener function, and the event passthrough function is an oncomodelclicked function.
8. The method according to claim 6, wherein after the click event corresponding to the click operation is obtained through the event transparent transfer function, the method further comprises:
and playing the video data corresponding to the clicked object.
9. The method of claim 6, further comprising:
and acquiring the number of queuing people corresponding to the clicked object, and displaying the acquired number of queuing people, wherein the number of queuing people is determined based on the video data corresponding to the clicked object.
10. The method of claim 9, wherein the determining of the number of people in line comprises:
acquiring video data corresponding to a clicked object;
acquiring a video picture corresponding to the current time in the video data;
and determining the number of the faces included in the video picture, and taking the number of the faces as the number of the people in the queue.
11. The method of claim 1, further comprising:
displaying a first function option on the three-dimensional map model;
responding to the triggering operation of the first function option, and determining a first time length based on first position information of the terminal, second position information of each candidate object and a preset moving speed;
determining a second time length based on a preset time period, the number of queuing people of each candidate object in two adjacent preset time periods, a preset passing time length and the first time length;
determining a target object based on the first duration and the second duration;
and displaying a route from the position of the terminal to the position of the target object on the three-dimensional map model.
12. The method according to claim 11, wherein the determining a first time length based on the first position information of the terminal, the second position information of each candidate object and a preset moving speed comprises:
determining distances between the terminal and each candidate object based on the first position information and each second position information;
and determining the first time length based on the distance between the terminal and each candidate object and the preset moving speed.
13. The method of claim 11, wherein the determining the second duration based on the preset time period, the number of people in queue for each candidate object in two adjacent preset time periods, the preset passing duration and the first duration comprises:
determining the change rate of the number of queuing people based on the preset time periods and the number of queuing people of each candidate object in two adjacent preset time periods;
and determining the second time length based on the number of queuing people, the change rate of the number of queuing people, the preset passing time length and the first time length of each candidate object in a later preset time period of two adjacent preset time periods.
14. The method of claim 11, wherein determining a target object based on the first duration and the second duration comprises:
and determining the candidate object with the minimum sum of the first duration and the second duration as the target object.
15. The method of claim 1, further comprising:
displaying a second function option on the three-dimensional map model;
responding to the triggering operation of the second function option, and displaying a first function inlet;
responding to the triggering operation of the first function entrance, and displaying a first information filling interface, wherein the first information filling interface is used for providing a function of reporting the lost object information;
and issuing the lost object information in response to the submission operation on the first information filling interface.
16. The method of claim 15, further comprising:
in response to receiving a missing object clue fed back based on the missing object information, displaying a missing object position corresponding to the missing object clue on the three-dimensional map model.
17. The method of claim 16, further comprising:
in response to receiving a missing object cue fed back based on the missing object information, displaying a route from a location where the terminal is located to the location of the missing object on the three-dimensional map model.
18. The method of claim 15, wherein after displaying the second functional option on the three-dimensional map model, the method further comprises:
responding to the triggering operation of the second function option, and displaying a second function inlet;
responding to the triggering operation of the second function entrance, and displaying a second information filling interface, wherein the second information filling interface is used for providing a function of uploading a clue of a lost object;
and issuing a missing object clue in response to the submitting operation on the second information filling-in interface.
19. An apparatus for processing map data, the apparatus comprising:
the display unit is used for displaying a three-dimensional map model based on the open source three-dimensional rendering OSG engine through the target rendering tool class;
the acquisition unit is used for responding to the interactive operation on the three-dimensional map model and acquiring a screen touch event corresponding to the interactive operation through a screen event acquisition function included in the target rendering tool class;
the determining unit is used for determining the event type of the screen touch event through an event type judging function included in the target rendering tool class;
and the updating unit is used for updating the displayed three-dimensional map model through the touch event processing function corresponding to the event type.
20. A terminal, characterized in that the terminal comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the operations performed by the map data processing method according to any one of claims 1 to 18.
21. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program which, when executed by a processor, realizes an operation performed by the processing method of map data according to any one of claims 1 to 18.