CN112422653A - Scene information pushing method, system, storage medium and equipment based on location service - Google Patents
- Publication number
- CN112422653A (application number CN202011231654.7A)
- Authority
- CN
- China
- Prior art keywords
- scene
- information
- target
- pushing
- positioning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04L — Transmission of digital information, e.g. telegraphic communication:
- H04L67/55 — Push-based network services
- H04L51/18 — User-to-user messaging in packet-switching networks: commands or executable codes
- H04L67/131 — Protocols for games, networked simulations or virtual reality
- H04L67/52 — Network services specially adapted for the location of the user terminal
Abstract
The invention provides a scene information pushing method, system, storage medium, and device based on location services. A scene model is constructed from a depth map of the scene to be processed, and an information push range is determined from the scene model. The push range is monitored to confirm whether a target has entered it; if so, real-time scene information is pushed to the target according to the target's positioning information, and if not, no scene information is pushed to the target. The scene information is a scene graph of the area surrounding the target's real-time position. The invention can thus push scene information to a user according to the user's real-time position.
Description
Technical Field
The invention belongs to the technical field of information pushing, and particularly relates to a scene information pushing method, system, storage medium, and device based on location services.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Currently, location-based message pushing is widely applied in e-commerce, tourism, social networking, gaming, and similar areas; for example, when a person arrives in a city by train, their mobile phone receives travel information pushed by that city. Location-based push technology establishes a mechanism for actively pushing messages to users and improves the timeliness of message delivery. However, such techniques still have certain problems, such as inaccurate pushing caused by low positioning accuracy and message leakage caused by weak security, which remain to be solved in this direction.
Disclosure of Invention
The invention provides a scene information pushing method, system, storage medium, and device based on location services, aiming at solving the above problems.
According to some embodiments, the invention adopts the following technical scheme:
a scene information pushing method based on location service comprises the following steps:
constructing a scene model based on a depth map of a scene to be processed, and determining an information push range according to the scene model;
monitoring the information push range to confirm whether a target has entered the set range; if so, pushing real-time scene information to the target according to the target's positioning information, and if not, pushing no scene information to the target;
the scene information being a scene graph of the area surrounding the target's real-time position.
As an alternative embodiment, the specific process of constructing the scene model based on the depth map of the scene to be processed includes: converting the scene into a point cloud based on the depth information of the target scene, matching and stitching the point clouds to obtain a merged point cloud model, and assigning the color information of the corresponding RGB image to the point cloud to obtain the scene model.
As an alternative embodiment, the specific process of locating the target includes: partitioning the detection area into blocks in advance and collecting positioning information within the different sub-areas; clustering the positioning information in each sub-area to obtain a cluster center as the sub-area's position vector; calculating the Euclidean distance between the actual positioning information and the position vector of the sub-area where the position lies; if the calculated distance is smaller than a set threshold, the positioning is deemed correct; if it is larger than the threshold, the point is relocated.
As an alternative embodiment, the scene graph comprises position information of each entity target in the scene, and the target entity information is shown in a VR/AR form.
The specific process comprises the following steps: acquiring images of a plurality of regions in a scene through a plurality of cameras, and carrying out 3D target detection on each target entity in the images based on a plurality of acquired RGB images to obtain positions and space occupation of different targets so as to form corresponding scene graphs; and carrying out panoramic stitching on the multiple scene graphs, then pushing the scene graphs to a user terminal, and displaying the scene graphs in a VR/AR mode.
As an alternative embodiment, in the scene graph, each target position is marked and the target is labeled with a 3D bounding box; based on the target's position and bounding box, the actual spatial position and the actual three-dimensional space occupation of the target entity are solved through the conversion relation between pixels and three-dimensional coordinates, so as to provide each target entity's position and space occupation in the current scene.
As an alternative embodiment, the specific process of labeling the target with the 3D bounding box includes: estimating the target depth from two images, converting the depth map into a 3D point cloud, using VoteNet as a 3D detector to perform spatial 3D detection and obtain the target's position and space occupation, and solving the actual spatial size of the target's 3D bounding box and its distance to the user from the point-pair correspondence between pixels and spatial coordinates.
A scene information pushing system based on location service specifically comprises:
the scene model building module is configured to build a scene model based on a depth map of a scene to be processed, and determine an information push range according to the scene model;
the detection judging module is configured to detect the information pushing range and confirm whether a target enters the set information pushing range;
and the scene information pushing module is configured to push real-time scene information to the target according to the positioning information of the target if the target enters a set information pushing range, and not push the scene information to the target if no target enters the information pushing range or the target leaves the information pushing range.
As an alternative embodiment, the system further comprises a client device, the device comprising:
a positioning module configured to acquire position information of a user;
a communication module configured to communicate information with a server;
a display module configured to display the pushed scene information;
the processing module is configured to splice the scene information;
the input module is configured to acquire an input instruction of a user.
As an optional implementation mode, the system further comprises a positioning system, specifically comprises a Beidou positioning module and an indoor positioning module, wherein the Beidou positioning module is used for outdoor positioning, sending a Beidou signal to the client equipment and acquiring position information; the indoor positioning module is configured to provide position information when the Beidou signal cannot be received.
A computer-readable storage medium, in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor of a terminal device and to execute the steps in the method for pushing context information based on location service.
A terminal device comprising a processor and a computer readable storage medium, the processor being configured to implement instructions; the computer readable storage medium is used for storing a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the steps in the scene information pushing method based on the location service.
Compared with the prior art, the invention has the beneficial effects that:
the invention can actively push the scene information to the user according to the user position information, has the effect of 'instant sending', and has better real-time property.
According to the invention, by setting the identification area, based on whether the position information of the user meets the message pushing requirement, then the message is pushed, and the information is automatically emptied when the user leaves the set area, so that the effect of burning after reading is achieved, and the safety is high;
according to the invention, the scene graph is generated, and the scene information such as the target entity in the scene and the occupied position space of the target entity is pushed to the user, so that the user can know the scene information in all aspects and at multiple angles.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a system work flow diagram;
FIG. 2 is a schematic diagram of a scene three-dimensional model and a message push identification area;
fig. 3 is a flow chart of pushing scene information;
FIG. 4 is a schematic diagram illustrating a generation process of a scene graph;
FIG. 5 is a scene graph illustration;
fig. 6 is a block diagram of the system.
Detailed Description:
the invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The first embodiment is as follows:
a scene information pushing method based on location service can push scene information to a user according to the real-time location information of the user: after a user enters a set scene information pushing range, the system pushes real-time scene information to the user through equipment; and when the user leaves the set scene range, the message is not pushed to the user any more. The work flow of the invention is shown in figure 1:
1. establishing a scene model
First, a three-dimensional model of the target scene is established; the user can receive this model through the device. The three-dimensional model of the scene mainly comprises the fixed structures of the current scene, such as buildings, rivers, and trees, and does not contain moving or changing target entities such as people and vehicles. The model can therefore be built in advance by methods including, but not limited to, photogrammetry, lidar scanning, and visual three-dimensional reconstruction. The scene model is established mainly to show the range of the message-push area and the user's position, and the user can view the three-dimensional model of the scene from different angles on the device terminal.
Unlike three-dimensional reconstruction of an indoor scene or a single target, three-dimensional modeling of a large outdoor scene covers a wider area and is more computationally expensive. In the invention, an unmanned aerial vehicle carrying a three-line-array stereo camera is used to construct the scene's three-dimensional model and obtain a depth map of the target scene. Based on the depth information of the target scene, the scene is converted into a point cloud, on which point-cloud matching and stitching are performed. In the invention, 3DMatch is used for point cloud matching to obtain the matched and stitched point cloud model, and the color information of the corresponding RGB image is assigned to the point cloud to obtain the scene model.
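The depth-to-point-cloud conversion above can be sketched as a pinhole back-projection. The intrinsics (`fx`, `fy`, `cx`, `cy`), the helper name, and the tiny synthetic depth map are illustrative assumptions, not values from the patent:

```python
import numpy as np

def depth_to_colored_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map into a colored 3D point cloud.

    depth: (H, W) array of depths in metres; rgb: (H, W, 3) colors.
    fx, fy, cx, cy: pinhole camera intrinsics (assumed known).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pixel column -> metric X
    y = (v - cy) * z / fy          # pixel row    -> metric Y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0       # drop pixels with no depth reading
    return points[valid], colors[valid]

# Tiny synthetic example: a 2x2 depth map, everything at 1 m.
depth = np.ones((2, 2))
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
pts, cols = depth_to_colored_cloud(depth, rgb, fx=100.0, fy=100.0, cx=1.0, cy=1.0)
```

In a full pipeline the matching and stitching step (3DMatch in the patent) would then align clouds produced this way from different viewpoints before the colors are merged into the final scene model.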
The message pushing range can be defined according to the scene model. For example, in this embodiment, according to the boundary of the scene model, a certain distance is extended outwards to obtain the message pushing range, as shown in fig. 2.
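Extending the model boundary outward by a set distance can be illustrated with a simple axis-aligned sketch; the boundary shape, the margin value, and the helper names are hypothetical, since the patent does not fix the geometry of the push range:

```python
import numpy as np

def push_range_from_model(model_points, margin):
    """Axis-aligned bounding box of the scene model, extended outward
    by `margin` on every side, used as the message-push range."""
    lo = model_points.min(axis=0) - margin
    hi = model_points.max(axis=0) + margin
    return lo, hi

def in_push_range(pos, lo, hi):
    """True if a position lies inside the extended push range."""
    return bool(np.all(pos >= lo) and np.all(pos <= hi))

# A 10 m x 8 m scene footprint, extended outward by 2 m.
scene = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
lo, hi = push_range_from_model(scene, margin=2.0)
inside = in_push_range(np.array([-1.0, 4.0]), lo, hi)
outside = in_push_range(np.array([13.0, 4.0]), lo, hi)
```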
2. Real-time target location

In this embodiment, real-time positioning of the target may be achieved by a device (client device) held by the user.
The device contains a high-precision Beidou positioning module and an indoor positioning module, enabling seamless, accurate positioning both indoors and outdoors. Outdoors, the device receives Beidou signals and automatically prefers the Beidou positioning module for position information; indoors, where Beidou signals cannot be received, it automatically switches to indoor positioning. Reference points are established indoors and their positions are precisely measured, thereby bringing the high-precision outdoor Beidou positioning indoors. The indoor positioning module may use WiFi, Bluetooth, ultra-wideband (UWB), radio-frequency identification (RFID), magnetometers, accelerometers, gyroscopes, and so on, and the device may contain a combination of one or more of these. In the invention, to achieve centimeter-level high-precision positioning, an optimized UWB positioning technique is used for indoor positioning.
In this embodiment, to detect the user's real-time position, UWB positioning base stations are installed near the indoor ceiling. When a user carrying a tag enters the signal coverage of a base station, the tag automatically establishes contact with the base station, and an improved TDOA ranging and positioning algorithm is used to compute the position.
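The patent does not disclose the details of its improved TDOA algorithm. As a generic illustration only, a plain Gauss-Newton least-squares TDOA solver over range differences might look like the following; the function name, anchor layout, and all numbers are assumptions:

```python
import numpy as np

def tdoa_locate(anchors, range_diffs, guess, iters=30):
    """Solve a 2-D TDOA fix by Gauss-Newton least squares.

    anchors:     (N, 2) base-station positions, anchor 0 is the reference.
    range_diffs: (N-1,) measured (d_i - d_0), i.e. TDOA times signal speed.
    """
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)          # distances to anchors
        r = (d[1:] - d[0]) - range_diffs                 # residuals
        # Jacobian of (d_i - d_0) with respect to x.
        J = (x - anchors[1:]) / d[1:, None] - (x - anchors[0]) / d[0]
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Synthetic check: four ceiling base stations, tag actually at (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true_pos, axis=1)
est = tdoa_locate(anchors, d[1:] - d[0], guess=[5.0, 5.0])
```

With noise-free measurements and a reasonable initial guess this converges to the true position; a production solver would add outlier rejection and the patent's convergence check described below.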
Unlike existing TDOA positioning algorithms, in this embodiment a positioning convergence check is added to prevent a positioning error from triggering a mistaken message push: the indoor area is partitioned into sub-areas in advance, and positioning information is collected within each sub-area. The positioning information in each sub-area is clustered, and the cluster center is taken as the sub-area's position vector. The Euclidean distance between an actual positioning fix and the position vector of the sub-area containing it is then calculated; if the distance is smaller than a set threshold, the positioning is deemed correct, and if it is larger than the threshold, the result is fed back to the user and the point is relocated.
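The convergence check above (cluster centers per sub-area, then a Euclidean-distance threshold) can be sketched as follows. Per-sub-area means stand in for the cluster centers here, since the patent does not name the clustering method; the sub-area labels and threshold are illustrative:

```python
import numpy as np

def subarea_centers(fixes_by_subarea):
    """Cluster center (here simply the mean) of collected fixes per sub-area,
    used as each sub-area's position vector."""
    return {k: np.mean(v, axis=0) for k, v in fixes_by_subarea.items()}

def fix_is_plausible(fix, subarea, centers, threshold):
    """Accept the fix if its Euclidean distance to the position vector of
    its sub-area is below the threshold; otherwise trigger relocation."""
    return float(np.linalg.norm(fix - centers[subarea])) < threshold

# Fixes collected in advance within two hypothetical sub-areas.
collected = {
    "A": np.array([[1.0, 1.0], [1.2, 0.8], [0.8, 1.2]]),
    "B": np.array([[5.0, 5.0], [5.1, 4.9]]),
}
centers = subarea_centers(collected)
ok = fix_is_plausible(np.array([1.1, 1.0]), "A", centers, threshold=0.5)
bad = fix_is_plausible(np.array([3.0, 3.0]), "A", centers, threshold=0.5)
```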
3. Message push judgment
As shown in fig. 3, whether the user has entered the message push range is determined from the user's real-time position. If the user enters the push range, a scene-information push notification is sent, and whether the push is accepted is determined by the user's choice. When the user leaves the push area, pushing stops automatically and previously pushed messages are cleared, achieving a "push upon arrival, burn after reading" effect.
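The push-while-inside / clear-on-leave ("burn after reading") behaviour can be summarized in a minimal state sketch; the class name, the range callback, and the graph labels are hypothetical:

```python
class ScenePushSession:
    """Minimal sketch of the push-judgment loop: scene graphs are pushed
    while the user is in range and cleared the moment they leave."""

    def __init__(self, in_range_fn):
        self.in_range = in_range_fn   # position -> bool
        self.messages = []            # pushed scene graphs held on the device
        self.inside = False

    def update(self, pos, scene_graph, accept=True):
        now_inside = self.in_range(pos)
        if now_inside and accept:
            self.messages.append(scene_graph)   # real-time push while inside
        if not now_inside and self.inside:
            self.messages.clear()               # burn after reading on exit
        self.inside = now_inside
        return list(self.messages)

# Push range: x between 0 and 10 (stand-in for the real range test).
session = ScenePushSession(lambda p: 0 <= p[0] <= 10)
m1 = session.update((5.0,), "graph-1")
m2 = session.update((6.0,), "graph-2")
m3 = session.update((15.0,), "graph-3")   # user leaves: messages cleared
```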
4. Generating a scene graph
The message pushed to the user is mainly the scene information, i.e. the scene graph, of the area surrounding the user's located position.
Unlike the panorama, the scene graph in the present embodiment includes location information of each entity target in the scene, and can show target entity information in the form of VR.
In this embodiment, a camera is used to acquire scene information and generate a scene graph, and a specific implementation flow is shown in fig. 4: firstly, images of the scene are collected through a plurality of cameras, 3D target detection is carried out on each target entity in the images based on a plurality of collected RGB images, the positions and space occupation of different target entities are obtained, and a scene graph is formed. And carrying out panoramic stitching on the plurality of scene graphs, then pushing the scene graphs to a user terminal, and displaying the scene graphs to a user in a VR (virtual reality) mode.
Unlike current location-based push technology, in the pushed scene graph each target position is marked and each target is detected with a 3D bounding box. Once the target entity's position and 3D detection box have been obtained in the image, the entity's actual spatial position and three-dimensional space occupation are solved through the conversion relation between pixels and three-dimensional coordinates, and the user can view each target entity's position in the current scene, the space it occupies, and its distance from the user.
In this embodiment, detecting a target's spatial 3D box from multiple images is the key technique for determining its space occupation, and an end-to-end 3D detection method is provided. First, two images are used to estimate the target depth; the depth map is then converted into a 3D point cloud; VoteNet is then used as the 3D detector to perform spatial 3D detection, yielding the target's position and space occupation. Finally, the actual spatial size of the target's 3D bounding box and its distance to the user are solved from the point-pair correspondence between pixels and spatial coordinates.
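The last step, recovering a detected box's metric size and distance from pixel coordinates, can be illustrated with a single-depth pinhole sketch. This is a simplification of the stereo-plus-VoteNet pipeline described above, and the helper name, intrinsics, and box coordinates are all assumptions:

```python
import numpy as np

def box_metric_size(px_min, px_max, depth, fx, fy, cx, cy):
    """Metric width/height of a detected box face at a given depth,
    plus its straight-line distance to the camera (i.e. the user side).

    px_min, px_max: opposite image corners (u, v) of the detection box.
    """
    def back_project(u, v, z):
        # Pixel -> 3D via the pinhole model: the pixel/space point pair.
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    p0 = back_project(*px_min, depth)
    p1 = back_project(*px_max, depth)
    width = abs(p1[0] - p0[0])
    height = abs(p1[1] - p0[1])
    center = (p0 + p1) / 2.0
    distance = float(np.linalg.norm(center))
    return width, height, distance

# A 200x200-pixel detection box centred in the image, 4 m away.
w, h, dist = box_metric_size((300, 200), (500, 400), depth=4.0,
                             fx=800.0, fy=800.0, cx=400.0, cy=300.0)
```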
5. Scene information push
Based on the user's position information, the user's actual position coordinates are converted into scene coordinates. When scene information is pushed, the user is located at a position in the scene graph; the user can view the scene graph through the terminal over a full 720-degree range, each target entity in the scene graph carries its own position information and space occupation, and the user can obtain each entity's position, space occupation, and distance, as shown in fig. 5.
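Converting the user's actual position coordinates into scene coordinates can be sketched as a 2-D similarity transform between the positioning frame and the scene-graph frame; the origin, rotation, and scale are illustrative calibration parameters not given in the patent:

```python
import numpy as np

def world_to_scene(pos, origin, theta, scale=1.0):
    """Convert a user's world coordinates (e.g. a Beidou/UWB-derived local
    fix) into scene-graph coordinates.

    origin: world position of the scene origin; theta: rotation (radians)
    between the two frames; scale: scene units per metre.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return scale * (R @ (np.asarray(pos, dtype=float) - origin))

# Aligned frames (no rotation), scene origin at world point (10, 5).
scene_xy = world_to_scene([12.0, 7.0], origin=np.array([10.0, 5.0]), theta=0.0)
```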
6. Scenario message logout
After detecting that the user has left the push area, a reminder notification is sent to inform the user of the impending logout, and the scene graph is cleared.
The method of the first embodiment comprises position acquisition, message pushing, and message post-processing. This embodiment mainly addresses position acquisition, in particular positioning accuracy. To achieve high-precision positioning, the Beidou positioning module is adopted for high-precision outdoor positioning of the target, while the UWB positioning algorithm is optimized indoors to reduce indoor positioning error. During message pushing, the scene graph is used as the push content, making the messages real-time and comprehensive. In message post-processing, a burn-after-reading mechanism is adopted, improving the security of message pushing and preventing leakage of scene information.
Example two:
the scene information pushing system based on the location service specifically comprises:
the scene model building module is configured to build a scene model based on a depth map of a scene to be processed, and determine an information push range according to the scene model;
the detection judging module is configured to detect the information pushing range and confirm whether a target enters the set information pushing range;
and the scene information pushing module is configured to push real-time scene information to the target according to the positioning information of the target if the target enters a set information pushing range, and not push the scene information to the target if no target enters the information pushing range or the target leaves the information pushing range.
As shown in fig. 6, the system further comprises a client device, said device comprising:
a positioning module configured to acquire position information of a user;
a communication module configured to communicate information with a server;
a display module configured to display the pushed scene information;
the processing module is configured to splice the scene information;
the input module is configured to acquire an input instruction of a user.
The system also comprises a positioning system, in particular a Beidou positioning module and an indoor positioning module, wherein the Beidou positioning module is used for outdoor positioning, sending a Beidou signal to the client equipment and acquiring position information; the indoor positioning module is configured to provide position information when the Beidou signal cannot be received.
Of course, in some embodiments, the display module may be, but is not limited to, a liquid crystal (LCD) display, an LED display, or the like; the positioning module mainly comprises a Beidou positioning module and an indoor positioning module, where the Beidou positioning module is mainly a Beidou signal-receiving chip or receiver, and the indoor positioning module may be a UWB module, WiFi module, Bluetooth module, RFID module, or the like; the communication module is mainly used to receive scene information and may be 4G/5G, WiFi, or the like; the storage module is mainly used to store running programs and cached data and may be RAM, ROM, a hard disk, a USB flash disk, or the like; the input module is the part through which the user inputs instructions and may be a touch screen or an on-screen keyboard; the power supply module mainly supplies power to the device terminal; the processing module is a processor with a certain computing capability, including but not limited to processing units such as ARM, FPGA, and GPU, and can perform the computation and command execution involved in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.
Claims (10)
1. A scene information pushing method based on location service is characterized in that: the method comprises the following steps:
constructing a scene model based on a depth map of a scene to be processed, and determining an information push range according to the scene model;
detecting the information push range, confirming whether a target enters the set information push range, if so, pushing real-time scene information to the target according to the positioning information of the target, and if not, not pushing the scene information to the target;
the scene information is a scene graph in a certain surrounding area based on the real-time position of the target.
2. The method as claimed in claim 1, wherein the method for pushing context information based on location-based service comprises: based on the depth map of the scene to be processed, the specific process of constructing the scene model comprises the following steps: and converting the scene into point cloud information based on the depth information of the target scene, matching and splicing the point cloud information to obtain a point cloud model after matching and splicing, and giving color information in the corresponding RGB image to the point cloud to obtain the scene model.
3. The location-service-based scene information pushing method according to claim 1, characterized in that locating the target specifically comprises: partitioning the detection area into sub-areas in advance and collecting positioning samples in each sub-area; clustering the positioning samples of each sub-area and taking the cluster center as that sub-area's position vector; computing the Euclidean distance between an actual positioning fix and the position vector of the sub-area it falls in; if the computed distance is smaller than a set threshold, the fix is considered correct, and if it is larger than the threshold, the point is relocated.
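The plausibility check of claim 3 reduces to one distance comparison per fix. A sketch under stated assumptions: the per-sub-area clustering step is stood in for by a simple mean (the patent does not name a clustering algorithm), and the threshold value is hypothetical.

```python
import numpy as np

def region_center(samples):
    """Cluster center of a sub-area's collected fixes.
    The mean is used here as a stand-in for the unspecified clustering step."""
    return np.mean(np.asarray(samples), axis=0)

def is_fix_plausible(fix, center, threshold):
    """Claim 3 check: accept the fix if its Euclidean distance to the
    sub-area's position vector is below the threshold; otherwise the
    caller should trigger relocation."""
    return np.linalg.norm(np.asarray(fix) - center) < threshold

samples = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.1]])  # prior fixes in one sub-area
center = region_center(samples)
print(is_fix_plausible([0.1, 0.0], center, threshold=0.5))  # fix near the center: accepted
print(is_fix_plausible([5.0, 5.0], center, threshold=0.5))  # outlier fix: relocate
```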
4. The location-service-based scene information pushing method according to claim 1, characterized in that the scene graph contains the position information of every entity target in the scene, and the target entity information is presented in VR/AR form; the specific process comprises: capturing images of multiple regions of the scene with multiple cameras; performing 3D target detection on each target entity in the captured RGB images to obtain the positions and spatial occupancy of the different targets and form the corresponding scene graphs; and panoramically stitching the multiple scene graphs, pushing them to the user terminal, and displaying them in VR/AR form.
5. The location-service-based scene information pushing method according to claim 1, characterized in that, in the scene graph, each target position is marked and each target is labeled with a stereo (3D bounding) box; based on the target's position and bounding box, the actual spatial position and three-dimensional occupancy of the target entity are obtained through the conversion relation between pixel and three-dimensional coordinates, yielding the position and spatial occupancy of every target entity in the current scene.
6. The location-service-based scene information pushing method according to claim 1, characterized in that labeling a target with a stereo box specifically comprises: estimating the target depth from two images, converting the depth map into a 3D point cloud, performing spatial 3D detection with VoteNet as the 3D detector to obtain the target's position and spatial occupancy, and computing the actual size of the target's 3D box and its distance to the user from the point-pair correspondence between pixel and spatial coordinates.
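Once the detector has produced a 3D box in camera coordinates, the "actual size and distance to the user" of claims 5 and 6 follow directly from the box corners. A minimal sketch, assuming the 8 corners are already expressed in metric camera coordinates (the output of the pixel-to-space point-pair step; the detector itself is not reproduced here):

```python
import numpy as np

def box_size_and_range(corners):
    """Given the 8 corners of a detected 3D bounding box in camera
    coordinates, return the box's (w, h, d) extents and the distance
    from the box center to the camera/user."""
    corners = np.asarray(corners, dtype=float)
    extents = corners.max(axis=0) - corners.min(axis=0)   # per-axis size
    distance = np.linalg.norm(corners.mean(axis=0))       # range to box center
    return extents, distance

# axis-aligned unit cube centered 3 m in front of the camera
cube = np.array([[x, y, z + 3.0]
                 for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-0.5, 0.5)])
ext, dist = box_size_and_range(cube)
print(ext)   # box is 1 m on each side
print(dist)  # box center is 3 m away
```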
7. A location-service-based scene information pushing system, characterized in that it specifically comprises:
a scene model building module configured to build a scene model based on a depth map of the scene to be processed and determine an information push range according to the scene model;
a detection and judgment module configured to monitor the information push range and confirm whether a target has entered the set range;
and a scene information pushing module configured to push real-time scene information to the target, according to the target's positioning information, when the target has entered the set information push range, and to push no scene information when no target is in the range or the target has left it.
8. The location-service-based scene information pushing system according to claim 7, characterized in that the system further comprises a client device, the device comprising:
a positioning module configured to acquire position information of a user;
a communication module configured to communicate information with a server;
a display module configured to display the pushed scene information;
a processing module configured to stitch the scene information;
and an input module configured to acquire the user's input instructions.
9. The location-service-based scene information pushing system according to claim 7, characterized in that the system further comprises a positioning system, specifically a Beidou positioning module and an indoor positioning module; the Beidou positioning module is used for outdoor positioning, sending Beidou signals to the client device to obtain position information, and the indoor positioning module is configured to provide position information when no Beidou signal can be received.
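The outdoor/indoor handover of claim 9 is a simple fallback: prefer the Beidou fix, and use the indoor module only when no satellite signal is received. A sketch with hypothetical fix values:

```python
def get_position(beidou_fix, indoor_fix):
    """Claim 9 fallback: return the Beidou (outdoor) fix when one is
    available, otherwise fall back to the indoor positioning module.
    A fix of None models 'no Beidou signal received'."""
    return beidou_fix if beidou_fix is not None else indoor_fix

print(get_position((36.1, 120.4), None))  # outdoors: Beidou fix wins
print(get_position(None, (3.2, 7.8)))     # indoors: indoor module takes over
```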
10. A terminal device, characterized in that it comprises a processor and a computer-readable storage medium, the processor being configured to implement instructions and the computer-readable storage medium being configured to store a plurality of instructions adapted to be loaded by a processor to execute the steps of the location-service-based scene information pushing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011231654.7A CN112422653A (en) | 2020-11-06 | 2020-11-06 | Scene information pushing method, system, storage medium and equipment based on location service |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112422653A true CN112422653A (en) | 2021-02-26 |
Family
ID=74782036
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011231654.7A Pending CN112422653A (en) | 2020-11-06 | 2020-11-06 | Scene information pushing method, system, storage medium and equipment based on location service |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112422653A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113434764A (en) * | 2021-06-29 | 2021-09-24 | 青岛海尔科技有限公司 | Content pushing method and device, storage medium and electronic device |
CN114500971A (en) * | 2022-02-12 | 2022-05-13 | 北京蜂巢世纪科技有限公司 | Stadium 3D panoramic video generation method and device based on data sharing, head-mounted display equipment and medium |
CN115473934A (en) * | 2022-08-04 | 2022-12-13 | 广州市明道文化产业发展有限公司 | Multi-role decentralized and centralized text travel information pushing method and device based on event triggering |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107172205A (en) * | 2017-06-29 | 2017-09-15 | 腾讯科技(深圳)有限公司 | Pushed information processing method, mobile terminal and computer-readable storage medium |
CN107864225A (en) * | 2017-12-21 | 2018-03-30 | 北京小米移动软件有限公司 | Information-pushing method, device and electronic equipment based on AR |
CN108171748A (en) * | 2018-01-23 | 2018-06-15 | 哈工大机器人(合肥)国际创新研究院 | A kind of visual identity of object manipulator intelligent grabbing application and localization method |
CN108763455A (en) * | 2018-05-25 | 2018-11-06 | 薛文迪 | A kind of service recommendation method and system based on AR real scene navigations |
CN109040289A (en) * | 2018-08-27 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | Interest point information method for pushing, server, terminal and storage medium |
CN109995799A (en) * | 2017-12-29 | 2019-07-09 | 广东欧珀移动通信有限公司 | Information-pushing method, device, terminal and storage medium |
CN111581547A (en) * | 2020-06-04 | 2020-08-25 | 浙江商汤科技开发有限公司 | Tour information pushing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210226 |