CN111934896B - Elevated road terminal user identification method and device and computing equipment - Google Patents
- Publication number
- CN111934896B (application CN201910395543.0A)
- Authority
- CN
- China
- Prior art keywords
- scene
- elevated
- sampling point
- cell
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H04L41/0893—Assignment of logical groups to network elements
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiment of the invention relates to the technical field of communications, and discloses an elevated road terminal user identification method and device and a computing device. The method comprises the following steps: establishing an elevated scene analysis model comprising a scene address fingerprint library and a scene coverage cell library; calculating the speeds of all sampling points of all terminal users in the elevated scene through multi-dimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal users; counting, based on the sampling point speeds, the proportion of each terminal user's sampling points that meets the elevated scene driving speed limit standard; and identifying the upper-layer and lower-layer terminal users in the elevated scene from the counted proportions, according to the different sampling point proportion standards of upper-layer and lower-layer terminal users in the elevated scene. In this way, the embodiment of the invention can automatically and efficiently identify the upper-layer and lower-layer terminal users in an elevated scene.
Description
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a method and a device for identifying an elevated road terminal user and computing equipment.
Background
With the rapid development of urban construction, urban roads are being built faster and faster, and elevated roads, as an important part of dense urban road networks, have always been a key point and a difficulty of wireless network optimization. Because an urban viaduct is divided into an upper-layer structure and a lower-layer structure, and the sites along the road are affected by physical conditions, user conditions and other factors, existing routine drive-test optimization cannot completely cover all elevated-scene roads, and in particular cannot take into account the coverage requirements of the upper and lower layers of the elevated scene separately. A targeted optimization strategy is therefore lacking, which easily leads to coverage differences between the upper-layer and lower-layer networks; problems such as weak coverage and untimely handover occur on some road sections, affecting user perception.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide an elevated road end user identification method, apparatus and computing device, which overcome or at least partially solve the above problems.
According to an aspect of an embodiment of the present invention, there is provided an elevated road end user identification method, the method including:
establishing an elevated scene analysis model comprising a scene address fingerprint library and a scene coverage cell library;
calculating all sampling point speeds of all terminal users in the overhead scene through the multi-dimensional associated mapping of the scene address fingerprint library, the scene coverage cell library and the terminal users;
counting the sampling point occupation ratio which accords with the elevated scene driving speed limit standard in all the sampling points of the terminal user based on the sampling point speed;
and identifying the upper and lower layer terminal users in the elevated scene according to the statistical sampling point occupation ratio based on different sampling point occupation ratio standards of the upper and lower layer terminal users in the elevated scene.
In an optional manner, in the establishing of the overhead scene analysis model including the scene address fingerprint database and the scene coverage cell database, the scene address fingerprint database is generated as follows:
and acquiring a grid model simulation address through propagation model address simulation calculation, and integrating the grid model simulation address and grid index mapping to generate the scene address fingerprint database.
In an optional manner, the obtaining a grid model simulation address through a propagation model address simulation calculation further includes:
rasterizing layer information of the scene;
obtaining the characteristic value $x_i'$ of the i-th grid through propagation model address simulation;
calculating the characteristic value $x_i''$ of the MR for the i-th grid;
calculating, based on the cosine distance method, the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$, and judging the grid position to which the MR belongs, so as to obtain the grid model simulation address.
In an optional manner, the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$ is calculated based on the cosine distance method as
$$D = 1 - \frac{\sum_{i=1}^{n} x_i' x_i''}{\sqrt{\sum_{i=1}^{n} (x_i')^2}\,\sqrt{\sum_{i=1}^{n} (x_i'')^2}}$$
where $x_i'$ represents the characteristic value of the i-th grid of the scene address fingerprint library, $x_i''$ represents the characteristic value of the sampling point of the i-th grid to be evaluated, and $n$ represents the total number of grids.
In an optional manner, the grid index mapping includes:
counting, in each grid, the level difference δ between the measured main-cell level and the simulated main-cell level, and counting, for the test data in the beacon grid, the level differences ρ1, ρ2 and ρ3 between the measured and simulated levels of the first, second and third strong neighbor cells;
assigning compensation values to the main-cell level and the neighbor-cell levels in the scene address fingerprint library respectively, wherein the main-cell level is compensated by δ, and the neighbor-cell compensation comprises ρ1 for the first strong neighbor cell, ρ2 for the second strong neighbor cell and ρ3 for the third strong neighbor cell.
In an optional manner, in the establishing of the overhead scene analysis model including the scene address fingerprint library and the scene coverage cell library, the scene coverage cell library is generated by:
screening grids of the elevated main road, and selecting, from the screened grids, a cell whose road vertical distance and grid sampling point proportion meet preset conditions as an elevated main road cell, wherein the road vertical distance is the vertical distance from the longitude and latitude of a cell around the elevated road to the elevated main road;
screening the elevated entrance and exit grids, and selecting a cell with the sampling point proportion, the average RSRP and the switching success rate meeting preset conditions from the screened grids as an elevated entrance and exit cell;
and establishing a scene coverage cell library based on the elevated main trunk road cell and the elevated entrance and exit cell.
In an optional manner, the multidimensional association mapping between the scene address fingerprint base, the scene coverage cell base and the end user includes:
the scene address fingerprint database, the scene coverage cell database and the multi-dimensional correlation of the terminal user information, wherein the correlation information comprises the longitude and latitude of each sampling point of each terminal user in the elevated scene.
In an optional manner, the sampling point speed V = S/T, where S represents the straight-line distance traveled by the end user between its neighboring sampling points P1(x1, y1) and P2(x2, y2), x and y represent the longitude and latitude of a sampling point respectively, and T represents the time difference between the end user's neighboring sampling points P1 and P2.
In an optional manner, before the step of counting the fraction of sampling points, which meet the elevated scene driving speed limit standard, in all sampling points of the end user based on the sampling point speed, the method further includes:
screening a scene address fingerprint database and terminal users with sampling point speed meeting preset conditions from the terminal users occupying the scene coverage cell database;
the counting, based on the sampling point speed, of the proportion of sampling points meeting the elevated scene driving speed limit standard among all the sampling points of the terminal user is specifically as follows:
and counting the sampling point occupation ratio which accords with the overhead scene driving speed limit standard in all the sampling points of the terminal user which accords with the preset condition based on the sampling point speed.
In an optional manner, the meeting of the scene address fingerprint library with preset conditions is:
and if the grid deviation between the scene address fingerprint database and the elevated scene is not greater than a preset value, the scene address fingerprint database accords with a preset condition.
According to another aspect of embodiments of the present invention, there is provided an elevated road end user identification apparatus including:
the model establishing module is used for establishing an elevated scene analysis model comprising a scene address fingerprint library and a scene coverage cell library;
the speed calculation module is used for calculating all sampling point speeds of all terminal users in the overhead scene through the multi-dimensional associated mapping of the scene address fingerprint library, the scene coverage cell library and the terminal users;
the sampling point proportion counting module is used for counting the proportion of sampling points which accord with the elevated scene driving speed limit standard in all the sampling points of the terminal user based on the sampling point speed;
and the user identification module is used for identifying the upper and lower layer terminal users in the elevated scene according to the statistical sampling point proportion based on different sampling point proportion standards of the upper and lower layer terminal users in the elevated scene.
According to another aspect of embodiments of the present invention, there is provided a computing device including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the elevated road end user identification method as described above.
According to another aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to execute the method for identifying an elevated road end user as described above.
According to the embodiment of the invention, the sampling point speeds of a user are calculated through the multidimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal user, and the proportion of the terminal user's sampling points that meet the elevated scene driving speed limit standard is counted based on the sampling point speeds. The characteristics of upper-layer and lower-layer users in the elevated scene are thereby separated, so that the upper-layer and lower-layer users correspond to different sampling point proportion standards, and the upper-layer and lower-layer terminal users in the elevated scene are accurately identified based on the sampling point proportions. In this way, automatic and differentiated evaluation and analysis of the upper-layer and lower-layer networks in the elevated scene can be realized, corresponding optimization strategies can be formulated, and the network quality of the elevated scene can be improved.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and the embodiments of the present invention can be implemented according to the content of the description in order to make the technical means of the embodiments of the present invention more clearly understood, and the detailed description of the present invention is provided below in order to make the foregoing and other objects, features, and advantages of the embodiments of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating an elevated road end user identification method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating an elevated road end user identification method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of an elevated road end user identification device provided by an embodiment of the invention;
fig. 4 is a schematic structural diagram of a computing device provided in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a flowchart of an elevated road end user identification method provided by an embodiment of the present invention, which is applied to a computing device, such as a server in a communication network. As shown in fig. 1, the method comprises the steps of:
step 110: and establishing an elevated scene analysis model comprising a scene address fingerprint library and a scene coverage cell library.
In this step, based on a big data platform, an elevated scene map layer and a base station database, the scene address fingerprint library is obtained by combining propagation model address simulation with raster algorithm mapping (for example, 5 m × 5 m raster mapping), and the elevated scene analysis model is then established by combining it with the scene coverage cell library. The scene coverage cell library can be obtained through main road cell identification and scene entrance/exit cell identification.
Step 120: and calculating all sampling point speeds of all terminal users in the overhead scene through the multidimensional associated mapping of the scene address fingerprint library, the scene coverage cell library and the terminal users.
In this step, the multidimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal users includes the multidimensional association of the scene address fingerprint library, the scene coverage cell library and the terminal user information, and the association information includes the longitude and latitude of each sampling point of each terminal user in the elevated scene. In other words, the longitude and latitude of each sampling point of each terminal user in the elevated scene can be obtained through this multidimensional association mapping, and the sampling point speed is then calculated from the longitude and latitude. The multidimensional association mapping is used to determine the positions of a user's sampling points, so that the user's moving speed can be calculated with the sampling point speed formula; because the sampling point positions are determined by the association mapping, whether a given sampling point belongs to an elevated-road user can later be determined accurately, which avoids mistakenly including users around the elevated road in the subsequent calculation.
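As an illustrative sketch only (not part of the patent text), the association can be thought of as a join of per-user MR samples against the fingerprint library and the coverage cell library; the column names (user_id, time, cell_id, grid_id, lon, lat) and the use of pandas are assumptions made for this example.

```python
import pandas as pd

def associate_samples(mr_samples: pd.DataFrame,
                      fingerprint: pd.DataFrame,
                      coverage_cells: set) -> pd.DataFrame:
    """Attach a longitude/latitude to each MR sampling point via the scene address
    fingerprint library (grid_id -> lon/lat), and keep only the samples served by
    cells contained in the scene coverage cell library."""
    located = mr_samples.merge(fingerprint[["grid_id", "lon", "lat"]],
                               on="grid_id", how="inner")
    return located[located["cell_id"].isin(coverage_cells)]
```

The resulting table gives, per terminal user, the time-ordered longitude and latitude of each sampling point, from which the sampling point speed below is computed.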
Specifically, the sampling point speed can be calculated by the formula V = S/T, where S represents the straight-line distance traveled by the end user between its neighboring sampling points P1(x1, y1) and P2(x2, y2), x and y represent the longitude and latitude of the sampling points respectively, and T represents the time difference between the end user's neighboring sampling points P1 and P2.
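A minimal sketch of this calculation (an assumption-based illustration rather than the patent's implementation): S is approximated with the haversine formula and V = S/T is returned in km/h; the function names are made up for the example.

```python
import math

def straight_line_distance_m(p1, p2):
    """Approximate straight-line distance S in meters between two sampling points
    given as (longitude, latitude) pairs, using the haversine formula."""
    lon1, lat1 = map(math.radians, p1)
    lon2, lat2 = map(math.radians, p2)
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))  # Earth radius ~6371 km

def sampling_point_speed_kmh(p1, t1_s, p2, t2_s):
    """Speed V = S / T between two neighboring sampling points; timestamps in seconds."""
    s = straight_line_distance_m(p1, p2)
    t = abs(t2_s - t1_s)
    return (s / t) * 3.6 if t > 0 else 0.0
```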
Step 130: and counting the sampling point proportion which accords with the elevated scene driving speed limit standard in all the sampling points of the terminal user based on the sampling point speed.
For users on the upper and lower layers of the elevated road, the characteristics of upper-layer and lower-layer users in the elevated scene are separated so that the two correspond to different comparison standards. For example, the proportion of sampling points that meet the elevated scene driving speed limit standard is chosen as the comparison standard for upper-layer and lower-layer users; this step therefore counts, among all of a user's sampling points, the proportion that meets the elevated scene driving speed limit standard, which is then used for identification in the subsequent step.
Specifically, the proportion of sampling points that meet the elevated scene driving speed limit standard among all of a user's sampling points can be counted against the elevated driving speed limit of 60 km/h (the minimum speed limit), as shown in Table 1 and in the sketch below. It is understood that the speed limit threshold can be adjusted according to the elevated-road speed limits of different cities and countries.
TABLE 1
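As a small sketch (not from the patent; the list-of-speeds data layout is an assumption), the per-user proportion of sampling points that meet the 60 km/h minimum could be counted as follows:

```python
def speed_limit_ratio(sample_speeds_kmh, limit_kmh=60.0):
    """Proportion of a user's sampling points whose speed meets the elevated scene
    driving speed limit standard (60 km/h minimum in the example above)."""
    if not sample_speeds_kmh:
        return 0.0
    meeting = sum(1 for v in sample_speeds_kmh if v >= limit_kmh)
    return meeting / len(sample_speeds_kmh)
```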
Step 140: and identifying the upper and lower layer terminal users in the elevated scene according to the statistical sampling point occupation ratio based on different sampling point occupation ratio standards of the upper and lower layer terminal users in the elevated scene.
In this step, user identification is performed by the proportion method, that is, accurate identification and marking of upper-layer and lower-layer users is achieved through the different sampling point proportion standards of upper-layer and lower-layer users in the elevated scene. For example, because the minimum speed limit of the upper elevated road is 60 km/h, a user for whom more than 70% of sampling points have a speed above 60 km/h can be identified as an upper-layer (on-elevated) user, and a user for whom 70% or fewer of the sampling points have a speed above 60 km/h can be identified as a lower-layer (under-elevated) user, thereby separating upper-layer and lower-layer users. Referring to Table 1, user 1 is an upper-layer user and user 2 is a lower-layer user.
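Continuing the sketch, the example thresholds above (speed above 60 km/h, proportion above 70%) could be applied per user as follows; the labels and thresholds simply mirror the example and are not mandated by the patent.

```python
def classify_elevated_user(sample_speeds_kmh, limit_kmh=60.0, ratio_threshold=0.7):
    """Label a user as an upper-layer or lower-layer terminal user of the elevated
    scene from the proportion of sampling points whose speed exceeds the limit."""
    if not sample_speeds_kmh:
        return "lower_layer"
    ratio = sum(1 for v in sample_speeds_kmh if v > limit_kmh) / len(sample_speeds_kmh)
    return "upper_layer" if ratio > ratio_threshold else "lower_layer"

# Example: user 1 is mostly above 60 km/h, user 2 mostly below.
print(classify_elevated_user([72, 80, 65, 58, 90]))   # -> upper_layer
print(classify_elevated_user([20, 35, 62, 15, 40]))   # -> lower_layer
```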
According to the embodiment of the invention, the sampling point speeds of a user are calculated through the multidimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal user, and the proportion of the terminal user's sampling points that meet the elevated scene driving speed limit standard is counted based on the sampling point speeds. The characteristics of upper-layer and lower-layer users in the elevated scene are thereby separated, so that the upper-layer and lower-layer users correspond to different sampling point proportion standards, and the upper-layer and lower-layer terminal users in the elevated scene are accurately identified based on the sampling point proportions. In this way, automatic and differentiated evaluation and analysis of the upper-layer and lower-layer networks in the elevated scene can be realized, corresponding optimization strategies can be formulated, and the network quality of the elevated scene can be improved.
Further, in the foregoing embodiment of the present invention, the scene address fingerprint database in step 110 is generated as follows: and acquiring a grid model simulation address through propagation model address simulation calculation, and integrating the grid model simulation address and grid index mapping to generate the scene address fingerprint database.
Specifically, the elevated address fingerprint library is based on MR (Measurement Report) data, engineering parameters, a three-dimensional map and sweep/drive test data, and is presented as a set of n meter × n meter (for example, 5 m × 5 m) geographical grid model simulation addresses and index mappings obtained through propagation model simulation calculation. MR data is reported by the mobile terminal once every 480 ms on the traffic channel (470 ms on the signaling channel), and is therefore universal and authentic.
The obtaining of the grid model simulation address through the propagation model address simulation calculation further includes:
step A1: rasterizing layer information of the scene;
for example, the network is divided into n square grids on a scale of n meters by n meters (e.g., 5m by 5 m).
Step A2: obtaining the characteristic value $x_i'$ of the i-th grid through propagation model address simulation;
Step A3: calculating the characteristic value $x_i''$ of the MR for the i-th grid;
Step A4: calculating, based on the cosine distance method, the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$, and judging the grid position to which the MR belongs, so as to obtain the grid model simulation address.
Specifically, the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$ is calculated by the cosine distance method as
$$D = 1 - \frac{\sum_{i=1}^{n} x_i' x_i''}{\sqrt{\sum_{i=1}^{n} (x_i')^2}\,\sqrt{\sum_{i=1}^{n} (x_i'')^2}}$$
where $x_i'$ represents the characteristic value of the i-th grid of the scene address fingerprint library, $x_i''$ represents the characteristic value of the sampling point of the i-th grid to be evaluated, and $n$ represents the total number of grids. The MR sampling point is attributed to a grid of the fingerprint library according to the calculation result.
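A minimal sketch of the matching step, assuming each grid and each MR sampling point is described by a feature vector of serving-cell and neighbor-cell levels; the 1 minus cosine-similarity form follows the cosine distance method named above, and the data structures are assumptions.

```python
import math

def cosine_distance(a, b):
    """Cosine distance between an MR feature vector and a simulated grid feature
    vector (e.g. serving-cell and neighbor-cell levels)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm if norm else 1.0

def locate_mr(mr_features, grid_features):
    """Return the id of the fingerprint-library grid whose simulated features are
    closest to the MR sampling point."""
    return min(grid_features,
               key=lambda gid: cosine_distance(mr_features, grid_features[gid]))
```

Under these assumptions, locate_mr returns the fingerprint library grid to which the MR sampling point would be attributed.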
The grid index mapping includes:
Step B1: counting, in each grid, the level difference δ between the measured main-cell level and the simulated main-cell level, and counting, for the test data in the beacon grid, the level differences ρ1, ρ2 and ρ3 between the measured and simulated levels of the first, second and third strong neighbor cells;
Step B3: assigning compensation values to the main-cell level and the neighbor-cell levels in the scene address fingerprint library respectively, wherein the main-cell level is compensated by δ, and the neighbor-cell compensation comprises ρ1 for the first strong neighbor cell, ρ2 for the second strong neighbor cell and ρ3 for the third strong neighbor cell.
The scene address fingerprint library is established mainly on the basis of signal strength judgment. In this step, compensation values are given to the main-cell level and the neighbor-cell levels of the scene address fingerprint library, which reduces the probability of error when sampling points are matched against the scene address fingerprint library and prepares for subsequently judging the user's position with the scene address fingerprint library.
Further, in the foregoing embodiment of the present invention, the scene coverage cell library in step 110 is generated as follows:
step C1: and screening the grids of the elevated main road, and selecting a cell with road vertical distance and grid sampling point proportion meeting preset conditions from the screened grids as an elevated main road cell.
Specifically, the scene coverage cell library is composed of elevated main road cells and elevated entrance/exit cells. This step is used to determine the elevated main road cells. An elevated main road cell needs to satisfy the following conditions (a code sketch follows this list):
1. grid screening: the cell belongs to a grid of the elevated main road;
2. the road vertical distance meets a preset condition, where the road vertical distance is the vertical distance from the longitude and latitude of a cell around the elevated road to the elevated main road. For example, the preset condition is met when the road vertical distance is less than 500 m; the main purpose of this condition is to bring the cells around the elevated road into the scene coverage cell library;
3. the cell's grid sampling point proportion meets a preset condition; for example, the cell grid sampling point proportion is less than or equal to 30%. Because the elevated main road is long, vehicle speeds are high and the proportion of sampling points is low, a relatively low proportion value can be set and cells not exceeding this value are selected.
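A sketch of this screening under assumed cell-record fields; the thresholds mirror the example values above (500 m, 30%) and are not fixed by the patent.

```python
def select_main_road_cells(cells, max_perp_dist_m=500.0, max_sample_ratio=0.30):
    """Keep cells on elevated main road grids whose road vertical distance and grid
    sampling point proportion meet the example preset conditions."""
    return [
        c for c in cells
        if c["on_main_road_grid"]
        and c["road_perp_distance_m"] < max_perp_dist_m
        and c["grid_sample_ratio"] <= max_sample_ratio
    ]
```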
Step C2: and screening the overhead entrance grid, and selecting a cell with the sampling point proportion, the average RSRP (Reference Signal Receiving Power) and the switching success rate meeting preset conditions from the screened grid as an overhead entrance cell.
This step is used to determine the elevated entrance/exit cells. An elevated entrance/exit cell needs to satisfy the following conditions (a code sketch follows this list):
1. grid screening: the cell belongs to a grid of the elevated entrance/exit;
2. the sampling point proportion meets a preset condition; for example, the cell grid sampling point proportion is greater than 40%. Because vehicles at the entrance/exit positions of an elevated scene move more slowly than on the elevated main road and the proportion of sampling points is therefore higher, a relatively high proportion value can be set and cells exceeding this value are selected. In addition, since there are usually only 1-2 entrance/exit positions, the cells whose grid sampling point proportions rank first and second (or only the highest-ranked cell) among the cells of the elevated scene can be taken as candidate elevated entrance/exit cells;
3. the average RSRP meets a preset condition; for example, the average RSRP > -90 dBm. Generally speaking, when the coverage strength exceeds -90 dBm RSRP (level 4 or above), various services can be initiated outdoors and low-rate data services can be obtained;
4. the handover success rate meets a preset condition; for example, the handover success rate > 95%.
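A corresponding sketch of the entrance/exit cell screening, again with assumed field names and the example thresholds (40%, -90 dBm, 95%):

```python
def select_entrance_exit_cells(cells, min_sample_ratio=0.40,
                               min_avg_rsrp_dbm=-90.0, min_ho_success=0.95):
    """Keep cells on elevated entrance/exit grids whose sampling point proportion,
    average RSRP and handover success rate meet the example preset conditions,
    then keep the 1-2 top-ranked candidates by sampling point proportion."""
    candidates = [
        c for c in cells
        if c["on_entrance_grid"]
        and c["grid_sample_ratio"] > min_sample_ratio
        and c["avg_rsrp_dbm"] > min_avg_rsrp_dbm
        and c["handover_success_rate"] > min_ho_success
    ]
    return sorted(candidates, key=lambda c: c["grid_sample_ratio"], reverse=True)[:2]
```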
Step C3: and establishing a scene coverage cell library based on the elevated main trunk road cell and the elevated entrance and exit cell.
Fig. 2 is a flowchart illustrating an elevated road end user identification method according to another embodiment of the present invention. The difference from the embodiment shown in fig. 1 is that, in the present embodiment, the sampling point proportion is counted only for precisely screened elevated-road users. As shown in fig. 2, the method comprises the following steps:
step 210: and establishing an elevated scene analysis model comprising a scene address fingerprint library and a scene coverage cell library.
Step 220: and calculating all sampling point speeds of all terminal users in the overhead scene through the multidimensional associated mapping of the scene address fingerprint library, the scene coverage cell library and the terminal users.
Step 230: and screening the terminal users of which the scene address fingerprint database and the sampling point speed meet preset conditions from the terminal users occupying the scene coverage cell database.
This step is used to screen elevated-road users, and the screening items include: cell occupation screening, speed screening and scene address fingerprint library screening. Cell occupation screening selects users occupying cells of the scene coverage cell library; speed screening selects users for whom more than 90% of the sampling point speeds lie between 5 km/h and 100 km/h; and the scene address fingerprint library meets the preset condition if the grid deviation between the scene address fingerprint library and the elevated scene is not greater than a preset value, specifically, the fingerprint library screening selects users for whom the distance deviation between the longitude and latitude given by the scene address fingerprint library and the longitude and latitude of the center of the corresponding fixed grid of the elevated scene is less than 20 meters. A user who meets all three conditions is determined to be an elevated-road user (a code sketch follows).
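A sketch of this three-way screening under assumed per-sample records (speed, serving cell, deviation between the fingerprint-library position and the grid center); whether the 20-meter check applies per sample or on average is not stated, so the sketch checks it per sample.

```python
def is_elevated_user(samples, coverage_cells, speed_share=0.9,
                     v_min_kmh=5.0, v_max_kmh=100.0, max_grid_dev_m=20.0):
    """samples: list of dicts with 'speed_kmh', 'cell_id' and 'grid_deviation_m'.
    Returns True only if the user passes cell occupation, speed and fingerprint
    library screening as described above."""
    if not samples:
        return False
    occupies_cell = any(s["cell_id"] in coverage_cells for s in samples)
    in_range = sum(1 for s in samples if v_min_kmh <= s["speed_kmh"] <= v_max_kmh)
    speed_ok = in_range / len(samples) > speed_share
    fingerprint_ok = all(s["grid_deviation_m"] < max_grid_dev_m for s in samples)
    return occupies_cell and speed_ok and fingerprint_ok
```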
Step 240: and counting the sampling point occupation ratio which accords with the overhead scene driving speed limit standard in all the sampling points of the terminal user which accords with the preset condition based on the sampling point speed.
Step 250: and identifying the upper and lower layer terminal users in the elevated scene according to the statistical sampling point occupation ratio based on different sampling point occupation ratio standards of the upper and lower layer terminal users in the elevated scene.
In this embodiment, steps 210, 220, 240 and 250 are similar to the specific implementation processes of steps 110, 120, 130 and 140 in the foregoing embodiment, and reference may be made to the description of the foregoing embodiment, which is not repeated herein. It should be noted that, unlike step 130, step 240 counts the proportion of sampling points meeting the elevated scene driving speed limit standard only among all the sampling points of the terminal users that meet the preset conditions, rather than of all terminal users.
In this embodiment, by adding step 230, the terminal users whose scene address fingerprint database and sampling point speed meet the preset conditions are screened out, non-overhead users are excluded, it is ensured that the subsequent terminal users for identification are all overhead users, and the accuracy of identification is improved.
Fig. 3 is a schematic structural diagram illustrating an elevated road end user identification device according to an embodiment of the present invention. As shown in fig. 3, the apparatus 300 includes: a model building module 310, a speed calculation module 320, a sample point ratio statistics module 330, and a subscriber identification module 340.
The model establishing module 310 is configured to establish an elevated scene analysis model including a scene address fingerprint database and a scene coverage cell database; the speed calculation module 320 is configured to calculate speeds of all sampling points of all terminal users in the elevated scene through the multidimensional association mapping between the scene address fingerprint database, the scene coverage cell database, and the terminal users; the sampling point proportion counting module 330 is used for counting the proportion of sampling points which accord with the overhead scene driving speed limit standard in all the sampling points of the terminal user based on the sampling point speed; the user identification module 340 is configured to identify the upper and lower layer terminal users in the elevated scene according to the statistical sampling point occupation ratio based on different sampling point occupation ratios of the upper and lower layer terminal users in the elevated scene.
In an optional manner, when the model building module 310 builds an overhead scene analysis model including a scene address fingerprint database and a scene coverage cell database, the scene address fingerprint database is generated by:
and acquiring a grid model simulation address through propagation model address simulation calculation, and integrating the grid model simulation address and grid index mapping to generate the scene address fingerprint database.
In an optional manner, the obtaining a grid model simulation address through a propagation model address simulation calculation further includes:
rasterizing layer information of the scene;
obtaining the characteristic value $x_i'$ of the i-th grid through propagation model address simulation;
calculating the characteristic value $x_i''$ of the MR for the i-th grid;
calculating, based on the cosine distance method, the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$, and judging the grid position to which the MR belongs, so as to obtain the grid model simulation address.
In an optional manner, the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$ is calculated based on the cosine distance method as
$$D = 1 - \frac{\sum_{i=1}^{n} x_i' x_i''}{\sqrt{\sum_{i=1}^{n} (x_i')^2}\,\sqrt{\sum_{i=1}^{n} (x_i'')^2}}$$
where $x_i'$ represents the characteristic value of the i-th grid of the scene address fingerprint library, $x_i''$ represents the characteristic value of the sampling point of the i-th grid to be evaluated, and $n$ represents the total number of grids.
In an optional manner, the grid index mapping includes:
counting, in each grid, the level difference δ between the measured main-cell level and the simulated main-cell level, and counting, for the test data in the beacon grid, the level differences ρ1, ρ2 and ρ3 between the measured and simulated levels of the first, second and third strong neighbor cells;
assigning compensation values to the main-cell level and the neighbor-cell levels in the scene address fingerprint library respectively, wherein the main-cell level is compensated by δ, and the neighbor-cell compensation comprises ρ1 for the first strong neighbor cell, ρ2 for the second strong neighbor cell and ρ3 for the third strong neighbor cell.
In an optional manner, when the model building module 310 builds an overhead scene analysis model including a scene address fingerprint database and a scene coverage cell database, the scene coverage cell database is generated by:
screening grids of the elevated main road, and selecting, from the screened grids, a cell whose road vertical distance and grid sampling point proportion meet preset conditions as an elevated main road cell, wherein the road vertical distance is the vertical distance from the longitude and latitude of a cell around the elevated road to the elevated main road;
screening the elevated entrance and exit grids, and selecting a cell with the sampling point proportion, the average RSRP and the switching success rate meeting preset conditions from the screened grids as an elevated entrance and exit cell;
and establishing a scene coverage cell library based on the elevated main trunk road cell and the elevated entrance and exit cell.
In an optional manner, the multidimensional association mapping between the scene address fingerprint base, the scene coverage cell base and the end user includes:
and the scene address fingerprint library, the scene coverage cell library and the multi-dimensional correlation of the terminal user information, wherein the correlation information comprises the longitude and latitude of each sampling point of each terminal user in the elevated scene.
In an optional manner, the sampling point speed V = S/T, where S represents the straight-line distance traveled by the end user between its neighboring sampling points P1(x1, y1) and P2(x2, y2), x and y represent the longitude and latitude of a sampling point respectively, and T represents the time difference between the end user's neighboring sampling points P1 and P2.
In an optional manner, the apparatus 300 further includes a user screening module 350, configured to screen, from the end users occupying the scene coverage cell library, end users whose scene address fingerprint library and sampling point speed meet preset conditions. At this time, the sampling point proportion statistic module 330 is further configured to count, based on the sampling point speed, the proportion of sampling points that meet the elevated scene driving speed limit standard among all the sampling points of the terminal user that meet the preset condition.
In an optional manner, the meeting of the scene address fingerprint library with the preset condition is:
and if the grid deviation between the scene address fingerprint database and the elevated scene is not greater than a preset value, the scene address fingerprint database accords with a preset condition.
According to the embodiment of the invention, the sampling point speeds of a user are calculated through the multidimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal user, and the proportion of the terminal user's sampling points that meet the elevated scene driving speed limit standard is counted based on the sampling point speeds. The characteristics of upper-layer and lower-layer users in the elevated scene are thereby separated, so that the upper-layer and lower-layer users correspond to different sampling point proportion standards, and the upper-layer and lower-layer terminal users in the elevated scene are accurately identified based on the sampling point proportions. In this way, automatic and differentiated evaluation and analysis of the upper-layer and lower-layer networks in the elevated scene can be realized, corresponding optimization strategies can be formulated, and the network quality of the elevated scene can be improved.
Embodiments of the present invention provide a computer storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to execute the method for identifying an elevated road end user in any of the above-mentioned method embodiments.
According to the embodiment of the invention, the sampling point speeds of a user are calculated through the multidimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal user, and the proportion of the terminal user's sampling points that meet the elevated scene driving speed limit standard is counted based on the sampling point speeds. The characteristics of upper-layer and lower-layer users in the elevated scene are thereby separated, so that the upper-layer and lower-layer users correspond to different sampling point proportion standards, and the upper-layer and lower-layer terminal users in the elevated scene are accurately identified based on the sampling point proportions. In this way, automatic and differentiated evaluation and analysis of the upper-layer and lower-layer networks in the elevated scene can be realized, corresponding optimization strategies can be formulated, and the network quality of the elevated scene can be improved.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a computer storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of elevated road end user identification in any of the above-described method embodiments.
According to the embodiment of the invention, the sampling point speeds of a user are calculated through the multidimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal user, and the proportion of the terminal user's sampling points that meet the elevated scene driving speed limit standard is counted based on the sampling point speeds. The characteristics of upper-layer and lower-layer users in the elevated scene are thereby separated, so that the upper-layer and lower-layer users correspond to different sampling point proportion standards, and the upper-layer and lower-layer terminal users in the elevated scene are accurately identified based on the sampling point proportions. In this way, automatic and differentiated evaluation and analysis of the upper-layer and lower-layer networks in the elevated scene can be realized, corresponding optimization strategies can be formulated, and the network quality of the elevated scene can be improved.
Fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 4, the computing device may include: a processor (processor)402, a Communications Interface 404, a memory 406, and a Communications bus 408.
Wherein: the processor 402, communication interface 404, and memory 406 communicate with each other via a communication bus 408. A communication interface 404 for communicating with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically execute the method for identifying an elevated road end user in any of the above-described method embodiments.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is configured to store the program 410. The memory 406 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
According to the embodiment of the invention, the sampling point speeds of a user are calculated through the multidimensional association mapping among the scene address fingerprint library, the scene coverage cell library and the terminal user, and the proportion of the terminal user's sampling points that meet the elevated scene driving speed limit standard is counted based on the sampling point speeds. The characteristics of upper-layer and lower-layer users in the elevated scene are thereby separated, so that the upper-layer and lower-layer users correspond to different sampling point proportion standards, and the upper-layer and lower-layer terminal users in the elevated scene are accurately identified based on the sampling point proportions. In this way, automatic and differentiated evaluation and analysis of the upper-layer and lower-layer networks in the elevated scene can be realized, corresponding optimization strategies can be formulated, and the network quality of the elevated scene can be improved.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.
Claims (13)
1. An elevated road end user identification method, the method comprising:
establishing an elevated scene analysis model comprising a scene address fingerprint library and a scene coverage cell library;
calculating all sampling point speeds of all terminal users in the overhead scene through the multi-dimensional associated mapping of the scene address fingerprint library, the scene coverage cell library and the terminal users;
counting the sampling point occupation ratio which accords with the elevated scene driving speed limit standard in all the sampling points of the terminal user based on the sampling point speed;
and identifying the upper and lower layer terminal users in the elevated scene according to the statistical sampling point occupation ratio based on different sampling point occupation ratio standards of the upper and lower layer terminal users in the elevated scene.
2. The method of claim 1, wherein in the creating of the elevated scene analysis model comprising the scene address fingerprint database and the scene coverage cell database, the scene address fingerprint database is generated by:
and obtaining a grid model simulation address through propagation model address simulation calculation, and integrating the grid model simulation address and grid index mapping to generate the scene address fingerprint database.
3. The method of claim 2, wherein the obtaining of the grid model simulation address through the propagation model address simulation calculation further comprises:
rasterizing layer information of the scene;
obtaining the characteristic value $x_i'$ of the i-th grid through propagation model address simulation;
calculating the characteristic value $x_i''$ of the MR for the i-th grid;
calculating, based on the cosine distance method, the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$, and judging the grid position to which the MR belongs, so as to obtain the grid model simulation address.
4. The method according to claim 3, characterized in that the shortest signal distance between the MR characteristic value $x_i''$ and the simulated grid characteristic value $x_i'$ is calculated based on the cosine distance method as
$$D = 1 - \frac{\sum_{i=1}^{n} x_i' x_i''}{\sqrt{\sum_{i=1}^{n} (x_i')^2}\,\sqrt{\sum_{i=1}^{n} (x_i'')^2}}$$
where $x_i'$ represents the characteristic value of the i-th grid of the scene address fingerprint library, $x_i''$ represents the characteristic value of the sampling point of the i-th grid to be evaluated, and $n$ represents the total number of grids.
5. The method of claim 2, wherein the grid index mapping comprises:
counting, in each grid, the level difference δ between the measured main-cell level and the simulated main-cell level, and counting, for the test data in the beacon grid, the level differences ρ1, ρ2 and ρ3 between the measured and simulated levels of the first, second and third strong neighbor cells; and
assigning compensation values to the main-cell level and the neighbor-cell levels in the scene address fingerprint library respectively, wherein the main-cell level is compensated by δ, and the neighbor-cell compensation comprises ρ1 for the first strong neighbor cell, ρ2 for the second strong neighbor cell and ρ3 for the third strong neighbor cell.
6. The method of claim 1, wherein in the establishing of the overhead scene analysis model comprising the scene address fingerprint database and the scene coverage cell database, the scene coverage cell database is generated by:
screening grids of the elevated main road, and selecting, from the screened grids, a cell whose road vertical distance and grid sampling point proportion meet preset conditions as an elevated main road cell, wherein the road vertical distance is the vertical distance from the longitude and latitude of a cell around the elevated road to the elevated main road;
screening the elevated entrance and exit grids, and selecting a cell with the sampling point proportion, the average RSRP and the switching success rate meeting preset conditions from the screened grids as an elevated entrance and exit cell;
and establishing a scene coverage cell library based on the elevated main trunk road cell and the elevated entrance and exit cell.
7. The method of claim 1, wherein the multidimensional association mapping of the scene address fingerprint base, the scene coverage cell base and the end user comprises:
the scene address fingerprint database, the scene coverage cell database and the multi-dimensional correlation of the terminal user information, wherein the correlation information comprises the longitude and latitude of each sampling point of each terminal user in the elevated scene.
8. The method of claim 1, wherein the sampling point speed V = S/T, where S represents the straight-line distance traveled by the end user between its neighboring sampling points P1(x1, y1) and P2(x2, y2), x and y represent the longitude and latitude of the sampling points respectively, and T represents the time difference between the end user's neighboring sampling points P1 and P2.
9. The method according to claim 1, wherein before the step of counting the fraction of all sampling points of the end user that meet the elevated scene driving speed limit standard based on the sampling point speed, the method further comprises:
screening terminal users of which the scene address fingerprint database and the sampling point speed meet preset conditions from the terminal users occupying the scene coverage cell database;
the counting, based on the sampling point speed, of the proportion of sampling points meeting the elevated scene driving speed limit standard among all the sampling points of the terminal user is specifically as follows:
and counting the sampling point occupation ratio which accords with the overhead scene driving speed limit standard in all the sampling points of the terminal user which accords with the preset condition based on the sampling point speed.
10. The method according to claim 9, wherein the scene address fingerprint database meets the preset condition:
and if the grid deviation between the scene address fingerprint database and the elevated scene is not greater than a preset value, the scene address fingerprint database accords with a preset condition.
11. An elevated road end user identification device, the device comprising:
the model establishing module is used for establishing an elevated scene analysis model comprising a scene address fingerprint library and a scene coverage cell library;
the speed calculation module is used for calculating all sampling point speeds of all terminal users in the overhead scene through the multi-dimensional associated mapping of the scene address fingerprint library, the scene coverage cell library and the terminal users;
the sampling point proportion counting module is used for counting the proportion of sampling points which accord with the elevated scene driving speed limit standard in all the sampling points of the terminal user based on the sampling point speed;
and the user identification module is used for identifying the upper and lower layer terminal users in the elevated scene according to the statistical sampling point proportion based on different sampling point proportion standards of the upper and lower layer terminal users in the elevated scene.
12. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the elevated road terminal user identification method according to any one of claims 1 to 10.
13. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform the elevated road terminal user identification method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910395543.0A CN111934896B (en) | 2019-05-13 | 2019-05-13 | Elevated road terminal user identification method and device and computing equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111934896A CN111934896A (en) | 2020-11-13 |
CN111934896B true CN111934896B (en) | 2022-07-01 |
Family
ID=73282871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910395543.0A Active CN111934896B (en) | 2019-05-13 | 2019-05-13 | Elevated road terminal user identification method and device and computing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111934896B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102045734A (en) * | 2010-12-10 | 2011-05-04 | 上海百林通信软件有限公司 | TD-SCDMA (time division-synchronization code division multiple access) system parameter method based on automatic scene analysis |
CN108243405A (en) * | 2016-12-26 | 2018-07-03 | 中国移动通信集团广东有限公司 | The localization method and device of a kind of method for building up of fingerprint base, measurement report MR |
CN108802769A (en) * | 2018-05-30 | 2018-11-13 | 千寻位置网络有限公司 | Detection method and device of the GNSS terminal on overhead or under overhead |
CN109525959A (en) * | 2018-12-03 | 2019-03-26 | 中国联合网络通信集团有限公司 | High-speed railway user separation method and system, signaling data processing method and system |
CN110166991A (en) * | 2019-01-08 | 2019-08-23 | 腾讯大地通途(北京)科技有限公司 | For the method for Positioning Electronic Devices, unit and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10049129B2 (en) * | 2014-12-22 | 2018-08-14 | Here Global B.V. | Method and apparatus for providing map updates from distance based bucket processing |
Non-Patent Citations (2)
Title |
---|
Reliability-based urban road network optimization model and algorithm; Song Cheng; Journal of Transport Information and Safety (《交通信息与安全》); 2010-06-30 (Issue 3); full text *
Urban pedestrian positioning and travel mode analysis based on mobile communication data; Yu Linling; China Master's Theses Full-text Database, Information Science and Technology Series (《中国优秀硕士学位论文全文数据库 信息科技辑》); 2018-11-15 (Issue 11); full text *
Also Published As
Publication number | Publication date |
---|---|
CN111934896A (en) | 2020-11-13 |
Similar Documents
Publication | Title |
---|---|
CN110677859B (en) | Method and device for determining weak coverage area and computer readable storage medium |
CN114173356B (en) | Network quality detection method, device, equipment and storage medium |
US8995988B2 (en) | Communication characteristic analyzing system, communication characteristic analyzing method, and communication characteristic analyzing program |
US8862154B2 (en) | Location measuring method and apparatus using access point for wireless local area network service |
CN112312301B (en) | User terminal positioning method, device, equipment and computer storage medium |
CN112469066B (en) | 5G network coverage evaluation method and device |
CN107567039B (en) | Automatic cell scene identification method and device for mobile network |
CN110798804B (en) | Indoor positioning method and device |
CN114885369A (en) | Network coverage quality detection processing method and device, electronic equipment and storage medium |
CN114722944A (en) | Point cloud precision determination method, electronic device and computer storage medium |
CN113316162A (en) | Method, device, equipment and storage medium for determining network coverage continuity |
CN111934896B (en) | Elevated road terminal user identification method and device and computing equipment |
CN108243424B (en) | Method and device for determining problem cell |
CN112258881B (en) | Vehicle management method based on intelligent traffic |
CN115175100A (en) | Network coverage problem processing method and device, server and storage medium |
CN112584313B (en) | Weak coverage area positioning method, device, equipment and computer storage medium |
CN111263382A (en) | Method, device and equipment for determining problem source cell causing overlapping coverage |
CN118301658B (en) | Common site detection method, apparatus, device, storage medium and program product |
CN112258880B (en) | Vehicle management system based on intelligent traffic |
CN115087098B (en) | Method, system and readable storage medium for identifying attribution of base station sector of communication base station |
CN113133049B (en) | Method, apparatus, device and medium for determining primary coverage cell |
CN115915151A (en) | Method, device and equipment for classifying cell coverage scenes |
CN115942351A (en) | Network quality problem processing method, device, server and storage medium |
CN118804045A (en) | Method, device and equipment for investigating wireless bandwidth guarantee scheme of 5G private network |
CN115396908A (en) | Network evaluation method, device and storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |