CN116528282B - Coverage scene recognition method, device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN116528282B
Authority
CN
China
Prior art keywords
base station
scene
sample
coverage
target
Prior art date
Legal status
Active
Application number
CN202310809241.XA
Other languages
Chinese (zh)
Other versions
CN116528282A (en)
Inventor
高松岩
李权力
李文文
欧阳晔
Current Assignee
Asiainfo Technologies China Inc
Original Assignee
Asiainfo Technologies China Inc
Priority date
Filing date
Publication date
Application filed by Asiainfo Technologies China Inc filed Critical Asiainfo Technologies China Inc
Priority to CN202310809241.XA priority Critical patent/CN116528282B/en
Publication of CN116528282A publication Critical patent/CN116528282A/en
Application granted granted Critical
Publication of CN116528282B publication Critical patent/CN116528282B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/08: Testing, supervising or monitoring using real traffic
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/243: Classification techniques relating to the number of classes
    • G06F18/24323: Tree-organised classifiers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W16/00: Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/18: Network planning tools
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The embodiment of the application provides a coverage scene identification method, apparatus, electronic device and readable storage medium, relating to the field of communications technology. The method comprises the following steps: for any base station to be identified in a target area to be identified, scene feature information of the base station is obtained from its traffic information, user quantity information and usage amount information; the scene feature information is input into an identification model for a target coverage scene type to obtain an identification result, which indicates whether the coverage scene of the base station to be identified is of the target coverage scene type. The identification model for the target coverage scene type is trained on positive and negative samples, where the positive samples are obtained from the scene feature information of each positive-sample base station and the negative samples from the scene feature information of each negative-sample base station. Because the model is trained and applied using only scene feature information of base stations, the accuracy of the identification result can be improved.

Description

Coverage scene recognition method, device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a coverage scene identification method, apparatus, electronic device, and readable storage medium.
Background
A coverage scene refers to an environment in which, in a wireless communication system, communication signals can be effectively covered and transmitted in a specific area by deploying base stations and adopting appropriate technical means and parameter configuration. Because different coverage scenes differ in topography, buildings, population density and other factors, reasonably planning the number, positions and coverage areas of base stations according to the characteristics and requirements of each coverage scene yields better signal coverage quality and communication quality.
Identifying the coverage scene of a base station helps to optimize network performance, save construction cost, and strengthen network management and maintenance, ensuring stable operation and rapid development of the network; it is therefore significant for the design, planning and optimization of wireless communication systems. Current approaches mainly adopt a spatial association algorithm: a user trajectory is obtained from geographic position information and signal data, a connection between the user and a base station is established, and the coverage scene of the base station is then identified. However, because the geographic position information of points of interest (Point of Interest, POI) must be acquired in real time, the accuracy and timeliness of that information are limited, and the spatial association algorithm performs poorly in complex indoor environments.
Under these circumstances, it is desirable to provide a coverage scene identification scheme that improves the accuracy of the target coverage scene type identification result.
Disclosure of Invention
The application aims to solve at least one of the above technical defects. The technical solution provided by the embodiments of the application is as follows:
in a first aspect, an embodiment of the present application provides a coverage scene identifying method, including:
acquiring scene characteristic information of the base station to be identified according to traffic information, user quantity information and usage amount information of any base station to be identified in a target area to be identified;
inputting the scene characteristic information of the base station to be identified into an identification model of the target coverage scene type, and acquiring an identification result of the target coverage scene type of the base station to be identified; the identification result of the target coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the target coverage scene type or not;
wherein, the recognition model of the target coverage scene type is obtained by the following modes:
dividing the sample base stations into positive sample base stations and negative sample base stations according to the geographical position scene information of each sample base station in the sample base station set; wherein the coverage scene of the positive sample base station is a target coverage scene type; the coverage scene of the negative-sample base station is not the target coverage scene type;
acquiring a positive sample according to the scene characteristic information of each positive sample base station, and acquiring a negative sample according to the scene characteristic information of each negative sample base station;
and acquiring an identification model of the target coverage scene type according to the positive sample and the negative sample.
In an alternative embodiment of the application, the method further comprises:
for any base station, acquiring traffic information according to traffic data of the base station in different time periods, acquiring user quantity information according to user quantity data of the base station in different time periods and different traffic types, and acquiring usage quantity information according to traffic data and duration data of the base station in different traffic types.
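As an illustration of how the three kinds of information above might be flattened into a single scene feature vector, here is a minimal Python sketch; the time periods, service types and field names are hypothetical, not taken from the patent.

```python
# Assemble a scene feature vector for one base station from per-period
# traffic, per-period/per-type user counts, and per-type traffic volume
# and duration. All dictionary keys here are illustrative assumptions.

def build_scene_features(traffic_by_period, users_by_period_type,
                         volume_by_type, duration_by_type):
    features = []
    # Traffic information: one value per time period (e.g. busy hour, idle hour).
    features.extend(traffic_by_period[p] for p in sorted(traffic_by_period))
    # User quantity information: users per (time period, service type) pair.
    for period in sorted(users_by_period_type):
        for svc in sorted(users_by_period_type[period]):
            features.append(users_by_period_type[period][svc])
    # Usage amount information: traffic volume and duration per service type.
    for svc in sorted(volume_by_type):
        features.append(volume_by_type[svc])
        features.append(duration_by_type.get(svc, 0.0))
    return features

vec = build_scene_features(
    {"busy": 120.5, "idle": 10.2},
    {"busy": {"video": 300, "voice": 80}, "idle": {"video": 40, "voice": 15}},
    {"video": 950.0, "voice": 30.0},
    {"video": 4800.0, "voice": 1200.0},
)
print(len(vec))  # 2 traffic + 4 user + 4 usage = 10 features
```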
In an alternative embodiment of the application, the method further comprises:
taking each base station in the target area to be identified whose coverage scene is of the target coverage scene type as a target base station, acquiring longitude and latitude information of each target base station's geographic position point, and performing spatial density clustering on the target base stations according to that longitude and latitude information to obtain at least one base station cluster;
and acquiring a base station distribution effect diagram of the target coverage scene for the target area to be identified according to each base station cluster.
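The spatial density clustering step above can be sketched with a minimal DBSCAN-style routine over longitude/latitude points; the `eps` and `min_pts` values and the use of plain Euclidean distance over degrees (rather than haversine distance) are simplifying assumptions.

```python
import math

# Density-based clustering of base station geographic position points.
# Points in dense neighbourhoods form clusters; isolated points are
# labelled -1 (noise). This is a teaching sketch, not a tuned deployment.

def density_cluster(points, eps, min_pts):
    n = len(points)
    labels = [None] * n          # None = unvisited, -1 = noise

    def neighbors(i):
        return [j for j in range(n)
                if math.dist(points[i], points[j]) <= eps]

    cluster_id = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1       # noise (may become a border point later)
            continue
        cluster_id += 1
        labels[i] = cluster_id
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster_id   # border point: absorbed, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:   # core point: expand the cluster
                seeds.extend(j_nbrs)
    return labels

# Two dense groups of stations plus one isolated outlier.
pts = [(116.30, 39.90), (116.31, 39.91), (116.30, 39.91),
       (121.40, 31.20), (121.41, 31.21), (121.40, 31.21),
       (100.00, 20.00)]
labels = density_cluster(pts, eps=0.05, min_pts=2)
print(labels)
```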
In an alternative embodiment of the application, the method further comprises:
if the identification result of the target coverage scene type of the base station to be identified indicates that the coverage scene of the base station to be identified is not the target coverage scene type, respectively inputting the scene characteristic information of the base station to be identified into different identification models of an identification model set to obtain an identification result set; wherein the identification model set includes a plurality of identification models of different other coverage scene types; the identification result set comprises at least one identification result of a coverage scene type; the identification result of a coverage scene type is used for indicating whether the coverage scene of the base station to be identified is of that coverage scene type or not;
and acquiring the coverage scene type of the base station to be identified according to the identification result set.
In an optional embodiment of the present application, inputting scene feature information of a base station to be identified into different identification models of an identification model set, respectively, to obtain an identification result set, including:
inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set one by one in a preset order, obtaining the identification result corresponding to each identification model in turn until a first identification result occurs, and obtaining the identification result set according to the first identification result; the first identification result is used for indicating that the coverage scene of the base station to be identified is a first coverage scene type;
Or respectively inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set, acquiring the identification result corresponding to each identification model, and acquiring the identification result set according to the identification results corresponding to all the identification models.
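The two querying strategies above, early-exit sequential scanning versus querying every model, can be sketched as follows; the threshold "models" and scene-type names are purely illustrative stand-ins for trained identification models.

```python
# Each entry pairs a coverage scene type with a stand-in model: a
# callable that returns True when the feature vector matches that type.

def recognize_sequential(feature_vec, model_set):
    """Query models in order, stopping at the first positive result."""
    results = {}
    for scene_type, model in model_set:
        hit = model(feature_vec)
        results[scene_type] = hit
        if hit:                      # first positive identification ends the scan
            break
    return results

def recognize_all(feature_vec, model_set):
    """Query every model, then decide from the full result set."""
    return {scene_type: model(feature_vec) for scene_type, model in model_set}

# Illustrative threshold "models" over a single aggregate feature.
models = [
    ("university", lambda v: v[0] > 100),
    ("residential", lambda v: 20 < v[0] <= 100),
    ("scenic_area", lambda v: v[0] <= 20),
]
print(recognize_sequential([55.0], models))  # stops once "residential" hits
print(recognize_all([55.0], models))
```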
In an alternative embodiment of the present application, the method for acquiring the identification model of the target coverage scene type according to the positive sample and the negative sample specifically includes:
performing outlier rejection, sample class balancing and data conversion preprocessing on the positive samples and the negative samples to obtain preprocessed positive samples and preprocessed negative samples;
and acquiring an identification model of the target coverage scene type according to the preprocessed positive sample and the preprocessed negative sample.
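A minimal sketch of the three preprocessing steps, under assumed concrete choices: z-score outlier rejection over scalar features, random oversampling of the minority class for balancing, and min-max scaling as the data conversion. None of these specific choices is stated in the patent.

```python
import random
import statistics

def remove_outliers(values, z_max=3.0):
    """Outlier rejection: drop values more than z_max std devs from the mean."""
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    if sigma == 0:
        return list(values)
    return [v for v in values if abs(v - mu) / sigma <= z_max]

def oversample_minority(pos, neg, rng):
    """Sample class balancing: randomly oversample the smaller class."""
    if len(pos) < len(neg):
        pos = pos + [rng.choice(pos) for _ in range(len(neg) - len(pos))]
    else:
        neg = neg + [rng.choice(neg) for _ in range(len(pos) - len(neg))]
    return pos, neg

def to_unit_range(values):
    """Data conversion: min-max scale values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

rng = random.Random(0)
pos = [10.0, 11.0, 12.0, 10.0, 11.0, 12.0, 10.0, 11.0, 12.0, 10.0, 500.0]
neg = [1.0, 2.0, 3.0]
pos_clean = remove_outliers(pos)               # the 500.0 outlier is dropped
pos_bal, neg_bal = oversample_minority(pos_clean, neg, rng)
pos_unit = to_unit_range(pos_clean)
print(len(pos_clean), len(pos_bal), len(neg_bal))  # 10 10 10
```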
In an alternative embodiment of the present application, the method for obtaining the identification model of the target coverage scene type according to the preprocessed positive sample and the preprocessed negative sample specifically includes:
acquiring a training sample set according to the preprocessed positive sample and the preprocessed negative sample; wherein the training sample set comprises a plurality of training samples; each training sample comprises training sample data and a corresponding training sample label; the training sample label is used for indicating the positive or negative sample category of the training sample data;
according to the training sample set, the following training operation is iteratively executed on the initial neural network until a preset training stop condition is met, so as to obtain the identification model of the target coverage scene type:
inputting each training sample in the training sample set into an initial neural network, and obtaining the identification result of each training sample data;
determining training loss according to the identification result of each training sample data and the positive and negative categories of the samples;
and adjusting network parameters of the initial neural network according to the training loss.
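The iterative training operation above can be sketched with a single-neuron logistic model standing in for the initial neural network; the fixed epoch budget as the stop condition, the learning rate, and the toy data are all assumptions for illustration.

```python
import math

def train(samples, labels, epochs=500, lr=0.5):
    """Gradient descent on binary cross-entropy for a 1-feature logistic unit."""
    w, b = 0.0, 0.0
    for _ in range(epochs):                        # iterate the training operation
        gw = gb = 0.0
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # identification result
            gw += (p - y) * x                      # gradient of the training loss
            gb += p - y
        w -= lr * gw / len(samples)                # adjust network parameters
        b -= lr * gb / len(samples)
    return w, b

# Positive-sample features cluster high, negative-sample features low.
xs = [0.9, 0.8, 0.85, 0.1, 0.2, 0.15]
ys = [1, 1, 1, 0, 0, 0]
w, b = train(xs, ys)
predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5
print([predict(x) for x in xs])
```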
In a second aspect, an embodiment of the present application provides a coverage scene recognition apparatus, including:
the characteristic information acquisition module is used for acquiring scene characteristic information of the base station to be identified according to traffic information, user quantity information and usage amount information of any base station to be identified in the target area to be identified;
the coverage scene recognition module is used for inputting scene characteristic information of the base station to be recognized into a recognition model of the target coverage scene type and obtaining a recognition result of the target coverage scene type of the base station to be recognized; the identification result of the target coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the target coverage scene type or not;
wherein, the recognition model of the target coverage scene type is obtained by the following modes:
dividing the sample base stations into positive sample base stations and negative sample base stations according to the geographical position scene information of each sample base station in the sample base station set; wherein the coverage scene of the positive sample base station is a target coverage scene type; the coverage scene of the negative-sample base station is not the target coverage scene type;
acquiring a positive sample according to the scene characteristic information of each positive sample base station, and acquiring a negative sample according to the scene characteristic information of each negative sample base station;
and acquiring an identification model of the target coverage scene type according to the positive sample and the negative sample.
In an alternative embodiment of the present application, the feature information obtaining module is further configured to:
for any base station, acquiring traffic information according to traffic data of the base station in different time periods, acquiring user quantity information according to user quantity data of the base station in different time periods and different traffic types, and acquiring usage quantity information according to traffic data and duration data of the base station in different traffic types.
In an alternative embodiment of the present application, the coverage scene recognition apparatus further includes: a base station distribution identification module; the base station distribution identification module is used for:
taking each base station in the target area to be identified whose coverage scene is of the target coverage scene type as a target base station, acquiring longitude and latitude information of each target base station's geographic position point, and performing spatial density clustering on the target base stations according to that longitude and latitude information to obtain at least one base station cluster;
and acquiring a base station distribution effect diagram of the target coverage scene for the target area to be identified according to each base station cluster.
In an alternative embodiment of the present application, the coverage scene recognition module is further configured to:
if the identification result of the target coverage scene type of the base station to be identified indicates that the coverage scene of the base station to be identified is not the target coverage scene type, respectively inputting the scene characteristic information of the base station to be identified into different identification models of an identification model set to obtain an identification result set; wherein the identification model set includes a plurality of identification models of different other coverage scene types; the identification result set comprises at least one identification result of a coverage scene type; the identification result of a coverage scene type is used for indicating whether the coverage scene of the base station to be identified is of that coverage scene type or not;
and acquiring the coverage scene type of the base station to be identified according to the identification result set.
In an alternative embodiment of the present application, the coverage scene recognition module is specifically configured to:
inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set one by one in a preset order, obtaining the identification result corresponding to each identification model in turn until a first identification result occurs, and obtaining the identification result set according to the first identification result; the first identification result is used for indicating that the coverage scene of the base station to be identified is a first coverage scene type;
Or respectively inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set, acquiring the identification result corresponding to each identification model, and acquiring the identification result set according to the identification results corresponding to all the identification models.
In an alternative embodiment of the present application, the coverage scene recognition apparatus further includes: an identification model construction module; the identification model construction module is used for:
performing outlier rejection, sample class balancing and data conversion preprocessing on the positive samples and the negative samples to obtain preprocessed positive samples and preprocessed negative samples;
and acquiring an identification model of the target coverage scene type according to the preprocessed positive sample and the preprocessed negative sample.
In an alternative embodiment of the present application, the recognition model building module is specifically configured to:
acquiring a training sample set according to the preprocessed positive sample and the preprocessed negative sample; wherein the training sample set comprises a plurality of training samples; each training sample comprises training sample data and a corresponding training sample label; the training sample label is used for indicating the positive or negative sample category of the training sample data;
according to the training sample set, the following training operation is iteratively executed on the initial neural network until a preset training stop condition is met, so as to obtain the identification model of the target coverage scene type:
inputting each training sample in the training sample set into an initial neural network, and obtaining the identification result of each training sample data;
determining training loss according to the identification result of each training sample data and the positive and negative categories of the samples;
and adjusting network parameters of the initial neural network according to the training loss.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored on the memory, and the processor executes the computer program to implement the steps of the coverage scene recognition method provided in any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the coverage scene recognition method provided in any of the above embodiments.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
According to the above scheme, scene feature information that reflects the characteristics of a base station's coverage scene type is obtained from the base station's traffic information, user quantity information and usage amount information. The scene feature information of the base station to be identified is input into a trained identification model for the target coverage scene type, which identifies whether the coverage scene of the base station is of the target coverage scene type. Because only the scene feature information of the base station is used in training and applying the model, no real-time acquisition of geographic position information is required. This avoids the poor coverage scene recognition caused in the prior art by insufficient accuracy and timeliness of geographic position information, and effectively improves the accuracy of the identification result.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flow chart of a coverage scene recognition method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a method for obtaining a distribution effect diagram of a base station of a target coverage scene in an example of an embodiment of the present application;
FIG. 3 is a schematic diagram of training effects of an identification model in an example of an embodiment of the present application;
FIG. 4 is a schematic diagram of verification effect of an identification model in an example of an embodiment of the present application;
FIG. 5 is a graph of the distribution effect of a target coverage scene base station in an example of an embodiment of the application;
fig. 6 is a schematic structural diagram of a coverage scene recognition apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device for coverage scene recognition according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the present application. It should be understood that the embodiments described below with reference to the drawings are exemplary descriptions for explaining the technical solutions of the embodiments of the present application, and the technical solutions of the embodiments of the present application are not limited.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and "comprising", when used in this specification, specify the presence of stated features, information, data, steps, operations, elements and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present; "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" indicates at least one of the items it joins; for example, "A and/or B" may be implemented as "A", as "B", or as "A and B".
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The following description of the terminology and related art related to the application:
Coverage scene: an environment in which, by deploying base stations and adopting appropriate technical means and parameter configuration in a wireless communication system, communication signals can be effectively covered and transmitted in a specific area.
Wireless network coverage scenes can be broadly divided by geographic deployment into three categories: area coverage, line coverage and point coverage. Area coverage and line coverage are mostly outdoor coverage; point coverage is indoor coverage.
Area coverage refers to large-scale outdoor coverage aiming at wide coverage of an entire region, including scenes such as dense urban areas, general urban areas, suburban areas (county towns) and rural areas.
Line coverage refers to coverage of linear targets such as roads and rivers, including scenes such as high-speed railways, ordinary railways, subways and tunnels. High-speed railway coverage places high demands on signal handover; otherwise user perception is easily affected. The terrain crossed by ordinary railways and highways is often complex and changeable, only signal strength needs to be guaranteed, and the coverage area of a base station is generally large. These scenes are characterized by high-speed movement, high user density and high service requirements.
Point coverage refers to deep coverage of key buildings, underground structures and the like, including scenes such as central business districts (Central Business District, CBD), business centers, residential areas, urban villages, universities, transportation hubs, large convention and exhibition centers, industrial areas, leisure areas and scenic areas.
The common point coverage types and corresponding characteristics are shown in table 1.
TABLE 1
Base station: i.e. a public mobile communication base station, a form of radio station; a radio transceiver station that exchanges information with mobile telephone terminals via a mobile communication switching center within a limited radio coverage area.
Cell: in a cellular mobile communication system, the area covered by one base station or by part of a base station (a sector antenna), within which a mobile station can reliably communicate with the base station over a wireless channel; in this application one cell corresponds to one base station.
Machine learning: machine learning is a multidisciplinary cross-specialty covering probabilistic knowledge, statistical knowledge, approximate theoretical knowledge, and complex algorithmic knowledge, uses a computer as a tool and aims at simulating human learning in real time, and performs knowledge structure division on existing content to effectively improve learning efficiency. Machine learning is a common research hotspot in the fields of artificial intelligence and pattern recognition, and its theory and method have been widely applied to solve complex problems in engineering application and scientific fields. The research directions of traditional machine learning mainly comprise research in Decision Tree (Decision Tree), random Forest (Random Forest), artificial neural network, bayes (Bayes) learning and the like.
Current methods for identifying the coverage scene of a base station in the mobile communication field mainly rely on information such as geographic information system (Geographic Information System, GIS) maps or grids and points of interest (POI) in geographic information, combined with a spatial association algorithm.
Taking three types of prior art as examples, the specific schemes and defects of the prior art are described in detail as follows:
Prior art 1: in the initial stage of network planning and construction, operators manually survey the field and, with the aid of a GIS map, the planning and network optimization center enters the coverage scene of each base station (such as university, hospital, residential area, business area, etc.) into the base station engineering parameter data according to the base station's geographic position. Because the coverage scene is typically entered by manual filling, it has no corresponding data basis, is highly subjective, cannot accurately and objectively represent the scene information of the base station, and its accuracy cannot be effectively guaranteed.
During base station maintenance, the coverage scene is updated in the same way, but the corresponding GIS map data must be consulted, and that data is updated infrequently. The coverage scene of the base station therefore lags behind reality, its accuracy is low, and it cannot truly reflect the actual attributes of the coverage area.
Prior art 2: based on the acquired position information of the base station and of the POIs, the position relation between the base station and the POIs is constructed to determine their matching relation; the user's position track is then drawn based on the target historical behavior data of the matched POIs, combined with user historical behavior data in the communication carrier's database, so as to obtain a scene recognition network diagram.
In this method of determining the coverage scene of the base station by combining the position information of the base station and of the POI with the service data of users in the POI area, the service history data of the POI is difficult to obtain, the positioning algorithm has a certain deviation, the implementation is difficult, the operable space is limited, and the accuracy and timeliness of acquiring the position information are limited, so the identification accuracy of the coverage scene of the base station is poor.
Prior art 3: the coverage area of a base station is determined, and interest point information within the coverage area is searched on an electronic map; the coverage scene of the base station is then determined according to the interest point information located in the coverage area. In this scheme, the coverage area of the base station is determined automatically, the interest point information in the coverage area is searched automatically using the electronic map, and the coverage scene of the base station is determined automatically according to the interest point information.
This scheme for determining the coverage scene of the base station based on its coverage area and POI points is highly automated and can effectively save manpower. However, for indoor base stations, whose coverage is constrained by the free-space path-loss propagation model and confined to indoor areas, many indoor coverage sites have no POI points at all, so the coverage scenes of indoor sites cannot be confirmed. The application scene of this scheme is therefore limited to outdoor areas, indoor sites cannot be covered, the coverage scene identification is limited, and the identification effect on indoor coverage scenes is poor.
Aiming at at least one of the above technical problems or aspects needing improvement in the related art, the present application provides a coverage scene identification scheme.
The technical solutions of the embodiments of the present application and technical effects produced by the technical solutions of the present application are described below by describing several exemplary embodiments. It should be noted that the following embodiments may be referred to, or combined with each other, and the description will not be repeated for the same terms, similar features, similar implementation steps, and the like in different embodiments.
Fig. 1 is a flow chart of a coverage scene recognition method provided in an embodiment of the present application, and as shown in fig. 1, the embodiment of the present application provides a coverage scene recognition method, which includes:
Step S101, scene characteristic information of the base station to be identified is obtained according to traffic information, user quantity information and usage quantity information of any base station to be identified in the target area to be identified.
Specifically, considering the differences in base station user services under different coverage scenes, after any base station to be identified in a target area to be identified is determined, the service data of user terminals (such as mobile phones) corresponding to the base station to be identified can be used to obtain the traffic information, user quantity information and usage quantity information of the base station. The scene characteristics of the base station are comprehensively evaluated according to these three kinds of information, data capable of reflecting the differences in base station user services are selected, and the scene characteristic information of the base station to be identified is obtained.
It can be understood that a geographic scene is geographic information data that forms continuous patches within a certain area and reflects the position and form of geographic space in the real world. Different types of geographic scenes have their own characteristics, which are reflected in the corresponding base station coverage scenes: the flow of people, traffic volume, tidal effect, service characteristics and the like in a coverage scene follow the change rules of that geographic scene within certain time periods. By comprehensively studying factors such as the living habits, residence time and service characteristics of users in different coverage scenes, profile analyses of people flow and service characteristics under different coverage scenes can be constructed.
For example, the number of users and traffic (i.e., voice and data traffic) in a central business district coverage scene show significant tidal phenomena between day and night, and the traffic of users in a college coverage scene is significantly correlated with time periods and location areas. In addition, the differences between the voice and data services of users in different coverage scenes can be further considered; for example, in a certain coverage scene, voice traffic is higher and data traffic lower in the daytime, while voice traffic is lower and data traffic higher at night. The differences in service characteristics under different coverage scenes can also be analyzed; for example, in a business center coverage scene the number of payment transactions (such as WeChat Pay, Alipay, bank card payment and the like) is larger than in other coverage scenes.
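As a minimal sketch of one such tidal feature, assuming hypothetical hourly traffic counters and an 8:00 to 20:00 daytime window (neither the field layout nor the window boundaries are specified in the source):

```python
# Hypothetical sketch: deriving tidal-effect features from hourly
# per-base-station traffic counters, reflecting the day/night
# differences described above. All names are illustrative only.
def tidal_features(hourly_traffic):
    """hourly_traffic: list of 24 traffic values, one per hour 0..23."""
    day = sum(hourly_traffic[8:20])                      # 8:00-20:00
    night = sum(hourly_traffic[:8]) + sum(hourly_traffic[20:])
    total = day + night
    return {
        # share of traffic carried in the daytime window
        "day_ratio": day / total if total else 0.0,
        # signed tidal index: positive = daytime-dominated scene
        "tidal_index": (day - night) / total if total else 0.0,
    }

print(tidal_features([1] * 24))  # flat traffic -> day_ratio 0.5, tidal_index 0.0
```

A business-district station would be expected to show a strongly positive tidal index, while a residential station would tend toward a negative one.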
Step S102, inputting scene characteristic information of a base station to be identified into an identification model of a target coverage scene type, and acquiring an identification result of the target coverage scene type of the base station to be identified; the identification result of the target coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the target coverage scene type or not;
wherein, the recognition model of the target coverage scene type is obtained by the following modes:
Dividing the sample base stations into positive sample base stations and negative sample base stations according to the geographical position scene information of each sample base station in the sample base station set; wherein the coverage scene of the positive sample base station is a target coverage scene type; the coverage scene of the negative-sample base station is not the target coverage scene type;
acquiring a positive sample according to the scene characteristic information of each positive sample base station, and acquiring a negative sample according to the scene characteristic information of each negative sample base station;
and acquiring an identification model of the target coverage scene type according to the positive sample and the negative sample.
Specifically, before any target coverage scene type is identified, a trained identification model of the target coverage scene type needs to be acquired. When training the identification model, a plurality of base stations with known geographical position scene information are selected as sample base stations to obtain a sample base station set.
It can be understood that the geographical position scene information is used for judging whether the coverage scene of a sample base station is the target coverage scene type. The specific information type may be longitude and latitude information of the base station, geographical location area type information of the base station (such as school, shopping mall, subway station and the like), a unique code of the base station, and the like.
Taking the target coverage scene type as a business center as an example, the method for acquiring the identification model of the target coverage scene type in this embodiment is described in detail as follows:
Judging whether each sample base station is positioned in a commercial center according to the geographical position scene information of each sample base station in the sample base station set, dividing the base stations positioned in the commercial center into positive sample base stations, and dividing the base stations not positioned in the commercial center into negative sample base stations.
And acquiring a positive sample according to the scene characteristic information of each positive sample base station in the sample base station set, and acquiring a negative sample according to the scene characteristic information of each negative sample base station in the sample base station set. And training according to the positive sample and the negative sample to obtain a commercial center type identification model.
It can be understood that the specific type of the scene feature information used in the training of the recognition model is consistent with the specific type of the scene feature information input into the model when the trained recognition model is used for the overlay scene recognition, and the type of the scene feature information can be determined according to actual requirements.
In addition, the trained recognition model of the target coverage scene type performs binary classification of whether the coverage scene of a base station is the target coverage scene type, and algorithms such as logistic regression (Logistic Regression), decision trees, support vector machines (Support Vector Machine, SVM), random forests and naive Bayes can be adopted when training the recognition model.
After the scene characteristic information of the base station to be identified is determined, inputting the scene characteristic information into a trained recognition model of the target coverage scene type, acquiring a recognition result of the target coverage scene type of the base station to be identified, and determining whether the coverage scene of the base station to be identified is the target coverage scene type according to the recognition result of the target coverage scene type.
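The positive/negative-sample training above can be sketched with a minimal, self-contained logistic regression (one of the algorithms the text lists). The feature values here are hypothetical stand-ins for scene characteristic information; in practice a library implementation would normally be used instead of hand-rolled gradient descent.

```python
import math
import random

def train(samples, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression recognizer by gradient descent on log-loss."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                                  # gradient of log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else 0                          # 1 = target coverage scene type

# Synthetic positive samples (target-scene stations) and negative samples.
random.seed(0)
pos = [[random.gauss(1.0, 0.3) for _ in range(3)] for _ in range(40)]
neg = [[random.gauss(-1.0, 0.3) for _ in range(3)] for _ in range(40)]
w, b = train(pos + neg, [1] * 40 + [0] * 40)

# Recognition: feed the scene features of a station to be identified.
print(predict(w, b, [1.0, 0.9, 1.1]))    # -> 1 (target coverage scene type)
print(predict(w, b, [-1.0, -0.9, -1.1])) # -> 0 (not the target type)
```

One such binary recognizer is trained per target coverage scene type; the output 1/0 corresponds to the T/F recognition results used later in the text.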
According to the technical scheme provided by this embodiment, scene characteristic information capable of reflecting the coverage scene type characteristics of the base station is obtained from the traffic information, user quantity information and usage quantity information of the base station, and this information is input into a trained identification model of the target coverage scene type to identify whether the coverage scene of the base station to be identified is the target coverage scene type. Because only the scene characteristic information of the base station is used in the training and recognition of the identification model, real-time acquisition of geographic position information is not required; the problem in the prior art that the coverage scene recognition effect is poor due to insufficient precision and timeliness of geographic position information is effectively solved, and the accuracy of the recognition result is effectively improved.
In an alternative embodiment of the application, the method further comprises:
For any base station, acquiring traffic information according to traffic data of the base station in different time periods, acquiring user quantity information according to user quantity data of the base station in different time periods and different traffic types, and acquiring usage quantity information according to traffic data and duration data of the base station in different traffic types.
Specifically, for any base station (including a base station to be identified and a sample base station), data information corresponding to the base station is obtained. Sources of this data information include, but are not limited to, PM (Performance Management) data, DPI (Deep Packet Inspection) data, and engineering parameter data of the corresponding base station cell in a wireless network communication system.
PM performance data: various performance indexes generated during the running of the base station cell are monitored and counted, and the results are presented in the form of a data report, including channel quality, radio signaling state, telephone traffic, cross-cell handover success rate, access success rate and the like.
DPI data: data obtained after the base station cell captures the network data flow, decodes it, reverses the protocol stack operations and decapsulates tunnels. DPI data provides accurate application-layer traffic identification capability and helps to understand the specific internet behavior and application requirements of users.
Engineering parameters: various physical parameters and configuration information of a base station cell, including cell coverage, carrier frequency resources, power levels, antenna heights, mechanical downtilt angles, electrical downtilt angles, sector coverage angles, frequency planning and the like.
After the data information corresponding to the base station is acquired, data that can represent the traffic volume, user volume and usage amount of the base station are selected from it. Considering that the differences in these data across times (including moment, time period, duration and the like) and service types can reflect the characteristics of a coverage scene, the traffic information is acquired from the traffic data of the base station in different time periods, the user quantity information from the user quantity data of the base station in different time periods and for different service types, and the usage quantity information from the traffic data and duration data of the base station for different service types.
For example, taking the data information of the year xxxx corresponding to the base station as an example, the traffic information, user quantity information and usage quantity information are obtained as follows. The traffic information is acquired from the monthly average number of users, total traffic, telephone traffic and uplink/downlink traffic of the base station for each month. The user quantity information is acquired from the daily average numbers of daytime users (such as 8:00 a.m. to 8:00 p.m.), evening users (such as 8:00 p.m. to 8:00 a.m.), game users, heavy payment users, video users, music users and instant messaging users. The usage quantity information is acquired from the daily average instant messaging traffic, WeChat Pay traffic, Alipay traffic, WeChat Pay duration and Alipay duration of the base station.
It will be appreciated that the specific type and amount of data included in the traffic information, the subscriber amount information and the usage amount information may be determined according to actual needs.
By adopting data from the three aspects of traffic information, user quantity information and usage quantity information, a scene service index system for mobile communication services under different scenes is constructed. The indexes in this system can comprehensively reflect characteristics such as user service usage, user volume and tidal traffic changes under different coverage scenes, and objectively and accurately evaluate the coverage scene characteristics of a base station.
The technical scheme provided by the embodiment obtains the traffic information according to the traffic data of the base station in different time periods, obtains the user quantity information according to the user quantity data of the base station in different time periods and different traffic types, and obtains the usage quantity information according to the traffic data and the duration data of the base station in different traffic types. The scene characteristics of the base station are comprehensively evaluated through the three aspects of the traffic information, the user quantity information and the usage amount information of the base station, so that the differences of user services in different time and different service types under different coverage scenes can be more comprehensively reflected, the accuracy of an identification model trained based on the scene characteristic information is ensured, and the accuracy of an identification result is effectively improved.
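As an illustrative sketch of assembling the three feature groups from raw per-record counters (the record fields, hour boundaries and service names here are hypothetical, not taken from the source):

```python
# Hypothetical raw records for one base station: one entry per
# (hour, service) bucket, with traffic and user counters.
records = [
    {"hour": 9,  "service": "payment", "traffic_mb": 120.0, "users": 30},
    {"hour": 14, "service": "video",   "traffic_mb": 800.0, "users": 55},
    {"hour": 22, "service": "im",      "traffic_mb": 150.0, "users": 20},
]

def scene_features(records):
    """Aggregate raw records into traffic, user-quantity and usage features."""
    day = [r for r in records if 8 <= r["hour"] < 20]
    night = [r for r in records if not (8 <= r["hour"] < 20)]
    by_service = {}
    for r in records:
        by_service[r["service"]] = by_service.get(r["service"], 0.0) + r["traffic_mb"]
    return {
        "total_traffic_mb": sum(r["traffic_mb"] for r in records),  # traffic info
        "day_users": sum(r["users"] for r in day),                  # user quantity info
        "night_users": sum(r["users"] for r in night),
        "payment_traffic_mb": by_service.get("payment", 0.0),       # usage info
    }

print(scene_features(records))
```

The resulting dictionary is one possible form of the scene characteristic information fed into the recognition models; the actual index set is chosen per actual requirements, as the text notes.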
In an alternative embodiment of the application, the method further comprises:
taking each base station in the target area to be identified whose coverage scene is the target coverage scene type as a target base station, acquiring longitude and latitude information of the geographic position point of each target base station, and performing spatial density clustering on the target base stations according to the longitude and latitude information of their geographic position points to obtain at least one base station cluster;
and acquiring a target coverage scene base station distribution effect diagram of the target to-be-identified area according to each base station cluster.
Specifically, the target to-be-identified area may include a plurality of base stations, and whether the coverage scene of each base station in the target to-be-identified area is the target coverage scene is identified through an identification model of the target coverage scene type.
Each base station in the target area to be identified whose coverage scene is the target coverage scene type is taken as a target base station, and the longitude and latitude information corresponding to the geographic position point of each target base station in the target area to be identified is acquired. Longitude and latitude form a coordinate system that maps the spherical surface of the earth in three-dimensional space, so that any position on the earth can be marked. The longitude and latitude information of a target base station represents its geographic position point.
Determining the spatial distribution condition of each base station in the target to-be-identified area according to the longitude and latitude information of each target base station geographical position point in the target to-be-identified area, and carrying out spatial density clustering processing on the geographical position points of each target base station according to the spatial distribution density of each target base station to obtain at least one base station cluster.
For example, the base station clusters are determined by a density-based clustering method with noise (Density-Based Spatial Clustering of Applications with Noise, DBSCAN), which divides areas of sufficient density into clusters according to the spatial distribution density of the target base stations in the target area to be identified. The DBSCAN algorithm can find clusters of any shape and, while clustering, identify abnormal points (noise points lying outside any cluster), without the number of categories needing to be input.
In addition, density-based spatial clustering algorithms such as the OPTICS (Ordering Points To Identify the Clustering Structure) algorithm, the CURD (Clustering Using References and Density) algorithm and the SDBDC (Scalable Density-Based Distributed Clustering) algorithm can be adopted according to actual requirements.
And acquiring a target coverage scene base station distribution effect diagram (hereinafter referred to as distribution effect diagram) of the target to-be-identified area according to each base station cluster. It can be understood that in the distribution effect graph, the distribution effect display form of the target coverage scene base station can be determined according to actual requirements.
For example, the geographic position points of all base stations may be marked in a map of the target area to be identified, with the target base stations marked in a special way (such as in red) and each base station cluster marked with a dashed frame; or, in the map of the target area to be identified, the coverage area of each base station cluster may be marked according to each base station cluster and the coverage area of each target base station.
According to the technical scheme provided by this embodiment, all base stations in the target area to be identified whose coverage scene is the target coverage scene type are identified and clustered according to their spatial distribution density, and a distribution effect diagram is further obtained. The distribution effect diagram intuitively and clearly depicts the distribution of these base stations in the target area to be identified. It can assist staff in analyzing information such as the distribution characteristics, coverage area and strength of the base stations, realize comprehensive analysis and evaluation of base station distribution in the target area to be identified, and determine a base station optimization scheme (such as optimizing and improving base station signals by means of signal processing technology, antenna selection, power control, optimization algorithms and the like). This provides support for the planning, construction, optimization and maintenance of communication network base stations, effectively improves their efficiency, and reduces the required cost.
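The clustering step above can be sketched with a minimal DBSCAN over (latitude, longitude) points. The coordinates and thresholds (eps in kilometres, minimum neighbours) are hypothetical; a library implementation with a haversine metric would normally be used instead.

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def dbscan(points, eps_km=1.0, min_pts=2):
    """Label each point with a cluster id, or -1 for noise (free-standing points)."""
    labels = [None] * len(points)                       # None = unvisited
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = [j for j in range(len(points))
                if haversine_km(points[i], points[j]) <= eps_km]
        if len(nbrs) < min_pts:
            labels[i] = -1                              # noise point outside any cluster
            continue
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:                                    # grow the cluster outward
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster                     # reclaim border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(len(points))
                  if haversine_km(points[j], points[k]) <= eps_km]
            if len(jn) >= min_pts:
                seeds.extend(k for k in jn if labels[k] is None)
        cluster += 1
    return labels

stations = [(39.900, 116.400), (39.905, 116.404),      # close pair -> one cluster
            (39.990, 116.500)]                          # isolated -> noise
print(dbscan(stations))   # -> [0, 0, -1]
```

Each non-negative label then corresponds to one base station cluster in the distribution effect diagram, and -1 points are the abnormal (noise) stations mentioned above.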
In an alternative embodiment of the application, the method further comprises:
if the identification result of the target coverage scene type of the base station to be identified indicates that the coverage scene of the base station to be identified is not the target coverage scene type, respectively inputting the scene characteristic information of the base station to be identified into different identification models of the identification model set to obtain an identification result set; wherein the set of recognition models includes a plurality of recognition models of different other overlay scene types; the recognition result set comprises at least one recognition result of the coverage scene type; the identification result of the coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the coverage scene type or not;
and acquiring the coverage scene type of the base station to be identified according to the identification result set.
Specifically, since the recognition model of the target coverage scene type performs binary classification of whether the coverage scene of a base station is the target coverage scene type, when the coverage scene of the base station to be identified is recognized using only the recognition model of a single target coverage scene type, a result indicating that the coverage scene is not the target coverage scene type leaves the specific type of the coverage scene undetermined.
In order to solve the above-mentioned problems, the present embodiment constructs a recognition model set including a plurality of recognition models of different other coverage scene types, through which the recognition of a specific type of coverage scene of a base station to be recognized can be achieved.
It should be noted that, in this embodiment, in order to distinguish the two schemes (identifying whether the coverage scene of the base station to be identified is the target coverage scene type through the recognition model of that type, and identifying the specific type of the coverage scene of the base station to be identified through the recognition model set), the recognition model of the target coverage scene type is kept independent of the recognition model set. In another alternative embodiment of the present application, the recognition model of the target coverage scene type may be included in the recognition model set.
It will be appreciated that the specific number and types of coverage scene types in the recognition model set may be determined according to actual requirements. For example, a remote scenic spot located in a mountain area is selected as the target area to be identified; when coverage scene type recognition is performed for the base stations in this area, the recognition model set includes recognition models corresponding to the four coverage scene types of scenic spot, suburb, rural area and tunnel.
And respectively inputting the scene characteristic information of the base station to be identified into different identification models of the identification model set, and obtaining output results corresponding to the identification models to obtain an identification result set. It can be understood that when the scene characteristic information of the base station to be identified is input into different identification models in the identification model set, multiple input modes such as sequential input and simultaneous input can be adopted, and the specific input mode can be determined according to actual requirements.
The identification result set comprises at least one identification result of the coverage scene types, and the identification result of each coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the corresponding coverage scene type. And acquiring an identification result which indicates that the coverage scene of the base station to be identified is a corresponding scene type in the identification result set, and acquiring the corresponding coverage scene type as the coverage scene type of the base station to be identified.
For example, the recognition model set includes recognition models corresponding to four types of coverage scenes of a scenic spot, a suburb, a rural area and a tunnel, the corresponding recognition result set includes a scenic spot F (i.e. 0, false), a suburb F, a rural area T (i.e. 1, true) and a tunnel F, and the coverage scene type of the base station to be recognized is obtained as rural according to the recognition result set.
According to the technical scheme provided by this embodiment, a recognition model set consisting of recognition models of a plurality of different other coverage scene types is constructed; the scene characteristics of the base station to be identified are recognized through the recognition model set to obtain a recognition result set, and the coverage scene type of the base station to be identified is determined according to the recognition result set. Compared with a scheme in which coverage scene recognition is performed with a single model, only the scene characteristic information of the base station is used in the training and recognition of each recognition model in the set, real-time acquisition of geographic position information is not required, and the problem in the prior art of poor coverage scene recognition caused by insufficient precision and timeliness of geographic position information is effectively solved. Because each recognition model recognizes its own corresponding coverage scene type, the recognition models are more robust, the difference between the two classification categories is distinguished more accurately, and the accuracy of the recognition result is effectively improved.
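A recognition model set and its result set can be sketched as one binary recognizer per coverage scene type. The lambda recognizers and feature names below are placeholders standing in for trained models; only the scenic spot/suburb/rural/tunnel type names come from the text.

```python
# Hypothetical recognition model set: each entry maps a coverage scene
# type to a stand-in binary recognizer (True = that scene type).
model_set = {
    "scenic": lambda f: f["poi_like_score"] > 0.8,
    "suburb": lambda f: f["density"] < 0.2,
    "rural":  lambda f: 0.2 <= f["density"] < 0.5 and f["poi_like_score"] <= 0.8,
    "tunnel": lambda f: f["indoor_ratio"] > 0.9,
}

# Placeholder scene characteristic information for one station.
features = {"poi_like_score": 0.3, "density": 0.35, "indoor_ratio": 0.1}

# Simultaneous input: run every recognizer and collect the result set.
result_set = {scene: model(features) for scene, model in model_set.items()}
matched = [scene for scene, hit in result_set.items() if hit]
print(result_set)   # e.g. {'scenic': False, 'suburb': False, 'rural': True, 'tunnel': False}
print(matched)      # -> ['rural']: the station's coverage scene type
```

This mirrors the scenic F / suburb F / rural T / tunnel F example in the text: the type whose recognizer returns T is read off as the station's coverage scene type.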
In an optional embodiment of the present application, inputting scene feature information of a base station to be identified into different identification models of an identification model set, respectively, to obtain an identification result set, including:
sequentially inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set according to an order, obtaining the identification result corresponding to each identification model in turn until a first identification result exists, and obtaining the identification result set according to the first identification result; the first identification result is used for indicating that the coverage scene of the base station to be identified is a first coverage scene type;
Or respectively inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set, acquiring the identification result corresponding to each identification model, and acquiring the identification result set according to the identification results corresponding to all the identification models.
Specifically, the scene features of the base station to be identified are respectively input into different identification models of the identification model set, and can be sequentially or simultaneously input in two modes, specifically as follows:
Sequential input: the scene characteristic information of the base station to be identified is input into each identification model in the identification model set in turn. Each time the identification result corresponding to one identification model is obtained, it is judged whether that result indicates that the coverage scene of the base station to be identified is the corresponding coverage scene type. If not, the next identification model is used to obtain its identification result, and this judging process is repeated until a first identification result exists, that is, a result indicating that the coverage scene of the base station to be identified is the corresponding first coverage scene type. The identification result set is then obtained according to the first identification result.
It may be appreciated that the recognition result set obtained according to the first recognition result may include only the first recognition result, or may include the first recognition result together with all recognition results preceding it.
In addition, when the scene characteristic information of the base station to be identified is input into the different identification models of the identification model set in turn, either a preset input order or a random order may be adopted.
Taking input according to a preset input order as an example: the recognition model set includes recognition models corresponding to the four coverage scene types of scenic spot, suburb, rural area and tunnel, and the preset input order during recognition is scenic spot, suburb, rural area, tunnel. The recognition results are, in order, scenic spot F, suburb F, rural T; at this point the rural T result indicates that the coverage scene of the base station to be identified is rural, so recognition stops, and the obtained recognition result set includes the rural T recognition result.
Simultaneous input: and simultaneously and respectively inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set, acquiring the identification result corresponding to each identification model, and acquiring the identification result set according to the identification results corresponding to all the identification models.
For example: the identification model set includes identification models corresponding to four coverage scene types, namely scenic spot, suburb, rural area and tunnel. The scene characteristic information of the base station to be identified is input into the four identification models simultaneously, yielding four identification results: scenic spot F, suburb F, rural area T and tunnel F. The identification result set is obtained according to these four identification results.
The technical scheme provided by this embodiment allows the scene characteristic information of the base station to be identified to be input into the different identification models of the identification model set in either of two modes, sequential input or simultaneous input, to obtain the identification result set. Sequential input effectively saves the computing resources and storage space required to determine the coverage scene of the base station to be identified, but takes longer. Simultaneous input can identify the coverage scene type of the base station to be identified quickly, but occupies more computing resources and storage space. Flexibly selecting the input mode according to the number of models in the identification model set and the requirements during identification can meet the identification requirements of the base station to be identified under different conditions and improves the feasibility of the scheme.
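The two input modes above can be sketched as follows. This is an illustrative sketch only, assuming each per-scene identification model exposes a hypothetical `predict(features) -> bool` interface (True standing for a T result, False for F):

```python
def identify_sequential(features, models):
    """Sequential input: query the models one by one in the given order and
    stop at the first T (True) result."""
    results = {}
    for scene, model in models.items():
        results[scene] = model.predict(features)
        if results[scene]:   # first T result: coverage scene found, stop early
            break
    return results

def identify_simultaneous(features, models):
    """Simultaneous input: query every model and return the full result set."""
    return {scene: model.predict(features) for scene, model in models.items()}
```

With the scenic spot / suburb / rural area / tunnel example, `identify_sequential` returns results up to the rural area T and never queries the tunnel model, while `identify_simultaneous` returns all four results.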
In an alternative embodiment of the present application, the method for acquiring the identification model of the target coverage scene type according to the positive sample and the negative sample specifically includes:
performing outlier rejection, sample class equalization and data conversion pretreatment on the positive samples and the negative samples to obtain pretreated positive samples and pretreated negative samples;
and acquiring an identification model of the target coverage scene type according to the preprocessed positive sample and the preprocessed negative sample.
Specifically, in order to ensure the accuracy of the samples when training the recognition model of the target coverage scene type and to improve the model's ability to extract features, the sample data needs to be preprocessed before recognition model training. In this embodiment, preprocessing includes three steps: outlier rejection, sample class equalization, and data conversion.
Outlier rejection refers to eliminating abnormal values that cannot occur in reality (such as null values, outliers, duplicate values, etc.). Abnormal values usually have adverse effects on model fitting, such as reducing the recognition accuracy of the model and increasing the risk of overfitting. Eliminating them can effectively avoid these adverse effects, increase the predictive power of the coverage scene characteristic information, and improve the accuracy of the recognition model.
Sample class equalization refers to balancing the positive and negative sample classes by sampling (such as undersampling, oversampling, or a combination of the two) or by setting penalty weights for the positive and negative samples. Class imbalance may cause the model to predict the minority class poorly. Sample class equalization helps the recognition model learn the characteristics of the minority class better and effectively improves the model's accuracy in predicting the minority class.
Data conversion refers to standardizing the data by means of the Box-Cox transformation, a power-law transformation, or normal-distribution standardization; the data may further undergo normalization, dimensionality reduction and other operations. Data conversion unifies the data into a form better suited to recognition model training and better reveals the patterns and characteristics present in the data.
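As an illustration only, the three preprocessing steps might be sketched as below; the helper names and the oversampling/Box-Cox choices are assumptions, not mandated by this embodiment:

```python
import math
import random

def reject_outliers(rows):
    """Outlier rejection: drop rows containing missing values, then drop
    duplicate rows."""
    seen, clean = set(), []
    for row in rows:
        key = tuple(row)
        if any(v is None for v in row) or key in seen:
            continue
        seen.add(key)
        clean.append(row)
    return clean

def equalize(pos, neg, seed=0):
    """Sample class equalization by oversampling the minority class until the
    two classes are the same size."""
    rng = random.Random(seed)
    small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    boosted = small + [rng.choice(small) for _ in range(len(large) - len(small))]
    return (boosted, large) if len(pos) < len(neg) else (large, boosted)

def box_cox(x, lam):
    """Box-Cox data conversion for a single positive value x."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam
```

These three helpers correspond one-to-one to the outlier rejection, sample class equalization, and data conversion steps; in practice penalty weights or normal-distribution standardization could replace the oversampling and Box-Cox choices shown here.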
The above preprocessing is performed on the positive samples and the negative samples to obtain the preprocessed positive samples and the preprocessed negative samples.
The recognition model of the target coverage scene type is trained according to the preprocessed positive samples and the preprocessed negative samples, and the trained recognition model of the target coverage scene type is obtained.
According to the technical scheme provided by the embodiment, through abnormal value rejection, sample class equalization and data conversion pretreatment on the positive sample and the negative sample, errors and interference caused by data quality and other reasons can be avoided, the positive sample and the negative sample are more standard, and different characteristics can be compared fairly. According to the pre-processed positive sample and the pre-processed negative sample, the target coverage scene type recognition model is trained, the recognition model training effect is effectively improved, and the accuracy and the robustness of the recognition model are improved.
In an alternative embodiment of the present application, the method for obtaining the identification model of the target coverage scene type according to the preprocessed positive sample and the preprocessed negative sample specifically includes:
acquiring a training sample set according to the preprocessed positive sample and the preprocessed negative sample; wherein the training sample set comprises a plurality of training samples; the training samples comprise training sample data and corresponding training sample labels; the training sample label is used for indicating positive and negative categories of samples of training sample data;
according to the training sample set, the following training operation is iteratively executed on the initial neural network until a preset training stop condition is met, so as to obtain an identification model of the target coverage scene:
inputting each training sample in the training sample set into an initial neural network, and obtaining the identification result of each training sample data;
determining training loss according to the identification result of each training sample data and the positive and negative categories of the samples;
and adjusting network parameters of the initial neural network according to the training loss.
Specifically, when the recognition model of the target coverage scene type is trained, a training sample set is obtained according to a preprocessed positive sample and a preprocessed negative sample, the training sample set comprises a plurality of training samples, each training sample comprises training sample data and a corresponding training sample label, and the training sample label is used for indicating the positive and negative categories of the samples of the training sample data.
It will be appreciated that in addition to training sample sets, test sample sets and validation sample sets may also be obtained from pre-processed positive samples and pre-processed negative samples. The number of samples in the training sample set, the test sample set and the verification sample set can be determined according to actual requirements.
After determining the training sample set, the following training operations are iteratively performed on the initial neural network according to the training sample set. The training operation specifically includes:
inputting each training sample in the training sample set into the initial neural network to obtain the identification result of each training sample's data; comparing the identification result of each training sample's data with the sample's positive or negative category to determine whether the identification result is accurate; determining the training loss of the recognition model according to whether each identification result is accurate; and adjusting the network parameters of the initial neural network according to the training loss.
Repeating the training operation until the preset training stopping condition is met, and obtaining the recognition model of the target coverage scene. It may be appreciated that the preset training stop condition may be that the number of iterative training reaches a preset number of times threshold, the change amount of the loss function of the recognition model is smaller than a preset change threshold, or the change of the accuracy of the prediction result of the recognition model is smaller than a preset accuracy threshold, etc.
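A minimal runnable sketch of this iterative training operation, with a one-feature logistic unit standing in for the initial neural network (the model, learning rate and stop thresholds are illustrative assumptions):

```python
import math

def train(samples, lr=0.5, max_iters=500, loss_delta=1e-6):
    """samples: list of (x, label) pairs, label 1 = positive, 0 = negative.
    Repeats the training operation until the change in the loss falls below
    the preset threshold or the iteration budget is exhausted."""
    w, b, prev_loss = 0.0, 0.0, float("inf")
    for _ in range(max_iters):
        loss, gw, gb = 0.0, 0.0, 0.0
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))       # identification result
            p = min(max(p, 1e-12), 1.0 - 1e-12)            # guard against log(0)
            loss -= y * math.log(p) + (1 - y) * math.log(1 - p)  # training loss
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / len(samples)                        # adjust network parameters
        b -= lr * gb / len(samples)
        if abs(prev_loss - loss) < loss_delta:             # preset stop condition
            break
        prev_loss = loss
    return w, b
```

The stop condition here is the loss-change criterion named above; the iteration-count and accuracy-change criteria could be substituted in the same loop.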
According to the technical scheme provided by the embodiment, the identification model of the target coverage scene type is trained through the preprocessed positive sample and the preprocessed negative sample, the characteristic training is carried out on the base station cell under the target coverage scene type, the identification model of the target coverage scene type is generated, the identification efficiency of the base station coverage scene aiming at the target coverage scene type is improved, and the defects of low identification efficiency and identification lag of the traditional multi-classification identification model are effectively overcome. In addition, in the training process of the identification model of the target coverage scene type, only the scene characteristic information of the base station is adopted, the real-time acquisition of geographic position information is not needed, the problem that in the prior art, the coverage scene identification effect is poor due to insufficient information precision and timeliness of the geographic position information is effectively solved, and the accuracy of the identification result is effectively improved.
The following describes a specific application of the embodiment of the present application in detail by a specific example:
The scene characteristics of a base station are comprehensively evaluated from three aspects, namely the traffic information, user amount information and usage amount information of the base station, and data capable of reflecting differences in base station users' services are selected. In this embodiment, 31 indexes across dimensions such as traffic volume, service category and usage amount are taken as an example to construct a characteristic index system for the mobile communication system; together these indexes form the scene characteristic information shown in Table 2 below.
TABLE 2
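Purely as an illustration of how such a characteristic index system might be assembled from the three data dimensions (the concrete 31 indexes are those of Table 2; every field name below is hypothetical):

```python
def scene_feature_vector(traffic_by_hour, users_by_service, usage_by_service):
    """Assemble a few example scene-feature indexes from the three dimensions:
    traffic volume, user amount and usage amount."""
    total_usage = sum(usage_by_service.values())
    return {
        "busy_hour_traffic": max(traffic_by_hour.values()),
        "idle_hour_traffic": min(traffic_by_hour.values()),
        "total_users": sum(users_by_service.values()),
        "video_usage_share": (usage_by_service.get("video", 0.0) / total_usage
                              if total_usage else 0.0),
    }
```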
Fig. 2 is a schematic diagram of a method for obtaining a target coverage scene base station distribution effect diagram in one example of the embodiment of the present application. The method of this embodiment is described taking the target coverage scene type being a commercial area as an example.
Scene characteristic information (shown in table 2) of each sample base station in the sample base station set is acquired, whether the coverage scene type of the sample base station is a commercial area is judged according to the geographical position scene information of the sample base station, the sample base station with the coverage scene type being the commercial area is divided into positive sample base stations, and the sample base stations with the coverage scene type not being the commercial area are divided into negative sample base stations. And acquiring a positive sample according to the scene characteristic information of each positive sample base station, acquiring a negative sample according to the scene characteristic information of each negative sample base station, and preprocessing the data in the positive sample and the negative sample.
And according to the preprocessed positive samples and the preprocessed negative samples, a decision tree algorithm is used for learning and classifying the commercial area coverage scene base station and the non-commercial area coverage scene base station, a commercial area identification model is constructed, the coverage scene features are identified according to the identification model, feature threshold learning is carried out, and the trained commercial area identification model is obtained after repeated iterative training.
It will be appreciated that the decision tree algorithm is a method of approximating discrete-valued functions and is a typical classification method. The internal decision trees in the algorithm are regression trees; an L1 regular term and an L2 regular term are added on the basis of the original error function, and the loss function may be the squared loss or the logistic loss.
Adding regularization terms can effectively prevent overfitting in two ways. Pre-pruning: the regularization term penalizes the number of leaf nodes. Smoothing: the L2 term on the squared leaf scores smooths the leaf scores.
Objective function used by the decision tree:

Obj = Σ_i L(y_i, ŷ_i) + γT + (1/2) λ Σ_j w_j²

where Σ_i L(y_i, ŷ_i) is the sum of the prediction errors of the decision tree model over all samples, ŷ_i is the predicted value of the i-th sample and y_i is the actual value of the i-th sample; γT is the L1 regular term and γ is the L1 regular term coefficient, T being the number of leaf nodes; (1/2) λ Σ_j w_j² is the L2 regular term and λ is the L2 regular term coefficient; w_j is the leaf score of the j-th leaf node, representing the information gain covered by the corresponding leaf node, i.e. the predictive power of that leaf node.
It will be appreciated that each leaf node is typically represented by a score. If the score gain from splitting a leaf node is determined to be high enough, the decision tree splits the corresponding leaf node, creates two branches, and routes all samples of the split leaf node into the two new leaf nodes according to their characteristic values.
Based on the above decision tree algorithm model, binary classification of the positive and negative coverage scene feature samples can be realized.
When the decision tree algorithm is used to train the recognition model, a sample category label is added on the basis of the preprocessed positive and negative samples: positive samples are labeled 1 and negative samples are labeled 0. The sample data is then divided into a training data set, a validation data set and a test data set at a ratio of 7:2:1. Data in the training data set are randomly partitioned, and a certain amount of positive and negative sample data is selected as the input data of the recognition model. The recognition model is trained iteratively; at each iteration the parameters of the decision tree recognition model (such as booster, objective, max_depth and save_period) are optimized, and once it is determined after multiple iterations that the recognition model meets the training stop condition, the trained recognition model is obtained.
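The labelling and 7:2:1 split step can be sketched as follows (the shuffling seed and tuple layout are illustrative assumptions):

```python
import random

def label_and_split(pos, neg, seed=0):
    """Label positive samples 1 and negative samples 0, shuffle, and divide
    the data into training / validation / test sets at a 7:2:1 ratio."""
    data = [(x, 1) for x in pos] + [(x, 0) for x in neg]
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train, n_val = (7 * n) // 10, (2 * n) // 10
    return data[:n_train], data[n_train:n_train + n_val], data[n_train + n_val:]
```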
Taking the training of the commercial area recognition model in this embodiment as an example, Fig. 3 is a schematic diagram of the training effect of the recognition model in one example of the embodiment of the present application; the training effect of the commercial area recognition model is shown in Fig. 3. The abscissa and ordinate of the effect diagram are the False Positive Rate (FPR) and the True Positive Rate (TPR) respectively, where FPR is the proportion of samples whose real label is negative but whose prediction is positive among all real negative samples, and TPR is the proportion of samples whose real label is positive and whose prediction is positive among all real positive samples. The closer the ROC curve (Receiver Operating Characteristic curve) is to the upper left corner, the larger the Area Under the Curve (AUC), indicating better performance of the recognition model.
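The FPR and TPR definitions above amount to the following computation over paired labels and predictions (1 = positive, 0 = negative):

```python
def fpr_tpr(labels, preds):
    """False Positive Rate and True Positive Rate from paired labels and
    predictions; these are the two ROC-curve coordinates."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return fp / (fp + tn), tp / (tp + fn)
```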
When the AUC in FIG. 3 is 0.94, the Accuracy (Accuracy) of the recognition model is 91.62%, the Precision (Precision) is 90.17%, the Recall (Recall) is 79.39%, and the F1 value (F1-score) is 84.44%. At this time, the recognition model is determined to reach a preset training effect, the iterative training is stopped, and the recognition model of the commercial area after training is obtained.
The trained commercial area recognition model is then verified using the validation data set. Fig. 4 is a schematic diagram of the verification effect of the recognition model in one example of the embodiment of the present application. As shown in Fig. 4, when the AUC is 0.85, the Accuracy of the recognition model is 84.72%, the Precision is 81.02%, the Recall is 63.43% and the F1-score is 71.15%. At this point the comprehensive evaluation of the verification result of the recognition model is determined to exceed 70%, and the recognition model is determined to meet the verification requirement.
In addition, a weighted classification evaluation metric, the F-score, can be calculated according to the relative importance of Precision and Recall, using the following formula:

F_β = (1 + β²) · Precision · Recall / (β² · Precision + Recall)

where β is used to adjust the weights. When β takes the value 1, Precision and Recall are weighted equally; if Precision is considered more important, β is decreased, and if Recall is considered more important, β is increased.
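A direct transcription of this weighted F-score; with β = 1 it reproduces the F1 value reported for the training run above (Precision 90.17% and Recall 79.39% give F1 ≈ 84.44%):

```python
def f_score(precision, recall, beta=1.0):
    """Weighted F-score; beta < 1 favours precision, beta > 1 favours recall."""
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)
```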
Referring to fig. 2 again, the validated recognition model is tested by using a test data set (detailed process is omitted), and the accuracy of the recognition model in this embodiment can be confirmed to be more than 80% by field investigation.
After the identification model is determined to meet the test requirement, the identification model is adopted to identify the coverage scene of the base station, whether the coverage scene type of each base station in the target area to be identified is a commercial area or not is determined, and the base station with the coverage scene type being the commercial area is taken as the target base station.
The longitude and latitude information of the geographic position point of each target base station is obtained. Taking the longitude and latitude information carried in the operator's own base station cell data as an example, the information is shown in Table 3 below.
TABLE 3
Each target base station in the target area to be identified is clustered by the DBSCAN algorithm to obtain base station clusters, and a commercial area base station distribution effect diagram of the target area to be identified is generated according to the base station clusters. Fig. 5 is a target coverage scene base station distribution effect diagram in one example of the embodiment of the present application; the commercial area base station distribution effect diagram is shown in Fig. 5.
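A compact DBSCAN sketch over longitude/latitude points, assuming planar Euclidean distance for simplicity (a real deployment would likely use great-circle distance, and the eps/min_pts values here are illustrative only):

```python
import math

def dbscan(points, eps=0.01, min_pts=3):
    """Density-based clustering; returns one cluster label per point, -1 = noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = neighbors(i)
        if len(seed) < min_pts:      # not a core point
            labels[i] = -1           # noise (a cluster may still claim it later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seed if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:      # noise reachable from a core point
                labels[j] = cluster  # becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts: # j is a core point: keep expanding
                queue.extend(nbrs)
    return labels
```

Each resulting label corresponds to one base station cluster; the -1 points are isolated base stations that do not belong to any commercial-area cluster.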
According to the coverage scene recognition method provided by the embodiment, recognition and visual display of distribution conditions of the target coverage scene base station can be realized.
Fig. 6 is a schematic structural diagram of an overlay scene recognition device according to an embodiment of the present application, and as shown in fig. 6, the device 60 may include: a feature information acquisition module 601 and an overlay scene recognition module 602;
The feature information obtaining module 601 is configured to obtain scene feature information of a base station to be identified according to traffic information, user amount information and usage amount information of any base station to be identified in a target area to be identified;
the coverage scene recognition module 602 is configured to input scene feature information of a base station to be recognized into a recognition model of a target coverage scene type, and obtain a recognition result of the target coverage scene type of the base station to be recognized; the identification result of the target coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the target coverage scene type or not;
wherein, the recognition model of the target coverage scene type is obtained by the following modes:
dividing the sample base stations into positive sample base stations and negative sample base stations according to the geographical position scene information of each sample base station in the sample base station set; wherein the coverage scene of the positive sample base station is a target coverage scene type; the coverage scene of the negative-sample base station is not the target coverage scene type;
acquiring a positive sample according to the scene characteristic information of each positive sample base station, and acquiring a negative sample according to the scene characteristic information of each negative sample base station;
and acquiring an identification model of the target coverage scene type according to the positive sample and the negative sample.
According to the technical scheme provided by the embodiment, scene characteristic information capable of reflecting the type characteristics of the coverage scene of the base station is obtained through the traffic information, the user quantity information and the usage quantity information of the base station. And inputting the scene characteristic information of the base station to be identified into a trained identification model of the target coverage scene type, and identifying whether the coverage scene of the base station to be identified is the target coverage scene type. Because only the scene characteristic information of the base station is adopted in the training and recognition process of the recognition model of the target coverage scene type, the real-time acquisition of the geographic position information is not required, the problem that the coverage scene recognition effect is poor due to insufficient information precision and timeliness of the geographic position information in the prior art is effectively solved, and the accuracy of the recognition result is effectively improved.
The device of the embodiment of the present application may perform the method provided by the embodiment of the present application, and its implementation principle is similar, and actions performed by each module in the device of the embodiment of the present application correspond to steps in the method of the embodiment of the present application, and detailed functional descriptions of each module of the device may be referred to the descriptions in the corresponding methods shown in the foregoing, which are not repeated herein.
In an alternative embodiment of the present application, the feature information obtaining module is further configured to:
for any base station, acquiring traffic information according to traffic data of the base station in different time periods, acquiring user quantity information according to user quantity data of the base station in different time periods and different traffic types, and acquiring usage quantity information according to traffic data and duration data of the base station in different traffic types.
In an alternative embodiment of the present application, the coverage scene recognition means further includes: a base station distribution identification module; the base station distribution identification module is used for:
taking a base station with a coverage scene in a target area to be identified as a target coverage scene type as a target base station, acquiring longitude and latitude information of each target base station geographic position point, and carrying out space density clustering on each target base station according to the longitude and latitude information of each target base station geographic position point to obtain at least one base station cluster;
and acquiring a target coverage scene base station distribution effect diagram of the target to-be-identified area according to each base station cluster.
In an alternative embodiment of the present application, the overlay scene recognition module is further configured to:
if the identification result of the target coverage scene type of the base station to be identified indicates that the coverage scene of the base station to be identified is not the target coverage scene type, respectively inputting the scene characteristic information of the base station to be identified into different identification models of the identification model set to obtain an identification result set; wherein the set of recognition models includes a plurality of recognition models of different other overlay scene types; the recognition result set comprises at least one recognition result of the coverage scene type; the identification result of the coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the coverage scene type or not;
And acquiring the coverage scene type of the base station to be identified according to the identification result set.
In an alternative embodiment of the present application, the coverage scene recognition module is specifically configured to:
sequentially inputting scene characteristic information of a base station to be identified into each identification model in an identification model set in sequence, sequentially obtaining identification results corresponding to each identification model until a first identification result exists, and obtaining the identification result set according to the first identification result; the first identification result is used for indicating that the coverage scene of the base station to be identified is a first coverage scene type;
or respectively inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set, acquiring the identification result corresponding to each identification model, and acquiring the identification result set according to the identification results corresponding to all the identification models.
In an alternative embodiment of the present application, the coverage scene recognition means further includes: a recognition model construction module; the recognition model construction module is used for:
performing outlier rejection, sample class equalization and data conversion pretreatment on the positive samples and the negative samples to obtain pretreated positive samples and pretreated negative samples;
and acquiring an identification model of the target coverage scene type according to the preprocessed positive sample and the preprocessed negative sample.
In an alternative embodiment of the present application, the recognition model building module is specifically configured to:
acquiring a training sample set according to the preprocessed positive sample and the preprocessed negative sample; wherein the training sample set comprises a plurality of training samples; the training samples comprise training sample data and corresponding training sample labels; the training sample label is used for indicating positive and negative categories of samples of training sample data;
according to the training sample set, the following training operation is iteratively executed on the initial neural network until a preset training stop condition is met, so as to obtain an identification model of the target coverage scene:
inputting each training sample in the training sample set into an initial neural network, and obtaining the identification result of each training sample data;
determining training loss according to the identification result of each training sample data and the positive and negative categories of the samples;
and adjusting network parameters of the initial neural network according to the training loss.
The embodiment of the application provides electronic equipment, which comprises a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to realize the steps of the coverage scene identification method, and compared with the related technology, the steps of the coverage scene identification method can be realized: and acquiring scene characteristic information capable of reflecting the characteristics of the coverage scene type of the base station through the traffic information, the user quantity information and the usage quantity information of the base station. And inputting the scene characteristic information of the base station to be identified into a trained identification model of the target coverage scene type, and identifying whether the coverage scene of the base station to be identified is the target coverage scene type. Because only the scene characteristic information of the base station is adopted in the training and recognition process of the recognition model of the target coverage scene type, the real-time acquisition of the geographic position information is not required, the problem that the coverage scene recognition effect is poor due to insufficient information precision and timeliness of the geographic position information in the prior art is effectively solved, and the accuracy of the recognition result is effectively improved.
In an alternative embodiment, an electronic device is provided. As shown in Fig. 7, the electronic device 700 includes: a processor 701 and a memory 703. The processor 701 is connected to the memory 703, for example via a bus 702. Optionally, the electronic device 700 may further include a transceiver 704, which may be used for data interaction between this electronic device and other electronic devices, such as sending and/or receiving data. It should be noted that in practical applications the number of transceivers 704 is not limited to one, and the structure of the electronic device 700 does not limit the embodiment of the present application.
The processor 701 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor 701 may also be a combination that performs computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 702 may include a path to transfer information between the above components. The bus 702 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in Fig. 7, but this does not mean there is only one bus or one type of bus.
The memory 703 may be a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store a computer program and that can be read by a computer, without limitation.
The memory 703 is used to store a computer program for executing an embodiment of the present application, and execution of that program is controlled by the processor 701. The processor 701 is arranged to execute the computer program stored in the memory 703 to carry out the steps shown in the foregoing method embodiments.
The electronic device in the embodiment of the present application may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a car-mounted terminal (e.g., car navigation terminal), a wearable device, etc., and a fixed terminal such as a digital TV, a desktop computer, etc.
Embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of the foregoing method embodiments and corresponding content.
The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and in the claims and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the application described herein may be implemented in other sequences than those illustrated or otherwise described.
It should be understood that, although various operation steps are indicated by arrows in the flowcharts of the embodiments of the present application, the order in which these steps are implemented is not limited to the order indicated by the arrows. In some implementations of embodiments of the application, the steps in the flowcharts may be performed in other orders as desired, unless explicitly stated herein. Furthermore, some or all of the steps in the flowcharts may include multiple sub-steps or multiple stages based on the actual implementation scenario. Some or all of these sub-steps or stages may be performed at the same time, or each may be performed at a different time. When they are performed at different times, the execution order of the sub-steps or stages can be flexibly configured as needed, which is not limited by the embodiments of the present application.
The foregoing is merely an optional implementation of some of the implementation scenarios of the present application. It should be noted that, for those skilled in the art, other similar implementations adopted on the basis of the technical ideas of the present application, without departing from the technical concept of the scheme of the present application, also fall within the protection scope of the embodiments of the present application.

Claims (9)

1. An overlay scene recognition method, comprising:
acquiring scene characteristic information of any base station to be identified in a target area to be identified according to traffic information, user quantity information and usage quantity information of the base station to be identified; the scene characteristic information is used for reflecting differences in the user services of base stations;
inputting the scene characteristic information of the base station to be identified into an identification model of the target coverage scene type, and acquiring an identification result of the target coverage scene type of the base station to be identified; the identification result of the target coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the target coverage scene type or not;
wherein the recognition model of the target coverage scene type is obtained by the following method:
dividing the sample base stations into positive sample base stations and negative sample base stations according to the geographical position scene information of each sample base station in the sample base station set; the geographical position scene information is used for judging whether the coverage scene of the sample base station is the target coverage scene type or not; the coverage scene of the positive sample base station is the target coverage scene type; the coverage scene of the negative sample base station is not the target coverage scene type;
acquiring a positive sample according to the scene characteristic information of each positive sample base station, and acquiring a negative sample according to the scene characteristic information of each negative sample base station;
acquiring an identification model of the target coverage scene type according to the positive sample and the negative sample;
the method further comprises the steps of:
for any base station, acquiring the traffic information according to traffic data of the base station in different time periods, acquiring the user quantity information according to user quantity data of the base station in different time periods and for different traffic types, and acquiring the usage quantity information according to traffic data and duration data of the base station for different traffic types.
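The feature-construction step above can be sketched as follows. The flat record schema, field names, and aggregation keys are illustrative assumptions; the claim fixes the three information categories (traffic, user quantity, usage quantity) but not a concrete data layout.

```python
from collections import defaultdict

def build_scene_features(records):
    """Aggregate per-base-station measurement records into one feature vector.

    Each record is assumed to be a dict with 'period' (time bucket),
    'service' (traffic type), 'traffic', 'users', and 'duration' fields.
    """
    traffic_by_period = defaultdict(float)   # traffic information
    users_by_key = defaultdict(float)        # user quantity information
    usage_by_service = defaultdict(float)    # usage quantity information

    for r in records:
        traffic_by_period[r["period"]] += r["traffic"]
        users_by_key[(r["period"], r["service"])] += r["users"]
        # usage combines traffic volume and duration per traffic type
        usage_by_service[r["service"]] += r["traffic"] * r["duration"]

    # flatten into one ordered feature vector for the recognition model
    return (
        [v for _, v in sorted(traffic_by_period.items())]
        + [v for _, v in sorted(users_by_key.items())]
        + [v for _, v in sorted(usage_by_service.items())]
    )
```

Each base station then yields one fixed-length vector capturing when traffic occurs, who uses which service, and how heavily each service is used — the differences in user services that the scene characteristic information is meant to reflect.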
2. The overlay scene recognition method of claim 1, wherein the method further comprises:
taking each base station in the target area to be identified whose coverage scene is the target coverage scene type as a target base station, acquiring longitude and latitude information of the geographic position point of each target base station, and carrying out spatial density clustering on the target base stations according to the longitude and latitude information of the geographic position points to obtain at least one base station cluster;
and acquiring a target coverage scene base station distribution effect diagram of the target to-be-identified area according to each base station cluster.
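The spatial density clustering of claim 2 is typically realized with a DBSCAN-style algorithm. A minimal self-contained sketch follows; plain Euclidean distance on raw degrees is used for brevity (a real system would use a geodesic metric), and the parameter values in the usage example are arbitrary.

```python
import math

def density_cluster(points, eps, min_pts):
    """Minimal DBSCAN-style clustering over (lat, lon) points.

    Returns one cluster label per point; -1 marks noise points that
    belong to no base station cluster.
    """
    labels = [None] * len(points)

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1               # not a core point: provisionally noise
            continue
        cluster += 1                     # start a new base station cluster
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:   # core point: keep expanding the cluster
                queue.extend(nbrs_j)
    return labels
```

Plotting each cluster's member base stations on a map then gives the target coverage scene base station distribution effect diagram referred to in claim 2.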
3. The overlay scene recognition method of claim 1, wherein the method further comprises:
if the identification result of the target coverage scene type of the base station to be identified indicates that the coverage scene of the base station to be identified is not the target coverage scene type, respectively inputting the scene characteristic information of the base station to be identified into different identification models of an identification model set to obtain an identification result set; wherein the set of recognition models includes a plurality of recognition models of different other overlay scene types; the recognition result set comprises at least one recognition result of the coverage scene type; the identification result of the coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the coverage scene type or not;
and acquiring the coverage scene type of the base station to be identified according to the identification result set.
4. The method for identifying a coverage scene according to claim 3, wherein the step of inputting the scene feature information of the base station to be identified into different identification models of an identification model set, respectively, and obtaining an identification result set includes:
sequentially inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set, obtaining the identification result corresponding to each identification model in turn until a first identification result exists, and obtaining the identification result set according to the first identification result; the first identification result is used for indicating that the coverage scene of the base station to be identified is a first coverage scene type;
or respectively inputting the scene characteristic information of the base station to be identified into each identification model in the identification model set, acquiring the identification result corresponding to each identification model, and acquiring the identification result set according to the identification results corresponding to all the identification models.
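The first, "stop at the first positive result" variant can be sketched as a short dispatch loop. The mapping layout, the callable model interface, and the scene-type names are illustrative assumptions:

```python
def identify_coverage_scene(feature_vec, model_set):
    """Try each per-scene-type recognition model in order.

    `model_set` is an ordered mapping {scene_type: model}, where each
    model is a callable returning True if the features match that type.
    """
    for scene_type, model in model_set.items():
        if model(feature_vec):
            return scene_type      # first positive identification result wins
    return "unknown"               # no model recognized the coverage scene
```

The exhaustive variant of the claim would instead evaluate every model and collect all identification results before deciding.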
5. The overlay scene recognition method according to claim 1, wherein the acquiring the recognition model of the target overlay scene type according to the positive sample and the negative sample specifically comprises:
performing outlier rejection, sample class equalization and data conversion preprocessing on the positive samples and the negative samples to obtain preprocessed positive samples and preprocessed negative samples;
and acquiring an identification model of the target coverage scene type according to the preprocessed positive sample and the preprocessed negative sample.
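On a one-dimensional feature, the three preprocessing operations of claim 5 could be sketched as below. The 3-sigma rule, random oversampling, and z-score normalization are common stand-ins chosen for illustration; the claim does not prescribe particular techniques.

```python
import random
import statistics

def preprocess(samples, labels):
    """Outlier rejection, sample class equalization, and data conversion
    on a 1-D feature. Returns a list of (normalized_value, label) pairs."""
    # 1. outlier rejection: drop samples beyond 3 standard deviations
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    kept = [(x, y) for x, y in zip(samples, labels) if abs(x - mu) <= 3 * sigma]

    # 2. class equalization: randomly oversample the minority class to parity
    pos = [p for p in kept if p[1] == 1]
    neg = [p for p in kept if p[1] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    minority = minority + random.choices(minority, k=len(majority) - len(minority))
    balanced = minority + majority

    # 3. data conversion: z-score normalization of the feature
    xs = [x for x, _ in balanced]
    mu2, sigma2 = statistics.mean(xs), statistics.stdev(xs)
    return [((x - mu2) / sigma2, y) for x, y in balanced]
```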
6. The overlay scene recognition method according to claim 5, wherein the acquiring the recognition model of the target overlay scene type according to the preprocessed positive sample and the preprocessed negative sample specifically comprises:
acquiring a training sample set according to the preprocessed positive sample and the preprocessed negative sample; wherein the training sample set comprises a plurality of training samples; the training samples comprise training sample data and corresponding training sample labels; the training sample label is used for indicating the positive and negative categories of samples of the training sample data;
according to the training sample set, iteratively executing the following training operation on the initial neural network until a preset training stop condition is met, so as to obtain the identification model of the target coverage scene type:
inputting each training sample in the training sample set into an initial neural network, and obtaining the identification result of each training sample data;
determining training loss according to the identification result of each training sample data and the positive and negative categories of the samples;
and according to the training loss, adjusting the network parameters of the initial neural network.
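The training operation above can be sketched with a single logistic unit standing in for the initial neural network, batch gradient descent on the cross-entropy loss playing the role of adjusting the network parameters, and a fixed epoch budget as the preset stop condition; all of these concrete choices are assumptions for illustration.

```python
import math

def train_recognizer(samples, epochs=500, lr=0.5):
    """Iteratively train a binary coverage-scene recognizer.

    `samples` is a list of (feature_vector, label) pairs, with label 1
    meaning the sample base station has the target coverage scene type.
    """
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0

    for _ in range(epochs):                      # preset stop condition: epoch budget
        grad_w, grad_b = [0.0] * dim, 0.0
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # recognition result for this sample
            err = p - y                          # gradient of the cross-entropy loss
            for k in range(dim):
                grad_w[k] += err * x[k]
            grad_b += err
        # adjust the parameters against the averaged training loss gradient
        w = [wi - lr * g / len(samples) for wi, g in zip(w, grad_w)]
        b -= lr * grad_b / len(samples)

    def model(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z)) >= 0.5
    return model
```

On linearly separable data this converges to a usable binary recognizer; the patented method would substitute the actual network architecture and stop condition.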
7. An overlay scene recognition apparatus, comprising:
the characteristic information acquisition module is used for acquiring scene characteristic information of any base station to be identified in the target area to be identified according to traffic information, user quantity information and usage quantity information of the base station to be identified; the scene characteristic information is used for reflecting differences in the user services of base stations;
the coverage scene recognition module is used for inputting the scene characteristic information of the base station to be recognized into a recognition model of the target coverage scene type and obtaining a recognition result of the target coverage scene type of the base station to be recognized; the identification result of the target coverage scene type is used for indicating whether the coverage scene of the base station to be identified is the target coverage scene type or not;
wherein the recognition model of the target coverage scene type is obtained by the following method:
dividing the sample base stations into positive sample base stations and negative sample base stations according to the geographical position scene information of each sample base station in the sample base station set; the geographical position scene information is used for judging whether the coverage scene of the sample base station is the target coverage scene type or not; the coverage scene of the positive sample base station is the target coverage scene type; the coverage scene of the negative sample base station is not the target coverage scene type;
acquiring a positive sample according to the scene characteristic information of each positive sample base station, and acquiring a negative sample according to the scene characteristic information of each negative sample base station;
acquiring an identification model of the target coverage scene type according to the positive sample and the negative sample;
wherein:
for any base station, acquiring the traffic information according to traffic data of the base station in different time periods, acquiring the user quantity information according to user quantity data of the base station in different time periods and for different traffic types, and acquiring the usage quantity information according to traffic data and duration data of the base station for different traffic types.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to carry out the steps of the method according to any one of claims 1-6.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1-6.
CN202310809241.XA 2023-07-04 2023-07-04 Coverage scene recognition method, device, electronic equipment and readable storage medium Active CN116528282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310809241.XA CN116528282B (en) 2023-07-04 2023-07-04 Coverage scene recognition method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN116528282A CN116528282A (en) 2023-08-01
CN116528282B true CN116528282B (en) 2023-09-22

Family

ID=87401599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310809241.XA Active CN116528282B (en) 2023-07-04 2023-07-04 Coverage scene recognition method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116528282B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778308A (en) * 2023-08-23 2023-09-19 北京阿帕科蓝科技有限公司 Object recognition method, device, computer equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN109104731A (en) * 2018-07-04 2018-12-28 广东海格怡创科技有限公司 Construction method, device and the computer equipment of cell scenario category classification model
CN109919244A (en) * 2019-03-18 2019-06-21 北京字节跳动网络技术有限公司 Method and apparatus for generating scene Recognition model
WO2022002242A1 (en) * 2020-07-02 2022-01-06 北京灵汐科技有限公司 Scene recognition method and system, and electronic device and medium
CN114423025A (en) * 2021-12-29 2022-04-29 中国电信股份有限公司 Scene recognition method, device, equipment and storage medium
CN115618290A (en) * 2021-06-30 2023-01-17 中国联合网络通信集团有限公司 Cell scene identification method, device, equipment and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
EP4024297A4 (en) * 2019-09-16 2022-11-09 Huawei Cloud Computing Technologies Co., Ltd. Artificial intelligence (ai) model evaluation method and system, and device
US20220383151A1 (en) * 2021-05-26 2022-12-01 At&T Intellectual Property I, L.P. Steering of roaming optimization with subscriber behavior prediction


Non-Patent Citations (2)

Title
Research on AI-based energy-saving technology for 5G base stations; Zhang Zhirong et al.; Application of Electronic Technique, 2019, Vol. 45, No. 10, p. 3 *


Similar Documents

Publication Publication Date Title
CN110166943B (en) Method for processing terminal position information
Zhang et al. The Traj2Vec model to quantify residents’ spatial trajectories and estimate the proportions of urban land-use types
US20130260771A1 (en) Generation and use of coverage area models
US11966424B2 (en) Method and apparatus for dividing region, storage medium, and electronic device
US11468349B2 (en) POI valuation method, apparatus, device and computer storage medium
CN116528282B (en) Coverage scene recognition method, device, electronic equipment and readable storage medium
Ghafourian et al. Wireless localization with diffusion maps
CN112699201B (en) Navigation data processing method and device, computer equipment and storage medium
CN117079148B (en) Urban functional area identification method, device, equipment and medium
CN111126422A (en) Industry model establishing method, industry determining method, industry model establishing device, industry determining equipment and industry determining medium
Maaloul et al. Bluetooth beacons based indoor positioning in a shopping malls using machine learning
Yang et al. AI-enabled data-driven channel modeling for future communications
CN114630269B (en) Signal processing method, device, equipment and storage medium
CN110691336A (en) Double-scale positioning algorithm based on integrated learning and relative positioning
Tennekes et al. A Bayesian approach to location estimation of mobile devices from mobile network operator data
CN113890833B (en) Network coverage prediction method, device, equipment and storage medium
Georgati et al. Spatial Disaggregation of Population Subgroups Leveraging Self-Trained Multi-Output Gradient Boosting Regression Trees
CN112235723B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN115203348A (en) Information processing method, information processing apparatus, storage medium, and server
CN108495262B (en) Sparse representation and matching positioning method for indoor space ubiquitous positioning signal fingerprint database
Lin et al. Two-stage clustering for improve indoor positioning accuracy
Ma et al. Service coverage optimization for facility location: considering line-of-sight coverage in continuous demand space
Tennekes et al. Bayesian location estimation of mobile devices using a signal strength model
Salnikov et al. The geography and carbon footprint of mobile phone use in Côte d’Ivoire
CN110619086B (en) Method and apparatus for processing information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant