CN111132027B - Scene recognition network graph drawing method, scene recognition method and device - Google Patents

Scene recognition network graph drawing method, scene recognition method and device

Info

Publication number
CN111132027B
CN111132027B
Authority
CN
China
Prior art keywords
poi
base station
scene
user
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911293292.1A
Other languages
Chinese (zh)
Other versions
CN111132027A (en
Inventor
魏国华
郭翔宇
郭向红
孙颖飞
王波
屈立学
白晶晶
范荣娜
张景钊
计潇怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Inner Mongolia Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Inner Mongolia Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Inner Mongolia Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201911293292.1A
Publication of CN111132027A
Application granted
Publication of CN111132027B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02: Services making use of location information
    • H04W4/029: Location-based management or tracking services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/248: Presentation of query results
    • G06F16/29: Geographical information databases
    • H04W64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/003: Locating network equipment for network management purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a method for drawing a scene recognition network graph, together with a scene recognition method and apparatus. The drawing method comprises the following steps: acquiring the position information of base stations and the position information of points of interest (POI) respectively; determining base stations and POIs that match each other according to that position information; determining target historical behavior data of the user at the POI matched with a base station, based on the mutually matched base station and POI and on historical user behavior data acquired from a database of an operator; and drawing the position trajectory of the user according to the target historical behavior data, thereby obtaining a scene recognition network graph. A scene recognition network graph drawn in this way enables accurate, real-time and effective recognition of scene representations.

Description

Scene recognition network graph drawing method, scene recognition method and device
Technical Field
The invention relates to the field of artificial intelligence processing, in particular to a scene recognition network graph drawing method, a scene recognition method and a scene recognition device.
Background
Big data and artificial intelligence (AI) technology can be used to identify the scene in which a user is located. A user scene refers to a specific place, and a scene representation is formed by the specific people present at a specific place at a specific time. Based on the identified scene, information related to that scene can be pushed to the user in a timely manner.
In the prior art, some schemes identify the scene in which a user is located using Wireless Fidelity (WiFi) information and tags, but processing WiFi information and tags yields only a limited set of user scenes; other schemes construct a user scene only from user-related features obtained by analyzing the user's log data.
However, these existing methods of identifying a user scene to obtain a user scene representation all suffer from low accuracy.
Disclosure of Invention
The invention provides a method for drawing a scene recognition network graph, a scene recognition method and a scene recognition device, which can accurately recognize scene representations of points of interest (POI).
In a first aspect, the present invention provides a method for drawing a scene recognition network graph, the method comprising: acquiring the position information of base stations and the position information of points of interest (POI) respectively; determining base stations and POIs that match each other according to that position information; determining target historical behavior data of the user at the POI matched with a base station, based on the mutually matched base station and POI and on historical user behavior data acquired from a database of an operator; and drawing the position trajectory of the user according to the target historical behavior data, thereby obtaining a scene recognition network graph.
In some implementation manners of the first aspect, determining a base station and a POI that match each other according to the location information of the base station and the location information of the POI specifically includes: determining whether the POI exists in the area where the base station is located according to the position information of the base station and the position information of the POI;
and when only one first POI exists in the area where the base station is located, the identification of the first POI is associated to the base station.
In some implementation manners of the first aspect, determining a base station and a POI that match each other according to the location information of the base station and the location information of the POI further includes: when at least two second POIs exist in the area where the base station is located, the following operations are respectively executed for each second POI: acquiring the matching degree of the base station and a second POI according to the matching degree model of the relationship between the base station and the POI;
respectively associating the identifiers of the first n second POIs with the highest matching degree to the base station; the base station and POI relation matching degree model is obtained by establishing a corresponding relation between the base station and the POI, and n is a positive integer.
In some implementations of the first aspect, the target historical behavior data includes: the POI position where the user resides among the POIs matched with the base station, the residence duration at that POI position, and the time point at which the user arrives at that POI position.
In some implementation manners of the first aspect, the drawing a position track of a user according to the target historical behavior data to obtain a scene recognition network graph specifically includes: the following operations are respectively performed for each user: sequentially connecting POIs where the user resides by using lines according to the sequence of the corresponding arrival time points of the user on the different resident POIs, thereby drawing and obtaining a position track of the user; wherein the locus of the positions of the plurality of users forms a scene recognition network map.
In some implementations of the first aspect, the POI where the user resides is displayed as a location point in the location trajectory of the user, wherein an area of the location point is positively correlated with the residence time.
In a second aspect, the present invention provides a scene recognition method, including: determining a target point of interest (POI);
acquiring user online information of a target POI;
identifying the scene of the target POI according to the user online information and a scene recognition network graph to obtain first scene description information of the target POI, wherein the scene recognition network graph is obtained by the drawing method of the first aspect or any one of its implementations;
the first scene description information is determined as the scene representation of the target POI.
In some implementations of the second aspect, after determining the first scene description information as the scene representation of the target POI, the method further includes: performing text processing on the first scene description information to obtain second scene description information; and determining the second scene description information as the scene representation of the target POI.
In a third aspect, the present invention provides an apparatus for drawing a scene recognition network map, including: the first acquisition module is used for respectively acquiring the position information of the base station and the position information of the POI;
the first determining module is used for determining the base station and the POI which are matched with each other according to the position information of the base station and the position information of the POI;
the second determining module is used for determining target historical behavior data of the user at the POI matched with the base station, based on the mutually matched base station and POI and on historical user behavior data acquired from a database of an operator;
and the drawing module is used for drawing the position track of the user according to the target historical behavior data so as to obtain the scene recognition network graph.
In a fourth aspect, the present invention provides a scene recognition apparatus, including: the third determination module is used for determining a target point of interest (POI);
the second acquisition module is used for acquiring user online information of the target POI;
the identification module is used for identifying the scene of the target POI according to the user online information and a scene recognition network graph to obtain first scene description information of the target POI, wherein the scene recognition network graph is obtained by the drawing method of the first aspect or any one of its implementations;
and the fourth determining module is used for determining the first scene description information as the scene representation of the target POI.
According to the scene recognition network graph drawing method and the scene recognition method provided above, the POI positions corresponding to each base station can be determined accurately by acquiring the position information of base stations and of POIs. The position trajectory of each user is then determined by combining the users' historical behavior data with the matched base station and POI positions, and bringing these position trajectories online yields a scene recognition network graph that enables accurate, timely and effective recognition of scene representations.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for drawing a scene recognition network diagram according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a location of a base station for GIS map positioning according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a base station corresponding to multiple POI locations according to an embodiment of the present invention;
FIG. 4 is a diagram of a scene recognition network according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a scene recognition method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a scene representation generation system according to an embodiment of the present invention;
FIG. 7 is a schematic view of a scene image recognition process according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a device for drawing a scene recognition network diagram according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a scene recognition apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a scene recognition device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Common existing scene-representation recognition methods include WiFi-based portraits and holographic user-portrait construction. However, identifying the user's scene from WiFi information and tags covers only a limited set of user scenes, and holographic portraits process only user features without relying on the user's geographic information, so the recognized portraits have low precision and poor accuracy.
To solve these problems, the embodiments of the present application identify the user's scene representation by combining the user's geographic information with the user's online information, that is, by superimposing the fixed scene on the living mobile scene, thereby improving the precision and accuracy of scene representation recognition.
In the embodiments of the present application, a scene refers to a specific place, and a scene representation is composed of the specific people present at that place at a specific time. That is, a scene representation combines a geographically fixed scene with a living mobile scene.
For example, the representation of a shop is composed of fixed merchant information and buyer information: the merchant information is the fixed scene, and the buyer information is the mobile scene.
In order to improve the precision and accuracy of scene representation recognition, the embodiment of the present application provides a specific implementation of the method for drawing a scene recognition network graph.
The following describes a method for drawing a scene recognition network diagram according to an embodiment of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a method for drawing a scene recognition network map according to an embodiment of the present invention. As shown in fig. 1, the method of drawing the scene recognition network map may include S101 to S104.
S101, respectively acquiring the position information of the base station and the position information of the POI.
In some embodiments, the specific position of a base station can be determined by dotting its longitude and latitude on a map according to its position information. Optionally, the position information of the base station may include: the cell ID of the operator base station, the Location Area Code (LAC-ID), the base station name, and the base station's longitude and latitude.
Optionally, the position information of a POI may include the longitude and latitude of positions such as houses, shops, mailboxes and bus stations.
To facilitate understanding of the location information of the base station, the location information of the base station may be as shown in table 1 below:
TABLE 1

Base station name      Longitude       Latitude
Pit winery             111.57837000    40.83020000
Green artificial gas   111.58302000    40.81288000
In some embodiments, the position information of the base stations may be pushed to a Geographic Information System (GIS) map and located on it to determine each base station's position point.
As a specific example, fig. 2 shows the position of a base station located on a GIS map according to an embodiment of the present invention.
S102, determining the base station and the POI which are matched with each other according to the position information of the base station and the position information of the POI.
Specifically, the base stations appearing within each POI's range are identified by spatial calculation on a map containing the base station position information and the POI position information.
In some embodiments, whether the POI exists in the area where the base station is located is determined according to the position information of the base station and the position information of the POI; and when only one first POI exists in the area where the base station is located, the identification of the first POI is associated to the base station.
Optionally, the base stations appearing within a POI's range are identified through spatial calculation; a base station that appears only within the range of a single POI corresponds to that POI, so the base station's position is determined to be that POI's position. Optionally, this correspondence is finally confirmed by verification against historical data in the operator database.
In some embodiments, the identifier of the POI is associated with each base station appearing within that POI's range. Because the base station and the POI form a stable one-to-one mapping in this case, this may be referred to as static matching.
Alternatively, the name of the POI may be assigned to the corresponding base station.
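The static matching step above, deciding whether a base station falls within a POI's range, can be sketched as a great-circle distance test. This is an illustrative sketch only: the `pois_in_range` helper, the dictionary field names, and the 500 m coverage radius are assumptions, since the patent does not specify how the spatial calculation is performed.

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two longitude/latitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def pois_in_range(base_station, pois, radius_m=500):
    """Return the POIs whose position lies within radius_m of the base station.

    If exactly one POI is returned, the base station can be statically
    associated with that POI's identifier.
    """
    return [p for p in pois
            if haversine_m(base_station["lon"], base_station["lat"],
                           p["lon"], p["lat"]) <= radius_m]
```

When the list contains more than one POI, the dynamic matching process described next applies instead.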
In some embodiments, when there are at least two second POIs in the area where the base station is located, the following operations are respectively performed for each second POI:
acquiring the matching degree of the base station and a second POI according to the matching degree model of the relationship between the base station and the POI; respectively associating the identifiers of the first n second POIs with the highest matching degree to the base station; the base station and POI relation matching degree model is obtained by establishing a corresponding relation between the base station and the POI, and n is a positive integer.
In order to more clearly illustrate the situation that one base station corresponds to multiple POI locations, the following description is made with reference to the schematic diagram of the base station corresponding to multiple POI locations shown in fig. 3 according to the embodiment of the present invention.
As shown in fig. 3, the base station 1 covers both the mall and the house, and the base station 2 covers only the mall.
As a specific example, one base station may correspond to the positions of multiple POIs: under the same base station, user A may be at the mall while user B is at the house, so different users yield different POI results. This situation may be referred to as one-to-many, unstable relationship matching, that is, a dynamic matching process.
Optionally, based on the operator database, first, a relationship table of base stations and POIs is established, and the base stations are placed in the relationship table as long as the base stations appear at the positions of the POIs. And then, establishing a matching degree model of the association relationship between the base station and the POI position according to the relationship table.
Optionally, according to the matching degree model of the association between base stations and POI positions, the identifiers of the top n POIs with the highest matching degree to the base station are associated with it. Alternatively, the names of those POIs may be assigned to the base station.
Here n is a positive integer whose value can be set as needed; it is not specifically limited.
Through the static and dynamic matching processes, all base stations, with their historical data and periods, are matched to corresponding POI positions according to the static stable correspondence and the dynamic unstable correspondence respectively.
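The patent does not give the form of the base-station/POI matching degree model. As a loose stand-in, the sketch below approximates the matching degree by co-occurrence counts in a relation table built from operator records; the function names and the use of counts as the score are assumptions.

```python
from collections import Counter, defaultdict

def build_relation_table(records):
    """records: iterable of (base_station_id, poi_id) co-occurrences,
    e.g. one row per time a base station was observed at a POI position
    in the operator database."""
    table = defaultdict(Counter)
    for bs, poi in records:
        table[bs][poi] += 1
    return table

def match_top_n(table, bs, n=2):
    """Associate the n POIs with the highest co-occurrence count
    (used here as the matching degree) with base station bs."""
    return [poi for poi, _ in table[bs].most_common(n)]
```

In a real deployment the score could instead weight observations by period or dwell time; the ranking step stays the same.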
S103, determining target historical behavior data of the user at the POI matched with the base station based on the base station and the POI matched with each other and the historical behavior data of the user; the user historical behavior data is obtained from a database of an operator.
In some embodiments, the historical behavior data of the user is obtained from the database of the operator, and the target historical behavior data is determined according to the historical behavior data of the user and the base station and the POI which are matched with each other. And finally confirming a uniquely determined POI for the user according to the target historical behavior data.
In some embodiments, the target historical behavior data may be the POI positions where the user resides among the POIs matched with the base station, together with the corresponding residence durations and the time points at which the user arrived at those positions.
Optionally, for each user the base station is finally assigned to a unique POI according to the obtained base-station-POI matching relationship, combined with that user's historical behavior data, POI attribute values and statistics over different periods. Illustratively, a POI attribute value may be "mall", "house", and the like.
Optionally, the users' historical behavior data may be aggregated over different periods; when the statistical period changes, the matching between base stations and POIs may change dynamically.
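The target historical behavior data described above (residence POI, arrival time point, residence duration) can be derived from time-ordered location records. A minimal sketch, assuming each record is a `(timestamp_seconds, poi)` pair and the residence duration is the span between the first and last record of a stay:

```python
def residence_sessions(events):
    """events: list of (timestamp_sec, poi) pairs sorted by time.

    Collapse consecutive records at the same POI into sessions of
    (poi, arrival_time, dwell_seconds).
    """
    sessions = []
    for t, poi in events:
        if sessions and sessions[-1][0] == poi:
            # still at the same POI: extend the current session's dwell time
            arrival = sessions[-1][1]
            sessions[-1] = (poi, arrival, t - arrival)
        else:
            # arrived at a new POI: open a new session
            sessions.append((poi, t, 0))
    return sessions
```

The resulting session list gives, per user, exactly the three fields the drawing step in S104 consumes.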
And S104, drawing the position track of the user according to the target historical behavior data, thereby obtaining the scene recognition network graph.
Specifically, according to the target historical behavior data, a position track of the user is drawn, so that a scene recognition network graph is obtained, and the method specifically comprises the following steps: the following operations are respectively performed for each user: and sequentially connecting the POIs where the user resides by using lines according to the sequence of the corresponding arrival time points of the user on the different POIs where the user resides, thereby drawing and obtaining the position track of the user.
Wherein the locus of the positions of the plurality of users forms a scene recognition network map.
As a specific example, a scene recognition network is shown in fig. 4.
In some embodiments, the historical behavior data of the user may be summarized at different periods according to the base station and the POI matched with each other and the historical behavior data of the user. When the periods are different, the track lines can be drawn and obtained according to the historical behavior data of the user in different periods.
In some embodiments, the order in which the user reaches the POI points and the residence duration at each point are determined.
Specifically, the POI where the user resides is displayed as a location point in the location trajectory of the user, wherein the area of the location point is positively correlated with the residence time.
Further, in some embodiments, the POIs the user passes through are connected in the order of arrival and according to the residence times; each connection point is a POI position point, and the user's position trajectory line is thus drawn.
Optionally, each position point is drawn as a circle whose radius reflects the user's residence time at that point: the longer the residence time, the larger the point's area. Further optionally, residence time may be measured in seconds.
In some embodiments, the width of the line connecting two resident POIs changes gradually according to the relative areas of the two connected position points, so the trajectory line tapers naturally between a larger point and a smaller one.
In some embodiments, all of the user location trajectories form a network of location trajectories.
After the position trajectory network is formed from all the user trajectory lines, the number of users at a POI position, their residence times and their trajectories can all be presented on the network graph.
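The quantities the network graph presents, the number of users at a POI position and their residence time, can be aggregated from the per-user trajectories. The data shapes below (a dict of per-user session lists) are illustrative assumptions:

```python
from collections import defaultdict

def network_summary(trajectories):
    """trajectories: {user: [(poi, arrival_sec, dwell_sec), ...]}.

    Summarize, per POI, the number of distinct users and the total
    residence time -- the figures shown on the network graph.
    """
    users = defaultdict(set)
    dwell = defaultdict(int)
    for user, sessions in trajectories.items():
        for poi, _, d in sessions:
            users[poi].add(user)
            dwell[poi] += d
    return {poi: (len(users[poi]), dwell[poi]) for poi in users}
```

Rendering would then map the dwell totals to dot areas and the trajectories to tapered connecting lines, as described above.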
According to the method for drawing a scene recognition network graph provided by the embodiment of the invention, base stations and POIs are matched statically or dynamically using the operator's reliable base station and POI position information, and the POI corresponding to each user is then determined from the users' historical behavior data. A scene recognition network graph is then drawn from each user's residence times and order of arrival at the POIs. The POI positions in the graph show the number of users, their residence times and their trajectories, bringing the users' offline position trajectory information online and laying the foundation for accurate recognition of scene representations.
Optionally, the constructed scene recognition network graph can be combined with big data to intelligently analyze and recognize the scene in which the user is located, so that relevant information can be pushed at the right time for that scene, improving the effectiveness of information push.
The above is a specific implementation manner of the method for drawing a scene recognition network graph provided in the embodiment of the present application. Based on the scene recognition network graph obtained by the implementation mode, the embodiment of the application also provides a specific implementation mode of the scene recognition method.
Fig. 5 is a schematic flowchart of a scene recognition method according to an embodiment of the present invention, fig. 6 is a schematic diagram of a scene representation generation system according to an embodiment of the present invention, and fig. 7 is a schematic diagram of a scene representation recognition process according to an embodiment of the present invention. With reference to fig. 5, fig. 6 and fig. 7, the method mainly includes the following steps:
s501, determining a target point of interest (POI).
In some embodiments, the position of the specific POI whose scene representation is desired is determined; based on that position, the stable crowd information at the POI can be obtained from the scene recognition network graph.
Illustratively, the stable crowd information at the specific POI position is the related user information in the scene recognition network graph, including the number of users at that position, their residence times, and the like.
And S502, acquiring user online information of the target POI.
In some embodiments, after the position of the specific POI is determined, the online information of the floating crowd at that position is obtained through big data processing.
As a specific example, the big data processing may include regular-expression and crawler-based text mining, which processes the online information into a piece of descriptive text.
For example, the online information may be the mobile applications (APPs) the user is using, the keywords searched at the current POI position, short messages, and the like.
S503, acquiring first scene description information of the target POI according to the user online information and the scene recognition network map.
The scene recognition network graph is obtained based on the drawing method of the scene recognition network graph shown in fig. 1.
In some embodiments, the scene image of the specific POI position is obtained through recognition after data processing of the user online information together with the offline information of the specific POI position presented on the scene recognition network map.
S504, determining the first scene description information as a scene image of the target POI.
In some embodiments, the first scene description information obtained in the preceding step is determined as the scene image of the specific POI position.
Alternatively, the scene image can be descriptive text about the scene at the specific POI position.
In some embodiments, after determining the first scene description information as the scene image of the target POI, the method further comprises: performing text processing on the first scene description information to obtain second scene description information; and determining the second scene description information as the scene image of the target POI.
As a specific example, a scene image of the specific POI position is determined based on the user online information and the offline information at the specific POI position. Taking this scene image as input, an image summary of the scene image is output through a data mining technique. Optionally, the image summary is determined as the scene image of the target POI, completing recognition of the target POI scene.
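As a rough illustration of this summary step (the patent invokes data mining without detail, so the extractive truncation below is only a stand-in, not the patent's method):

```python
def image_summary(scene_description, max_clauses=3):
    """Keep only the leading clauses of the scene description text,
    producing a short image summary (naive extractive summarization)."""
    clauses = [c.strip() for c in scene_description.split(";") if c.strip()]
    return "; ".join(clauses[:max_clauses])
```

A real system would likely replace this with a learned summarizer, but the input/output contract, descriptive text in, shorter descriptive text out, is the same.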
As a specific example, in order to obtain an image of museum Z at 8 a.m., the number of people in front of museum Z, 46, can be obtained through the scene recognition network map.
Further, as a specific example, the scene image is recognized with big data: of these, 40 are resident users, 4 are out-of-province roaming users, and 2 are in-province roaming users. Among the out-of-province users, 2 searched for beef jerky and 1 searched for yogurt.
Accordingly, it can be determined that the trajectories of the 40 resident users within 12 hours all stay nearby with little movement; the trajectories of the 4 out-of-province users within 12 hours cover the airport, a hotel, and the museum; and the trajectories of the 2 in-province users within 12 hours cover residential community Y, a bus station, and the museum.
Furthermore, based on the current scene image of museum Z, marketing short messages can be sent to specific users, informing them of the location and business hours of a beef jerky shop 50 meters from the southeast corner of the museum and of a yogurt merchant 20 meters away.
The scene recognition network graph provided by the embodiments of the present invention can be used independently and openly. In the scene image recognition method provided by the embodiments of the present invention, the operator's user base station data is combined with geographic information data, the user's offline position-trajectory information is presented through a position-trajectory network map, and the scene image is output by further combining the user's online information in the context of the specific position trajectory, so that accurate and effective recognition of the scene image can be realized.
Furthermore, through the scene recognition network graph provided by the embodiments of the present invention, group dynamics at the target POI position are obtained in real time and the scene image of the target POI is obtained, which, when applied to a shop, can further be used to directly trigger and facilitate transactions.
The above describes a specific implementation of scene recognition based on a scene recognition network graph provided in the embodiments of the present application. Based on the foregoing implementations, the embodiments of the present application further provide specific implementations of a device for drawing a scene recognition network graph and of a scene recognition device.
Fig. 8 is a schematic structural diagram of a device for drawing a scene recognition network diagram according to an embodiment of the present invention. As shown in fig. 8, the drawing device of the scene recognition network map may include: a first obtaining module 801, a first determining module 802, a second determining module 803, and a drawing module 804.
The first obtaining module 801 is configured to obtain location information of a base station and location information of a point of interest (POI) respectively;
a first determining module 802, configured to determine a base station and a POI that are matched with each other according to the location information of the base station and the location information of the POI;
a second determining module 803, configured to determine, based on the base station and the POI that are matched with each other and the historical behavior data of the user, target historical behavior data of the user at the POI matched with the base station, the user historical behavior data being obtained from a database of an operator;
and the drawing module 804 is used for drawing the position track of the user according to the target historical behavior data so as to obtain the scene recognition network graph.
Wherein the target historical behavior data comprises: among the POIs matched with the base station, the POI position where the user resides, the residence duration corresponding to the POI position, and the time point at which the user arrives at the POI position.
Further, the first determining module 802 is specifically configured to determine whether a POI exists in an area where the base station is located according to the location information of the base station and the location information of the POI; and when only one first POI exists in the area where the base station is located, the identification of the first POI is associated to the base station.
Further, the first determining module 802 is specifically configured to, when there are at least two second POIs in the area where the base station is located, perform the following operations for each second POI: acquiring the matching degree between the base station and the second POI according to a base station–POI relationship matching degree model; and associating the identifiers of the first n second POIs with the highest matching degrees to the base station, where the base station–POI relationship matching degree model is established according to the correspondence between base stations and POIs, and n is a positive integer.
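The two branches handled by the first determining module, a single first POI associated directly, and multiple second POIs ranked by the matching degree model with the top n kept, can be sketched as follows. The inverse-squared-distance score is only a placeholder: the patent's model is built from the base station–POI correspondence and is not specified here.

```python
def match_pois(base_station, candidate_pois, n=3, score=None):
    """Associate a base station with POIs in its area.
    One candidate: associate it directly (the 'first POI' branch).
    Several candidates: score each with a matching-degree model and
    keep the identifiers of the top-n (the 'second POI' branch)."""
    if len(candidate_pois) == 1:
        return [candidate_pois[0]["id"]]
    if score is None:
        # placeholder model: closer POIs score higher
        score = lambda bs, poi: -((bs["x"] - poi["x"]) ** 2 + (bs["y"] - poi["y"]) ** 2)
    ranked = sorted(candidate_pois, key=lambda p: score(base_station, p), reverse=True)
    return [p["id"] for p in ranked[:n]]
```

Passing a trained scoring function as `score` would substitute the patent's actual matching degree model without changing the control flow.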
Further, the drawing module 804 is specifically configured to perform the following operations for each user: sequentially connecting the POIs where the user resides with lines, in the order of the user's arrival time points at the different resident POIs, thereby drawing the position trajectory of the user; wherein the position trajectories of a plurality of users form the scene recognition network map.
Further, the POI where the user resides is displayed as a location point in the location track of the user, wherein the area of the location point is positively correlated with the residence time.
Further, as a specific embodiment, the width of the line connecting two different resident POIs grades gradually according to the relative areas of the position points corresponding to the two connected resident POIs.
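Under the drawing rules above, POIs connected in arrival order, point area positively correlated with residence duration, and line width grading between the endpoint areas, one user's trajectory could be assembled as in this sketch (the stay-record format and the area scale factor are assumptions, not from the patent):

```python
def build_trajectory(stays):
    """Derive drawing attributes for one user's position trajectory.
    Each stay is (poi_id, arrival_time, dwell_minutes)."""
    ordered = sorted(stays, key=lambda s: s[1])                 # connect in arrival order
    points = [(poi, dwell * 2.0) for poi, _, dwell in ordered]  # point area grows with dwell
    segments = [  # line width grades from the start point's area to the end point's
        {"from": a[0], "to": b[0], "width_start": a[1], "width_end": b[1]}
        for a, b in zip(points, points[1:])
    ]
    return {"points": points, "segments": segments}
```

Overlaying the trajectories produced this way for many users then yields the scene recognition network map.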
By acquiring the position information of base stations and the position information of POIs, the scene recognition network graph drawing device provided by the embodiments of the present invention can accurately determine the POI positions corresponding to each base station, and can further determine the position trajectory of each user by combining the user historical behavior data with the base station and POI position information; by verifying the user position trajectories in the scene recognition network graph against online information, accurate, real-time, and effective recognition of the scene image is realized.
The above describes a specific implementation of the device for drawing a scene recognition network graph provided in the embodiments of the present application. Based on the scene recognition method provided by the embodiments of the present invention, the embodiments of the present invention further provide a specific implementation of a scene recognition device.
Fig. 9 is a schematic structural diagram of a scene recognition apparatus according to an embodiment of the present invention. As shown in fig. 9, the scene recognition apparatus may include: a third determining module 901, a second obtaining module 902, a recognizing module 903, and a fourth determining module 904.
The third determining module 901 is configured to determine a target point of interest POI;
a second obtaining module 902, configured to obtain user online information of a target POI;
an identifying module 903, configured to obtain first scene description information of a target POI according to user online information and a scene identification network map, where the scene identification network map is obtained based on a drawing method of the scene identification network map shown in fig. 1;
a fourth determining module 904, configured to determine the first scene description information as a scene image of the target POI.
Further, in some embodiments, the fourth determining module 904 may be specifically configured to perform text processing on the first scene description information to obtain second scene description information; and determining the second scene description information as a scene image of the target POI.
It can be understood that the drawing device of the scene recognition network graph and the scene recognition device in the embodiments of the present invention correspond, respectively, to the execution subjects of the drawing method of fig. 1 and the scene recognition method of fig. 5. For specific details of the operations and/or functions of each module/unit of the two devices, reference may be made to the descriptions of the corresponding parts of those methods, which are not repeated here for brevity.
The scene recognition network graph provided by the embodiments of the present invention can be used independently and openly. In the scene image recognition device provided by the embodiments of the present invention, the operator's user base station data is combined with geographic information data, the user's offline position-trajectory information is presented through a position-trajectory network map, and the scene image is output by further combining the user's online information in the context of the specific position trajectory, so that the scene image can be recognized accurately and effectively.
Furthermore, through the scene recognition network graph provided by the embodiments of the present invention, group dynamics at the target POI position are obtained in real time and the scene image of the target POI is obtained, which, when applied to a shop, can further be used to directly trigger and facilitate transactions.
Based on the method for drawing a scene recognition network graph and the scene recognition method in the implementations provided above, an embodiment of the present invention further provides a specific implementation of a scene recognition device.
Fig. 10 is a schematic structural diagram of a scene recognition device according to an embodiment of the present invention.
As shown in fig. 10, the apparatus 1000 for scene recognition in the present embodiment includes an input device 1001, an input interface 1002, a central processor 1003, a memory 1004, an output interface 1005, and an output device 1006. The input interface 1002, the central processing unit 1003, the memory 1004, and the output interface 1005 are connected to each other through a bus 1010, and the input device 1001 and the output device 1006 are connected to the bus 1010 through the input interface 1002 and the output interface 1005, respectively, and further connected to other components of the device 1000 for scene recognition.
Specifically, the input device 1001 receives input information from the outside, and transmits the input information to the central processor 1003 via the input interface 1002; the central processor 1003 processes input information based on computer-executable instructions stored in the memory 1004 to generate output information, stores the output information temporarily or permanently in the memory 1004, and then transmits the output information to the output device 1006 through the output interface 1005; the output device 1006 outputs the output information to the outside of the scene recognition device 1000 for use by the user.
That is, the apparatus for scene recognition shown in fig. 10 may also be implemented to include: a memory storing computer-executable instructions; and a processor which, when executing the computer executable instructions, may implement the method of rendering a scene recognition network graph and the method of scene recognition described in connection with the examples shown in fig. 1 and 5.
In one embodiment, the apparatus 1000 for scene recognition shown in fig. 10 includes: a memory 1004 for storing programs; the processor 1003 is configured to run a program stored in the memory to execute the method for drawing a scene recognition network map and the method for scene recognition provided by the embodiment of the present invention.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium has computer program instructions stored thereon; the computer program instructions are executed by a processor to realize the scene recognition network graph drawing method and the scene recognition method provided by the embodiment of the invention.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, read-only memories (ROMs), flash memories, erasable ROMs (EROMs), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (10)

1. A method for drawing a scene recognition network graph is characterized by comprising the following steps:
respectively acquiring position information of a base station and position information of a point of interest (POI);
determining a base station and a POI which are matched with each other according to the position information of the base station and the position information of the POI;
determining target historical behavior data of the user at the POI matched with the base station based on the base station and the POI matched with each other and the historical behavior data of the user; the user historical behavior data is obtained from a database of an operator;
and drawing the position track of the user according to the target historical behavior data, thereby obtaining a scene recognition network graph.
2. The method according to claim 1, wherein the determining the base station and the POI matched with each other according to the position information of the base station and the position information of the POI specifically comprises:
determining whether the POI exists in the area where the base station is located according to the position information of the base station and the position information of the POI;
and when only one first POI exists in the area where the base station is located, associating the identification of the first POI to the base station.
3. The method according to claim 2, wherein the determining the base station and the POI matched with each other according to the position information of the base station and the position information of the POI further comprises:
when at least two second POIs exist in the area where the base station is located, the following operations are respectively executed for each second POI: acquiring the matching degree of the base station and a second POI according to a base station and POI relation matching degree model;
respectively associating the identifiers of the first n second POIs with the highest matching degree to the base station; wherein the base station and POI relationship matching degree model is established according to the correspondence between the base station and the POI, and n is a positive integer.
4. The method of claim 1, wherein the target historical behavior data comprises: and in the POI matched with the base station, the position of the POI where the user resides, the residence time corresponding to the position of the POI and the time point when the user arrives at the position of the POI.
5. The method according to claim 4, wherein the step of drawing the position trajectory of the user according to the target historical behavior data to obtain a scene recognition network map specifically comprises:
the following operations are respectively performed for each user:
sequentially connecting POIs where the user resides by using lines according to the sequence of the corresponding arrival time points of the user on the different resident POIs, thereby drawing and obtaining a position track of the user;
wherein the locus of the positions of a plurality of the users forms a scene recognition network map.
6. The method of claim 5, wherein the POI where the user resides is displayed as a location point in the user's location trajectory, wherein the area of the location point is positively correlated to the duration of the residence.
7. A method for scene recognition, the method comprising:
determining a target point of interest (POI);
acquiring user online information of a target POI;
identifying the scene of the target POI according to the user online information and a scene identification network graph to obtain first scene description information of the target POI, wherein the scene identification network graph is obtained based on the method for drawing the scene identification network graph according to any one of claims 1 to 6;
and determining the first scene description information as a scene image of the target POI.
8. The method of claim 7, wherein after determining the first scene description information as a scene representation of the target POI, further comprising:
performing text processing on the first scene description information to obtain second scene description information;
and determining the second scene description information as a scene image of the target POI.
9. An apparatus for mapping a scene recognition network map, the apparatus comprising:
the first acquisition module is used for respectively acquiring the position information of the base station and the position information of the POI;
the first determining module is used for determining the base station and the POI which are matched with each other according to the position information of the base station and the position information of the POI;
the second determination module is used for determining target historical behavior data of the user at the POI matched with the base station based on the base station and the POI matched with each other and the historical behavior data of the user; the user historical behavior data is obtained from a database of an operator;
and the drawing module is used for drawing the position track of the user according to the target historical behavior data so as to obtain a scene recognition network graph.
10. A scene recognition apparatus, characterized in that the apparatus comprises:
the third determination module is used for determining a target point of interest (POI);
the second acquisition module is used for acquiring user online information of the target POI;
an identifying module, configured to identify a scene of the target POI according to the online information of the user and a scene identification network map to obtain first scene description information of the target POI, where the scene identification network map is obtained based on the method for drawing the scene identification network map according to any one of claims 1 to 6;
a fourth determining module, configured to determine the first scene description information as a scene image of the target POI.
CN201911293292.1A 2019-12-16 2019-12-16 Scene recognition network graph drawing method, scene recognition method and device Active CN111132027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911293292.1A CN111132027B (en) 2019-12-16 2019-12-16 Scene recognition network graph drawing method, scene recognition method and device


Publications (2)

Publication Number Publication Date
CN111132027A CN111132027A (en) 2020-05-08
CN111132027B true CN111132027B (en) 2021-10-01

Family

ID=70499009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911293292.1A Active CN111132027B (en) 2019-12-16 2019-12-16 Scene recognition network graph drawing method, scene recognition method and device

Country Status (1)

Country Link
CN (1) CN111132027B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112182132B (en) * 2020-09-28 2024-03-26 北京红山信息科技研究院有限公司 Subway user identification method, system, equipment and storage medium
CN114071366B (en) * 2022-01-17 2022-05-24 北京融信数联科技有限公司 Figure portrait depicting method and system in combination with knowledge graph and readable storage medium
CN116136415A (en) * 2023-02-07 2023-05-19 深圳市冠标科技发展有限公司 Navigation guidance method, device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984763A (en) * 2014-05-30 2014-08-13 厦门云朵网络科技有限公司 Trajectory chart display device, trajectory chart display device method and monitor terminal
BR102013033090A2 (en) * 2013-12-20 2015-12-15 Accenture Global Services Ltd system and method for tracking mobile device and non-transient readable electronic support
CN106028444A (en) * 2016-07-01 2016-10-12 国家计算机网络与信息安全管理中心 Method and device for predicting location of mobile terminal
CN107784046A (en) * 2016-11-14 2018-03-09 平安科技(深圳)有限公司 POI treating method and apparatus
CN109635190A (en) * 2018-11-28 2019-04-16 四川亨通网智科技有限公司 User characteristics method for digging based on position and behavior Conjoint Analysis
CN110096645A (en) * 2019-05-07 2019-08-06 北京百度网讯科技有限公司 Information recommendation method, device, equipment and medium


Also Published As

Publication number Publication date
CN111132027A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111132027B (en) Scene recognition network graph drawing method, scene recognition method and device
US10587711B2 (en) Method and apparatus for pushing information
CN107172209B (en) Information pushing method and device
CN110347777B (en) Point of interest (POI) classification method, device, server and storage medium
JP2013045319A (en) Information processing apparatus, information processing method, and program
CN110674423A (en) Address positioning method and device, readable storage medium and electronic equipment
CN105187237A (en) Method and device for searching associated user identifications
US8843480B2 (en) Server, information-management method, information-management program, and computer-readable recording medium with said program recorded thereon, for managing information input by a user
US10963917B2 (en) Method and system for determining fact of visit of user to point of interest
US20160323159A1 (en) Determining Semantic Place Names from Location Reports
CN104618869A (en) Indoor positioning method and device
CN105354226A (en) Method and apparatus for positioning Wi-Fi signal transmitting devices to geographic information points
US20230049839A1 (en) Question Answering Method for Query Information, and Related Apparatus
CN112653748A (en) Information pushing method and device, electronic equipment and readable storage medium
CN103810615A (en) Target client searching method and target client searching device
CN112559663A (en) POI data processing method, device, equipment, storage medium and program product
CN111209487B (en) User data analysis method, server, and computer-readable storage medium
CN106776867A (en) Information-pushing method and device
CN111400520B (en) Face recognition library construction method, face payment method, device and system
CN105072169A (en) Intelligent information display system of culture exhibition hall
CN110740418A (en) Method and device for generating user visit information
Marakkalage et al. Identifying indoor points of interest via mobile crowdsensing: An experimental study
CN111782973A (en) Interest point state prediction method and device, electronic equipment and storage medium
CN114189806B (en) Method and device for generating wireless signal fingerprint database and electronic equipment
JP2012043264A (en) Comment evaluation device, comment evaluation method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant