Detailed Description
In the following description, for purposes of explanation rather than limitation, specific details, such as particular system structures and techniques, are set forth in order to provide a thorough understanding of the embodiments of the present application.
The live information intelligent perception method provided by the embodiments of the present application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, Augmented Reality (AR)/Virtual Reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, and Personal Digital Assistants (PDAs); the embodiments of the present application do not limit the specific types of the terminal devices.
The embodiments of the present application can perceive the live information of a scene to be perceived live, and the scene to be perceived live can be any type of site. Such sites can be divided into, but are not limited to, sites requiring emergency command and sites not requiring emergency command. Sites requiring emergency command can include, but are not limited to, fire scenes, earthquake scenes, and the like; sites not requiring emergency command can include, but are not limited to, fire safety inspection sites and the like.
For example, a fire breaks out in Room 303, Unit 2, Building 10 of a certain residential community. Firefighter A, who arrives at the fire scene, needs to know the quantity information and the position information of flammable and combustible materials around the fire scene; in this case, the scene to be perceived live is the fire scene, and firefighter A is the user. After firefighter A arrives downstairs of Building 10, the mobile phone obtains the geographical position information of the fire scene through its GPS positioning, and the position of the fire scene is displayed on the electronic map. Firefighter A determines, according to the fire conditions at the scene, that all flammable and combustible articles in Building 10 need to be known, and selects the building as the target and flammable and explosive substances as the attention object on the interface of the live information perception application program. The background server determines the target and the attention object according to the selection operation of the user on the mobile phone. Then, the background server executes the perception algorithm for the flammable and explosive substances to obtain the acquisition data of Building 10, selects the quantity information and the position information of the flammable and explosive substances from the acquisition data as the live perception information, fuses the quantity information and the position information, and displays the fused information on the screen interface of the mobile phone. Therefore, firefighter A can perceive the information of the flammable and combustible articles at the fire scene so as to better carry out rescue work, or firefighter A's commander can better direct rescue and scheduling work at the fire scene according to the acquired information.
It should be noted that the above-mentioned application scenario is only an example, and the application scenario of the embodiment of the present application is not limited herein.
The technical solutions of the embodiments of the present application will be described below by specific embodiments.
Referring to fig. 1, a schematic flow diagram of a live information intelligent perception method provided in an embodiment of the present application is shown, where the method may include the following steps:
Step S101, obtaining geographical position information of a scene to be perceived live, and determining a region to be perceived live according to the geographical position information.
It should be noted that the geographic location information may specifically be latitude and longitude information, and may be obtained through a positioning signal of a user terminal device located at the scene to be perceived live, or by acquiring location information input by a user.
Specifically, in some embodiments, the positioning signal of the mobile terminal device located in the scene to be perceived live may be acquired first, and then the geographic location information of the scene to be perceived live may be obtained according to the positioning signal.
The mobile terminal device may be, but is not limited to, a mobile phone, a smart band, a smart watch, or a tablet computer. The user carries the mobile terminal device to go to the scene to be perceived live, and when the user arrives at the scene to be perceived live, the geographic position of the scene to be perceived live is obtained by acquiring the positioning signal of the mobile terminal device.
For example, the position of user B is displayed on an electronic map in real time through the GPS signal of the mobile phone carried by user B. When user B arrives at the scene to be perceived live, the position displayed on the electronic map is the geographical position of the scene to be perceived live.
In other embodiments, the user may also input corresponding location information on the electronic map to obtain the geographic location of the scene to be perceived live. In other words, the terminal device may acquire a map coordinate moving operation of the user in the electronic map, and then, in response to the map coordinate moving operation, take the geographic position information of the position where the map coordinate is located as the geographic position information of the scene to be perceived live. That is to say, a user who is not at the scene to be perceived live can move the map coordinates on the electronic map, and the geographic position corresponding to the final position of the map coordinates is the scene to be perceived live.
Of course, the user may also directly input the location information of the scene to be perceived live in the electronic map, so that the terminal device obtains the geographical location information.
After the geographical position information of the scene to be perceived live is obtained, the region to be perceived live can be determined according to the geographical position information.
It should be noted that the area to be perceived live may be determined according to the situation at the scene; for example, a firefighter determines, according to the fire conditions at the scene, whether to perceive only the area of the fire scene itself or a surrounding area that includes the fire scene.
The area to be perceived live can be equal to the scene to be perceived live, that is, the scene to be perceived live is itself the area to be perceived live. The area to be perceived live can also be a peripheral area that includes the scene to be perceived live; for example, if the fire scene is Room 303, Unit 2, Building 10, the area to be perceived live may be the whole of Building 10. Of course, the area to be perceived live may also not include the scene to be perceived live.
In some embodiments, the process of determining the area to be perceived live according to the geographical location information of the scene to be perceived live may be as follows:
Firstly, the geographical position information is displayed in the electronic map, that is, after the geographical position information of the scene to be perceived live is obtained, the position of the scene is displayed on the electronic map. Then, the perception area position information input by the user in the electronic map is acquired. Finally, the area to be perceived live is determined based on the perception area position information. That is, the user selects the area to be perceived live on the electronic map as needed.
For example, when a fire breaks out in Building 8 of a certain community, the firefighting commander who arrives at the site determines the geographical location information of the scene to be perceived live, and then selects the peripheral area of Building 8 as the area to be perceived live on the electronic map displayed by the mobile phone; alternatively, specific location information can be input so that the peripheral area of Building 8 is used as the area to be perceived live.
Of course, the area to be perceived live may also be determined according to a preset area radius. That is, a circular area centered on the geographical position of the scene to be perceived live may be used as the area to be perceived live, where the radius of the circular area is a preset value. For example, if the geographical position of the scene to be perceived live is point A and the radius is X meters, a circle is drawn with point A as the center and X meters as the radius, and the area covered by the circle is determined as the area to be perceived live.
Step S102, determining a target according to a target selection instruction of a user.
It should be noted that the above target may be a target selected by the user as needed. For example, if the user needs to sense the relevant information of a building, the building is selected as the target.
In specific application, the user terminal equipment can obtain a target selection instruction of a user, and the background server determines which target the user selects according to the target selection instruction.
In the specific application, the user terminal device displays a plurality of objects in the interface, the user selects one or more objects as targets according to needs, and the background server can determine which object the user selects according to the selection instruction of the user, so that the target is determined.
For example, objects displayed on the cell phone interface include: buildings, bank ATMs, vending machines, kiosks, and custom. The user can select the building as a target if the user needs to sense the condition of the building. By custom is meant that a user can customize an object, which can include, but is not limited to, gas stations, fire facilities, parking lots, billboards, charging stations, micro fire stations, and fire kiosks.
Step S103, according to an attention object selection instruction of a user, an attention object is determined, and the attention object is the target or an object associated with the target.
It should be noted that the attention object refers to an object that the user needs to pay attention to, and the attention object can be determined by the user according to the needs.
The object of interest may be a target, i.e. what the user needs to be interested in is the target itself. For example, the target is a building, and what the user needs to pay attention to is also the building, i.e., the attention object is also the building.
The object of interest may also be an associated object associated with the target. The target is associated with at least one associated object, the associated object comprising an object of interest. For example, the target is a building, associated objects of the building include, but are not limited to, fire fighting facilities, organizations, pregnant women, disabled persons, units, and houses, and a user needs to pay attention to the fire fighting facilities, and then selects the fire fighting facilities as the object of interest.
It should be noted that the target is associated with at least one associated object, and an associated object refers to an object associated with the target. For example, the associated objects of a building may be the fire-fighting facilities and personnel within the building, or may be the peripheral fire-fighting facilities outside the building.
For example, where the target is a building, the associated objects of the building include the fire, hazardous materials, personnel, safety exits, organizations, and evacuation channels within the building. The associated objects can also include the building periphery, peripheral fire-fighting facilities, and the like. When the scene to be perceived live is a fire scene and the user is a firefighter who needs to perceive the fire-fighting facilities and dangerous goods of buildings, the building can be selected as the target, and the fire-fighting facilities and dangerous goods can be selected as the attention objects, so that the fire-fighting facilities and dangerous goods of buildings within a certain range are perceived.
Referring to the interface diagram for selecting an object of interest shown in fig. 2, the associated objects of a building include: dangerous goods, organizations, fire-fighting facilities, evacuation routes, pregnant women, old people, infants, safety exits, and valuable articles. The user clicks the corresponding icons in the interface, so that the corresponding associated objects are selected as the attention objects. The user can select the objects that need attention as needed; for example, when the scene to be perceived live is a fire scene, a firefighter can select the building as the target, and the fire-fighting facilities, evacuation routes, infants, old people, pregnant women, dangerous goods, and safety exits of the building as the attention objects. After the firefighter finishes the selection, the background server executes the perception algorithm for the attention objects, obtains the live information, and pushes it to the firefighter's mobile phone, and the mobile phone interface displays the information of the attention objects. In this way, the firefighter can perceive the information of the attention objects of the buildings within a certain range of the fire scene, and can arrange personnel evacuation work, fire-extinguishing scheduling, and the like according to the live perception information.
Step S104, obtaining the live information of the attention object in the area to be perceived live by using a perception algorithm for the attention object.
It should be noted that the perception algorithm may include a geographic location calculation algorithm and a traversal algorithm, and the traversal algorithm may be, but is not limited to, a depth-first traversal algorithm or a breadth-first traversal algorithm.
In some embodiments, referring to the specific flowchart schematic block diagram of step S104 shown in fig. 3, if the attention object is the target, the specific process of obtaining the live information of the attention object in the area to be perceived live by using a perception algorithm for the attention object may include:
step S301, calculating the distance between each concerned object and the scene to be perceived live based on the geographical position information of each concerned object and the geographical position information of the scene to be perceived live.
Step S302, acquiring the acquisition data of the attention objects whose distance is smaller than or equal to a preset value, and taking the acquisition data of the attention objects as the live information.
It should be noted that, when the object of interest is the target, the geographical location information of each target is obtained; the geographical location information of a target may be obtained from the acquisition data of the target, which is collected in advance. For example, when the target is a building, data such as the geographical position of the building, the building number, and the building name are collected in advance to form the acquisition data of the building.
Specifically, the distance between each target and the scene to be perceived live is calculated according to the geographical position information of each target and the geographical position information of the scene to be perceived live; whether the target falls into the area to be perceived live is determined according to the distance; and a target falling into the area to be perceived live is determined as a target to be perceived.
The geographical position calculation algorithm, that is, the formula for calculating the distance D between two places A and B according to their geographical positions, can be as follows:
D = arccos(sin(latitude of A) × sin(latitude of B) + cos(latitude of A) × cos(latitude of B) × cos(longitude difference between A and B)) × mean radius of the Earth,
where the mean radius of the Earth is 6371.004 km and the unit of D is km.
The preset value is the radius of the area to be perceived live and is set in advance. In a specific application, after the distance between each target and the scene to be perceived live is calculated, the distance is compared with the radius of the area to be perceived live. If the distance is larger than the radius, the target does not fall into the area to be perceived live; otherwise, if the distance is smaller than or equal to the radius, the target falls into the area to be perceived live, the target is taken as a target to be perceived, the acquisition data of the target to be perceived is read, and the acquisition data is used as the live information.
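As an illustration of the calculation above, the following is a minimal Python sketch of the distance formula and of the comparison against the preset radius; the function names, coordinate values, and dictionary layout are hypothetical and are not part of the embodiment.

```python
import math

EARTH_MEAN_RADIUS_KM = 6371.004  # mean radius of the Earth used in the formula above


def distance_km(lat_a, lon_a, lat_b, lon_b):
    """Distance between places A and B (in degrees) by the spherical law of cosines, in km."""
    lat_a, lon_a, lat_b, lon_b = map(math.radians, (lat_a, lon_a, lat_b, lon_b))
    cos_angle = (math.sin(lat_a) * math.sin(lat_b)
                 + math.cos(lat_a) * math.cos(lat_b) * math.cos(lon_a - lon_b))
    # Clamp to [-1, 1] to guard against floating-point error before taking arccos.
    return math.acos(max(-1.0, min(1.0, cos_angle))) * EARTH_MEAN_RADIUS_KM


def targets_in_area(scene, targets, radius_km):
    """Keep only the targets whose distance to the scene is smaller than or equal to the preset radius."""
    return [t for t in targets
            if distance_km(scene["lat"], scene["lon"], t["lat"], t["lon"]) <= radius_km]


# Hypothetical usage: a fire scene and two candidate buildings.
scene = {"lat": 39.9042, "lon": 116.4074}
targets = [{"name": "Building 10", "lat": 39.9050, "lon": 116.4080},
           {"name": "Building 8", "lat": 39.9500, "lon": 116.5000}]
print(targets_in_area(scene, targets, radius_km=1.0))  # only Building 10 falls into the area
```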
In other embodiments, referring to another specific flowchart schematic block diagram of step S104 shown in fig. 4, if the attention object is an object associated with the target, the specific process of obtaining the live information of the attention object in the area to be perceived live by using a perception algorithm for the attention object may include:
step S401, calculating the distance between each target and the scene to be perceived live based on the geographical position information of each target and the geographical position information of the scene to be perceived live.
Step S402, acquiring the acquisition data of the targets whose distance is smaller than or equal to a preset value.
Specifically, after the targets within the range of the area to be perceived live are determined, the acquisition data corresponding to each target may be read from the database. The acquisition data refers to data obtained by performing acquisition and inspection on the corresponding object in advance, and may include, but is not limited to, position information, quantity information, and the like of the object. For example, a certain building is inspected in advance to obtain the location information and quantity information of objects such as the fire-fighting facilities, safety exits, and evacuation channels of the building.
It should be noted that steps S401 to S402 are similar to steps S301 to S302, and please refer to the above corresponding contents for specific description, which is not described herein again.
Step S403, traversing the acquisition data of the target by using a traversal algorithm to obtain query paths from the target to the object of interest.
It should be noted that the traversal algorithm may be a depth-first traversal algorithm or a breadth-first traversal algorithm. A pre-established directed graph is acquired, and then the acquisition data of the target is traversed along the directed graph by the traversal algorithm according to preset traversal parameters, so as to obtain query paths from the starting node to the attention node, where the attention node represents the attention object.
The query path refers to a path from the target to the attention object, and information of the attention object can be obtained through the query path. The number of query paths is equal to the number of objects of interest.
For example, when the object of interest is fire-fighting equipment (e.g., a fire extinguisher) and the target is a building, fire extinguishers may be located at the periphery of the building, in the passageway of each floor, and inside the houses. In that case, fire extinguishers are provided at three kinds of locations in and around the building, so there are three paths from the building to the fire extinguishers, corresponding to three query paths.
In a specific application, step S403 may include the following steps:
the first step is as follows: and acquiring a pre-established directed graph. The directed graph comprises a starting node, an associated node and directed edges, wherein the starting node represents a target, and the associated node represents an object associated with the target.
It should be noted that the directed graph is pre-established according to the requirements of the user.
The target is associated with at least one associated object, and the object of interest is an object selected from the associated objects. For example, the target is a building including a plurality of related objects such as a building periphery, a unit, a floor, a house, a fire fighting facility, a person (e.g., a pregnant woman, a disabled person, an old person, etc.) and an organization, and the object of interest is a fire fighting facility.
The target is used as the starting node, each associated object is used as a node, and the relationship between nodes is expressed by a directed edge. For example, a building corresponds to two associated objects, namely a unit and the building periphery; a unit corresponds to floors; and a floor corresponds to houses and fire-fighting facilities.
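For illustration only, one possible way to represent such a directed graph is an adjacency list; the form below and the node names are assumptions based on the building example above, not a required implementation.

```python
# Adjacency-list form of the directed graph: each key is a node, and each value lists
# the nodes reached by its outgoing directed edges. The starting node is the target
# ("building"); the other nodes are associated objects.
DIRECTED_GRAPH = {
    "building": ["building periphery", "unit"],
    "unit": ["floor"],
    "floor": ["house", "fire-fighting facility"],
}
```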
The second step: traversing the acquisition data of the target along the directed graph by using the traversal algorithm according to preset traversal parameters, so as to obtain query paths from the starting node to the attention node, where the attention node represents the attention object.
In specific application, the perception algorithms are different, and the preset traversal parameters are correspondingly different. For example, when the perception algorithm is a depth-first traversal algorithm, the preset traversal parameters include a traversal depth parameter and a direction of a directed edge; when the perception algorithm is a breadth-first traversal algorithm, the preset traversal parameter comprises the direction of the directed edge.
Different perception algorithms are adopted, and the process of traversing the directed graph to obtain the query path is correspondingly different. The corresponding processes of the depth-first traversal algorithm and the breadth-first traversal algorithm will be described below.
Depth-first traversal algorithm
Referring to the schematic diagram of the depth-first traversal algorithm shown in fig. 5, the starting node of the directed graph is a building, i.e., the target is the building; each node is an associated object of the building; and the object of interest is a fire extinguisher (fire-fighting equipment). The associated objects of the building are: the periphery (i.e., the building periphery), units, sentry boxes, peripheral fire extinguishers, floors, houses, floor fire extinguishers, people, organizations, and house fire extinguishers.
The numbers next to the nodes in fig. 5 indicate the order of the nodes; for example, the 0 next to the building refers to node 0, and so on. Fig. 5 includes node 0, node 1, node 2, and so forth, and each node corresponds to an object.
When the object of interest is a fire extinguisher, all the paths from the building to the fire extinguishers need to be found. After the user sets the attention object as the fire extinguisher, the directed graph is traversed according to the traversal depth and the direction of the directed edges. The traversal depth is preset as needed; the maximum depth and the minimum depth of the traversal can both be set, or only the maximum depth can be set. The traversal depth affects the perception speed: the smaller the depth, the faster the perception. For example, the minimum depth of the traversal is set to 1 and the maximum depth to 5; alternatively, only the maximum depth is set, to 3.
The depth values of the respective nodes are indicated in fig. 5, such as the depth of node 4 and node 1 being 1, the depth of node 2, node 3 and node 5 being 2, the depth of node 6 and node 10 being 3, and the depth of node 7, node 8 and node 9 being 4.
The direction of each directed edge is also preset, and the traversal proceeds from the starting node to each node along the set directed edges. The directed edges include incoming directed edges and outgoing directed edges. For example, node 4 in fig. 5 (i.e., the node corresponding to the unit) has an incoming directed edge from the building to the unit and an outgoing directed edge from the unit to the floor.
By traversing the directed graph of fig. 5, a plurality of paths from the building to the fire extinguishers are obtained, and the information of the fire extinguishers inside and outside the building is perceived through these paths. Fig. 5 includes a total of 3 query paths, as follows:
The starting node 0 (building) - node 1 (periphery) - node 3 (fire extinguisher); the path represents that a fire extinguisher is arranged on the periphery of the building.
The starting node 0 (building) -node 4 (unit) -node 5 (floor) -node 10 (fire extinguisher); the path characterizes the presence of fire extinguishers within the floors of the building (e.g., floor walkways).
The starting node 0 (building) - node 4 (unit) - node 5 (floor) - node 6 (house) - node 9 (fire extinguisher); the path characterizes the presence of a fire extinguisher in a house of the building.
Therefore, a plurality of query paths from the target perception object to the attention object can be obtained through the depth-first traversal algorithm.
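As an illustration, the following is a minimal sketch of a depth-first traversal that enumerates all query paths from the starting node to the nodes of interest. The adjacency list is an assumption inferred from the description of fig. 5, and the function and parameter names are hypothetical; the maximum-depth parameter corresponds to the traversal depth described above.

```python
def depth_first_paths(graph, start, is_of_interest, max_depth=None):
    """Enumerate all query paths from `start` to nodes of interest, following the
    direction of the directed edges, optionally bounded by a maximum traversal depth."""
    paths = []

    def visit(node, path, depth):
        if is_of_interest(node):
            paths.append(path)
            return
        if max_depth is not None and depth >= max_depth:
            return
        for nxt in graph.get(node, []):
            visit(nxt, path + [nxt], depth + 1)

    visit(start, [start], 0)
    return paths


# Directed graph assumed from the description of fig. 5.
GRAPH = {
    "building": ["periphery", "unit"],
    "periphery": ["sentry box", "peripheral fire extinguisher"],
    "unit": ["floor"],
    "floor": ["house", "floor fire extinguisher"],
    "house": ["person", "organization", "house fire extinguisher"],
}

for p in depth_first_paths(GRAPH, "building",
                           is_of_interest=lambda n: "fire extinguisher" in n,
                           max_depth=5):
    print(" - ".join(p))
# building - periphery - peripheral fire extinguisher
# building - unit - floor - floor fire extinguisher
# building - unit - floor - house - house fire extinguisher
```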
Breadth-first traversal algorithm
Referring to the schematic diagram of the breadth-first traversal algorithm shown in fig. 6, the starting node of the directed graph is a building, that is, the target is the building; each node is an associated object of the building; and the object of interest is a fire extinguisher (fire-fighting equipment). The associated objects of the building are: the periphery (i.e., the building periphery), units, sentry boxes, peripheral fire extinguishers, floors, houses, floor fire extinguishers, people, organizations, and house fire extinguishers.
The numbers in fig. 6 indicate the order of the nodes; for example, the 0 next to the building refers to node 0, and so on. Fig. 6 includes node 0, node 1, node 2, and so forth; each node corresponds to an object, and the specific correspondence can be seen in fig. 6.
Under the breadth-first traversal algorithm, the preset traversal parameters include the direction of the directed edges. The directed graph is traversed based on the distance from each associated node to the starting node, following the specified directed edge directions.
Nodes at a shorter distance are traversed first, followed by nodes at a longer distance. For example, starting from the starting node 0 (building), node 1 or node 2, which are at a short distance, are reached first; then node 3 or node 4 (fire extinguisher), which are at a longer distance, are reached through node 1; and all nodes are eventually traversed, so that a plurality of paths from the starting node 0 (building) to the nodes of interest (fire extinguishers) are obtained.
In this way, the acquisition data is perceived through the breadth-first algorithm, and a plurality of query paths from the target to the attention object are obtained.
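Analogously, a minimal sketch of a breadth-first variant: partial paths are expanded in order of increasing length, so nodes at a shorter distance from the starting node are visited before nodes at a longer distance. The same assumed adjacency list is reused, and the graph is assumed to be acyclic.

```python
from collections import deque


def breadth_first_paths(graph, start, is_of_interest):
    """Enumerate query paths from `start` to nodes of interest, expanding partial
    paths in order of increasing length (nearer nodes are visited first)."""
    paths = []
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if is_of_interest(node):
            paths.append(path)
            continue
        for nxt in graph.get(node, []):  # assumes an acyclic directed graph
            queue.append(path + [nxt])
    return paths


GRAPH = {
    "building": ["periphery", "unit"],
    "periphery": ["sentry box", "peripheral fire extinguisher"],
    "unit": ["floor"],
    "floor": ["house", "floor fire extinguisher"],
    "house": ["person", "organization", "house fire extinguisher"],
}

for p in breadth_first_paths(GRAPH, "building", lambda n: "fire extinguisher" in n):
    print(" - ".join(p))
```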
Step S404, acquiring the acquisition data of the object of interest through the query paths, and taking the acquisition data of the object of interest as the live information.
Specifically, after obtaining a plurality of query paths through a perception algorithm, information of the object of interest may be obtained through the query paths.
In some embodiments, the acquisition data of the objects of interest corresponding to all the query paths may be queried through all the query paths. The acquisition data of the objects of interest corresponding to all the query paths is used as the live perception information, and the information of the objects of interest corresponding to all the query paths is displayed to the user. Thus, the user can perceive information such as the position information and quantity information of all the objects of interest.
For example, based on fig. 5, the object of interest is a fire extinguisher, and there are 3 query paths in total, as follows:
The starting node 0 (building) - node 1 (periphery) - node 3 (fire extinguisher);
the starting node 0 (building) -node 4 (unit) -node 5 (floor) -node 10 (fire extinguisher);
The starting node 0 (building) - node 4 (unit) - node 5 (floor) - node 6 (house) - node 9 (fire extinguisher);
The fire extinguishers corresponding to the three paths are all used as live perception information, so that the user can perceive, according to the live perception information, the number and positions of all the fire extinguishers inside and outside the building, and can take one of the fire extinguishers to extinguish the fire as needed. That is, the relevant information of all fire extinguishers inside and outside the building is displayed, so that the user knows how many fire extinguishers are arranged inside and outside the building and where they are located.
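A minimal sketch of this all-paths variant is given below; the acquisition data table is hypothetical and stands in for the data collected and inspected in advance.

```python
# Hypothetical acquisition data keyed by node name: quantity and positions collected in advance.
ACQUISITION_DATA = {
    "peripheral fire extinguisher": {"quantity": 2, "positions": ["north gate", "south gate"]},
    "floor fire extinguisher": {"quantity": 6, "positions": ["corridor of each floor"]},
    "house fire extinguisher": {"quantity": 1, "positions": ["Room 303"]},
}


def live_info_for_all_paths(query_paths, acquisition_data):
    """All-paths variant of step S404: take the acquisition data of the object of
    interest at the end of every query path as the live perception information."""
    return {path[-1]: acquisition_data[path[-1]] for path in query_paths}


query_paths = [
    ["building", "periphery", "peripheral fire extinguisher"],
    ["building", "unit", "floor", "floor fire extinguisher"],
    ["building", "unit", "floor", "house", "house fire extinguisher"],
]
print(live_info_for_all_paths(query_paths, ACQUISITION_DATA))
```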
In other embodiments, after the plurality of query paths are obtained, the weight sum of each query path may be calculated first; then the query path with the minimum weight sum is selected from the query paths as a target path; and finally, the information of the attention object corresponding to the target path is queried through the target path.
That is to say, the total weight of each query path may be calculated, and the path with the minimum total weight may be recommended to the user, that is, the information of the attention object with the minimum total weight is used as the live sensing information.
Referring to the specific flow schematic block diagram of step S404 shown in fig. 7, the step S404 may include:
and step S701, calculating the weight sum of each query path.
Step S702, selecting the query path with the minimum weight sum as the target path.
Step S703, querying the acquisition data of the attention object corresponding to the target path through the target path.
It should be noted that the weight of each directed edge in the directed graph may be set by the user according to distance, time, or priority. In a specific application, after selecting the attention object, the user can choose, through a weight search option, how the search and traversal are performed: if the user chooses to search and traverse by distance, the weights are set according to distance; if the user chooses to search and traverse by priority, the weights are set according to priority.
In general, the query path with the smallest sum of the weights corresponds to the object of interest with the shortest distance or the highest priority.
Referring to the schematic diagram of path weight calculation shown in fig. 8, fig. 8 is a diagram obtained by calculating weights of respective paths based on fig. 5.
As shown in fig. 8, each directed edge has a corresponding weight; for example, the weight of the directed edge building → unit is 0.1, and the weight of the directed edge building → periphery is 1. The specific weights of the other directed edges can be seen in fig. 8. The path finally selected as having the minimum weight sum is: the starting node 0 (building) - node 4 (unit) - node 5 (floor) - node 10 (fire extinguisher). This path is taken as the target path, and the fire extinguisher corresponding to the target path is recommended to the user, i.e., displayed to the user as the live perception information. That is, fire extinguishers are arranged at a plurality of positions in the building, but in this case not all the fire extinguishers inside and outside the building are displayed; instead, the information of the fire extinguisher at one position is recommended to the user as the live perception information according to the weight sums, and then displayed to the user.
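For illustration, a minimal sketch of selecting the target path by minimum weight sum (steps S701 to S703); only the building → unit (0.1) and building → periphery (1) weights come from the description of fig. 8, and the remaining weights are assumptions made for the example.

```python
# Per-edge weights; only the first two values are taken from the description of fig. 8,
# the rest are illustrative assumptions.
EDGE_WEIGHTS = {
    ("building", "unit"): 0.1,
    ("building", "periphery"): 1.0,
    ("periphery", "peripheral fire extinguisher"): 0.5,
    ("unit", "floor"): 0.2,
    ("floor", "floor fire extinguisher"): 0.3,
    ("floor", "house"): 0.4,
    ("house", "house fire extinguisher"): 0.3,
}


def weight_sum(path, weights):
    """Step S701: sum of the weights of the directed edges along one query path."""
    return sum(weights[(a, b)] for a, b in zip(path, path[1:]))


def select_target_path(paths, weights):
    """Step S702: pick the query path with the minimum weight sum as the target path."""
    return min(paths, key=lambda p: weight_sum(p, weights))


query_paths = [
    ["building", "periphery", "peripheral fire extinguisher"],
    ["building", "unit", "floor", "floor fire extinguisher"],
    ["building", "unit", "floor", "house", "house fire extinguisher"],
]
print(select_target_path(query_paths, EDGE_WEIGHTS))
# -> ['building', 'unit', 'floor', 'floor fire extinguisher'], i.e. the building - unit - floor - fire extinguisher path
```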
Step S105, pushing the live information to the user terminal to instruct the user terminal to display the live information.
Specifically, after the background server obtains the sensing result (i.e. the live information), the live information may be pushed to the user terminal device, and the live information is displayed by the user terminal device. The live information may be embodied as one or more of text, pictures, video, and the like.
For example, the user terminal is a mobile phone, and the background server can push the live information to the mobile phone and display the live information on a screen of the mobile phone after obtaining the live information, so that the user can perceive the live information of the corresponding attention object.
Different objects of interest may result in correspondingly different live information being displayed and different display modes.
When the object of interest is the target, the live information is information about the target itself. For example, the target is a building, and the information of the building includes its position and quantity. In this case, the information of the buildings is displayed through the mobile phone interface: if the area to be perceived live is City A, the geographical position information and quantity information of all buildings in the city are displayed on the electronic map of City A, and the display mode can be a statistical chart or a thermodynamic diagram. Specifically, City A comprises Area B, Area C, and Area D, and information such as the positions and numbers of buildings in each area is displayed in a statistical chart.
When the attention object is an object associated with the target, the live information is related information of the attention object.
For example, when the target is a building and the attention objects are dangerous goods, evacuation passageways, infants, dangerous goods, old people, fire-fighting facilities, safety exits, and pregnant women, refer to the schematic view of the live perception information display interface of the building shown in fig. 9. As shown in fig. 9, dangerous goods (23), evacuation passageways (10), infants (10), dangerous goods (1), old people (5), fire-fighting facilities (63), safety exits (10), and pregnant women (5) are displayed on the interface, where the value in parentheses behind each object indicates the quantity of that object; for example, dangerous goods (23) indicates that there are 23 dangerous goods, and pregnant women (5) indicates that there are 5 pregnant women. The interface is a radar-scanning style interface that includes a plurality of circles, with different live perception information displayed at different locations.
According to the method and the device, the geographical position information of the scene to be perceived is obtained, and the region to be perceived is determined according to the geographical position information; after the target and the attention object are selected by the user, a perception algorithm is used for the attention object to obtain live information, and the live information is displayed on the user terminal, so that the user can perceive the corresponding live information, and the perception of the live information is realized.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 10 shows a schematic block diagram of a structure of a live information intelligent sensing apparatus provided in an embodiment of the present application, and only shows a part related to the embodiment of the present application for convenience of explanation.
Referring to fig. 10, the apparatus may include:
the sensing area determining module 101 is configured to acquire geographical position information of a scene to be perceived live, and determine an area to be perceived live according to the geographical position information;
a target determining module 102, configured to determine a target according to a target selection instruction of a user;
an attention object determination module 103, configured to determine an attention object according to an attention object selection instruction of a user, where the attention object is a target or an object associated with the target;
a live information obtaining module 104, configured to obtain the live information of the attention object in the area to be perceived live by using a perception algorithm for the attention object;
a display module 105, configured to push the live information to the user terminal to instruct the user terminal to display the live information.
In a possible implementation manner, the sensing region determining module is specifically configured to:
and taking a circular area with the geographical position information of the scene to be perceived live as the circle center as the area to be perceived live, wherein the radius of the circular area is a preset value.
In a possible implementation manner, if the object of interest is a target, the live information obtaining module is specifically configured to:
calculating the distance between each concerned object and the scene to be perceived live based on the geographical position information of each concerned object and the geographical position information of the scene to be perceived live;
acquiring the acquisition data of the attention objects whose distance is smaller than or equal to a preset value, and taking the acquisition data of the attention objects as the live information.
In a possible implementation manner, if the object of interest is an object associated with the target, the live information obtaining module is specifically configured to:
calculating the distance between each target and the scene to be perceived live based on the geographical position information of each target and the geographical position information of the scene to be perceived live;
acquiring the acquisition data of the targets whose distance is smaller than or equal to a preset value;
traversing the acquisition data of the target by using a traversal algorithm to obtain query paths from the target to the attention object;
acquiring the acquisition data of the object of interest through the query paths, and taking the acquisition data of the object of interest as the live information.
In a possible implementation, the live information obtaining module is specifically configured to:
acquiring a pre-established directed graph; the directed graph comprises a starting node, an associated node and directed edges, wherein the starting node represents a target, and the associated node represents an object associated with the target;
and traversing the acquisition data of the target along the directed graph by using the traversal algorithm according to preset traversal parameters, so as to obtain query paths from the starting node to the attention node, where the attention node represents the attention object.
In a possible implementation, the live information obtaining module is specifically configured to:
calculating the weight sum of each query path;
selecting a query path with the minimum weight sum as a target path;
querying the acquisition data of the attention object corresponding to the target path through the target path;
or,
querying the acquisition data of the attention objects corresponding to all the query paths through all the query paths.
In a possible implementation manner, the sensing region determining module is specifically configured to:
acquiring a positioning signal of a mobile terminal device positioned on a scene to be sensed live;
obtaining geographical position information of a scene to be sensed in a live state according to the positioning signal;
or
Obtaining map coordinate moving operation of a user in an electronic map;
and according to the map coordinate moving operation, taking the geographic position information of the position of the map coordinate as the geographic position information of the scene to be perceived live.
The live information intelligent perception apparatus has the function of implementing the above live information intelligent perception method. The function can be implemented by hardware, or by hardware executing corresponding software. The hardware or software comprises one or more modules corresponding to the function, and the modules can be software and/or hardware.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/modules, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and reference may be made to the part of the embodiment of the method specifically, and details are not described here.
Fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 11, the terminal device 11 of this embodiment includes: at least one processor 110, a memory 111, and a computer program 112 stored in the memory 111 and operable on the at least one processor 110, the processor 110 implementing the steps in any of the various live information intelligent perception method embodiments described above when executing the computer program 112.
The terminal device 11 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 110 and a memory 111. Those skilled in the art will appreciate that fig. 11 is only an example of the terminal device 11 and does not constitute a limitation on the terminal device 11, which may include more or fewer components than those shown, or combine some components, or have different components; for example, it may further include an input/output device, a network access device, and the like.
The processor 110 may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 111 may, in some embodiments, be an internal storage unit of the terminal device 11, such as a hard disk or a memory of the terminal device 11. In other embodiments, the memory 111 may also be an external storage device of the terminal device 11, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the terminal device 11. Further, the memory 111 may also include both an internal storage unit and an external storage device of the terminal device 11. The memory 111 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 111 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product, which, when run on a terminal device, enables the terminal device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.