CN111220173B - POI (Point of interest) identification method and device - Google Patents


Info

Publication number
CN111220173B
CN111220173B
Authority
CN
China
Prior art keywords
poi
road
acquisition
preset
scene
Prior art date
Legal status
Active
Application number
CN201811415643.7A
Other languages
Chinese (zh)
Other versions
CN111220173A (en)
Inventor
徐宁
刘树明
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811415643.7A
Publication of CN111220173A
Application granted
Publication of CN111220173B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3679 Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities
    • G01C21/3682 Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities; output of POI information on a road map
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method and a device for identifying POIs, which relate to the technical field of electronic maps and mainly aim to identify, by using the acquisition information of a POI together with background map data, whether the acquisition position of the POI is reliable, and to mark unreliable POIs. The main technical solution of the invention is as follows: acquiring background map data of the POI from a preset electronic map database according to the acquisition information of the POI; and judging, based on the acquisition information of the POI and the background map data, whether the POI is located in a preset acquisition scene, and if so, marking the POI as a POI to be verified. The method and the device are mainly used for identifying POIs to be verified.

Description

POI (Point of interest) identification method and device
Technical Field
The invention relates to the technical field of maps, in particular to a method and a device for identifying POIs.
Background
A point of interest (POI) is an information point with geospatial features. Residential communities, parks, schools, restaurants, malls, and the like in the real world may all be expressed as points of interest. In practice, a user may search for and select a desired POI (e.g., a restaurant) via map navigation application software and reach the POI along a navigation route provided by that software. In this application scenario, whether the user can successfully reach the POI is therefore of great importance.
Currently, POIs are generally collected by field operators using handheld collection devices. Specifically, a field operator photographs or videos a POI with the camera of the handheld device, and the device simultaneously records the position at which the POI was photographed; this position is processed into the acquisition position of the POI, which the field operator then further processes to obtain the actual position of the POI. While studying this process, the inventor found that, affected by the positioning accuracy of the handheld collection device and by the posture with which the field operator holds it, the acquisition positions of some POIs are unreliable, and the actual positions obtained from these acquisition positions need post-processing to ensure their accuracy. How to identify unreliable POI acquisition positions is therefore a problem that needs to be addressed.
Disclosure of Invention
In view of the above problems, the present invention provides a method and an apparatus for identifying a POI, which mainly aims to identify whether the acquisition position of the POI is reliable.
In order to achieve the above purpose, the present invention mainly provides the following technical solutions:
in one aspect, the present invention provides a method for identifying a POI, specifically including:
Acquiring background map data of the POI in a preset electronic map database according to the acquired information of the POI;
and judging whether the POI is positioned in a preset acquisition scene or not based on the acquisition information of the POI and the background map data, and if the POI is positioned in the preset acquisition scene, marking the POI as the POI to be verified.
In another aspect, the present invention provides a device for identifying a POI, specifically including:
the data acquisition unit is used for acquiring background map data of the POI in a preset electronic map database according to the acquired information of the POI;
the scene judging unit is used for judging whether the POI is positioned in a preset acquisition scene or not based on the acquisition information of the POI and the background map data acquired by the data acquisition unit;
and the POI marking unit is used for marking the POI as the POI to be verified if the scene judging unit determines that the POI is positioned in a preset acquisition scene.
On the other hand, the invention provides a processor for running a program, wherein the program, when run, executes the POI identification method provided by the invention.
By means of the above technical solution, the POI identification method and device provided by the invention acquire background map data around the acquisition position of a POI and judge whether the POI is located in a preset acquisition scene, the acquisition scene being a scene whose geographic environment is complex and in which the positioning accuracy of the handheld collection device tends to drift, or in which the hand-held posture tends to make the acquisition position inaccurate. Because the acquired background map data are verified, accurate data, the geographic environment around the POI can be accurately reconstructed from them, so that it can be accurately identified whether the acquisition scene of the POI may make its acquisition position unreliable, and hence whether the actual position of the POI needs verification.
The foregoing is merely an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the content of the specification, and in order to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 shows a flowchart of a method for identifying a POI according to an embodiment of the present invention;
FIG. 2a shows an image of a POI photographed on site in an intersection acquisition scene;
FIG. 2b shows a map schematic of POI positions in an intersection acquisition scene;
FIG. 3 shows a map schematic of POI positions in a parallel road acquisition scene;
FIG. 4 shows a map schematic of POI positions in a multi-building acquisition scene;
FIG. 5 shows a flowchart of identifying a POI located in an intersection acquisition scene in an embodiment of the invention;
FIG. 6 shows a map schematic of generating road contour lines of a target road in an intersection acquisition scene;
FIG. 7 shows a flowchart of a method for identifying a POI located in a parallel road acquisition scene in an embodiment of the invention;
FIG. 8 shows a flowchart of another method for identifying a POI located in a parallel road acquisition scene in an embodiment of the invention;
FIG. 9 shows a schematic diagram of identifying parallel road acquisition scenes based on an electronic map;
FIG. 10 is a flow chart illustrating a method for identifying a POI as being located in a multi-building acquisition scenario in accordance with an embodiment of the present invention;
fig. 11 shows a block diagram of a POI identification apparatus according to an embodiment of the present invention;
fig. 12 shows a block diagram of another POI identification device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiment of the invention provides a POI identification method, which is used for identifying the acquisition scene of POIs, marking the POIs according to the preset acquisition scene, and further carrying out post-processing (such as data filtering or manual auxiliary correction and the like) on the actual positions of the marked POIs after obtaining the actual positions based on the acquisition positions of the POIs so as to improve the positioning accuracy of the actual positions of the POIs. The specific steps of the method are shown in fig. 1, and the method comprises the following steps:
Step 101, acquiring background map data of the POI in a preset electronic map database according to the acquired information of the POI.
The acquisition information of the POI may include an acquisition position, an acquisition direction (the photographing direction of the collection device when it photographs the POI), an acquisition road, an acquisition task direction, and the like. The acquisition task direction and the acquisition road are generally issued to the collection device of the field operator through a POI collection task, so that the field operator knows on which road (the acquisition road) POIs should be collected and in which direction (the acquisition task direction). In practice it is difficult for field operators to collect POIs strictly along the acquisition task direction, so the acquisition direction is also recorded during collection.
In the embodiment of the invention, the acquired background map data are centered on the acquisition position of the POI: background map data within a certain range around the acquisition position are obtained from a preset electronic map database. The background map data may include roads, intersections, buildings, POIs, and other data, the specific content depending on what has been built around the acquisition position in the real world; this list does not mean that the background map data of every POI necessarily contains all of these items.
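For illustration only, the following Python sketch shows one way such a query could be organized. The map database interface, the BackgroundMapData container and the 500 m radius are assumptions made for this sketch and are not prescribed by the invention.

```python
# Hypothetical sketch of step 101: fetching verified background map data around
# the acquisition position of a POI. The map_db interface and the default
# 500 m radius are illustrative assumptions, not part of the described method.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in a local projected coordinate system


@dataclass
class BackgroundMapData:
    roads: List = field(default_factory=list)          # road centerlines near the POI
    intersections: List = field(default_factory=list)  # junctions of three or more roads
    buildings: List = field(default_factory=list)      # building footprint polygons
    pois: List = field(default_factory=list)           # already-verified nearby POIs


def load_background_map_data(map_db, acquisition_position: Point,
                             radius_m: float = 500.0) -> BackgroundMapData:
    """Query the preset electronic map database for the verified map features
    lying within radius_m of the POI acquisition position."""
    return BackgroundMapData(
        roads=map_db.query_roads(acquisition_position, radius_m),
        intersections=map_db.query_intersections(acquisition_position, radius_m),
        buildings=map_db.query_buildings(acquisition_position, radius_m),
        pois=map_db.query_pois(acquisition_position, radius_m),
    )
```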
Step 102, based on the acquired information of the POI and the background map data, judging whether the POI is located in a preset acquisition scene, and if so, marking the POI as the POI to be verified.
This step identifies the environment around the acquisition position of the POI by using the acquisition information of the POI and the background map data, so as to judge whether the POI is located in a preset acquisition scene. The preset acquisition scene is a scene whose geographic environment is complex and in which an inaccurate acquisition position is easily caused by drift in the positioning accuracy of the handheld collection device or by the hand-held posture.
Therefore, for determining the POI located in the preset acquisition scene, the POI needs to be marked as the POI to be verified, so that the actual position of the POI is subjected to post-processing operation, the actual position of the POI is further determined, and the positioning accuracy of the actual position of the POI is improved.
In the embodiment shown in fig. 1, since the acquired background map data are verified, accurate data, the geographic environment around the POI can be accurately reconstructed from them together with the acquisition information of the POI. Judging against this accurately reconstructed environment whether the POI is located in a preset acquisition scene ensures the accuracy of the judgment result, and therefore the accuracy of marking the POI as a POI to be verified. The marked POIs to be verified can thus be post-processed in a targeted manner, improving the positioning accuracy of the actual positions of POIs. At the same time, because the marking is accurate, the number of POIs that need post-processing is reduced, which lowers the processing load of the post-processing operation and improves POI processing efficiency.
Further, in the embodiment of the present invention, after the collected data of a large number of POIs were verified, it was found that the preset acquisition scenes can be divided into an intersection acquisition scene, a parallel road acquisition scene, and a multi-building acquisition scene. Specifically, these three acquisition scenes present the following problems:
first, a scene is collected at an intersection.
FIG. 2a is an image taken when a POI is collected; the POI collected by the field operator is located near an intersection. The acquisition position of such a POI may be processed into several candidate actual positions. As shown in fig. 2b, the black dots are several actual positions determined based on the acquisition position of the POI, and it can be seen that they are distributed on both sides of a road connected to the intersection. In an intersection acquisition scene, because one intersection connects several roads and the acquisition position of the POI is affected by positioning error, several actual positions of the POI are obtained, and post-processing is then required to select the actual position that accurately reflects the real situation of the POI. It is therefore necessary to identify POIs collected at intersections, that is, POIs located in the intersection acquisition scene.
Secondly, a scene is acquired by parallel paths.
A parallel road mainly refers to a road in the real world that is parallel to the acquisition road designated in the POI collection task; a parallel road may be a large road or a small one. As an example, in the electronic map shown in fig. 3, the marked road is the acquisition road, and the region below it in the figure is the acquisition region designated by the collection task; several parallel roads can be seen in this region. Two situations arise with parallel roads: in one, a POI collected on road A is actually located on another road parallel to road A; in the other, because of limited positioning accuracy, the acquisition position of a POI collected on road A falls on a parallel road or between parallel roads (see the black dots in the figure). It is therefore necessary to identify POIs collected near parallel roads, that is, POIs located in the parallel road acquisition scene.
Finally, a scene is collected by a plurality of buildings.
Multi-building means that there are several densely packed buildings near the acquisition position of the POI. As shown in fig. 4, the arrow in the figure represents the acquisition position and acquisition direction of the POI, and the black dots are several actual positions determined from that acquisition position and acquisition direction; these actual positions fall in different buildings. It can be seen that, in a multi-building acquisition scene, because of positioning error, the actual position of the POI determined from its acquisition position and acquisition direction is very likely to fall in different buildings, so post-processing is required to select the actual position that accurately reflects the real situation of the POI. It is therefore necessary to identify POIs collected near dense buildings, that is, POIs located in the multi-building acquisition scene.
According to the problem analysis of the acquisition scenes, the POIs in the preset acquisition scenes need to be identified and post-processing operation is executed to determine the accurate positions of the POIs in the actual environment, so that the positioning accuracy of the POIs is improved.
It should be noted that the three acquisition scenes analyzed above are only typical acquisition scenes listed in the present invention, and not all the acquisition scenes.
The specific flow of identifying the POI in the preset acquisition scene will be described one by one for the three acquisition scenes.
1. And acquiring a scene at the intersection.
To identify a POI located in an intersection acquisition scene, the required POI acquisition information includes the acquisition position and the acquisition direction. The specific identification flow is shown in fig. 5 and includes:
step 201, searching whether an intersection exists in a first distance range preset around an acquisition position of the POI in background map data.
Specifically, this step takes the acquisition position of the POI as the center and the preset first distance as the radius to determine a range around the acquisition position, and searches for an intersection within this range. The preset first distance is an empirical value, and an intersection here means a junction of several roads, the number of associated roads being greater than or equal to three.
If no intersection exists in the range, it is determined that the POI is not located in an intersection acquisition scene, and the identification flow ends. Otherwise, if an intersection exists in the range, the flow continues with step 202.
Step 202, judging whether an included angle between the POI vector of the POI and a road vector of a road connected with the intersection accords with a preset angle range, and if so, determining the road as a target road.
The POI vector takes the acquisition position of the POI as its starting point, the acquisition direction of the POI as its direction, and a preset first length as its length; the value of the first length is an empirical value. The road vector of a road is a vector that starts at the intersection and points in the direction in which a vehicle leaves the intersection along that road.
When the included angle between the two vectors is within the preset angle range, it indicates that the POI was photographed facing the road connected to the intersection; the road is then determined as the target road, and the flow continues to determine whether the POI is located in an intersection acquisition scene, that is, step 203 is executed. When the included angle is not within the preset angle range, the POI was not photographed facing the road connected to the intersection; such a POI does not need to be identified, and the identification flow can end. As an example, the preset angle range may be 0 to 100 degrees.
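As a minimal illustration of step 202, the sketch below builds the POI vector from the acquisition direction and compares it with the road vectors of the roads leaving a nearby intersection. The helper names, the dictionary of outgoing road bearings and the default 100 degree limit are assumptions of the sketch, not values fixed by the invention.

```python
# Hypothetical sketch of step 202: test the angle between the POI vector and the
# road vectors of the roads connected to the intersection. Bearings are assumed
# to be given in degrees in the same planar coordinate convention.
import math
from typing import Dict, Optional, Tuple

Vec = Tuple[float, float]


def unit_vector(bearing_deg: float) -> Vec:
    rad = math.radians(bearing_deg)
    return (math.cos(rad), math.sin(rad))


def angle_between(v1: Vec, v2: Vec) -> float:
    """Unsigned angle between two vectors, in degrees."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def find_target_road(poi_direction_deg: float,
                     outgoing_road_bearings: Dict[str, float],
                     max_angle_deg: float = 100.0) -> Optional[str]:
    """Return the id of the first road leaving the intersection whose road vector
    makes an angle within the preset range with the POI vector, or None.

    Only directions matter for the angle test, so the preset first length of the
    POI vector and the starting points of the vectors are omitted here."""
    poi_vec = unit_vector(poi_direction_deg)
    for road_id, bearing in outgoing_road_bearings.items():
        if angle_between(poi_vec, unit_vector(bearing)) <= max_angle_deg:
            return road_id  # this road becomes the target road
    return None
```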
Step 203, searching for buildings within a second distance range preset on the two sides of the target road in the background map data, and generating road contour lines of the target road according to the building search result.
The purpose of this step is to determine the road contour lines of the target road connected to the intersection. The road contour lines mainly separate the road from the surrounding buildings, and the preset second distance is used to decide, within a reasonable range, whether buildings exist on the two sides of the road. The preset second distance range is the range obtained by extending a preset second distance from the center of the target road toward each of its two sides: if a building exists within this range, the road contour line is determined by the building; if no building exists, the road contour line is determined by a preset width. Typically, the preset second distance is less than twice the width of the target road.
Specifically, a preferred embodiment of generating the road contour is as follows:
when a building is searched, determining one of the contour edges of the building closest to the target road, taking one point on the edge, which has the largest vertical distance to the target road, as a parallel line of the target road, and taking the parallel line as a road contour of the target road. If a plurality of buildings are searched, the road contour of the target road is determined by one building closest to the target road.
And when no building is searched, the parallel line parallel to the target road with the distance to the target road equal to the preset width is taken as the road contour line. The preset width may be a preset second distance, or a width value set in a self-defining manner.
Fig. 6 shows the road contour lines generated for the target road as three mutually parallel dashed arrows: the middle arrow represents the road vector of the target road, and the arrows on its two sides are the road contour lines determined in the manner described above.
Step 204, when the POI vector intersects with the road contour line or is located in the area determined by the road contour line, it is determined that the POI is located in the intersection acquisition scene.
Referring to fig. 6, the POI vector (the solid black arrow) intersects the road contour line on the left side of the target road, so the POI can be determined to be located in an intersection acquisition scene. If the POI vector lies within the area between the two road contour lines of the target road, the POI is likewise determined to be located in an intersection acquisition scene. In other words, if any point of the POI vector lies within the road contour lines of the target road, the POI is determined to be located in an intersection acquisition scene.
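The following sketch ties steps 203 and 204 together under simplifying assumptions: the target road is treated as a straight segment, building footprints as vertex lists, the road-facing edge of a building is approximated by its two vertices nearest the road, and the check of step 204 is approximated by sampling points along the POI vector. The 15 m fallback width and all helper names are assumptions of the sketch.

```python
# Hypothetical sketch of steps 203-204. Each road contour line is modeled as an
# offset of the straight road axis: on each side, the offset runs through the
# vertex of the nearest building's road-facing edge that is farthest from the
# axis, or at a preset width when no building lies within the search distance.
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]
Footprint = List[Point]


def signed_offset(p: Point, a: Point, b: Point) -> float:
    """Signed perpendicular distance of p from the road axis a->b (sign = side)."""
    ux, uy = b[0] - a[0], b[1] - a[1]
    return ((p[0] - a[0]) * uy - (p[1] - a[1]) * ux) / math.hypot(ux, uy)


def contour_offset(road: Tuple[Point, Point], buildings: List[Footprint],
                   side_sign: float, search_dist: float,
                   preset_width: float = 15.0) -> float:
    """Offset (magnitude) of the road contour line on one side of the road."""
    a, b = road
    best: Optional[Tuple[float, float]] = None  # (building distance, contour offset)
    for poly in buildings:
        offs = [signed_offset(v, a, b) for v in poly
                if signed_offset(v, a, b) * side_sign > 0]
        if not offs:
            continue
        dist = min(abs(o) for o in offs)
        if dist > search_dist:
            continue
        facing = sorted(abs(o) for o in offs)[:2]  # road-facing edge: two nearest vertices
        if best is None or dist < best[0]:
            best = (dist, max(facing))
    return best[1] if best else preset_width


def poi_in_intersection_scene(poi_vector: Tuple[Point, Point],
                              road: Tuple[Point, Point],
                              left_offset: float, right_offset: float,
                              samples: int = 50) -> bool:
    """Step 204 (approximate): True if any sampled point of the POI vector lies
    in the strip bounded by the two road contour lines."""
    a, b = road
    (sx, sy), (ex, ey) = poi_vector
    for i in range(samples + 1):
        t = i / samples
        off = signed_offset((sx + t * (ex - sx), sy + t * (ey - sy)), a, b)
        if -left_offset <= off <= right_offset:
            return True
    return False
```

Here left_offset and right_offset would be obtained by calling contour_offset with side_sign of -1.0 and +1.0 respectively, search_dist being the preset second distance.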
2. And acquiring scenes in parallel paths.
To identify a POI located in a parallel road acquisition scene, the required POI acquisition information includes the acquisition position, the acquisition task direction, and the acquisition road. The specific identification flow is shown in fig. 7 and includes:
step 301, a plumb line is drawn to a collected road based on the collected position of the POI, and a foot drop point is obtained.
Step 302, acquiring, in the background map data, a road having an intersection point with the acquisition task vector of the POI as the target road.
The acquisition task vector is a vector that takes the foot drop point obtained in step 301 as its starting point, the preset acquisition task direction as its direction, and a preset second length as its length. The second length is an empirical value.
Step 303, judging whether the included angle between the straight line of the target road and the straight line of the acquisition road is in a preset angle interval.
The included angle of the two straight lines in the plane is within 0-180 degrees, so the preset angle interval in the step can be set to be an angle interval of 20-160 degrees for judging whether the two roads have a parallel relationship.
If the included angle is within the preset angle interval, the target road and the acquisition road are not in a parallel relationship; the POI can then be determined not to be located in a parallel road acquisition scene, and the identification flow ends. When the included angle is not within the preset angle interval, the target road and the acquisition road are determined to have a parallel relationship, and step 304 is executed to further determine whether the POI is located in a parallel road acquisition scene.
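Under the same simplifying assumptions (straight road segments, planar coordinates), the sketch below strings steps 301 to 303 together. The 200 m second length, the proper-intersection test and the helper names are assumptions of the sketch; the 20-160 degree interval is the one mentioned above.

```python
# Hypothetical sketch of steps 301-303: foot drop point, acquisition task vector,
# intersection with a candidate road, and the angle-interval test for parallelism.
import math
from typing import Tuple

Point = Tuple[float, float]
Segment = Tuple[Point, Point]


def foot_of_perpendicular(p: Point, road: Segment) -> Point:
    """Foot drop point of the perpendicular from p onto the line carrying the road."""
    (ax, ay), (bx, by) = road
    ux, uy = bx - ax, by - ay
    t = ((p[0] - ax) * ux + (p[1] - ay) * uy) / (ux * ux + uy * uy)
    return (ax + t * ux, ay + t * uy)


def _cross(o: Point, a: Point, b: Point) -> float:
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def segments_intersect(s1: Segment, s2: Segment) -> bool:
    """Proper intersection of two segments (touching endpoints are ignored)."""
    p, q = s1
    r, s = s2
    return _cross(p, q, r) * _cross(p, q, s) < 0 and _cross(r, s, p) * _cross(r, s, q) < 0


def line_angle_deg(seg: Segment) -> float:
    (ax, ay), (bx, by) = seg
    return math.degrees(math.atan2(by - ay, bx - ax)) % 180.0


def is_candidate_parallel_road(acq_position: Point, acq_road: Segment,
                               task_direction_deg: float, candidate_road: Segment,
                               second_length: float = 200.0,
                               interval: Tuple[float, float] = (20.0, 160.0)) -> bool:
    """A candidate road passes steps 301-303 when it crosses the acquisition task
    vector and its angle with the acquisition road lies outside the interval
    (i.e. the two roads are roughly parallel)."""
    foot = foot_of_perpendicular(acq_position, acq_road)          # step 301
    rad = math.radians(task_direction_deg)
    task_vector = (foot, (foot[0] + second_length * math.cos(rad),
                          foot[1] + second_length * math.sin(rad)))
    if not segments_intersect(task_vector, candidate_road):       # step 302
        return False
    angle = abs(line_angle_deg(candidate_road) - line_angle_deg(acq_road))
    return not (interval[0] <= angle <= interval[1])              # step 303
```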
Step 304, searching whether a building exists between the target road and the acquisition road in the background map data.
Step 305, if there is no building, determining that the POI is located in the parallel acquisition scene.
In the embodiment shown in fig. 7, it is first determined whether the target road has a parallel relationship with the acquisition road, and if so, whether the POI is located in the parallel road acquisition scene is determined by determining whether a building exists between the two roads.
In addition, two roads with a parallel relationship may be an uplink/downlink pair or a main road and its auxiliary road lying close together, and for such pairs there is generally no building between the two roads; this situation should not be misjudged as the POI being located in a parallel road acquisition scene. In another preferred embodiment of the invention, therefore, after step 303 determines that the included angle is not within the preset angle interval, that is, that the target road and the acquisition road have a parallel relationship, it is first judged whether they are an uplink/downlink pair or a main and auxiliary road, as shown in fig. 8, including:
step 306, judging whether the target road and the acquisition road form an uplink and downlink road or a main road and an auxiliary road.
This judgment can be made from the distance between the target road and the acquisition road, or from the road names. For example, when the distance is smaller than a threshold value (typically 10-15 m), or when the road names are the same, the target road and the acquisition road can be determined to be an uplink/downlink pair or a main and auxiliary road.
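A minimal sketch of this check follows; the 12 m default threshold (within the 10-15 m range mentioned above) and the normalization of road names are assumptions of the sketch.

```python
# Hypothetical sketch of step 306: decide whether the target road and the
# acquisition road form an uplink/downlink pair or a main and auxiliary road.
def is_twin_or_service_road(target_road_name: str, acquisition_road_name: str,
                            centerline_distance_m: float,
                            distance_threshold_m: float = 12.0) -> bool:
    """True when the two roads are very close together or share the same name,
    which is taken as evidence of an uplink/downlink or main/auxiliary pair."""
    if centerline_distance_m < distance_threshold_m:
        return True
    t_name = (target_road_name or "").strip()
    a_name = (acquisition_road_name or "").strip()
    return bool(t_name) and t_name == a_name
```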
When the target road and the acquisition road form an uplink/downlink pair or a main and auxiliary road, it can be determined that the POI is not located in a parallel road acquisition scene, and the identification flow ends.
When the target road and the acquisition road are not uplink and downlink roads or main and auxiliary roads, step 304 is executed, i.e. in the background map data, whether a building exists between the target road and the acquisition road is searched, and whether the POI is located in the parallel road acquisition scene is determined according to whether the building exists.
In addition to the above two recognition methods, in another possible embodiment of the present invention, the step 306 may further perform the determination after performing the steps 301 to 304. That is, when there is no building between the target road and the collection road, it is determined whether the target road and the collection road constitute an uplink/downlink road or a main/auxiliary road.
If so, determining that the POI is not located in the parallel acquisition scene, otherwise, determining that the POI is located in the parallel acquisition scene.
The above identification flows for the parallel road acquisition scene rely on identifying whether a building exists between the target road and the acquisition road. In practical applications a simpler identification method is also provided: after steps 301 to 303 are executed and the parallel relationship between the target road and the acquisition road is determined, step 306 is executed to judge whether the target road and the acquisition road form an uplink/downlink pair or a main and auxiliary road.
When the target road and the acquisition road do not form an uplink road, a downlink road or a main road and an auxiliary road, determining that the POI is positioned in a parallel road acquisition scene; otherwise, determining that the POI is not located in the parallel acquisition scene.
A preferred identification flow for the parallel road acquisition scene is described below with reference to fig. 9. The solid black arrow in fig. 9 indicates the POI vector, the tail of the arrow marks the acquisition position, the road on which the tail lies is the acquisition road, and the dashed black arrow is the perpendicular drawn from the acquisition position of the POI to the acquisition road, that is, the acquisition task vector of the POI. In the background map data shown in fig. 9, the target roads having an intersection with the acquisition task vector are road A and road B. Judged by the included angle with the straight line on which the acquisition road lies, both road A and road B have a parallel relationship with the acquisition road. Further identification, however, determines that road B and the acquisition road form an uplink/downlink pair because of the small distance between them, so road B is excluded (that is, the POI is not located in a parallel road acquisition scene formed by road B). As for road A, a building (a family planning office) exists between road A and the acquisition road, so it can also be determined that the POI is not located in the parallel road scene formed by road A. Had there been no building between road A and the acquisition road, the POI would have been determined to be located in a parallel road scene.
3. And (5) multi-building collection scenes.
To identify a POI located in a multi-building acquisition scene, the required POI acquisition information includes the acquisition position, the acquisition task direction, and the acquisition road. The specific identification flow is shown in fig. 10 and includes:
step 401, searching buildings in a third distance range preset around the acquisition position of the POI in the background map data.
The preset third distance is an empirical value. When the number of buildings found is more than two, step 402 is executed; otherwise the identification flow ends and it is determined that the POI is not in a multi-building acquisition scene.
Step 402, merging and pruning the buildings found according to the acquisition task direction and the acquisition road, to obtain the reserved buildings.
The specific manner of merging and pruning the buildings found in step 401 to obtain the reserved buildings is as follows (an illustrative code sketch follows this list):
First, from the buildings found, the buildings located on the side of the acquisition road to which the acquisition task direction points are taken as the buildings to be screened.
Second, from the buildings to be screened, those having a building contour edge parallel to the acquisition task direction are taken as target buildings.
Third, if two or more target buildings are arranged in sequence along the acquisition task direction, the target buildings farther from the acquisition road are deleted and the target building closest to the acquisition road is reserved.
Target buildings "arranged in sequence" are buildings that, within a specified width range relative to the acquisition road, line up from near to far along the acquisition task direction. Intuitively, when an observer standing on the acquisition road looks in the acquisition task direction, these target buildings overlap or occlude one another; in that case only the target building closest to the acquisition road is reserved.
Fourth, target buildings among the reserved ones that are adjacent to each other are merged into a single target building.
The adjacency can be judged by whether the contour edges of the target buildings overlap, or whether the distance between them is smaller than a threshold; if they overlap or the distance is smaller than the threshold, the two target buildings are determined to be adjacent and are merged into one target building. Adjacent buildings often occur when the same building has sections of different heights; in the electronic map such a building is split into several building blocks to facilitate three-dimensional display.
Fifth, the target buildings that were not merged, together with the target buildings obtained by merging, constitute the reserved buildings.
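The sketch below strings the five steps above together under simplifying assumptions: building footprints are vertex lists keyed by an id, the acquisition road is a straight segment, and lining up along the acquisition task direction is approximated by overlapping projections onto the road axis. The tolerance and threshold values, the single-pass grouping and all helper names are assumptions of the sketch, not values from the invention.

```python
# Hypothetical sketch of the merge-and-prune procedure of step 402.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]
Footprint = List[Point]


def _centroid(poly: Footprint) -> Point:
    return (sum(p[0] for p in poly) / len(poly), sum(p[1] for p in poly) / len(poly))


def _signed_offset(p: Point, a: Point, b: Point) -> float:
    ux, uy = b[0] - a[0], b[1] - a[1]
    return ((p[0] - a[0]) * uy - (p[1] - a[1]) * ux) / math.hypot(ux, uy)


def _along(p: Point, a: Point, b: Point) -> float:
    ux, uy = b[0] - a[0], b[1] - a[1]
    return ((p[0] - a[0]) * ux + (p[1] - a[1]) * uy) / math.hypot(ux, uy)


def _edge_directions(poly: Footprint):
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        yield math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0


def reserved_buildings(buildings: Dict[str, Footprint], road: Tuple[Point, Point],
                       task_dir_deg: float, parallel_tol_deg: float = 10.0,
                       adjacency_m: float = 2.0) -> List[List[str]]:
    """Return the reserved buildings as groups of original building ids;
    each group counts as one reserved building."""
    a, b = road
    t = (math.cos(math.radians(task_dir_deg)), math.sin(math.radians(task_dir_deg)))
    side_ref = t[0] * (b[1] - a[1]) - t[1] * (b[0] - a[0])  # sign of the acquisition side

    # 1) keep buildings on the side the acquisition task direction points to
    screened = {k: p for k, p in buildings.items()
                if _signed_offset(_centroid(p), a, b) * side_ref > 0}

    # 2) keep buildings with a contour edge roughly parallel to the task direction
    task_line = math.degrees(math.atan2(t[1], t[0])) % 180.0
    targets = {k: p for k, p in screened.items()
               if any(min(abs(d - task_line), 180.0 - abs(d - task_line)) <= parallel_tol_deg
                      for d in _edge_directions(p))}

    # 3) among buildings lined up along the task direction, keep the one closest
    #    to the road (lining up approximated by overlapping along-road spans)
    def span(p: Footprint) -> Tuple[float, float]:
        s = [_along(v, a, b) for v in p]
        return min(s), max(s)
    kept, spans = [], []
    for k in sorted(targets, key=lambda key: abs(_signed_offset(_centroid(targets[key]), a, b))):
        lo, hi = span(targets[k])
        if all(hi < lo2 or lo > hi2 for lo2, hi2 in spans):
            kept.append(k)
            spans.append((lo, hi))

    # 4) and 5) merge kept buildings whose footprints come within adjacency_m of
    #    each other (simplified single-pass grouping); each group is one building
    def near(p1: Footprint, p2: Footprint) -> bool:
        return min(math.dist(v, w) for v in p1 for w in p2) <= adjacency_m
    groups: List[List[str]] = []
    for k in kept:
        for g in groups:
            if any(near(targets[k], targets[m]) for m in g):
                g.append(k)
                break
        else:
            groups.append([k])
    return groups
```

With such a sketch, step 403 amounts to checking whether len(reserved_buildings(...)) is greater than or equal to 2.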
It should be noted that the above manner of obtaining the reserved buildings merges and prunes on the basis of the acquisition task direction. Among the buildings found in step 401, there may be buildings within the preset third distance range that lie on the non-acquisition side of the acquisition road (the side of the acquisition road to which the acquisition task direction points is the acquisition side, and the other side is the non-acquisition side), and buildings on the non-acquisition side do not need to be processed in this embodiment. The buildings found on the non-acquisition side of the acquisition road therefore have to be filtered out; in the manner above, this filtering is performed implicitly when the buildings to be screened are determined according to the acquisition task direction. In practical applications, another possible implementation is as follows:
First, the buildings on the non-acquisition side of the acquisition road are deleted from the buildings found, and the remaining buildings are taken as the buildings to be screened.
Second, from the buildings to be screened, those having a building contour edge parallel to the acquisition task direction are taken as target buildings.
Third, if two or more target buildings are arranged in sequence along the acquisition task direction, the target buildings farther from the acquisition road are deleted and the target building closest to the acquisition road is reserved.
Fourth, target buildings among the reserved ones that are adjacent to each other are merged into a single target building.
Fifth, the target buildings that were not merged, together with the target buildings obtained by merging, constitute the reserved buildings.
In the above processing manner of acquiring the reserved buildings, if the number of buildings processed when the deletion or merging operation is performed is 1 or less, the identification flow is terminated, and it is determined that the POI is not in the multi-building acquisition scene.
Step 403, if the number of reserved buildings is greater than or equal to 2, determining that the POI is located in the multi-building acquisition scene.
The above is a detailed description of the intersection acquisition scene, the parallel road acquisition scene, and the multi-building acquisition scene recognition flow, respectively. By the method, whether the POI is located in a preset acquisition scene or not is identified according to the acquisition information of the POI and the background map data, and therefore whether the POI is marked as the POI to be verified or not is determined.
Further, as an implementation of the methods shown in fig. 1, fig. 5, fig. 7, fig. 8 and fig. 10, an embodiment of the present invention provides a device for identifying POIs, where the device is mainly used for identifying an acquisition scene of a POI, marking the POI according to a preset acquisition scene, and after obtaining an actual position based on an acquisition position of the marked POI, further post-processing the actual positions of the POIs is required to improve positioning accuracy of the actual positions of the POIs. For convenience of reading, the details of the foregoing method embodiment are not described one by one in the embodiment of the present apparatus, but it should be clear that the apparatus in this embodiment can correspondingly implement all the details of the foregoing method embodiment. The device is shown in fig. 11, and specifically includes:
a data obtaining unit 51, configured to obtain background map data of a POI in a preset electronic map database according to collection information of the POI;
a scene judging unit 52, configured to judge whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data acquired by the data acquiring unit 51;
and a POI marking unit 53, configured to mark the POI as a POI to be verified if the scene determining unit 52 determines that the POI is located in a preset acquisition scene.
Further, as shown in fig. 12, when the preset acquisition scene is an intersection acquisition scene and the acquisition information includes an acquisition position and an acquisition direction, the scene discrimination unit 52 includes:
the intersection searching module 5211 is configured to search whether an intersection exists in a first distance range preset around the acquisition position of the POI in the background map data;
a first target road determining module 5212, configured to determine, when the intersection searching module 5211 searches for an intersection, whether an included angle between a POI vector of the POI and a road vector of a road connected to the intersection meets a preset angle range, if yes, determine the road as a target road, where the POI vector is a vector with an acquisition position of the POI as a starting point, an acquisition direction of the POI as a direction, and a preset first length as a length value, and the road vector of the road is a vector with the intersection as a starting point and a direction in which a vehicle exits the intersection along the road as a direction;
a road contour generating module 5213, configured to search for a building in the background map data within a second distance range preset on both sides of the target road, and generate a road contour of the target road determined by the first target road determining module 5212 according to a building search result;
The intersection scene determining module 5214 is configured to determine that the POI is located in an intersection acquisition scene when the POI vector intersects with the road contour line obtained by the road contour generating module 5213 or is located in an area determined by the road contour line.
Further, the road profile generation module 5213 specifically includes:
if the building searching result shows that no building is searched, taking parallel lines with the distance from the target road being equal to the preset width as the road contour lines;
if the building searching result is that a building is searched, determining the edge line closest to the target road among the contour edge lines of the building, drawing, through the point on this edge line with the largest perpendicular distance to the target road, a parallel line of the target road, and taking the parallel line as a road contour line of the target road.
Further, as shown in fig. 12, when the preset acquisition scene is a parallel road acquisition scene and the acquisition information includes an acquisition position, an acquisition task direction and an acquisition road, the scene discrimination unit 52 includes:
the foot drop obtaining module 5221 is used for drawing a perpendicular line from the acquisition position of the POI to the acquisition road to obtain a foot drop point;
A second target road determining module 5222, configured to obtain, as a target road, a road having an intersection with an acquisition task vector of the POI in the background map data, where the acquisition task vector uses a foot drop point obtained by the foot drop obtaining module 5221 as a starting point, uses a preset acquisition task direction as a direction, and uses a preset second length as a length value;
the road included angle judging module 5223 is configured to judge whether an included angle between a straight line where the target road is and a straight line where the collected road is obtained by the second target road determining module 5222 is in a preset angle interval;
a first building searching module 5224, configured to search, if the road included angle judging module 5223 determines that the included angle is not in the preset angle interval, for whether a building exists between the target road and the collected road in the background map data;
the parallel path scene determining module 5225 is configured to determine that the POI is located in a parallel path acquisition scene if the first building searching module 5224 does not search for a building.
Further, as shown in fig. 12, the apparatus further includes:
a road judging module 5226 for judging whether the target road and the collected road constitute an uplink and downlink road or a main and auxiliary road before the first building searching module 5224 searches whether a building exists between the target road and the collected road in the background map data;
The first building search module 5224 performs an operation of searching whether a building exists between the target road and the collection road in the background map data when the road determination module 5226 determines that no uplink and downlink road or main and auxiliary road is constructed.
Further, as shown in fig. 12, the preset acquisition scene is a parallel acquisition scene, and the acquisition information includes: acquisition position, acquisition task direction, and acquisition road, in the scene determination unit 52:
the drop foot obtaining module 5221 is configured to draw a vertical line to a collected road based on the collection position of the POI, so as to obtain a drop foot point;
the second target road determining module 5222 is configured to obtain a road having an intersection with an acquisition task vector of the POI in the background map data as a target road, where the acquisition task vector uses a foot drop point obtained by the foot drop obtaining module 5221 as a starting point, uses a preset acquisition task direction as a direction, and uses a preset second length as a length value;
the road included angle judging module 5223 is configured to judge whether an included angle between a straight line where the target road is and a straight line where the collected road is obtained by the second target road determining module 5222 is in a preset angle interval;
The road judging module 5226 is configured to judge whether the target road and the collected road form an uplink/downlink road or a main/auxiliary road if the road included angle judging module 5223 determines that the included angle is not in the preset angle range;
the parallel road scene determining module 5225 is configured to determine that the POI is located in a parallel road acquisition scene when the road judging module 5226 determines that no uplink and downlink road or main and auxiliary road is formed.
Further, as shown in fig. 12, when the preset acquisition scene is a multi-building acquisition scene and the acquisition information of the POI includes an acquisition position, an acquisition task direction and an acquisition road, the scene discrimination unit 52 further includes:
a second building searching module 5231, configured to search, in the background map data, a building within a third distance range preset around the acquisition position of the POI;
the building screening module 5232 is configured to, if the second building searching module 5231 searches for more than two buildings, merge and prune the searched buildings according to the direction of the collection task and the collection road, so as to obtain a reserved building;
the multi-building scene determination module 5233 is configured to determine that the POI is located in a multi-building acquisition scene if the number of reserved buildings determined by the building screening module 5232 is greater than or equal to 2.
Further, the building screening module 5232 is specifically configured to:
acquiring a building positioned at one side pointed by the direction of the collection task of the collection road from the searched buildings as a building to be screened;
obtaining, from the buildings to be screened, a building having a building contour edge parallel to the acquisition task direction as a target building;
if more than two target buildings are sequentially arranged along the direction of the collection task, deleting the target building far from the collection road, and reserving the target building closest to the collection road;
merging the adjacent relations existing in the reserved target buildings into one target building;
and the target building which is not combined in the reserved target buildings and the target building obtained after combination form the reserved building.
In summary, the method and the device for identifying a POI according to the embodiments of the present invention determine whether the POI is located in a preset acquisition scene by acquiring background map data around the acquisition position of the POI, and specifically describe the identification of the intersection acquisition scene, the parallel road acquisition scene and the multi-building acquisition scene, when determining that the POI is located in the preset acquisition scene, mark the POI as the POI to be verified, and for the POI marked as the POI to be verified, after determining the actual position of the POI based on the acquisition position of the POI, perform post-processing operation on the actual position, thereby ensuring the positioning accuracy of the actual position of the POI. Because the background map data acquired by the method is the verified accurate data, the geographical environment around the POI can be accurately restored through the background map data, so that whether the acquisition scene where the POI is located can cause unreliable acquisition positions of the POI or not can be accurately identified, and whether the POI needs to be marked as the POI to be verified or not is further determined. In addition, through accurately marking the POI, the post-processing operation on the POI without post-processing is effectively avoided, and the data pressure of the post-processing operation is reduced, so that the processing efficiency of POI positioning is improved.
In addition, the embodiment of the invention also provides a processor for running a program, wherein the program, when run, executes the above POI identification method.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the methods and apparatus described above may be referenced to one another. In addition, "first", "second", and the like in the above embodiments are used to distinguish the embodiments and do not represent the superiority or inferiority of the embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required for constructing such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the best mode of carrying out the invention.
Furthermore, the memory may include volatile memory, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM), among computer readable media; the memory includes at least one memory chip.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, etc., such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (12)

1. A method for identifying a POI, the method comprising:
acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information comprises: an acquisition position and an acquisition direction;
judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data, and if the POI is located in the preset acquisition scene, marking the POI as a POI to be verified, wherein the preset acquisition scene is an intersection acquisition scene;
wherein the step of judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data comprises:
searching the background map data for an intersection within a preset first distance range around the acquisition position of the POI;
if an intersection exists, judging whether an included angle between a POI vector of the POI and a road vector of a road connected to the intersection falls within a preset angle range, and if so, determining the road as a target road, wherein the POI vector takes the acquisition position of the POI as its starting point, the acquisition direction of the POI as its direction, and a preset first length as its length, and the road vector of the road takes the intersection as its starting point and the direction in which a vehicle drives out of the intersection along the road as its direction;
searching the background map data for buildings within a preset second distance range on both sides of the target road, and generating a road contour line of the target road according to the building search result;
and determining that the POI is located in the intersection acquisition scene when the POI vector intersects the road contour line or lies within the area bounded by the road contour line.
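For readers tracing the geometry of claim 1, the following is a minimal sketch of the angle test between the POI vector and a road vector, written in Python on planar (x, y) coordinates with the acquisition direction given as an angle in degrees from the +x axis. The constants FIRST_LENGTH and ANGLE_RANGE and all function names are illustrative assumptions, not values or identifiers taken from the patent.

    import math

    FIRST_LENGTH = 50.0          # preset first length of the POI vector (assumed, metres)
    ANGLE_RANGE = (0.0, 60.0)    # preset angle range, chosen here only for illustration

    def poi_vector(acq_pos, acq_dir_deg, length=FIRST_LENGTH):
        """Vector starting at the acquisition position, pointing along the
        acquisition direction, with the preset first length."""
        rad = math.radians(acq_dir_deg)
        return (length * math.cos(rad), length * math.sin(rad))

    def road_vector(intersection, point_on_road):
        """Vector starting at the intersection and pointing along the road,
        in the direction a vehicle would drive out of the intersection."""
        return (point_on_road[0] - intersection[0], point_on_road[1] - intersection[1])

    def angle_between(v1, v2):
        """Included angle between two 2-D vectors, in degrees."""
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1 = math.hypot(v1[0], v1[1])
        n2 = math.hypot(v2[0], v2[1])
        cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
        return math.degrees(math.acos(cos_a))

    def is_target_road(acq_pos, acq_dir_deg, intersection, point_on_road):
        """A road is a candidate target road when the angle between the POI
        vector and the road vector falls inside the preset angle range."""
        a = angle_between(poi_vector(acq_pos, acq_dir_deg),
                          road_vector(intersection, point_on_road))
        return ANGLE_RANGE[0] <= a <= ANGLE_RANGE[1]

    # Example: POI collected just south of an intersection, acquisition direction along +y.
    print(is_target_road(acq_pos=(0.0, -10.0), acq_dir_deg=90.0,
                         intersection=(0.0, 0.0), point_on_road=(0.0, 30.0)))  # True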
2. The method according to claim 1, wherein generating the road contour line of the target road according to the building search result specifically comprises:
if the building search result indicates that no building is found, taking parallel lines at a preset width from the target road as the road contour lines;
if the building search result indicates that a building is found, determining the edge line of the building contour that is closest to the target road, drawing a line parallel to the target road through the point on that edge line with the largest perpendicular distance to the target road, and taking that parallel line as the road contour line of the target road.
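As an illustration of the contour-line rule in claim 2, the sketch below models the target road as an infinite line through two points and a building as a list of vertices, and returns the offset of the road contour line from the road. PRESET_WIDTH, the treatment of "edge line" as a pair of consecutive vertices, and every identifier are assumptions made for this example, not taken from the patent.

    import math

    PRESET_WIDTH = 15.0  # fallback offset when no building is found (assumed metres)

    def point_line_distance(p, a, b):
        """Perpendicular distance from point p to the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
        return num / math.hypot(bx - ax, by - ay)

    def contour_offset(road_a, road_b, building_vertices):
        """Offset (distance from the target road) of the road contour line.

        No building found  -> parallel lines at the preset width.
        Building found     -> parallel line through the point of the nearest
                              edge that lies farthest from the road."""
        if not building_vertices:
            return PRESET_WIDTH
        # Edge closest to the road: consecutive vertex pair with the smallest
        # minimum distance to the road (a simplification of "edge line").
        edges = list(zip(building_vertices, building_vertices[1:] + building_vertices[:1]))
        closest_edge = min(edges, key=lambda e: min(point_line_distance(v, road_a, road_b) for v in e))
        # The contour line passes through the edge point farthest from the road.
        return max(point_line_distance(v, road_a, road_b) for v in closest_edge)

    # Road along the x-axis; a building whose near face is 8-12 m away.
    print(contour_offset((0, 0), (100, 0), []))                                   # 15.0
    print(contour_offset((0, 0), (100, 0), [(20, 8), (40, 8), (40, 12), (20, 12)]))  # 8.0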
3. A method for identifying a POI, the method comprising:
acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information comprises: an acquisition position, an acquisition task direction, and an acquisition road;
judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data, and if the POI is located in the preset acquisition scene, marking the POI as a POI to be verified, wherein the preset acquisition scene is a parallel road acquisition scene;
wherein the step of judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data comprises:
drawing a perpendicular from the acquisition position of the POI to the acquisition road to obtain a foot point;
obtaining, from the background map data, a road that intersects an acquisition task vector of the POI as a target road, wherein the acquisition task vector takes the foot point as its starting point, the acquisition task direction as its direction, and a preset second length as its length;
judging whether an included angle between the straight line on which the target road lies and the straight line on which the acquisition road lies falls within a preset angle interval;
if not, searching the background map data for a building between the target road and the acquisition road;
and if no building exists, determining that the POI is located in the parallel road acquisition scene.
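The sketch below walks through the geometric steps of claim 3 up to the angle test: projecting the acquisition position onto the acquisition road, casting the acquisition task vector from the foot point, and keeping roads that the vector hits and whose angle to the acquisition road falls outside the preset interval. The building search between the two roads is omitted; SECOND_LENGTH, ANGLE_INTERVAL, and all names are illustrative assumptions.

    import math

    SECOND_LENGTH = 80.0            # preset second length of the acquisition task vector (assumed)
    ANGLE_INTERVAL = (60.0, 120.0)  # preset angle interval treated as "crossing-like" (assumed)

    def foot_of_perpendicular(p, a, b):
        """Foot point of the perpendicular from p onto the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        return (ax + t * dx, ay + t * dy)

    def segments_intersect(p1, p2, p3, p4):
        """True if segment p1-p2 properly intersects segment p3-p4 (no collinear handling)."""
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
        d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def line_angle(a, b, c, d):
        """Acute angle between the lines a-b and c-d, in degrees."""
        a1 = math.atan2(b[1] - a[1], b[0] - a[0])
        a2 = math.atan2(d[1] - c[1], d[0] - c[0])
        diff = abs(math.degrees(a1 - a2)) % 180.0
        return min(diff, 180.0 - diff)

    def parallel_road_candidate(acq_pos, task_dir_deg, acq_road, other_road):
        """Claim 3 up to the angle test: project onto the acquisition road,
        cast the task vector, and keep roads the vector hits whose angle to
        the acquisition road lies outside the preset interval."""
        foot = foot_of_perpendicular(acq_pos, acq_road[0], acq_road[1])
        rad = math.radians(task_dir_deg)
        tip = (foot[0] + SECOND_LENGTH * math.cos(rad), foot[1] + SECOND_LENGTH * math.sin(rad))
        if not segments_intersect(foot, tip, other_road[0], other_road[1]):
            return False
        ang = line_angle(acq_road[0], acq_road[1], other_road[0], other_road[1])
        return not (ANGLE_INTERVAL[0] <= ang <= ANGLE_INTERVAL[1])

    # Acquisition road along y = 0, a parallel road along y = 30, task direction along +y.
    print(parallel_road_candidate((10.0, -2.0), 90.0,
                                  ((0.0, 0.0), (100.0, 0.0)),
                                  ((0.0, 30.0), (100.0, 30.0))))  # True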
4. The method according to claim 3, wherein before searching the background map data for a building between the target road and the acquisition road, the method further comprises:
judging whether the target road and the acquisition road constitute a pair of up and down roads or a pair of main and auxiliary roads;
and if not, performing the operation of searching the background map data for a building between the target road and the acquisition road.
5. A method for identifying a POI, the method comprising:
acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information comprises: an acquisition position, an acquisition task direction, and an acquisition road;
judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data, and if the POI is located in the preset acquisition scene, marking the POI as a POI to be verified, wherein the preset acquisition scene is a parallel road acquisition scene;
wherein the step of judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data comprises:
drawing a perpendicular from the acquisition position of the POI to the acquisition road to obtain a foot point;
obtaining, from the background map data, a road that intersects an acquisition task vector of the POI as a target road, wherein the acquisition task vector takes the foot point as its starting point, the acquisition task direction as its direction, and a preset second length as its length;
judging whether an included angle between the straight line on which the target road lies and the straight line on which the acquisition road lies falls within a preset angle interval;
if the included angle is not within the preset angle interval, judging whether the target road and the acquisition road constitute a pair of up and down roads or a pair of main and auxiliary roads;
and if not, determining that the POI is located in the parallel road acquisition scene.
6. A method for identifying a POI, the method comprising:
acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information of the POI comprises: an acquisition position, an acquisition task direction, and an acquisition road;
judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data, and if the POI is located in the preset acquisition scene, marking the POI as a POI to be verified, wherein the preset acquisition scene is a multi-building acquisition scene;
wherein the step of judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data comprises:
searching the background map data for buildings within a preset third distance range around the acquisition position of the POI; if more than two buildings are found, merging and pruning the found buildings according to the acquisition task direction and the acquisition road to obtain retained buildings; and determining that the POI is located in the multi-building acquisition scene if the number of retained buildings is greater than or equal to 2.
7. The method according to claim 6, wherein merging and pruning the found buildings according to the acquisition task direction and the acquisition road to obtain the retained buildings specifically comprises:
taking, from the found buildings, the buildings located on the side of the acquisition road pointed to by the acquisition task direction as buildings to be screened;
taking, from the buildings to be screened, the buildings whose contour lines are parallel to the acquisition task direction as target buildings;
if more than two target buildings are arranged in sequence along the acquisition task direction, deleting the target buildings farther from the acquisition road and retaining the target building closest to the acquisition road;
merging retained target buildings that are adjacent to one another into one target building;
and taking the unmerged retained target buildings together with the target buildings obtained after merging as the retained buildings.
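To make the merge-and-prune step of claims 6 and 7 concrete, here is a heavily simplified sketch that assumes the acquisition road runs along y = 0, the acquisition task direction is +y, and building footprints are axis-aligned boxes (xmin, xmax, ymin, ymax); real footprints and oblique task directions would need full polygon geometry. The adjacency_gap threshold and all names are assumptions for illustration only.

    def prune_and_merge(buildings, adjacency_gap=2.0):
        """Return the retained buildings: for footprints stacked one behind
        the other along the task direction keep only the one closest to the
        road, then merge retained footprints that touch (gap <= adjacency_gap)."""
        # Keep, per overlapping x-extent, only the box nearest the road (smallest ymin).
        retained = []
        for b in sorted(buildings, key=lambda b: b[2]):          # nearest to the road first
            if not any(b[0] < r[1] and r[0] < b[1] for r in retained):
                retained.append(b)
        if not retained:
            return []
        # Merge retained boxes whose x-extents are adjacent within the gap.
        retained.sort(key=lambda b: b[0])
        merged = [retained[0]]
        for b in retained[1:]:
            last = merged[-1]
            if b[0] - last[1] <= adjacency_gap:                  # adjacent -> merge
                merged[-1] = (last[0], max(last[1], b[1]),
                              min(last[2], b[2]), max(last[3], b[3]))
            else:
                merged.append(b)
        return merged

    boxes = [(0, 10, 5, 15),    # front building
             (0, 10, 25, 35),   # behind the first one -> pruned
             (11, 20, 5, 15)]   # adjacent to the first one -> merged
    retained = prune_and_merge(boxes)
    print(retained)             # [(0, 20, 5, 15)]
    print(len(retained) >= 2)   # claim 6's multi-building test; False for this layout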
8. A POI identification device, the device comprising:
a data acquisition unit used for acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information comprises: an acquisition position and an acquisition direction;
a scene judging unit used for judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data acquired by the data acquisition unit;
a POI marking unit used for marking the POI as a POI to be verified if the scene judging unit determines that the POI is located in the preset acquisition scene, wherein the preset acquisition scene is an intersection acquisition scene;
wherein the scene judging unit comprises:
an intersection searching module used for searching the background map data for an intersection within a preset first distance range around the acquisition position of the POI;
a first target road determining module used for, when an intersection is found, judging whether an included angle between a POI vector of the POI and a road vector of a road connected to the intersection falls within a preset angle range, and if so, determining the road as a target road, wherein the POI vector takes the acquisition position of the POI as its starting point, the acquisition direction of the POI as its direction, and a preset first length as its length, and the road vector of the road takes the intersection as its starting point and the direction in which a vehicle drives out of the intersection along the road as its direction;
a road contour generation module used for searching the background map data for buildings within a preset second distance range on both sides of the target road and generating a road contour line of the target road according to the building search result;
and an intersection scene determining module used for determining that the POI is located in the intersection acquisition scene when the POI vector intersects the road contour line or lies within the area bounded by the road contour line.
9. A POI identification device, the device comprising:
a data acquisition unit used for acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information comprises: an acquisition position, an acquisition task direction, and an acquisition road;
a scene judging unit used for judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data acquired by the data acquisition unit;
a POI marking unit used for marking the POI as a POI to be verified if the scene judging unit determines that the POI is located in the preset acquisition scene, wherein the preset acquisition scene is a parallel road acquisition scene;
wherein the scene judging unit comprises:
a foot point acquisition module used for drawing a perpendicular from the acquisition position of the POI to the acquisition road to obtain a foot point;
a second target road determining module used for obtaining, from the background map data, a road that intersects an acquisition task vector of the POI as a target road, wherein the acquisition task vector takes the foot point as its starting point, the acquisition task direction as its direction, and a preset second length as its length;
a road included angle judging module used for judging whether an included angle between the straight line on which the target road lies and the straight line on which the acquisition road lies falls within a preset angle interval;
a first building searching module used for searching the background map data for a building between the target road and the acquisition road if the included angle is not within the preset angle interval;
and a parallel road scene determining module used for determining that the POI is located in the parallel road acquisition scene if no building is found.
10. A POI identification device, the device comprising:
a data acquisition unit used for acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information comprises: an acquisition position, an acquisition task direction, and an acquisition road;
a scene judging unit used for judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data acquired by the data acquisition unit;
a POI marking unit used for marking the POI as a POI to be verified if the scene judging unit determines that the POI is located in the preset acquisition scene, wherein the preset acquisition scene is a parallel road acquisition scene;
wherein the scene judging unit comprises:
a foot point acquisition module used for drawing a perpendicular from the acquisition position of the POI to the acquisition road to obtain a foot point;
a second target road determining module used for obtaining, from the background map data, a road that intersects an acquisition task vector of the POI as a target road, wherein the acquisition task vector takes the foot point as its starting point, the acquisition task direction as its direction, and a preset second length as its length;
a road included angle judging module used for judging whether an included angle between the straight line on which the target road lies and the straight line on which the acquisition road lies falls within a preset angle interval;
a road judging module used for judging whether the target road and the acquisition road constitute a pair of up and down roads or a pair of main and auxiliary roads if the included angle is not within the preset angle interval;
and a parallel road scene determining module used for determining that the POI is located in the parallel road acquisition scene when no pair of up and down roads or main and auxiliary roads is constituted.
11. A POI identification device, the device comprising:
a data acquisition unit used for acquiring background map data of a POI in a preset electronic map database according to acquisition information of the POI, wherein the acquisition information of the POI comprises: an acquisition position, an acquisition task direction, and an acquisition road;
a scene judging unit used for judging whether the POI is located in a preset acquisition scene based on the acquisition information of the POI and the background map data acquired by the data acquisition unit;
a POI marking unit used for marking the POI as a POI to be verified if the scene judging unit determines that the POI is located in the preset acquisition scene, wherein the preset acquisition scene is a multi-building acquisition scene;
wherein the scene judging unit comprises:
a second building searching module used for searching the background map data for buildings within a preset third distance range around the acquisition position of the POI;
a building screening module used for merging and pruning the found buildings according to the acquisition task direction and the acquisition road to obtain retained buildings if more than two buildings are found;
and a multi-building scene determining module used for determining that the POI is located in the multi-building acquisition scene if the number of retained buildings is greater than or equal to 2.
12. A processor, characterized in that the processor is configured to run a computer program, wherein the computer program, when run, performs the POI identification method according to any one of claims 1 to 7.
CN201811415643.7A 2018-11-26 2018-11-26 POI (Point of interest) identification method and device Active CN111220173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811415643.7A CN111220173B (en) 2018-11-26 2018-11-26 POI (Point of interest) identification method and device

Publications (2)

Publication Number Publication Date
CN111220173A CN111220173A (en) 2020-06-02
CN111220173B true CN111220173B (en) 2023-07-28

Family

ID=70827768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811415643.7A Active CN111220173B (en) 2018-11-26 2018-11-26 POI (Point of interest) identification method and device

Country Status (1)

Country Link
CN (1) CN111220173B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115658839A (en) * 2022-12-27 2023-01-31 深圳依时货拉拉科技有限公司 POI data mining method and device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902596A (en) * 2012-09-29 2013-01-30 北京百度网讯科技有限公司 Point of interest data verification method and point of interest data verification device
US8566029B1 (en) * 2009-11-12 2013-10-22 Google Inc. Enhanced identification of interesting points-of-interest
EP2975555A1 (en) * 2014-07-17 2016-01-20 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for displaying point of interest
CN106028445A (en) * 2016-07-04 2016-10-12 百度在线网络技术(北京)有限公司 Method and apparatus for determining positioning accuracy
CN106331639A (en) * 2016-08-31 2017-01-11 浙江宇视科技有限公司 Method and apparatus of automatically determining position of camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3908056B2 (en) * 2002-02-26 2007-04-25 アルパイン株式会社 Car navigation system

Also Published As

Publication number Publication date
CN111220173A (en) 2020-06-02

Similar Documents

Publication Publication Date Title
US10198632B2 (en) Survey data processing device, survey data processing method, and survey data processing program
US10032267B2 (en) Automating the assessment of damage to infrastructure assets
KR20190090393A (en) Lane determining method, device and storage medium
KR101689805B1 (en) Apparatus and method for reconstructing scene of traffic accident using OBD, GPS and image information of vehicle blackbox
US20170294036A1 (en) Supporting a creation of a representation of road geometry
CN108151750A (en) Localization method and device
CN105758413B (en) The method and apparatus of automation assessment yaw in navigation engine
KR102359306B1 (en) System for image analysis using artificial intelligence and method using the same
CN110321885A (en) A kind of acquisition methods and device of point of interest
JP2017102672A (en) Geographic position information specification system and geographic position information specification method
CN110175609A (en) Interface element detection method, device and equipment
CN112132853B (en) Method and device for constructing ground guide arrow, electronic equipment and storage medium
CN110175507A (en) Model evaluation method, apparatus, computer equipment and storage medium
CN116484036A (en) Image recommendation method, device, electronic equipment and computer readable storage medium
CN114252884A (en) Method and device for positioning and monitoring roadside radar, computer equipment and storage medium
CN111220173B (en) POI (Point of interest) identification method and device
US20220148216A1 (en) Position coordinate derivation device, position coordinate derivation method, position coordinate derivation program, and system
CN108898617A (en) A kind of tracking and device of target object
EP4250245A1 (en) System and method for determining a viewpoint of a traffic camera
CN106462750A (en) Method for ascertaining and providing a landmark for the vehicle to determine its own position
CN111951328A (en) Object position detection method, device, equipment and storage medium
JP2020144862A (en) Stone gravel detection system, stone gravel detection method and program
CN111143488B (en) POI position determining method and device
CN111504337B (en) POI orientation determining method and device
CN110763203A (en) Positioning method and device of urban component and vehicle-mounted mobile measurement system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant