CN112033396B - Method, apparatus, device, and medium for obtaining guide points around a point of interest - Google Patents


Info

Publication number
CN112033396B
CN112033396B (grant) · application CN202010942895.6A (CN202010942895A)
Authority
CN
China
Prior art keywords
images
point
image
interest
guide
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010942895.6A
Other languages
Chinese (zh)
Other versions
CN112033396A (en)
Inventor
王钦民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010942895.6A
Publication of CN112033396A
Application granted
Publication of CN112033396B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3453 Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3476 Special cost functions, i.e. other than distance or default speed limit of road segments using point of interest [POI] information, e.g. a route passing visible POIs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques

Abstract

The present disclosure provides a method for obtaining guide points around a point of interest, and relates to the field of computer technology, in particular to image processing and intelligent transportation. The method comprises the following steps: acquiring a plurality of images and their corresponding shooting positions; screening the images to obtain those of the plurality of images that contain guide point features; matching the screened images against the identifier of the point of interest to obtain images associated with the point of interest; clustering the obtained associated images by similarity to obtain image clusters; and generating a guide point for the point of interest using the shooting positions of the images in the obtained image clusters.

Description

Method, apparatus, device, and medium for obtaining guide points around a point of interest
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for obtaining a guidance point around a point of interest.
Background
Reaching the destination is the fundamental purpose of using navigation software, and providing an accurate parking point for each point of interest (POI) is an important part of that work. A better-placed navigation endpoint offers a more considerate navigation service, makes the software appear more intelligent, and improves the user experience.
Existing guide point mining relies mainly on manual labeling or trajectory mining. Manual labeling is inefficient and cannot be mass-produced, while trajectory mining adapts poorly to POIs with sparse trajectories, complex road networks, or multiple guide points.
Disclosure of Invention
According to an aspect of the present disclosure, a method for obtaining guide points around a point of interest may be provided. The method comprises the following steps: acquiring a plurality of images and their corresponding shooting positions; screening the images to obtain those of the plurality of images that contain guide point features; matching the screened images against the identifier of the point of interest to obtain images associated with the point of interest; clustering the obtained associated images by similarity to obtain image clusters; and generating a guide point for the point of interest using the shooting positions of the images in the obtained image clusters.
According to another aspect of the present disclosure, an apparatus for obtaining guide points around a point of interest may be provided. The apparatus comprises: an image acquisition unit for acquiring a plurality of images and their corresponding shooting positions; an image screening unit for screening the images to obtain those of the plurality of images that contain guide point features; a point of interest matching unit for matching the screened images against the identifier of the point of interest to obtain images associated with the point of interest; an image clustering unit for clustering the obtained associated images by similarity to obtain image clusters; and a guide point generating unit for generating a guide point for the point of interest using the shooting positions of the images in the obtained image clusters.
According to another aspect of the present disclosure, there may also be provided a computing device comprising: a processor; and a memory storing the program. The program comprises instructions which, when executed by a processor, cause the processor to perform the method for obtaining a guidance point around a point of interest according to the above.
According to another aspect of the present disclosure, there may also be provided a computer-readable storage medium storing a program comprising instructions which, when executed by a processor of a computing device, cause the computing device to perform the method for obtaining a guidance point around a point of interest according to the above.
According to another aspect of the present disclosure, there may also be provided a navigation system using the guidance points generated according to the above-described method for obtaining guidance points around a point of interest.
According to an aspect of the present disclosure, the method for obtaining guide points around a point of interest can generate more accurate guide points.
Drawings
FIG. 1 is a flow diagram of a method for obtaining guide points around a POI according to an embodiment of the present disclosure;
FIGS. 2(a) and 2(b) are schematic diagrams of locations of POIs and guide points, in accordance with some embodiments;
FIG. 3 is a flow diagram of a method for obtaining guide points around a POI, in accordance with another embodiment of the present disclosure;
FIG. 4 is a flow diagram of a method for obtaining guide points around a POI, in accordance with yet another embodiment of the present disclosure;
FIG. 5 is a flow diagram of two-stage clustering of images according to an embodiment of the present disclosure;
FIGS. 6(a) and 6(b) are example images to be processed to obtain guide points around a POI, in accordance with the present disclosure;
FIGS. 7(a)-7(c) are example reconstructed images for obtaining guide points around a POI, in accordance with further embodiments of the present disclosure;
FIG. 8 is a block diagram of an apparatus for obtaining guide points around a POI in accordance with an embodiment of the present disclosure; and
FIG. 9 is a schematic diagram of an exemplary computing device for obtaining guide points around a POI according to an embodiment of the present disclosure.
Detailed Description
In this context, POIs may be derived from map data, such as "XX building" or "XX glasses store". Guide points are navigation guide points in the vicinity of a POI, and may also be called an end point, parking point, departure point, or related point in the appropriate navigation scene. A common application of a guide point is the parking location guided to in navigation software, or the pickup point prompted in taxi software: a spot on a road where the POI can be seen or reached and where a car can pass or stop. Continuing the above examples, guide points might be "east side of the building", "north side of the building", "glasses store front", or "glasses store back". There may be one or more guide points around a POI.
A method for obtaining guide points around a POI according to an embodiment of the present disclosure is described below with reference to FIG. 1.
At step S11, a plurality of images and their corresponding shooting positions are acquired. The shooting position is the position at which an image was shot, for example the GPS coordinates recorded when the image was taken.
At step S12, the images are filtered to obtain images containing guide point features in the plurality of images.
At step S13, the filtered image is matched with the identification of the point of interest, obtaining an image associated with the point of interest.
At step S14, the obtained images associated with the interest points are clustered according to similarity to obtain image clusters.
At step S15, using the shooting positions of the images in the obtained image cluster, a guide point for the interest point is generated.
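Purely as an illustration of how these five steps compose, the following sketch wires them into a single pipeline. Every helper passed in (filter_guide_point_images, match_poi, cluster_by_similarity, pick_representative) is a hypothetical stand-in for the stages detailed below, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Photo:
    image_path: str
    lat: float  # shooting position latitude (e.g., from GPS)
    lon: float  # shooting position longitude

def generate_guide_points(photos, poi_name, filter_guide_point_images,
                          match_poi, cluster_by_similarity,
                          pick_representative):
    """Steps S11-S15 as one pipeline; every callable is a hypothetical stage."""
    candidates = filter_guide_point_images(photos)                  # S12
    associated = [p for p in candidates if match_poi(p, poi_name)]  # S13
    clusters = cluster_by_similarity(associated)                    # S14
    reps = [pick_representative(c) for c in clusters]               # S15
    return [(r.lat, r.lon) for r in reps]  # one guide point per cluster
```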
In the current navigation field, manually labeling navigation endpoints is time-consuming and cannot handle large volumes of data, while automated construction generally mines driving trajectories in combination with road network data and is therefore strongly dependent on trajectories. When trajectories are sparse (for example, few drivers park there) or noisy (for example, positioning drift, or a user who saw the guide point but did not stop in time), labels are inaccurate, mining the navigation endpoint becomes very difficult, and precision cannot be guaranteed. In contrast, the method of FIG. 1 generates guide points without depending on the road network or trajectories, and improves the accuracy of guide point determination.
FIGS. 2(a) and 2(b) illustrate examples of locations of POIs and guide points according to some embodiments. Specifically, FIG. 2(a) shows a single guide point around a POI, and FIG. 2(b) shows multiple guide points around a single POI. Although FIG. 2(b) depicts two guide points, the present disclosure is not limited thereto; there may be three or more.
An example method according to another embodiment of the present disclosure is described below in conjunction with FIG. 3.
At S31, a plurality of images and their corresponding shooting positions are acquired. The shooting position is the position at which an image was shot, for example the GPS coordinates recorded when the image was taken.
At S32, the images are screened by pattern recognition to obtain those containing guide point features. In particular, scene recognition may be performed on the images to filter out images that do not include guide point features before the plurality of images are matched against points of interest.
According to some embodiments, the guide point features include features identifying points of interest, features through which vehicles can pass, and features through which people can pass. Features identifying points of interest include, for example, signboards, door plaques, and store-name banners. Features through which vehicles can pass include, for example, the road surface, roads, and parked or moving vehicles. Features through which people can pass include, for example, doors, storefronts, steps, and sidewalks. Images containing guide point features can thus be screened accurately, facilitating the selection of guide points for navigation.
According to some embodiments, the guide point features include a sign, a road surface, and a door that follow a predetermined positional relationship. A sign, road, and door in a predetermined positional relationship accurately identify a storefront, which is a typical guide point scene. The predetermined positional relationship may be sign on top, door in the middle, and road surface at the bottom; it may also be the sign beside or in front of the door, and the present invention is not limited thereto. For example, FIG. 6(a) is an example image containing guide point features, while FIG. 6(b) is an image that does not, such as a store interior.
In order to accurately identify pictures that conform to the guide point features, a model can be trained in advance. The model is trained with machine learning, preferably deep learning. For example, the model may be trained with pictures that contain doors, ground, and signs in the predetermined pattern as positive examples, and images without such scenes as negative examples. Machine learning based recognition means, such as object detection, may be employed. Besides machine learning, conventional algorithms that extract feature points from an image and compare them against thresholds to identify features, or other means of recognizing an image scene or determining whether an image contains predetermined features, may also be used; the present disclosure is not limited to a specific image screening or scene recognition algorithm.
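A minimal sketch of such a screening model in use follows, assuming a binary scene classifier has already been trained. The checkpoint name guide_point_classifier.pt and the convention that class 1 denotes a guide point scene are assumptions; the disclosure does not prescribe any specific architecture.

```python
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def contains_guide_point_features(image_path, model):
    """Return True when the (assumed) binary classifier flags the scene."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)         # batch of one: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)                    # classifier output: (1, 2)
    return logits.argmax(dim=1).item() == 1  # assumed: class 1 = guide scene

# Hypothetical usage with an assumed fine-tuned checkpoint:
# model = torch.load("guide_point_classifier.pt").eval()
# kept = [p for p in image_paths if contains_guide_point_features(p, model)]
```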
Next, at step S33, the filtered image is matched with the identifier of the interest point, and an image containing the identifier of the interest point in the filtered image is obtained. The image containing the point of interest identification is the image associated with the particular point of interest. According to some embodiments, the identification of the point of interest is a point of interest name, and obtaining the image of the plurality of images associated with the point of interest comprises, for each of the filtered images: recognizing characters in the image; and determining that the image is associated with the point of interest if the identified text includes the point of interest name.
Where the identifier of the point of interest is a POI name, step S33 may be called the text matching stage, which can be carried out as follows. First, the characters contained in the image are detected; OCR may be applied to the image to obtain its text. The coordinates of each recognized character can be recorded at the same time, for use in the similarity matching described below. Next, the recognition result is matched against the POI name, for example by determining whether the text recognized in the image includes the designated POI name. The highest character matching degree is then recorded; optionally, the OCR anchor position, that is, the in-image position of the characters achieving the highest matching degree, is also recorded, for example as L(x, y). Finally, the matching result is compared with a preset matching-degree threshold Th: for each of the plurality of images, if the highest matching value is greater than Th, the image is considered successfully matched with the point of interest, and the image is retained for the next operation.
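A minimal sketch of this text matching stage, assuming any OCR engine that returns text together with coordinates; the threshold value Th = 0.8 and the use of difflib's ratio as the matching degree are illustrative choices, not prescribed by the disclosure.

```python
from difflib import SequenceMatcher

TH = 0.8  # matching-degree threshold Th (illustrative value)

def match_image_to_poi(ocr_results, poi_name):
    """ocr_results: list of (text, x, y) tuples from any OCR engine."""
    best_score, anchor = 0.0, None
    for text, x, y in ocr_results:
        score = SequenceMatcher(None, text, poi_name).ratio()
        if score > best_score:
            best_score, anchor = score, (x, y)  # OCR anchor L(x, y)
    return best_score > TH, best_score, anchor

# matched, score, L = match_image_to_poi([("XX Building", 120, 40)], "XX Building")
```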
The POI name may be known data provided by a data center and may include a formal name of the POI, an alias name of the POI, or other text that can identify the POI.
The identification of the POI may be, in addition to text, other characters, patterns, shapes, logos, etc. that the user can recognize, select, or enter, and is not limited thereto herein.
At step S34, the obtained images associated with the interest points are clustered according to similarity to obtain image clusters.
According to some embodiments, the obtained images are clustered into multiple clusters, and accordingly multiple guide points are generated in the subsequent step. This enables identification of multiple guide points around a single point of interest.
According to some embodiments, clustering the obtained images associated with the point of interest by similarity includes clustering by the similarity of the text in the images. Not least because character recognition is insensitive to shooting angle wherever text information is rich, performing similarity matching on text yields more accurate matching results.
According to some embodiments, this step is performed using redrawn images. Specifically, for each obtained image associated with the point of interest, the characters in the image are recognized, the coordinates of each character are recorded, and the recognized characters are drawn into a blank image at the corresponding coordinates. Similarity clustering is then performed on the redrawn images, further improving matching efficiency and accuracy.
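The redrawing step might look like the sketch below; the canvas size is an assumption, and a CJK-capable font (via ImageFont.truetype) would be needed to render Chinese characters faithfully.

```python
from PIL import Image, ImageDraw

def redraw_text_image(ocr_results, size=(640, 480)):
    """ocr_results: list of (text, x, y); returns a text-only image."""
    canvas = Image.new("L", size, color=255)  # blank white canvas
    draw = ImageDraw.Draw(canvas)
    for text, x, y in ocr_results:
        draw.text((x, y), text, fill=0)  # draw text at its recorded coordinates
    return canvas

# Clustering then compares these text-layout images instead of raw photos,
# which is what makes the matching robust to shooting angle.
```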
According to some embodiments, clustering the obtained images associated with the point of interest by similarity further comprises: if the distance between the shooting positions of the images in an obtained cluster does not satisfy a threshold distance, clustering those images a second time. This secondary clustering can be performed on local regions of the images, improving the precision of picture matching.
According to some embodiments, the local region on which secondary clustering is performed is the part of the image near the position that matches the identifier of the point of interest. Refined matching of the local region near the point of interest identifier further improves matching precision at a small computational cost.
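The trigger for secondary clustering can be sketched as below, assuming shooting positions are GPS latitude/longitude pairs; the 50 m threshold is an illustrative assumption.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) pairs in degrees."""
    r = 6371000.0
    phi1, phi2 = math.radians(p[0]), math.radians(q[0])
    dphi = math.radians(q[0] - p[0])
    dlmb = math.radians(q[1] - p[1])
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

THRESHOLD_M = 50.0  # illustrative threshold distance

def needs_secondary_clustering(positions):
    """positions: shooting positions of one cluster's images."""
    spread = max((haversine_m(p, q) for p in positions for q in positions),
                 default=0.0)
    return spread > THRESHOLD_M
```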
At step S35, a guidance point for the POI is generated using the captured positions of the images in the obtained image cluster.
According to some embodiments, generating the guide point using the shooting positions of the images in the obtained image clusters includes: selecting a suitable image from each image cluster and taking its shooting position as the coordinates of the corresponding guide point. The guide point position can thus be determined from the shooting position of an image, rather than from road network or trajectory data, and without complex computation.
According to some embodiments, the selected suitable image is one of: the image with the highest matching degree with the guide point features, the image with the highest matching degree with the point of interest identifier, or the image in the corresponding cluster shot at the most head-on angle. Selecting an appropriate image makes the guide point location more accurate, and the present disclosure is not limited to these choices. For example, the guide point coordinates may also be taken from the center image of the cluster, from the average of the image positions in the cluster, or from another position representative of where the guide point features typically appear in the cluster.
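For illustration, choosing the representative by the highest point of interest matching score might look as follows; any of the other selection rules above could be substituted.

```python
def guide_point_for_cluster(cluster):
    """cluster: list of dicts with 'lat', 'lon', and 'match_score' keys."""
    best = max(cluster, key=lambda img: img["match_score"])
    return best["lat"], best["lon"]  # shooting position becomes the guide point

# guide_points = [guide_point_for_cluster(c) for c in image_clusters]
```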
Generating the guide point may further include treating the guide points generated for the point of interest as candidates and then filtering the candidates to obtain the final guide points. According to some embodiments, filtering the candidate guide points comprises: generating a radiation range for the point of interest from its coordinates, and filtering out unexpected guide points by combining the road network data with that radiation range. The coordinates of the POI may be known data provided by a data center, for example the GPS coordinates of the POI, and may be display coordinates; where the POI is a building occupying a square area, for instance, the display coordinates may be the geographic coordinates of its center point. The road network data is the road network matched against the image shooting coordinates and the POI coordinates. Unexpected guide points are, for example, candidates far from the POI coordinates, or candidates inconsistent with the road network data, such as candidates that cannot be projected onto the nearest road or that are equidistant from several roads and would make the guidance ambiguous.
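The distance part of this filtering might be sketched as follows, with an assumed radiation radius; the road network checks (projection to the nearest road, ambiguity between several roads) are omitted here.

```python
RADIATION_RANGE_M = 200.0  # assumed POI radiation radius, in metres

def filter_candidates(candidates, poi_position, distance_m):
    """candidates: list of (lat, lon) guide point candidates;
    distance_m: any metric distance function, e.g. the haversine
    helper sketched earlier."""
    return [c for c in candidates
            if distance_m(c, poi_position) <= RADIATION_RANGE_M]
```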
According to some embodiments, the plurality of images used by the methods of the present disclosure are user-generated images, also referred to as UGC (user-generated content). Examples of UGC include storefront and interior pictures uploaded and maintained by shops and merchants, ratings, and error-correction images photographed and uploaded by users in map applications, although the disclosure is not limited thereto. Using user-generated images simplifies data acquisition.
FIG. 4 is a flow diagram of a method for generating a guide point according to another embodiment of the present disclosure.
At S41, an image, such as a user-generated image, is acquired.
At S42, pattern recognition, in particular scene recognition, is performed on the images to screen out those containing guide point features. Here, scene recognition may be a deep learning classification model trained on UGC images. During training, an image containing a signboard, a main door, and ground content in a certain positional relationship is treated as one overall pattern: the road surface indicates that the road network is present and vehicles can pass, the signboard indicates that the POI can be seen, and the door indicates that the POI can be entered. Images following such a positional relationship exhibit the guide point features required for navigation. Images conforming to the pattern are classified together and stored; the rest are filtered out.
At S43, the image containing the guide point features is retained.
At S44, OCR is performed on the images, and the set of images containing the designated POI name is obtained. In this step the images may also be screened preliminarily: for example, if the recognized text is too small or too far off-center, or is horizontal but with a large difference in character size between its left and right ends, it may be deemed not to match the characteristics of a point of interest identifier (e.g., a signboard), and such pictures can likewise be filtered out.
At S45, an image matching the POI name is obtained.
At S46, similarity matching and clustering are performed on the images. Image similarity matching aims to select images with the same or similar shot content while remaining insensitive to the shooting angle. This step may be ordinary image similarity matching, or the two-stage matching process described below with reference to FIG. 5.
At S47, candidate point coordinates are generated.
At S48, a final guidance point is obtained by filtering through the POI display coordinates, the road network data, and the like.
FIG. 5 is a flow diagram of two-stage clustering of images, according to some embodiments.
At S51, the images obtained in the previous steps, which contain guide point features and are associated with the specific point of interest, are collected for similarity clustering.
According to some embodiments, this step is performed using redrawn images. Specifically, the method comprises: recognizing the text content in the plurality of images and recording the corresponding text positions; drawing the detected characters into blank images at those positions; and performing similarity clustering on the drawn images. This further improves matching efficiency and accuracy. If the images have already been screened against POI names, the text content and positions may have been recorded in a previous step; otherwise, they are recognized and recorded in this step.
At S52, first-stage similarity matching is performed. This may be similarity matching over the whole image, or simple character matching over the text in the image. FIG. 7 illustrates simple text matching: the words "Super 8 Hotel" appear below the "tobacco and liquor supermarket" sign in both FIGS. 7(a) and 7(b), whereas no "Super 8 Hotel" appears around the "tobacco and liquor supermarket" in FIG. 7(c), indicating that FIG. 7(c) likely corresponds to a different entrance of the supermarket from the one in FIGS. 7(a) and 7(b). As another example, if a bank sign appears to the left of the point of interest identifier "XX building" in pictures 1 and 2, while in picture 3 the left side shows a hotel or barbershop sign, or no corresponding text or symbol within a certain margin, then pictures 1 and 2 can be considered similar and shot at the same location, while picture 3 was not. The present disclosure is not limited in this respect, and other algorithms for comparing image or text similarity may also apply.
An example clustering result is C = {A, B}, where C is the set of all qualifying pictures after screening, and A and B are image clusters obtained from image similarity. Note that A and B here are merely examples; the present disclosure is not limited to two clusters, and three or more clusters are possible depending on the actual data, for example in a scene where one point of interest has multiple guide points.
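As one hedged illustration of first-stage clustering by text, the greedy sketch below groups images whose recognized word sets overlap strongly; the Jaccard measure and the 0.5 threshold are assumptions, not the patented algorithm.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_text(images, threshold=0.5):
    """images: list of (image_id, [recognized words]); greedy clustering."""
    clusters = []
    for img_id, words in images:
        for cluster in clusters:
            if jaccard(words, cluster[0][1]) >= threshold:  # compare to seed
                cluster.append((img_id, words))
                break
        else:
            clusters.append([(img_id, words)])  # start a new cluster
    return clusters

# C = cluster_by_text([("a1", ["supermarket", "hotel"]),
#                      ("a2", ["supermarket", "hotel"]),
#                      ("b1", ["supermarket", "pharmacy"])])  # -> clusters A, B
```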
At S53, it is determined whether clusters A and B need to be subdivided by position, that is, whether the shooting positions within each cluster are sufficiently concentrated.
If no subdivision is needed, for example because the candidate points generated from the images in cluster A lie close together and the same holds for cluster B, the flow proceeds to S56: the image clustering step is complete, and the next step, such as guide point generation, is performed. For instance, it may be determined at S53 that the images in cluster A are roughly centered at position a and those in cluster B at position b; or that the distance between the shooting positions of the images in cluster A is smaller than a threshold distance la, and the distance between the shooting positions of the images in cluster B is smaller than a threshold distance lb. In such cases the clustering is considered complete, and A and B correspond to two guide points, or guide point candidates, for the point of interest; A may correspond, say, to images of the store's front door and B to images of its back door.
Otherwise, if the positions within a cluster are too dispersed, the method proceeds to step S54 to perform second-stage image similarity matching on clusters A and B respectively.
Specifically, at step S54 a local region of the image is cropped. For example, in the case of text matching, the crop may be centered on the OCR anchor L(x, y), capturing the most information-rich part of the image. Either the original image or the redrawn image may be cropped.
Thereafter, at step S55, similarity matching is performed on the cropped partial images, yielding, for example, the clustering result C = {A(A1, A2), B(B1, B2)}. The flow then proceeds to S56 and the next step.
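The local crop around the OCR anchor might be sketched as follows; the half-size of the crop window is an illustrative assumption, and either the original or the redrawn image can be passed in.

```python
from PIL import Image

def crop_around_anchor(image, anchor, half_size=100):
    """image: PIL.Image; anchor: OCR anchor L(x, y); returns the local crop."""
    x, y = anchor
    box = (max(0, x - half_size), max(0, y - half_size),
           min(image.width, x + half_size), min(image.height, y + half_size))
    return image.crop(box)

# Second-stage matching then runs on crop pairs instead of whole images.
```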
This method adopts two-stage matching, in particular adding a first-stage matching process ahead of conventional image matching. The advantage is that conventional feature extraction alone is sensitive to the shooting angle and prone to mismatches, whereas near a point of interest, and especially where text information is rich, character recognition is insensitive to the shooting angle; the first-stage matching can therefore simplify the matching logic while still producing high-precision clustering results.
FIG. 8 is a block diagram illustrating an apparatus for obtaining guide points around a point of interest, according to an example embodiment.
The apparatus 800 for obtaining a guide point around a point of interest according to this exemplary embodiment may include: an image acquisition unit 801, an image filtering unit 802, an interest point matching unit 803, an image clustering unit 804, and a guidance point generating unit 805. The image acquisition unit 801 is configured to acquire a plurality of images and their corresponding shooting positions. The image filtering unit 802 is configured to filter the images to obtain images containing guide point features in the plurality of images. The point of interest matching unit 803 is configured to match the filtered image with an identification of a point of interest, obtaining an image associated with said point of interest. The image clustering unit 804 is configured to cluster the obtained images associated with the interest points according to similarity to obtain image clusters. The guide point generating unit 805 is configured to generate a guide point for the point of interest using the shooting positions of the images in the obtained image cluster. Thus, the generation of the guidance point can be realized without depending on the road network and the trajectory information.
It should be understood that the foregoing description of the method steps in connection with fig. 1-7 applies equally to the elements of fig. 8 that perform the corresponding method steps and are not repeated here.
According to an aspect of the present disclosure, there is also provided a computing device, which may include: a processor; and a memory storing a program comprising instructions which, when executed by the processor, cause the processor to perform the above-described method for obtaining a guide point around a point of interest.
According to another aspect of the present disclosure, there is also provided a computer-readable storage medium storing a program, the program comprising instructions that, when executed by a processor of a computing device, cause the computing device to perform the above-described method for obtaining a guidance point around a point of interest.
In addition, the guide point generated according to the present disclosure may be used by any navigation system that needs to perform a guide point location function of a point of interest.
Referring to FIG. 9, a computing device 2000 will now be described. The computing device 2000 may be any machine configured to perform processing and/or computing, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a robot, a smart phone, an on-board computer, or any combination thereof. The methods according to the present disclosure may be implemented in whole or at least in part by computing device 2000 or similar devices or systems.
Computing device 2000 may include elements to connect with bus 2002 (possibly via one or more interfaces) or to communicate with bus 2002. For example, computing device 2000 may include a bus 2002, one or more processors 2004, one or more input devices 2006, and one or more output devices 2008. The one or more processors 2004 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., special processing chips). Input device 2006 may be any type of device capable of inputting information to computing device 2000 and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control. Output device 2008 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The computing device 2000 may also include or be connected with a non-transitory storage device 2010, which may be any storage device that is non-transitory and that enables data storage, and may include, but is not limited to, a magnetic disk drive, an optical storage device, solid state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, an optical disc or any other optical medium, a ROM (read only memory), a RAM (random access memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code. The non-transitory storage device 2010 may be removable from the interface. The non-transitory storage device 2010 may have data/programs (including instructions)/code for implementing the above-described methods and steps. The computing device 2000 may also include a communication device 2012. The communication device 2012 may be any type of device or system that enables communication with external devices and/or with a network and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing device 2000 may also include a working memory 2014, which may be any type of working memory that can store programs (including instructions) and/or data useful for the operation of the processor 2004, and may include, but is not limited to, random access memory and/or read only memory devices.
Software elements (programs) may reside in the working memory 2014, including, but not limited to, an operating system 2016, one or more application programs 2018, drivers, and/or other data and code. Instructions for performing the above-described methods and steps may be included in the one or more application programs 2018, and the above-described methods may be implemented by the processor 2004 reading and executing the instructions of the one or more application programs 2018. More specifically, steps S11 to S15 of the above-described method can be realized, for example, by the processor 2004 executing an application 2018 holding the instructions of steps S11 to S15; other steps of the method may similarly be implemented by the processor 2004 executing an application 2018 holding the instructions for the respective steps. Executable code or source code of the instructions of the software elements (programs) may be stored in a non-transitory computer-readable storage medium (such as the storage device 2010 described above) and, upon execution, may be stored in the working memory 2014 (possibly after compilation and/or installation). Executable code or source code of the instructions of the software elements (programs) may also be downloaded from a remote location.
It will also be appreciated that various modifications may be made according to specific requirements. For example, customized hardware might be used, and/or particular elements might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or a hardware description language such as Verilog, VHDL, or C++, using logic and algorithms according to the present disclosure.
It should also be understood that the foregoing method may be implemented in a server-client mode. For example, a client may receive data input by a user and send the data to a server. The client may also receive data input by the user, perform part of the processing in the foregoing method, and transmit the data obtained by the processing to the server. The server may receive data from the client and perform the aforementioned method or another part of the aforementioned method and return the results of the execution to the client. The client may receive the results of the execution of the method from the server and may present them to the user, for example, through an output device.
It should also be understood that the components of computing device 2000 may be distributed across a network. For example, some processes may be performed using one processor while other processes may be performed by another processor that is remote from the one processor. Other components of the computing device 2000 may also be similarly distributed. As such, the computing device 2000 may be interpreted as a distributed computing system that performs processing at multiple locations. For example, the computing device 2000 may be implemented as part of a cloud platform. The cloud platform abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud. The resources may include applications and/or data that may be used when executing computer processes on servers remote from the computing device 2000. Resources may also include services provided over the internet and/or over a subscriber network such as a cellular or Wi-Fi network.
According to the technical scheme of the embodiment of the disclosure, the guide point can be generated without depending on road network and track data.
While embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely illustrative embodiments or examples, and that the scope of the invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents, and the steps may be performed in an order different from that described in the present disclosure. Further, the various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (21)

1. A method for obtaining a guide point around a point of interest, comprising the steps of:
acquiring a plurality of images and corresponding shooting positions thereof;
screening the images to obtain a plurality of images containing guide point features in the plurality of images;
matching the screened images with the identification of the interest point to obtain a plurality of images associated with the interest point;
clustering the obtained images associated with the interest points according to similarity to obtain at least one image cluster, wherein each image cluster comprises at least one image; and
generating corresponding at least one guide point for the interest point using the obtained shooting position of the image of each of the at least one image cluster.
2. The method of claim 1, wherein a plurality of guide points are generated.
3. The method of claim 1, wherein screening the images comprises obtaining images containing guide point features by pattern recognition.
4. The method of claim 1, wherein the guide point features comprise at least one of: identifying characteristics of points of interest, characteristics that a vehicle can pass through, and characteristics that a person can pass through.
5. The method of claim 1, wherein the guide point features include signs, road surfaces, and doors that follow a predetermined positional relationship.
6. The method of claim 1,
wherein the identification of the point of interest is a point of interest name,
wherein matching the filtered images with the identification of the point of interest, obtaining the image associated with the point of interest comprises, for each of the filtered images:
recognizing characters in the image;
if the identified text includes the point of interest name, determining that the image is associated with the point of interest.
7. The method of any one of claims 1-6, wherein clustering the obtained images associated with the points of interest by similarity to obtain clusters of images comprises: and clustering through the similarity of texts in the images.
8. The method of claim 7, wherein clustering by similarity of text in images comprises:
for each of the obtained images associated with the point of interest,
recognizing the text in the image,
recording the coordinates of each character in the image, and
drawing the recognized characters into a blank image according to the corresponding coordinates; and
performing similarity clustering on the redrawn images.
9. The method of any of claims 1-6, wherein clustering the obtained images associated with the points of interest by similarity to obtain image clusters further comprises:
and if the distance between the shooting positions of the images in the obtained image clusters does not meet the threshold distance, performing secondary clustering on the images, wherein the secondary clustering is similarity clustering performed on local positions in the images.
10. The method of claim 9, wherein the local location is a local location near a location in the image that matches the identification of the point of interest.
11. The method of any of claims 1-6, wherein generating a guide point for the point of interest using the capture locations of the images in the obtained image cluster comprises:
and selecting a proper image from each image cluster, and taking the shooting position of the image as the coordinate of the corresponding guide point.
12. The method of claim 11, wherein the suitable image is one of: the image with the highest matching degree with the guide point features, the image with the highest matching degree with the point of interest identifier, or the image in the corresponding image cluster shot at the most head-on angle.
13. The method of claim 1, further comprising: generating a radiation range of the point of interest according to the coordinates of the point of interest, and filtering out guide points that do not conform to the road network data and the radiation range of the point of interest.
14. The method of claim 1, wherein the plurality of images are user-generated images.
15. An apparatus for obtaining guide points around a point of interest, comprising:
the image acquisition unit is used for acquiring a plurality of images and corresponding shooting positions thereof;
the image screening unit is used for screening the images to obtain a plurality of images containing guide point features in the plurality of images;
the interest point matching unit is used for matching the screened images with the identification of the interest point to obtain a plurality of images associated with the interest point;
the image clustering unit is used for clustering the obtained images associated with the interest points according to the similarity so as to obtain at least one image cluster, and each image cluster comprises at least one image; and
a guide point generating unit configured to generate at least one corresponding guide point for the interest point using a photographing position of an image of each of the obtained at least one image cluster.
16. The apparatus of claim 15, wherein filtering the images comprises obtaining images containing guide point features by pattern recognition.
17. The apparatus of claim 15, wherein the guide point features comprise at least one of: identifying characteristics of points of interest, characteristics that a vehicle can pass through, and characteristics that a person can pass through.
18. A computing device, comprising:
a processor; and
a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-14.
19. A computer readable storage medium storing a program, the program comprising instructions that when executed by a processor of a computing device cause the computing device to perform the method of any of claims 1-14.
20. A navigation system, wherein the navigation system uses guide points generated according to the method of any one of claims 1-14.
21. A computer program product comprising computer instructions which, when executed by a processor, implement the method according to any one of claims 1-14.
CN202010942895.6A 2020-09-09 2020-09-09 Method, apparatus, device, and medium for obtaining guide points around a point of interest Active CN112033396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010942895.6A CN112033396B (en) 2020-09-09 2020-09-09 Method, apparatus, device, and medium for obtaining guide points around a point of interest

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010942895.6A CN112033396B (en) 2020-09-09 2020-09-09 Method, apparatus, device, and medium for obtaining guide points around a point of interest

Publications (2)

Publication Number Publication Date
CN112033396A CN112033396A (en) 2020-12-04
CN112033396B (en) 2022-12-16

Family

ID=73583989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010942895.6A Active CN112033396B (en) 2020-09-09 2020-09-09 Method, apparatus, device, and medium for obtaining guide points around a point of interest

Country Status (1)

Country Link
CN (1) CN112033396B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651393B (en) * 2020-12-24 2024-02-06 北京百度网讯科技有限公司 Method, device, equipment and storage medium for processing interest point data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8577604B1 (en) * 2010-07-07 2013-11-05 Google Inc. System and method of determining map coordinates from images

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011155936A1 (en) * 2010-06-10 2011-12-15 Tele Atlas North America Inc. System and method for determining locations within points of interest
JP5808932B2 (en) * 2011-04-11 2015-11-10 株式会社ナビタイムジャパン Navigation system, navigation method, and program
JP2014164460A (en) * 2013-02-25 2014-09-08 Toyota Mapmaster Inc Guidance point setting device, method, computer program for setting guidance point and recording medium recording computer program
CN107292302B (en) * 2016-03-31 2021-05-14 阿里巴巴(中国)有限公司 Method and system for detecting interest points in picture
US10352718B2 (en) * 2016-09-23 2019-07-16 Apple Inc. Discovering points of entry to a location
CN110019599A (en) * 2017-10-13 2019-07-16 阿里巴巴集团控股有限公司 Obtain method, system, device and the electronic equipment of point of interest POI information
CN110321885A (en) * 2018-03-30 2019-10-11 高德软件有限公司 A kind of acquisition methods and device of point of interest
CN110727816A (en) * 2018-06-29 2020-01-24 百度在线网络技术(北京)有限公司 Method and device for determining interest point category
CN109253733B (en) * 2018-10-30 2020-12-29 百度在线网络技术(北京)有限公司 Real-time navigation method, device, equipment and medium
CN111488771B (en) * 2019-01-29 2023-06-30 阿里巴巴集团控股有限公司 OCR hooking method, device and equipment
CN110781413B (en) * 2019-08-28 2024-01-30 腾讯大地通途(北京)科技有限公司 Method and device for determining interest points, storage medium and electronic equipment


Also Published As

Publication number Publication date
CN112033396A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
US10007867B2 (en) Systems and methods for identifying entities directly from imagery
CN110226186B (en) Method and device for representing map elements and method and device for positioning
US8374390B2 (en) Generating a graphic model of a geographic object and systems thereof
US10606824B1 (en) Update service in a distributed environment
CN111931664A (en) Mixed note image processing method and device, computer equipment and storage medium
Joshi et al. Comparing random forest approaches to segmenting and classifying gestures
US11392797B2 (en) Method, apparatus, and system for filtering imagery to train a feature detection model
US20210097103A1 (en) Method and system for automatically collecting and updating information about point of interest in real space
WO2016149918A1 (en) Determining of geographical position of user
US8761435B2 (en) Detecting geographic features in images based on invariant components
US10515293B2 (en) Method, apparatus, and system for providing skip areas for machine learning
CN102388392A (en) Pattern recognition device
CN110647886A (en) Interest point marking method and device, computer equipment and storage medium
US20170336215A1 (en) Classifying entities in digital maps using discrete non-trace positioning data
Hakim et al. Implementation of an image processing based smart parking system using Haar-Cascade method
CN113159024A (en) License plate recognition technology based on improved YOLOv4
US20120093395A1 (en) Method and system for hierarchically matching images of buildings, and computer-readable recording medium
CN112033396B (en) Method, apparatus, device, and medium for obtaining guide points around a point of interest
CN111859002A (en) Method and device for generating interest point name, electronic equipment and medium
Kiew et al. Vehicle route tracking system based on vehicle registration number recognition using template matching algorithm
Hazelhoff et al. Exploiting street-level panoramic images for large-scale automated surveying of traffic signs
CN109523570A (en) Beginning parameter transform model method and device
CN113159146A (en) Sample generation method, target detection model training method, target detection method and device
KR20140137254A (en) Terminal, server, system and method for providing location information using character recognition
CN111291758B (en) Method and device for recognizing seal characters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant