CN112146676B - Information navigation method, device, equipment and storage medium - Google Patents

Information navigation method, device, equipment and storage medium

Info

Publication number
CN112146676B
CN112146676B (granted publication of application CN202010981256.0A)
Authority
CN
China
Prior art keywords
image
interest
point
information
identification information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010981256.0A
Other languages
Chinese (zh)
Other versions
CN112146676A (en)
Inventor
刘任
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010981256.0A priority Critical patent/CN112146676B/en
Publication of CN112146676A publication Critical patent/CN112146676A/en
Application granted granted Critical
Publication of CN112146676B publication Critical patent/CN112146676B/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3679Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities
    • G01C21/3682Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities output of POI information on a road map

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The disclosure relates to an information navigation method, device, equipment and storage medium. The method comprises: in response to receiving instruction information for showing points of interest, showing at least one point of interest on an electronic map, wherein the at least one point of interest is determined based on images collected by a normally-open image collection device; in response to detecting a target operation of selecting a target point of interest from the at least one point of interest, determining navigation information from a navigation start point to the target point of interest; and displaying the navigation information based on the electronic map. Based on the instruction information, the method can display points of interest determined in advance from images acquired by the normally-open image acquisition device, and based on the target operation, it can determine navigation information for reaching the target point of interest. It can thereby quickly and accurately determine the points of interest a user prefers, provide the user with navigation information to those points, and meet the user's needs.

Description

Information navigation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to an information navigation method, apparatus, device, and storage medium.
Background
With the development of mobile terminal technology, mobile terminals such as smartphones provide navigation, calling, and media playback functions and are equipped with cameras for taking photos and videos. In the related art, information navigation can be realized based on technologies such as GPS (Global Positioning System), Wi-Fi (Wireless Fidelity), and BT (Bluetooth). However, how to quickly and accurately determine a user's preferred point of interest based on an Always On image capturing device mounted on a mobile terminal, and provide the user with navigation information to that point of interest, remains an open problem.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide an information navigation method, apparatus, device and storage medium, so as to solve the defects in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided an information navigation method applied to an electronic device having a normally-open image capturing device, the method including:
in response to receiving instruction information for showing the interest points, showing at least one interest point on an electronic map, wherein the at least one interest point is determined based on the images collected by the normally-open image collection device;
in response to detecting a target operation of selecting a target interest point from the at least one interest point, determining navigation information from a navigation start point to the target interest point;
and displaying the navigation information based on the electronic map.
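The three steps of the first aspect can be sketched as follows. This is a minimal illustration only: the disclosure specifies no implementation, and all names here (PointOfInterest, show_points_of_interest, navigate) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    # A point of interest derived from an always-on camera image (hypothetical schema).
    name: str
    lat: float
    lon: float

def show_points_of_interest(pois):
    # Step 1: in response to the instruction information, render each stored
    # point of interest on the electronic map (stubbed as marker strings).
    return [f"marker:{p.name}@({p.lat},{p.lon})" for p in pois]

def navigate(start, target):
    # Step 2: on the target operation, determine navigation information from
    # the navigation start point to the selected point of interest. A real
    # implementation would query a routing service; this returns a trivial
    # two-point route for illustration.
    return {"from": start, "to": (target.lat, target.lon),
            "route": [start, (target.lat, target.lon)]}

pois = [PointOfInterest("XX Shop", 39.91, 116.40)]
markers = show_points_of_interest(pois)   # shown after the instruction info arrives
info = navigate((39.90, 116.39), pois[0]) # computed after the target operation
```

Step 3, displaying the navigation information, would draw `info["route"]` onto the same map.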
In an embodiment, before determining the navigation information from the navigation start point to the target point of interest, the method further includes:
acquiring an image of a current scene acquired by a normally open image acquisition device;
performing scene recognition on the current scene based on the image of the current scene to obtain a scene recognition result;
and determining a set position in the current scene as a navigation starting point based on the scene recognition result.
In an embodiment, the scene recognition of the current scene based on the image of the current scene includes:
performing character recognition on the image of the current scene to obtain an identification information recognition result of a corresponding place in the image of the current scene;
and identifying the current scene based on the identification information identification result of the place.
In an embodiment, in case the electronic device is in an indoor location, the method further comprises:
determining identification information and positioning information of the indoor place based on the image acquired by the normally open type image acquisition device;
acquiring pre-stored related information of the indoor places based on the identification information and the positioning information of the indoor places, wherein the related information at least comprises the identification information and the position information of each place in the indoor places;
the identifying the current scene based on the identification information identification result of the place includes:
matching the identification information recognition result with the identification information of each place in the indoor place;
and determining the current scene based on the position information of the place of which the identification information is matched with the identification information recognition result.
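The indoor matching step above can be sketched as a lookup over the pre-stored related information. The dictionary schema and substring matching are assumptions for illustration; the disclosure does not prescribe a matching rule.

```python
def locate_in_indoor_place(ocr_result, place_records):
    # place_records: pre-stored related information for the indoor venue,
    # mapping each sub-place's identification information to its position
    # (hypothetical schema). Returns the name and position of the sub-place
    # whose identification matches the OCR recognition result.
    for name, position in place_records.items():
        if name in ocr_result or ocr_result in name:
            return name, position
    return None

records = {"Elevator A": (0, 1), "XX Shop": (3, 4)}
match = locate_in_indoor_place("XX Shop entrance", records)
```

The matched position then identifies the current scene within the indoor place.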
In one embodiment, the method further comprises generating the interest points in advance based on the following steps, including:
acquiring a historical image acquired by a normally open type image acquisition device;
in response to the detection of the selection operation of at least one image in the historical images, acquiring interest point identification information and positioning information corresponding to the at least one image;
generating a point of interest based on the at least one image, the point of interest identification information, and the positioning information.
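The point-of-interest generation step above pairs the user-selected historical images with identification and positioning information. A minimal sketch, with an assumed record structure:

```python
from dataclasses import dataclass

@dataclass
class InterestPoint:
    images: list      # the historical images the user selected
    label: str        # point-of-interest identification information
    position: tuple   # positioning information, e.g. (lat, lon)

def generate_interest_point(selected_images, label, position):
    # Build one point of interest from the images selected out of the
    # always-on camera's history, plus their label and location.
    return InterestPoint(images=list(selected_images), label=label, position=position)

poi = generate_interest_point(["img_001.jpg"], "XX Restaurant", (39.91, 116.40))
```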
In one embodiment, the displaying at least one point of interest on the electronic map includes:
and displaying at least one image of each interest point and the interest point identification information on the electronic map based on the positioning information of each interest point in the at least one interest point.
In an embodiment, the obtaining the interest point identification information corresponding to the at least one image includes:
receiving input interest point identification information through a preset information input interface; or
and carrying out scene recognition on the shooting scene of the at least one image, and determining the interest point identification information based on the scene recognition result of the shooting scene.
In an embodiment, the scene recognition of the shooting scene of the at least one image includes:
performing image recognition on the at least one image to obtain a character recognition result in the at least one image;
and identifying the shooting scene of the at least one image based on the character recognition result.
According to a second aspect of the embodiments of the present disclosure, there is provided an information navigation device applied to an electronic device having a normally-open image capturing apparatus, the device including:
the interest point display module is used for responding to the received instruction information for displaying the interest points and displaying at least one interest point on the electronic map, wherein the at least one interest point is determined based on the image acquired by the normally open type image acquisition device;
a navigation information determination module for determining navigation information from a navigation start point to a target point of interest in response to detecting a target operation of selecting the target point of interest from the at least one point of interest;
and the navigation information display module is used for displaying the navigation information based on the electronic map.
In one embodiment, the system further comprises a starting point determining module;
the starting point determining module comprises:
the image acquisition unit is used for acquiring the image of the current scene acquired by the normally open image acquisition device;
the scene recognition unit is used for carrying out scene recognition on the current scene based on the image of the current scene to obtain a scene recognition result;
and the starting point determining unit is used for determining the set position in the current scene as a navigation starting point based on the scene recognition result.
In an embodiment, the scene recognition unit is further configured to:
performing character recognition on the image of the current scene to obtain an identification information recognition result of a corresponding place in the image of the current scene;
and identifying the current scene based on the identification information identification result of the place.
In an embodiment, in a case where the electronic device is in an indoor place, the starting point determining module further includes a related information determining unit;
the relevant information determining unit is further configured to:
determining identification information and positioning information of the indoor place based on the image acquired by the normally open type image acquisition device;
based on the identification information and the positioning information of the indoor places, acquiring prestored related information of the indoor places, wherein the related information at least comprises the identification information and the position information of each place in the indoor places;
the scene recognition unit is further configured to:
matching the identification information recognition result with the identification information of each place in the indoor place;
and determining the current scene based on the position information of the place of which the identification information is matched with the identification information recognition result.
In an embodiment, the apparatus further comprises a point of interest generation module;
the interest point generating module includes:
the historical image acquisition unit is used for acquiring a historical image acquired by the normally open type image acquisition device;
the identification positioning acquisition unit is used for acquiring interest point identification information and positioning information corresponding to at least one image in response to the detection of the selection operation of the at least one image in the historical images;
an interest point generating unit for generating an interest point based on the at least one image, the interest point identification information, and the positioning information.
In an embodiment, the point of interest presentation module is further configured to present at least one image of each point of interest and the point of interest identification information on the electronic map based on the positioning information of each point of interest of the at least one point of interest.
In an embodiment, the identification location obtaining unit is further configured to:
receiving input interest point identification information through a preset information input interface; or
and carrying out scene recognition on the shooting scene of the at least one image, and determining the interest point identification information based on the scene recognition result of the shooting scene.
In an embodiment, the identification location obtaining unit is further configured to:
performing image recognition on the at least one image to obtain a character recognition result in the at least one image;
and identifying the shooting scene of the at least one image based on the character recognition result.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus, the apparatus comprising:
the image acquisition device comprises a normally open type image acquisition device, a processor and a memory for storing executable instructions of the processor;
wherein:
the normally open type image acquisition device is used for acquiring image information;
the processor is configured to:
in response to receiving instruction information for showing the interest points, showing at least one interest point on an electronic map, wherein the at least one interest point is determined based on the images collected by the normally-open image collection device;
in response to detecting a target operation of selecting a target interest point from the at least one interest point, determining navigation information from a navigation start point to the target interest point;
and displaying the navigation information based on the electronic map.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
in response to receiving instruction information for showing the interest points, showing at least one interest point on an electronic map, wherein the at least one interest point is determined based on the images collected by the normally-open image collection device;
in response to detecting a target operation of selecting a target interest point from the at least one interest point, determining navigation information from a navigation start point to the target interest point;
and displaying the navigation information based on the electronic map.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method can realize displaying the interest points which are determined by using the images collected by the normally open type image collecting device in advance based on the instruction information, and can determine the navigation information reaching the target interest points based on the target operation of selecting the target interest points, can quickly and accurately determine the interest points favored by the user, provides the navigation information going to the interest points for the user, and meets the requirements of the user.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of information navigation according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of information navigation according to yet another exemplary embodiment;
FIG. 3 is a flow diagram illustrating how scene recognition of the current scene is performed based on an image of the current scene in accordance with an exemplary embodiment;
FIG. 4 is a flow diagram illustrating how scene recognition of the current scene is performed based on an image of the current scene in accordance with yet another illustrative embodiment;
FIG. 5 is a flow diagram illustrating how points of interest are generated in accordance with an illustrative embodiment;
FIG. 6 is a flow diagram illustrating how scene recognition may be performed on a captured scene of the at least one image in accordance with an exemplary embodiment;
FIG. 7 is a block diagram of an information navigation device, shown in accordance with an exemplary embodiment;
FIG. 8 is a block diagram of an information navigation device according to yet another exemplary embodiment;
FIG. 9 is a block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
FIG. 1 is a flow chart illustrating a method of information navigation according to an exemplary embodiment; the method of the embodiment can be applied to terminal equipment (such as a smart phone, a tablet computer, a notebook computer or wearable equipment) with a normally-open image acquisition device.
As shown in fig. 1, the method comprises the following steps S101-S103:
in step S101, in response to receiving instruction information for presenting points of interest, at least one point of interest is presented on the electronic map.
In this embodiment, when a user wants to browse a previously generated interest point, instruction information for displaying the interest point may be sent to the terminal device, and the terminal device may display at least one previously generated interest point on the electronic map after receiving the instruction information. Wherein the at least one point of interest may be determined based on an image acquired by the normally-open image acquisition device.
The points of interest can be used to represent places the user is interested in, such as scenic spots, shops, restaurants, or movie theaters.
It should be noted that the electronic map may be a third-party electronic map obtained in advance, which is not limited in this embodiment.
For example, a user may acquire images with the terminal device's normally-open image acquisition device (e.g., an Always On Camera); places captured in those images can then be marked as points of interest based on the user's interests, so that the points of interest can subsequently be displayed on an electronic map in response to the user's instruction information.
In another embodiment, the manner of generating the at least one point of interest may also be found in the embodiment shown in FIG. 5 below, and is not described in detail here.
In step S102, in response to detecting a target operation of selecting a target point of interest from the at least one point of interest, navigation information from a navigation start point to the target point of interest is determined.
In this embodiment, after at least one point of interest is displayed on the electronic map in response to receiving instruction information for displaying the point of interest, if a target operation for selecting a target point of interest from the at least one point of interest is detected, navigation information from a navigation start point to the target point of interest may be determined.
The navigation information may include a driving path from a navigation starting point to the target point of interest.
In one embodiment, the navigation initiation point may be determined based on a current location of the user. For example, the current position of the user may be detected, and the current position of the user may be determined as the navigation start point.
In another embodiment, the determination of the navigation starting point can be referred to the following embodiment shown in fig. 2, and will not be described in detail here.
For example, after the user sends instruction information for showing the interest points to the terminal device, the terminal device may show at least one interest point on the electronic map, and then the user may trigger a target operation for selecting a target interest point from the shown at least one interest point according to personal needs, for example, click one interest point of the at least one interest point shown in the electronic map, and then the terminal device may obtain a current navigation start point and determine navigation information from the navigation start point to the target interest point.
In step S103, the navigation information is presented based on the electronic map.
In this embodiment, after determining navigation information from a navigation start point to a target point of interest in response to detecting a target operation of selecting the target point of interest from the at least one point of interest, the navigation information may be presented based on the electronic map.
For example, the route information from the navigation starting point to the target interest point can be displayed in a highlighted color on the electronic map, and the navigation can be performed based on voice or text, so that the user can know the route to the target interest point and the user's requirements can be met.
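The display described above (a highlighted route plus a voice or text prompt) can be sketched like this; the overlay format and highlight color are illustrative assumptions:

```python
def render_navigation(route, highlight="#FF6600"):
    # route: ordered list of map coordinates from the navigation start point
    # to the target point of interest. Produces per-segment map-overlay
    # records in a highlight color, plus a text prompt for voice/text guidance.
    overlay = [{"segment": seg, "color": highlight}
               for seg in zip(route, route[1:])]
    prompt = f"Route with {len(route) - 1} segment(s) to destination"
    return overlay, prompt

overlay, prompt = render_navigation(
    [(39.90, 116.39), (39.905, 116.395), (39.91, 116.40)])
```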
As can be seen from the above description, in the present embodiment, at least one point of interest is displayed on an electronic map in response to receiving instruction information for displaying points of interest, where the at least one point of interest is determined based on images collected by the always-on image collecting device. In response to detecting a target operation of selecting a target point of interest from the at least one point of interest, navigation information from a navigation start point to the target point of interest is determined, and the navigation information is then displayed based on the electronic map. In this way, points of interest determined in advance from images acquired by the always-on image acquisition device can be displayed based on the instruction information, and navigation information for reaching the target point of interest can be determined based on the target operation. The user's preferred points of interest can thus be determined quickly and accurately, navigation information to those points can be provided, and the user's needs can be met.
FIG. 2 is a flow chart illustrating a method of information navigation according to yet another exemplary embodiment; the method of the embodiment can be applied to terminal equipment (such as a smart phone, a tablet computer, a notebook computer or wearable equipment) with a normally-open image acquisition device.
As shown in fig. 2, the method comprises the following steps S201-S206:
in step S201, in response to receiving instruction information for presenting points of interest, at least one point of interest is presented on the electronic map.
The at least one point of interest is determined based on an image acquired by the normally-open image acquisition device.
In step S202, in response to detecting a target operation of selecting a target interest point from the at least one interest point, an image of the current scene captured by the normally-open image capturing device is acquired.
In this embodiment, when the terminal device detects a target operation of selecting a target interest point from the at least one interest point, an image of a current scene acquired by the normally-open image acquisition device may be acquired.
It should be noted that, once the normally-open image acquisition device on the user's terminal device is enabled, it can automatically capture images of the current field of view in everyday scenarios such as the user's commute to and from work and travel. Because the normally-open image acquisition device continuously captures images of the scene the user is in, the image of the current scene can be obtained from it in response to detecting the target operation of selecting a target point of interest from the at least one point of interest.
It can be understood that the above normally-open camera has low power consumption and can automatically capture image information in its field of view without affecting the user's daily life and without requiring manual shooting, which improves the timeliness and convenience of collecting image information.
In step S203, performing scene recognition on the current scene based on the image of the current scene to obtain a scene recognition result.
In this embodiment, after the image of the current scene acquired by the normally-open image acquisition device is acquired in response to detection of a target operation for selecting a target interest point from the at least one interest point, scene identification may be performed on the current scene based on the image of the current scene to obtain a scene identification result.
The scene recognition result may include at least one of the address, place name, and place type of the shooting location of the image of the current scene. The place type may be set based on actual needs, such as a scenic spot, restaurant, gas station, station, or shop, which is not limited in this embodiment.
In step S204, based on the scene recognition result, a set location in the current scene is determined as a navigation start point.
In this embodiment, after the scene recognition is performed on the current scene based on the image of the current scene to obtain a scene recognition result, the set location in the current scene may be determined as the navigation start point based on the scene recognition result.
It should be noted that the setting location may be set based on actual needs, for example, set as a location with the highest heat in the current scene, or a location selected by the user, which is not limited in this embodiment.
For example, when scene recognition is performed on the image of the current scene and the obtained scene recognition result is "subway station / Subway Young Avenue Station / intersection of XX Street and XX Road, XX District", a set place in the scene, such as the subway security check gate, may be determined as the navigation starting point.
For another example, when the obtained scene recognition result is "mall / XX Mall / No. XX, XX Road", a set location in the scene, such as a mall elevator entrance selected by the user, may be determined as the navigation starting point.
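The two examples above can be condensed into one selection rule: a user-chosen location takes priority, otherwise a preset anchor for the recognized place type is used. The anchor table is a hypothetical illustration:

```python
def pick_start_point(scene_result, preset_anchors, user_choice=None):
    # scene_result: scene recognition result with at least a place type.
    # preset_anchors: the "set location" per place type (e.g. the security
    # check gate of a subway station). A user-selected location, when
    # present, overrides the preset anchor.
    if user_choice is not None:
        return user_choice
    return preset_anchors.get(scene_result["type"])

anchors = {"subway station": "security check gate"}
a = pick_start_point({"type": "subway station",
                      "name": "Subway Young Avenue Station"}, anchors)
b = pick_start_point({"type": "mall"}, anchors,
                     user_choice="mall elevator entrance")
```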
In step S205, navigation information from a navigation start point to the target point of interest is determined.
In step S206, the navigation information is presented based on the electronic map.
For the explanation and description of steps S201 and S205-S206, reference may be made to the above embodiments, which are not described herein again.
As can be seen from the above description, in this embodiment, an image of the current scene acquired by the always-on image acquisition device is obtained, scene identification is performed on the current scene based on that image to obtain a scene identification result, and a set location in the current scene is then determined as the navigation starting point based on that result. The navigation starting point can therefore be determined accurately from the image of the current scene acquired by the always-on image acquisition device, so that navigation information from the starting point to the target point of interest can subsequently be determined and displayed on the electronic map. A suitable navigation starting point can thus be determined for the user quickly, navigation information to the point of interest can be determined accurately from it, and the user's needs can be met.
FIG. 3 is a flow diagram illustrating how scene recognition of the current scene is performed based on an image of the current scene in accordance with an exemplary embodiment; the present embodiment exemplifies how to perform scene recognition on the current scene based on the image of the current scene on the basis of the above-mentioned embodiments. As shown in fig. 3, the performing of the scene recognition on the current scene based on the image of the current scene in step S203 may include the following steps S301 to S302:
in step S301, character recognition is performed on the image of the current scene to obtain a recognition result of the identification information of the corresponding location in the image of the current scene.
In this embodiment, after the image of the current scene acquired by the normally-open image acquisition device is obtained, character recognition may be performed on the image to recognize the text in it and obtain a text recognition result. It can be understood that text in a scene image is usually identification information characterizing the current place (e.g., a place name), so the identification information recognition result of the corresponding place in the image can be determined based on the text recognition result.
For example, if the image of the current scene includes image information of an entrance and an exit of a subway station, the text at the entrance of the subway station in the image can be identified, and a text identification result of the "XX subway station" is obtained. For another example, if the image of the current scene includes image information of a certain shop doorway, the text on the signboard at the shop doorway in the image may be recognized to obtain a text recognition result of "XX shop". Further, an identification information recognition result of the corresponding place in the image may be determined based on the character recognition result.
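Filtering OCR output down to identification information, as in the examples above, might look like the following. The suffix heuristic is an assumption for illustration; the disclosure does not specify how identification text is distinguished from other text in the image.

```python
import re

def extract_place_identifiers(ocr_text,
                              suffixes=("Station", "Shop", "Mall", "Restaurant")):
    # Split raw OCR text from a scene image into candidate strings and keep
    # those that look like place identification information (here: strings
    # ending in a known place-type suffix, an illustrative heuristic).
    tokens = [t.strip() for t in re.split(r"[,;\n]+", ocr_text)]
    return [t for t in tokens if t.endswith(suffixes)]

ids = extract_place_identifiers("Exit B\nXX Subway Station; open 24h")
```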
In step S302, the current scene is identified based on the identification information recognition result of the place.
In this embodiment, after the image of the current scene is subjected to character recognition to obtain the identification information recognition result of the corresponding location in the image of the current scene, the current scene may be recognized based on the identification information recognition result of the location.
In an embodiment, the scene may include at least one of an address of a shooting location, a location name, and a location type of the image of the current scene. The location type may be set based on actual needs, such as a scenic spot, a restaurant, a gas station, a station, or a shop, which is not limited in this embodiment.
For example, when character recognition is performed on the image of the current scene and the identification information recognition result of the corresponding place is "Subway Youth Avenue Station", the scene corresponding to the image of the current scene, that is, the current scene, such as "subway station / Subway Youth Avenue Station / XX Street and XX Road intersection in XX District", may be determined according to the identification information recognition result. Here, "subway station" is the place type, "Subway Youth Avenue Station" is the place name, and "XX Street and XX Road intersection in XX District" is the address.
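As a concrete illustration of step S302, the mapping from a recognized identification string to a structured scene description might look as follows; the lookup table, the `recognize_scene` function, and all field names are illustrative assumptions, not part of the disclosed method:

```python
from typing import Optional

# Hypothetical place database mapping an OCR identification result to a
# structured scene record (place type / place name / address).
PLACE_DATABASE = {
    "Subway Youth Avenue Station": {
        "place_type": "subway station",
        "place_name": "Subway Youth Avenue Station",
        "address": "XX Street and XX Road intersection, XX District",
    },
}

def recognize_scene(identification_result: str) -> Optional[dict]:
    """Return the scene record matching a recognized identification string."""
    return PLACE_DATABASE.get(identification_result)
```

An unrecognized string simply yields no scene, so a caller could fall back to, for example, coarse GPS-based positioning.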
As can be seen from the above description, in the embodiment, the identification information recognition result of the corresponding location in the image of the current scene is obtained by performing text recognition on the image of the current scene, and then the current scene is recognized based on the identification information recognition result of the location, so that the location identification information recognition result in the image of the current scene can be accurately obtained based on a text recognition technology, and then the current scene can be accurately recognized based on the recognition result, the accuracy of recognizing the current scene can be improved, a foundation can be laid for subsequently determining the set location in the current scene as the navigation starting point, and the navigation starting point can be accurately determined, so that the navigation information from the navigation starting point to the target interest point can be subsequently determined, and the requirements of the user can be met.
FIG. 4 is a flow diagram illustrating how scene recognition of the current scene is performed based on an image of the current scene in accordance with yet another illustrative embodiment; the present embodiment exemplifies how to perform scene recognition on the current scene based on the image of the current scene on the basis of the above embodiments. As shown in fig. 4, the performing of the scene recognition on the current scene based on the image of the current scene in step S203 may include the following steps S401 to S405:
in step S401, identification information and positioning information of the indoor place are determined based on the image captured by the normally-open image capturing device.
In the embodiment, the image information can be acquired in the daily life of the user through the normally open image acquisition device on the terminal equipment, and then the path taken by the user can be recorded based on the acquired image. On this basis, when the user walks into an indoor place (e.g., a mall, a hotel, etc.), identification information (e.g., a mall name, a hotel name) of the indoor place may be determined based on the image of the indoor place acquired by the normally-open image acquisition device, and then positioning information (e.g., an address, etc.) of the indoor place may be acquired based on preset map information.
In step S402, the pre-stored information related to the indoor location is acquired based on the identification information and the positioning information of the indoor location.
In this embodiment, after the identification information and the positioning information of the indoor location are determined based on the image acquired by the normally-open image acquisition device, the pre-stored relevant information of the indoor location may be acquired based on the identification information and the positioning information of the indoor location. Wherein, the related information at least comprises the identification information and the position information of each place in the indoor place.
For example, after the identification information and the positioning information of the indoor place are determined as "Happy Street Store" and "XX Street in XX District" respectively based on the image acquired by the normally-open image acquisition device, the pre-stored related information of the indoor place may be acquired based on that identification information and positioning information; that is, the identification information and position information of each location in the indoor place may be acquired, such as "Favorite Tea" at No. 203 on Floor 2 and "Full-Recorded Dessert" at No. 305 on Floor 3.
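A minimal sketch of steps S401-S402, keying the pre-stored related information by the (identification, positioning) pair of the indoor place; the data, keys, and function name are illustrative assumptions only:

```python
# Hypothetical pre-stored related information: for each indoor place,
# the identification and position info of every location inside it.
INDOOR_PLACES = {
    ("Happy Street Store", "XX Street, XX District"): {
        "Favorite Tea": {"floor": 2, "number": "203"},
        "Full-Recorded Dessert": {"floor": 3, "number": "305"},
    },
}

def get_related_info(identification: str, positioning: str) -> dict:
    """Look up the pre-stored related information of an indoor place
    by its identification information and positioning information."""
    return INDOOR_PLACES.get((identification, positioning), {})
```

Returning an empty mapping for an unknown place lets the later matching step (S404) fail gracefully rather than raise.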
In step S403, character recognition is performed on the image of the current scene to obtain a recognition result of the identification information of the corresponding location in the image of the current scene.
For the explanation and explanation of step S403, reference may be made to the above embodiments, and details are not described here.
In step S404, the identification information recognition result is matched with the identification information of each location in the indoor place.
In this embodiment, after the image of the current scene is subjected to character recognition to obtain the identification information recognition result of the corresponding location in the image of the current scene, the identification information recognition result may be matched with the identification information of each location in the indoor place.
For example, when character recognition is performed on the image of the current scene, and the identification information recognition result of the corresponding location in the image of the current scene is "full-recorded dessert", the identification information recognition result may be matched with the names of the shops in the related information of the indoor location.
In step S405, the current scene is determined based on the position information of the place where the identification information matches the identification information recognition result.
In this embodiment, after the identification information recognition result is matched with the identification information of each location in the indoor place, the current scene may be determined based on the location information of the location where the identification information is matched with the identification information recognition result.
For example, when the identification information recognition result of the corresponding location in the image of the current scene is "Full-Recorded Dessert", the identification information recognition result may be matched with the names of the shops in the related information of the indoor place, so as to determine the current scene, that is, that the user is currently at the "Full-Recorded Dessert" shop at No. 305 on Floor 3.
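Steps S404-S405 amount to matching the recognition result against the locations of the indoor place and reading back the matched location's position; a minimal sketch under the same illustrative data assumptions as above:

```python
from typing import Optional

# Hypothetical related information of the indoor place (cf. steps S401-S402).
RELATED_INFO = {
    "Favorite Tea": {"floor": 2, "number": "203"},
    "Full-Recorded Dessert": {"floor": 3, "number": "305"},
}

def determine_current_scene(recognition_result: str,
                            related_info: dict) -> Optional[str]:
    """Match the identification recognition result against each location's
    identification info; describe the current scene from the match's position."""
    position = related_info.get(recognition_result)
    if position is None:
        return None
    return (f"{recognition_result} at No. {position['number']} "
            f"on Floor {position['floor']}")
```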
As can be seen from the above description, in this embodiment, character recognition is performed on the image of the current scene to obtain the identification information recognition result of the corresponding location, the recognition result is matched with the identification information of each location in the indoor place, and the current scene is then determined based on the position information of the matched location. In this way, the current scene within an indoor place can be accurately determined based on the pre-stored related information of that place, which improves the accuracy of recognizing the current scene and lays a foundation for subsequently determining the set location in the current scene as the navigation starting point, so that the navigation information from the navigation starting point to the target interest point can subsequently be determined and the user's requirements can be met.
FIG. 5 is a flow diagram illustrating how points of interest are generated in accordance with an illustrative embodiment; the present embodiment exemplifies how to generate the interest points on the basis of the above embodiments. As shown in fig. 5, the above embodiment further includes generating the interest points based on the following steps S501-S503:
in step S501, a history image acquired by a normally-open image acquisition device is acquired.
In the embodiment, the image information can be acquired in the daily life of the user through the normally open image acquisition device on the terminal equipment, so that a plurality of pieces of historical image information can be obtained.
The normally open image capturing device may include a normally open Camera (Always On Camera) in the related art, which is not limited in this embodiment.
For example, after the normally-open image acquisition device is enabled on the user's terminal device, an image of the current field of view can be automatically acquired in each scene of the user's daily life, such as clocking in at work or traveling, so as to obtain a plurality of historical images from which images for generating interest points can subsequently be screened.
In step S502, in response to detecting a selection operation on at least one image in the history images, the interest point identification information and the positioning information corresponding to the at least one image are acquired.
In this embodiment, after the historical images acquired by the normally-open image acquisition device are acquired, the historical images may be provided for a user to select, and when a selection operation on at least one image in the historical images is detected, the interest point identification information and the positioning information corresponding to the at least one image may be acquired. The positioning information may include GPS information recorded when the at least one image is captured, which is not limited in this embodiment.
In an embodiment, the point of interest identification information corresponding to the at least one image may be determined based on the following manner:
(i) receiving input interest point identification information through a preset information input interface; or
(ii) performing scene recognition on a shooting scene of the at least one image, and determining the interest point identification information based on a scene recognition result of the shooting scene.
For example, after the user selects at least one image for generating an interest point from the historical images, the identification information of the interest point to be generated (i.e., the interest point identification information) may be input through a preset information input interface; or, after the user selects at least one image for generating an interest point from the historical images, the terminal device may perform scene recognition on a shooting scene of the at least one image to obtain a scene recognition result, and may further determine the interest point identification information based on the scene recognition result. Wherein the scene recognition result may include: at least one of an address, a place name, and a place type of a place where the at least one image is photographed. The location type may be set based on actual needs, for example, set as a scenic spot, a restaurant, a gas station, a station, or a shop, which is not limited in this embodiment.
For another example, when scene recognition is performed on the at least one image and the obtained scene recognition result is "subway station / Subway Youth Avenue Station / XX Street and XX Road intersection in XX District", the scene recognition result may be determined as the identification information of the point of interest to be generated. Here, "subway station" is the place type, "Subway Youth Avenue Station" is the place name, and "XX Street and XX Road intersection in XX District" is the address.
In step S503, a point of interest is generated based on the at least one image, the point of interest identification information and the positioning information.
In an embodiment, after the point of interest identification information and the positioning information corresponding to at least one image in the historical images are acquired in response to detecting the selection operation on the at least one image, the point of interest may be generated based on the at least one image, the point of interest identification information and the positioning information.
The content of the interest point comprises the at least one image, the interest point identification information and the positioning information.
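Step S503 can be pictured as bundling the three pieces of data into one record; the `PointOfInterest` structure and its field names are illustrative assumptions rather than the patented data format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PointOfInterest:
    images: List[str]                 # paths of the selected history images
    identification: str               # interest point identification info
    positioning: Tuple[float, float]  # e.g. GPS (latitude, longitude) at capture

def generate_point_of_interest(images, identification, positioning):
    """Generate a point of interest whose content comprises the selected
    image(s), the identification information, and the positioning info."""
    return PointOfInterest(list(images), identification, positioning)
```

Carrying the image paths inside the record is what adds the "image dimension" to the interest point that the display step can later exploit.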
As can be seen from the above description, in this embodiment, by obtaining a history image acquired by a normally open image acquisition device, and in response to detecting a selection operation on at least one image in the history image, obtaining interest point identification information and location information corresponding to the at least one image, and then generating an interest point based on the at least one image, the interest point identification information, and the location information, an interest point can be generated based on the interest point identification information and the location information determined by the history image acquired by the normally open image acquisition device, information of image dimensions can be added to the interest point, and then the subsequent effects of displaying and navigating the generated interest point can be improved, more reference information can be provided for a user, and the user's needs can be met.
FIG. 6 is a flow diagram illustrating how scene recognition may be performed on a captured scene of the at least one image in accordance with an exemplary embodiment; the present embodiment exemplifies how to perform scene recognition on the shooting scene of the at least one image on the basis of the above-described embodiments. As shown in fig. 6, the method of this embodiment may further include performing scene recognition on the shooting scene of the at least one image based on the following steps S601-S602:
in step S601, image recognition is performed on the at least one image to obtain a text recognition result in the at least one image.
In this embodiment, when it is detected that the user selects at least one image from the historical images acquired by the normally-open image acquisition device, image recognition may be performed on the at least one image to recognize characters in the at least one image, so as to obtain a character recognition result.
For example, if the at least one image includes image information of an entrance of a subway station, the characters at the subway station entrance in the image can be recognized to obtain a character recognition result of "XX subway station". For another example, if the at least one image includes image information of a shop doorway, the characters on the signboard at the shop doorway in the image can be recognized to obtain a character recognition result of "XX shop".
In step S602, a shooting scene of the at least one image is identified based on the character recognition result.
In this embodiment, after the image recognition is performed on the at least one image to obtain the text recognition result in the at least one image, the shooting scene of the at least one image may be recognized based on the text recognition result.
In an embodiment, the shooting scene may include at least one of an address of a shooting location, a location name, and a location type. The location type may be set based on actual needs, for example, set as a scenic spot, a restaurant, a gas station, a station, or a shop, which is not limited in this embodiment.
For example, when image recognition is performed on the at least one image and the character recognition result in the at least one image is "Subway Youth Avenue Station", the shooting scene of the at least one image, such as "subway station / Subway Youth Avenue Station / XX Street and XX Road intersection in XX District", can be determined according to the character recognition result. Here, "subway station" is the place type, "Subway Youth Avenue Station" is the place name, and "XX Street and XX Road intersection in XX District" is the address.
As can be seen from the above description, in the embodiment, the at least one image is subjected to image recognition to obtain the character recognition result in the at least one image, and the shooting scene of the at least one image is recognized based on the character recognition result, so that the shooting scene of the image can be recognized by recognizing the character information in the image, an accurate basis can be provided for determining the interest point identification information for the subsequent shooting scene based on the image, and the accuracy of generating the interest point can be improved.
FIG. 7 is a block diagram of an information navigation device, shown in accordance with an exemplary embodiment; the device of the embodiment can be applied to terminal equipment (such as a smart phone, a tablet computer, a notebook computer or wearable equipment) with a normally open type image acquisition device. As shown in fig. 7, the apparatus includes: an interest point presentation module 110, a navigation information determination module 120, and a navigation information presentation module 130, wherein:
the interest point display module 110 is configured to display at least one interest point on the electronic map in response to receiving instruction information for displaying the interest point, where the at least one interest point is determined based on the image acquired by the normally-open image acquisition device;
a navigation information determining module 120, configured to determine navigation information from a navigation start point to a target point of interest in response to detecting a target operation of selecting the target point of interest from the at least one point of interest;
a navigation information display module 130, configured to display the navigation information based on the electronic map.
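The cooperation of the three modules can be sketched as follows; the class and method names are illustrative assumptions, since the patent specifies only the modules' responsibilities, not an API:

```python
class InformationNavigationDevice:
    """Sketch of FIG. 7: display points of interest, determine navigation
    info when a target is selected, and display it on the electronic map."""

    def __init__(self, points_of_interest, navigator, map_view):
        self.points_of_interest = points_of_interest  # determined from camera images
        self.navigator = navigator  # computes a route between two places
        self.map_view = map_view    # electronic-map rendering backend

    def show_points_of_interest(self):
        # Interest point display module (110).
        for poi in self.points_of_interest:
            self.map_view.mark(poi)

    def on_target_selected(self, start, target):
        # Navigation info determination (120) and display (130) modules.
        route = self.navigator.route(start, target)
        self.map_view.draw_route(route)
```

The `navigator` and `map_view` collaborators are deliberately abstract; any routing engine and map renderer with these two operations would fit.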
As can be seen from the above description, in the present embodiment, at least one interest point is displayed on the electronic map in response to receiving instruction information for displaying the interest point, where the at least one interest point is determined based on an image collected by the normally-open image collecting device, and in response to detecting a target operation for selecting a target interest point from the at least one interest point, navigation information from a navigation start point to the target interest point is determined, and then the navigation information is displayed based on the electronic map, so that the interest point determined in advance by using the image collected by the normally-open image collecting device can be displayed based on the instruction information, and the navigation information reaching the target interest point can be determined based on the target operation for selecting the target interest point, and a preferred interest point of a user can be determined quickly and accurately, and navigation information for the user is provided to the interest point, so as to meet the needs of the user.
FIG. 8 is a block diagram of an information navigation device according to yet another exemplary embodiment; the device of the embodiment can be applied to terminal equipment (such as a smart phone, a tablet computer, a notebook computer or wearable equipment) with a normally-open image acquisition device. The functions of the interest point display module 210, the navigation information determining module 220, and the navigation information display module 230 are the same as those of the interest point display module 110, the navigation information determining module 120, and the navigation information display module 130 in the embodiment shown in fig. 7, and are not described herein again.
As shown in fig. 8, the apparatus of this embodiment may further include a starting point determining module 240;
the starting point determining module 240 may include:
an image obtaining unit 241, configured to obtain an image of a current scene collected by the normally-open image collecting device;
a scene recognition unit 242, configured to perform scene recognition on the current scene based on the image of the current scene, so as to obtain a scene recognition result;
a starting point determining unit 243, configured to determine a set location in the current scene as a navigation starting point based on the scene recognition result.
In an embodiment, the scene recognition unit 242 may be further configured to:
performing character recognition on the image of the current scene to obtain an identification information recognition result of a corresponding place in the image of the current scene;
and identifying the current scene based on the identification information identification result of the place.
In an embodiment, in a case where the electronic device is in an indoor place, the starting point determining module 240 may further include a related information determining unit 244;
the related information determination unit 244 may be further configured to:
determining identification information and positioning information of the indoor place based on the image acquired by the normally open type image acquisition device;
acquiring pre-stored related information of the indoor places based on the identification information and the positioning information of the indoor places, wherein the related information at least comprises the identification information and the position information of each place in the indoor places;
the scene recognition unit 242 may also be configured to:
matching the identification information recognition result with the identification information of each place in the indoor place;
and determining the current scene based on the position information of the place of which the identification information is matched with the identification information recognition result.
In an embodiment, the apparatus may further include a point of interest generating module 250;
the interest point generating module 250 may include:
a history image acquisition unit 251 for acquiring a history image acquired by a normally open image acquisition device;
an identification and positioning obtaining unit 252, configured to, in response to detecting a selection operation on at least one image in the historical images, obtain interest point identification information and positioning information corresponding to the at least one image;
an interest point generating unit 253 for generating an interest point based on the at least one image, the interest point identification information and the positioning information.
On this basis, the above-mentioned interest point presenting module 210 may be further configured to present at least one image of each interest point and the interest point identification information on the electronic map based on the positioning information of each interest point of the at least one interest point.
In an embodiment, the identification location obtaining unit 252 may further be configured to:
receiving input interest point identification information through a preset information input interface; or
and carrying out scene recognition on the shooting scene of the at least one image, and determining the interest point identification information based on the scene recognition result of the shooting scene.
In an embodiment, the identifier location obtaining unit 252 may be further configured to:
performing image recognition on the at least one image to obtain a character recognition result in the at least one image;
and identifying a shooting scene of the at least one image based on the character recognition result.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 9 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like. In this embodiment, the electronic device may include a normally open image capturing device for capturing image information.
Referring to fig. 9, apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 906 provides power to the various components of device 900. Power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 900.
The multimedia component 908 comprises a screen providing an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a record mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 further includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing various aspects of state assessment for the device 900. For example, the sensor assembly 914 may detect the open/closed state of the device 900 and the relative positioning of components, such as the display and keypad of the device 900; the sensor assembly 914 may also detect a change in position of the device 900 or a component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and a change in temperature of the device 900. The sensor assembly 914 may also include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the apparatus 900 and other devices. The device 900 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 904 comprising instructions executable by processor 920 of device 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. An information navigation method, applied to an electronic device having an always-on image acquisition apparatus, the method comprising:
in response to receiving instruction information for displaying points of interest, displaying at least one point of interest on an electronic map, wherein the at least one point of interest is determined based on images collected by the always-on image acquisition apparatus;
in response to detecting a target operation of selecting a target point of interest from the at least one point of interest, determining navigation information from a navigation start point to the target point of interest; and
displaying the navigation information based on the electronic map;
wherein the points of interest are generated in advance by:
acquiring historical images collected by the always-on image acquisition apparatus;
in response to detecting a selection operation on at least one image among the historical images, acquiring point-of-interest identification information and positioning information corresponding to the at least one image; and
generating a point of interest based on the at least one image, the point-of-interest identification information, and the positioning information.
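The point-of-interest generation steps recited above can be illustrated with a minimal sketch. The `PointOfInterest` type, the `generate_point_of_interest` helper, and all sample values are hypothetical illustrations, not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    """A point of interest assembled from a user-selected historical image."""
    image_path: str      # the at least one image selected from the history
    identification: str  # point-of-interest identification information
    location: tuple      # positioning information, e.g. (latitude, longitude)

def generate_point_of_interest(image_path, identification, location):
    """Combine a selected image with its identification and positioning
    information into a point of interest, mirroring the final generation step."""
    return PointOfInterest(image_path, identification, location)

poi = generate_point_of_interest("history/cafe.jpg", "North Gate Cafe", (39.9, 116.4))
print(poi.identification)
```

In a real device, `image_path` would reference a frame captured by the always-on camera, and `location` would come from the positioning module at capture time.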
2. The method of claim 1, wherein, before determining the navigation information from the navigation start point to the target point of interest, the method further comprises:
acquiring an image of a current scene collected by the always-on image acquisition apparatus;
performing scene recognition on the current scene based on the image of the current scene to obtain a scene recognition result; and
determining a set location in the current scene as the navigation start point based on the scene recognition result.
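The start-point determination chain above (image, then scene recognition result, then set location) can be sketched as a small pipeline. The scene labels, coordinates, and the stubbed recognizer are all hypothetical; a real system would run actual scene recognition on the collected image:

```python
# Hypothetical mapping from a scene recognition result to the set location
# in that scene which serves as the navigation start point.
SET_LOCATIONS = {
    "mall_atrium": (0.0, 0.0),
    "parking_level_2": (-12.5, 4.0),
}

def recognize_scene(image_bytes):
    """Stub for the scene recognition step; returns a scene label."""
    return "mall_atrium"

def navigation_start_point(image_bytes):
    """Determine the set location in the recognized scene as the start point."""
    scene = recognize_scene(image_bytes)
    return SET_LOCATIONS.get(scene)

print(navigation_start_point(b"\x00"))  # set location for the recognized scene
```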
3. The method of claim 2, wherein performing the scene recognition on the current scene based on the image of the current scene comprises:
performing character recognition on the image of the current scene to obtain an identification-information recognition result for a corresponding place in the image of the current scene; and
recognizing the current scene based on the identification-information recognition result for the place.
4. The method of claim 3, wherein, in a case where the electronic device is at an indoor venue, the method further comprises:
determining identification information and positioning information of the indoor venue based on images collected by the always-on image acquisition apparatus; and
acquiring pre-stored related information of the indoor venue based on the identification information and the positioning information of the indoor venue, wherein the related information comprises at least identification information and position information of each place in the indoor venue;
and wherein recognizing the current scene based on the identification-information recognition result for the place comprises:
matching the identification-information recognition result against the identification information of each place in the indoor venue; and
determining the current scene based on the position information of the place whose identification information matches the identification-information recognition result.
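The matching step just recited (compare a recognized sign text against the stored identification information of each place, then take that place's position information) can be sketched with the standard library. The directory contents, the `locate_scene` helper, and the hard-coded recognition result are hypothetical; fuzzy matching via `difflib` is one possible choice for tolerating OCR noise, not the patent's prescribed technique:

```python
import difflib

# Hypothetical pre-stored related information for an indoor venue:
# identification information mapped to position information per place.
venue_directory = {
    "Starbright Coffee": (3, "near east escalator"),
    "Book Haven": (2, "west wing"),
    "Cinema Nine": (5, "north side"),
}

def locate_scene(recognized_text, directory, cutoff=0.6):
    """Match an identification-information recognition result against each
    place's identification information; return the matched place and its
    position information, or (None, None) when nothing matches."""
    matches = difflib.get_close_matches(recognized_text, directory, n=1, cutoff=cutoff)
    return (matches[0], directory[matches[0]]) if matches else (None, None)

name, position = locate_scene("Starbrignt Coffee", venue_directory)  # OCR typo tolerated
print(name, position)
```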
5. The method of claim 1, wherein displaying the at least one point of interest on the electronic map comprises:
displaying, on the electronic map, at least one image and the point-of-interest identification information of each point of interest based on the positioning information of each of the at least one point of interest.
6. The method of claim 1, wherein acquiring the point-of-interest identification information corresponding to the at least one image comprises:
receiving input point-of-interest identification information through a preset information input interface; or
performing scene recognition on a shooting scene of the at least one image, and determining the point-of-interest identification information based on a scene recognition result of the shooting scene.
7. The method of claim 6, wherein performing the scene recognition on the shooting scene of the at least one image comprises:
performing image recognition on the at least one image to obtain a character recognition result for the at least one image; and
recognizing the shooting scene of the at least one image based on the character recognition result.
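Deriving identification information from a character recognition result, as in the claim above, might look like the following sketch. The heuristic (keep the longest sign-like line, strip common OCR noise characters) and the sample OCR output are illustrative assumptions; the claim does not prescribe a particular extraction rule:

```python
import re

def extract_identification(ocr_text):
    """Derive point-of-interest identification information from a character
    recognition result: keep the longest sign-like line after stripping
    punctuation that OCR commonly introduces."""
    lines = [re.sub(r"[^\w &'-]", "", ln).strip() for ln in ocr_text.splitlines()]
    lines = [ln for ln in lines if ln]
    return max(lines, key=len, default="")

# Hypothetical OCR output for a storefront photograph.
ocr_result = "EXIT >\nGolden Wok Restaurant\n#2F"
print(extract_identification(ocr_result))
```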
8. An information navigation device, applied to an electronic device having an always-on image acquisition apparatus, the device comprising:
a point-of-interest display module, configured to display at least one point of interest on an electronic map in response to receiving instruction information for displaying points of interest, wherein the at least one point of interest is determined based on images collected by the always-on image acquisition apparatus;
a navigation information determination module, configured to determine navigation information from a navigation start point to a target point of interest in response to detecting a target operation of selecting the target point of interest from the at least one point of interest;
a navigation information display module, configured to display the navigation information based on the electronic map; and
a point-of-interest generation module, comprising:
a historical image acquisition unit, configured to acquire historical images collected by the always-on image acquisition apparatus;
an identification and positioning acquisition unit, configured to acquire, in response to detecting a selection operation on at least one image among the historical images, point-of-interest identification information and positioning information corresponding to the at least one image; and
a point-of-interest generation unit, configured to generate a point of interest based on the at least one image, the point-of-interest identification information, and the positioning information.
9. The device of claim 8, further comprising a start point determination module, wherein the start point determination module comprises:
an image acquisition unit, configured to acquire an image of a current scene collected by the always-on image acquisition apparatus;
a scene recognition unit, configured to perform scene recognition on the current scene based on the image of the current scene to obtain a scene recognition result; and
a start point determination unit, configured to determine a set location in the current scene as the navigation start point based on the scene recognition result.
10. The device of claim 9, wherein the scene recognition unit is further configured to:
perform character recognition on the image of the current scene to obtain an identification-information recognition result for a corresponding place in the image of the current scene; and
recognize the current scene based on the identification-information recognition result for the place.
11. The device of claim 10, wherein, in a case where the electronic device is at an indoor venue, the start point determination module further comprises a related information determination unit configured to:
determine identification information and positioning information of the indoor venue based on images collected by the always-on image acquisition apparatus; and
acquire pre-stored related information of the indoor venue based on the identification information and the positioning information of the indoor venue, wherein the related information comprises at least identification information and position information of each place in the indoor venue;
and wherein the scene recognition unit is further configured to:
match the identification-information recognition result against the identification information of each place in the indoor venue; and
determine the current scene based on the position information of the place whose identification information matches the identification-information recognition result.
12. The device of claim 8, wherein the point-of-interest display module is further configured to display, on the electronic map, at least one image and the point-of-interest identification information of each point of interest based on the positioning information of each of the at least one point of interest.
13. The device of claim 8, wherein the identification and positioning acquisition unit is further configured to:
receive input point-of-interest identification information through a preset information input interface; or
perform scene recognition on a shooting scene of the at least one image, and determine the point-of-interest identification information based on a scene recognition result of the shooting scene.
14. The device of claim 13, wherein the identification and positioning acquisition unit is further configured to:
perform image recognition on the at least one image to obtain a character recognition result for the at least one image; and
recognize the shooting scene of the at least one image based on the character recognition result.
15. An electronic device, comprising:
an always-on image acquisition apparatus, a processor, and a memory for storing instructions executable by the processor;
wherein the always-on image acquisition apparatus is configured to collect image information; and
the processor is configured to:
in response to receiving instruction information for displaying points of interest, display at least one point of interest on an electronic map, wherein the at least one point of interest is determined based on images collected by the always-on image acquisition apparatus;
in response to detecting a target operation of selecting a target point of interest from the at least one point of interest, determine navigation information from a navigation start point to the target point of interest;
display the navigation information based on the electronic map; and
generate points of interest in advance by:
acquiring historical images collected by the always-on image acquisition apparatus;
in response to detecting a selection operation on at least one image among the historical images, acquiring point-of-interest identification information and positioning information corresponding to the at least one image; and
generating a point of interest based on the at least one image, the point-of-interest identification information, and the positioning information.
16. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor of an electronic device having an always-on image acquisition apparatus, implements:
in response to receiving instruction information for displaying points of interest, displaying at least one point of interest on an electronic map, wherein the at least one point of interest is determined based on images collected by the always-on image acquisition apparatus;
in response to detecting a target operation of selecting a target point of interest from the at least one point of interest, determining navigation information from a navigation start point to the target point of interest; and
displaying the navigation information based on the electronic map;
wherein the points of interest are generated in advance by:
acquiring historical images collected by the always-on image acquisition apparatus;
in response to detecting a selection operation on at least one image among the historical images, acquiring point-of-interest identification information and positioning information corresponding to the at least one image; and
generating a point of interest based on the at least one image, the point-of-interest identification information, and the positioning information.
CN202010981256.0A 2020-09-17 2020-09-17 Information navigation method, device, equipment and storage medium Active CN112146676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010981256.0A CN112146676B (en) 2020-09-17 2020-09-17 Information navigation method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112146676A CN112146676A (en) 2020-12-29
CN112146676B true CN112146676B (en) 2022-10-25

Family

ID=73894102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010981256.0A Active CN112146676B (en) 2020-09-17 2020-09-17 Information navigation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112146676B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950790A (en) * 2021-02-05 2021-06-11 深圳市慧鲤科技有限公司 Route navigation method, device, electronic equipment and storage medium
CN113901257B (en) 2021-10-28 2023-10-27 北京百度网讯科技有限公司 Map information processing method, device, equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108984594A (en) * 2017-06-02 2018-12-11 苹果公司 Related interests point is presented
CN110555352A (en) * 2018-06-04 2019-12-10 百度在线网络技术(北京)有限公司 interest point identification method, device, server and storage medium
CN111291142A (en) * 2018-12-10 2020-06-16 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle equipment and personalized vehicle equipment interest point searching and displaying method thereof

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9612126B2 (en) * 2007-12-03 2017-04-04 Nokia Technologies Oy Visual travel guide
US8928666B2 (en) * 2012-10-11 2015-01-06 Google Inc. Navigating visual data associated with a point of interest
CN103996036B (en) * 2014-06-09 2017-07-28 百度在线网络技术(北京)有限公司 A kind of map data collecting method and device
CN110321885A (en) * 2018-03-30 2019-10-11 高德软件有限公司 A kind of acquisition methods and device of point of interest
CN108827307B (en) * 2018-06-05 2021-01-12 Oppo(重庆)智能科技有限公司 Navigation method, navigation device, terminal and computer readable storage medium



Similar Documents

Publication Publication Date Title
WO2017054358A1 (en) Navigation method and device
CN110460578B (en) Method and device for establishing association relationship and computer readable storage medium
CN105956091B (en) Extended information acquisition method and device
EP3287745B1 (en) Information interaction method and device
CN107423386B (en) Method and device for generating electronic card
EP3147802B1 (en) Method and apparatus for processing information
EP3026876B1 (en) Method for acquiring recommending information, terminal and server
CN108108461B (en) Method and device for determining cover image
CN112146676B (en) Information navigation method, device, equipment and storage medium
CN114009003A (en) Image acquisition method, device, equipment and storage medium
CN107229403B (en) Information content selection method and device
CN106156788A (en) Face identification method, device and intelligent glasses
CN105549300A (en) Automatic focusing method and device
CN108011990B (en) Contact management method and device
CN106331328B (en) Information prompting method and device
CN112950712B (en) Positioning method and device, electronic equipment and storage medium
US20170034347A1 (en) Method and device for state notification and computer-readable storage medium
CN107229707B (en) Method and device for searching image
CN105320749A (en) Travel route generation method and apparatus
CN106506808B (en) Method and device for prompting communication message
CN107239490B (en) Method and device for naming face image and computer readable storage medium
CN107169042B (en) Method and device for sharing pictures and computer readable storage medium
CN114464186A (en) Keyword determination method and device
CN106375727B (en) Method and device for controlling use state of image pickup equipment
WO2021237592A1 (en) Anchor point information processing method, apparatus and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant