CN111008297A - Addressing method and server - Google Patents

Addressing method and server

Info

Publication number
CN111008297A
CN111008297A
Authority
CN
China
Prior art keywords
address
image
target
terminal
corresponding relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911268859.XA
Other languages
Chinese (zh)
Other versions
CN111008297B (en)
Inventor
马明月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911268859.XA priority Critical patent/CN111008297B/en
Publication of CN111008297A publication Critical patent/CN111008297A/en
Application granted granted Critical
Publication of CN111008297B publication Critical patent/CN111008297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 — Information retrieval of still image data
    • G06F 16/58 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 — Retrieval using metadata automatically derived from the content
    • G06F 16/5866 — Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F 16/587 — Retrieval using geographical or spatial information, e.g. location
    • G06F 16/90 — Details of database functions independent of the retrieved data types
    • G06F 16/95 — Retrieval from the web
    • G06F 16/953 — Querying, e.g. by the use of web search engines
    • G06F 16/9537 — Spatial or temporal dependent retrieval, e.g. spatiotemporal queries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides an addressing method and a server. The method includes the following steps: receiving an address query request for a first object sent by a first terminal, where the address query request includes a first image of the first object; acquiring, according to the address query request and a pre-stored first correspondence between second images and second addresses, a second target image matching the first image and a second target address matching the second target image, where the first correspondence is generated from mutually associated object images and object addresses uploaded by at least one second terminal; and, in response to the address query request, sending the second target image and the second target address to the first terminal. The invention improves the efficiency and accuracy of addressing the first object while reducing the time consumed and the difficulty of addressing.

Description

Addressing method and server
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an addressing method and a server.
Background
Currently, in many application scenarios, a user needs to find the address associated with an object: for example, the geographic address of a physical building, or, for a person who has lost an article, the address where the article was lost, so that the article can be recovered from that address.
When addressing an object, for example finding the physical address of a building, a signpost, or a destination, the related art generally uses map software to search the map with query text entered by the user. However, the locations available in map software are limited: some remote places are not marked and are therefore hard to find, and when the query text is not accurate enough (or several buildings share the same name), the returned location may be wrong. As another example, when a user searches for a lost article, the problem is usually solved only by manual searching or by reporting to the police, and there is a risk of the article being falsely claimed.
Therefore, when searching for the address of an object, the related art generally suffers from low addressing efficiency, long time consumption, low accuracy, and great addressing difficulty.
Disclosure of Invention
The embodiment of the invention provides an addressing method and a server, and aims to solve the problems of low addressing efficiency, long time consumption, low accuracy and high addressing difficulty in searching for relevant addresses of objects in the related art.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an addressing method, which is applied to a server, where the method includes:
receiving an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object;
acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first corresponding relation between a pre-stored second image and a second address, wherein the first corresponding relation is generated according to a mutually-associated object image and an object address uploaded by at least one second terminal;
and responding to the address query request, and sending the second target image and the second target address to the first terminal.
In a second aspect, an embodiment of the present invention further provides a server, where the server includes:
a first receiving module, configured to receive an address query request for a first object sent by a first terminal, where the address query request includes a first image of the first object;
the acquisition module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first corresponding relation between a pre-stored second image and a second address, wherein the first corresponding relation is generated according to a mutually-associated object image and object address uploaded by at least one second terminal;
and the response module is used for responding to the address query request and sending the second target image and the second target address to the first terminal.
In a third aspect, an embodiment of the present invention further provides a server, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the addressing method.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the addressing method.
In the embodiment of the invention, the server receives an address query request from a first terminal carrying a first image of the first object to be addressed, and has previously received and stored the first correspondence between second images and second addresses uploaded, mutually associated, by at least one second terminal. From that first correspondence it can obtain a second target image matching the first image in the request and the second target address matching that second target image, so the address of the first object (i.e., the second target address) is found from nothing more than an image of the object, and both the second target image and the second target address are sent to the first terminal. This improves the addressing efficiency of the first object and reduces the time consumed for addressing; because image matching is highly accurate, the accuracy of addressing is improved; and since the first terminal only needs to upload a single image containing the first object to achieve the addressing purpose, the difficulty of addressing is greatly reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a flow chart of an addressing method of one embodiment of the invention;
FIG. 2 is a schematic diagram of a terminal interface of one embodiment of the present invention;
FIG. 3 is a block diagram of a server of one embodiment of the invention;
fig. 4 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an addressing method, which has a plurality of application scenes, and can be used for addressing an entity building, a road sign or a certain destination, searching a lost address for a lost article and further searching the lost article, and can also comprise a social scene.
For the scenario of addressing a physical building, a signpost, or a destination: some remote places are not marked in map software, the search name entered by the user may be wrong, or several places may share the same name. Text search in map software can therefore return the wrong place, and such addressing is time-consuming and inefficient.
For the scenario of addressing the lost address of an article and retrieving it: with today's convenient transportation, each person's range of daily activity is large, and urban population density is high, so an article carelessly lost is very hard to recover, and a finder who wants to return an article cannot easily reach its owner. This causes unnecessary losses to many users, and especially great trouble when important documents or certificates are lost. When users lose something, they generally return to recently visited places or ask at a nearby police station, while finders generally hand the article to a nearby police station or the like. This way of searching is clearly very time-consuming, and the owner and the finder do not necessarily choose the same method — for example, the owner retraces the original route while the finder hands the article to the police; moreover, there is a risk of false claims. Existing schemes therefore have difficulty returning lost articles to their owners conveniently and accurately.
Then, in order to solve the addressing problem existing in the above scenarios and achieve convenient and accurate addressing of objects (including but not limited to the above buildings, signposts, destinations, and articles), referring to fig. 1, a flowchart of an addressing method according to an embodiment of the present invention is shown, which is applied to a server, and the method specifically may include the following steps:
step 101, receiving an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object;
the first object is an object whose corresponding address needs to be found, where the address may be an actual physical address of the object, or an address where the object is lost, that is, a lost address.
Further, the first image may be an original image including the first object and other images, or may be a region image including only the first object extracted from the original image including the first object.
In addition, regarding how the address query request is triggered on the terminal side: referring to fig. 2, the user can trigger the address query request with a two-finger sliding input on photo 11 in the interface of the album application. In this example, the contact points of the user's two fingers slide on the display screen from the dotted-line position 12 to position 13 (the edge of the phone's display screen), triggering the address query request.
In example 1, the scenario of addressing the actual geographic location of a first object (e.g., building A): the user wants to go to building A but cannot find its physical address on a map, and a photo of building A is stored on the user's mobile phone (i.e., the first terminal). The photo may be uploaded to the cloud album, triggering an address query request for building A that carries the photo.
For example, the photo (i.e., the first image) in the address query request is an original photo (i.e., not only the area image of the building a, but also the area images of other people or backgrounds, etc.), in order to avoid leakage of privacy information of the user, the server may extract the area image of the building a from the original photo, and perform subsequent steps using the area image of the building a.
For another example, the photo in the address query request may also be an area image only including the building a, that is, before the mobile phone side triggers the address query request, the area image about the building a is extracted from the photo selected and uploaded by the user, so in this example, the first image is an area image only including the building a.
In example 2, the scenario of addressing a lost article and finding its lost address: a user who has lost a wallet may upload the photo of the lost wallet B (the first object) stored on the mobile phone to the server (specifically, the photo may be published to a cloud lost-and-found platform on the server side). The other technical details are similar to those described above for example 1 and are not repeated here.
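The patent does not specify a wire format for the address query request of step 101. As a minimal sketch (all names are hypothetical), the request could be modeled as a small record carrying the first image and, per the optional embodiment described later, a first-address hint:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AddressQueryRequest:
    # Hypothetical request record: the patent only requires that the
    # request carry a first image of the first object; the optional
    # first_address field reflects the embodiment described later.
    first_image: bytes
    first_address: Optional[str] = None

def build_request(image_bytes: bytes,
                  address_hint: Optional[str] = None) -> AddressQueryRequest:
    # image_bytes may be the original photo or a cropped region image
    # containing only the first object (e.g. building A or wallet B).
    return AddressQueryRequest(first_image=image_bytes,
                               first_address=address_hint)
```

For instance, the mobile phone of example 1 would call `build_request` with the photo of building A and, optionally, its current city.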
Step 102, acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first corresponding relation between a pre-stored second image and a second address, wherein the first corresponding relation is generated according to a mutually-associated object image and an object address uploaded by at least one second terminal;
the terminal which finds the address of the object can upload the image to the server, and the terminal which knows the address of the object can upload the image of the object and the address of the object to the server, so that the server can generate a first corresponding relation between the second image and the second address according to the object image and the object address which are uploaded by the second terminals and are mutually associated, and the first corresponding relation is the corresponding relation between the image and the address which are generated according to the object image and the object address which are mutually associated and uploaded by at least one second terminal.
In addition, when the object image is an original image (i.e., includes not only the area image of the object but also other area images), in order to protect privacy, only the area image including the object may be extracted as the second image from the object image; when the object image is a region image of the object generated by extraction in advance on the mobile phone side, the object image is the second image.
In addition, the object address may be address information extracted from the object image (because the shot picture generally carries the location information of the shooting place), or may be address information associated with the object image uploaded separately by the mobile phone side.
In addition, the object address associated with the object image may be understood as the actual address of the object. For example, in the scenario of example 1, the object address is the actual physical address of building a, and in the scenario of example 2, the object address is the actual physical address of wallet B picked up by the user who picked up wallet B.
Thus, in the scenario of example 1, for example, a visitor took a photograph of building a while traveling at a crowd point, the photograph may be uploaded to a server, and the server may generate a correspondence between a second image of building a and an actual physical address (i.e., a second address) of building a; in the scenario of example 2, with the user 2 picking up wallet B at a certain location (e.g., a certain subway station) and taking a picture, the picture may be uploaded to a server, and the server may generate a correspondence between the second image of wallet B and the location (i.e., the second address) of wallet B.
Then, the data is uploaded through each second terminal, so that the server side already stores the first corresponding relationship between the second image and the second address in advance when receiving the address query request.
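The server-side first correspondence described above can be sketched as a simple table keyed by image, built from the mutually associated uploads of the second terminals. This is an illustrative assumption, not the patent's implementation; images are reduced here to precomputed feature vectors:

```python
class CorrespondenceStore:
    # Illustrative store for the first correspondence: mutually
    # associated (object image, object address) pairs uploaded by
    # second terminals, kept as {image_id: (features, address)}.
    def __init__(self):
        self._table = {}

    def upload(self, image_id, features, address):
        # A second terminal uploads an object image (reduced to
        # precomputed features) together with its associated address.
        self._table[image_id] = (features, address)

    def entries(self):
        # Snapshot of the stored correspondence, used at query time.
        return dict(self._table)
```

A tourist's photo of building A and a finder's photo of wallet B would each become one `upload` call carrying the image and the place it was taken.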
Therefore, in this step, the images in the first correspondence may be matched against the first image in the address query request. To decide whether two images match, the similarity of their image features (which may include, but are not limited to, contour features, color features, texture features, damage features, and the like) may be calculated; if the similarity is greater than a preset threshold, the two images are determined to match. By calculating feature similarity in this way, a second target image matching the first image can be found in the first correspondence, and correspondingly the second target address matching that second target image can also be obtained.
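The matching just described can be sketched with a generic feature-similarity comparison. The patent fixes neither a feature representation nor a metric; cosine similarity over hypothetical feature vectors and a preset threshold stand in here:

```python
def cosine_similarity(a, b):
    # Similarity of two feature vectors; contour/color/texture/damage
    # features would be encoded into such vectors by some extractor
    # (not shown, and not specified by the patent).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def find_matches(first_features, correspondence, threshold=0.9):
    # correspondence: {image_id: (features, second_address)}, i.e. the
    # stored first correspondence. Returns (image_id, address) pairs
    # whose similarity exceeds the preset threshold, best match first.
    scored = []
    for image_id, (features, address) in correspondence.items():
        sim = cosine_similarity(first_features, features)
        if sim > threshold:
            scored.append((sim, image_id, address))
    scored.sort(reverse=True)
    return [(image_id, address) for _, image_id, address in scored]
```

The threshold value and the ranking are illustrative choices; the patent only requires that similarity exceed a preset threshold.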
Thus, in example 1, the actual physical address of the building a, i.e., the second target address, may be obtained; in example 2, the missing address, i.e., the second destination address, of wallet B may be obtained.
Step 103, responding to the address query request, and sending the second target image and the second target address to the first terminal.
Wherein the server may transmit the matched second target image together with the second target address associated therewith to the first terminal in response to the address query request.
It is to be noted that the second target image and the second target address appear in pairs, and the number of pairs may be one or more pairs.
When there are multiple pairs of second target images and second target addresses, the pair with the highest similarity to the first image is returned to the first terminal. When multiple pairs are tied for the highest similarity, all of them can be sent in pairs to the first terminal for the user to decide: for example, which image shows the wallet the user lost, and whether the corresponding second target address is a place the user passed on the day wallet B was lost, thereby determining the wallet's lost address. Similarly, the user can determine which image shows the building A the user wants to go to, thereby determining the actual physical address of building A.
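The tie-handling rule above — return the single best pair when it is unique, or every pair tied at the highest similarity for the user to disambiguate — could be sketched as follows (a minimal assumption over scored candidates):

```python
def select_responses(scored_matches):
    # scored_matches: list of (similarity, image, address) candidates.
    # Return every (image, address) pair tied for the highest
    # similarity: a single pair when the best match is unique, and
    # several pairs for the terminal user to disambiguate otherwise.
    if not scored_matches:
        return []
    best = max(sim for sim, _, _ in scored_matches)
    return [(img, addr) for sim, img, addr in scored_matches if sim == best]
```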
In the embodiment of the invention, by receiving an address query request of a first terminal carrying a first image of a first object to be addressed and receiving and storing a first corresponding relationship between a second image and a second address which are uploaded by at least one second terminal and are correlated with each other in advance, a second target image matched with the first image in the address query request and a second target address matched with the second target image can be obtained in the first corresponding relationship, so that the first image of the first object to be addressed can be used for finding the address of the first object, namely the second target address, and the second target image and the second target address are sent to the first terminal, thereby improving the addressing efficiency of the first object and reducing the time consumption for addressing; the accuracy of image matching is high, so that the addressing accuracy is improved; in the addressing process, the first terminal only needs to upload one image containing the first object to be addressed, so that the addressing purpose can be achieved, and the addressing difficulty is greatly reduced.
In scene 1 (finding a geographic location such as a building, a signpost, or a destination), the first terminal only needs to upload pictures of the objects to be found to the cloud album platform; the platform's server can automatically remove personal information (such as people appearing in the pictures), keeping only the image of the building, signpost, or destination, so that no personal information of the user is leaked. A user who has a picture of a destination but does not know its specific address can then locate the destination quickly by this method. For example, a user browsing travel posts sees a photo of a small scenic spot published by another user, but the spot's address cannot be located on a map; by simply downloading the photo and uploading it to the server of the embodiment of the invention, the user obtains the accurate address of the spot and can use it for navigation and other operations.
In scene 2 (finding the lost address of an article in order to retrieve it), the first terminal only needs to upload a photo of the lost article to the server; the server matches the photo against the pictures in the cloud album, so the lost address of the article can be found accurately and efficiently. The owner can then quickly and accurately retrieve the article using the lost address, or by contacting the user who uploaded the picture of the article, and the risk of false claims is avoided.
It should be noted that, the first terminal may also upload the object image and the object address associated with each other to the server, so that the server updates the first corresponding relationship stored locally.
Thus, the at least one second terminal may include the first terminal; note, however, that in this embodiment the second target terminal that uploaded the mutually associated second target image and second target address is different from the first terminal.
Optionally, the address query request further includes: a first address;
that is, when the first terminal uploads the first image, the first address may also be uploaded together.
In one embodiment, the first address is address information generated according to a location where the first terminal is located, or in another embodiment, the first address is address information generated according to historical locations of the first terminal;
for example, in the above example 1, when the address query request is triggered, the mobile phone may not only carry the picture that the user triggered the upload to the request, but also carry the location information where the mobile phone is currently located when the picture is uploaded to the request.
In one embodiment, the first address may be an administrative region (e.g., a city, or a city and a region, etc.) generated according to a location where the first terminal is located.
For another example, in the above example 2, when the address query request is triggered, the mobile phone may not only carry the picture uploaded by the user triggered to the request, but also generate the first address to carry to the request according to the historical location information of the mobile phone within the date that the wallet is lost.
In one embodiment, the first address may include a geographic route generated from the historical locations of the first terminal (e.g., Tiananmen station to Quhui station on Beijing Subway Line 1), and/or the surrounding geographic area where that route lies, and/or the location points that appear most frequently among the historical locations along the route (e.g., Quhui station, the national trade station), and/or a precise address associated with the first image and uploaded separately by the user when the address query request is triggered, such as coffee shop M at that address.
Optionally, in an embodiment, in step 102, a second correspondence may be identified in a first correspondence between a second image and a second address stored in advance, where the second address in the second correspondence is within a first geographic range corresponding to the first address; then, in the second corresponding relation, a second target image matched with the first image and a second target address matched with the second target image are obtained.
It should be noted that the second correspondence of this embodiment, like the third and fourth correspondences defined in subsequent embodiments, is a correspondence screened out of the first correspondence under particular conditions; the second, third, and fourth correspondences are therefore all correspondences between images and addresses.
In the embodiment of the present invention, to improve addressing efficiency and reduce addressing time, the first address may be used to screen the first correspondence down to the second correspondence. Specifically, each second address is the actual address of an object — unique address information rather than a geographic range — whereas, as described in the embodiments above, the first address may correspond to a geographic area (or a geographic route); the first geographic range corresponding to the first address can therefore be obtained.
When the first address is address information generated according to the location of the first terminal, the first geographic range corresponding to the first address can be an administrative region corresponding to the location;
when the first address is address information generated according to the historical positioning of the first terminal, the first geographic range may be at least one of the geographic route and a surrounding geographic area where the geographic route is located.
Then, from the first corresponding relationship, identifying a second address belonging to a second address in the first geographic range and identifying a second image corresponding to the second address in the first geographic range to form the second corresponding relationship; then, a second target image matched with the first image and a second target address matched with the second target image are obtained in the second corresponding relation, so that the addressing range is reduced from the large range of the first corresponding relation to the second corresponding relation in the first geographic range corresponding to the first address, and the addressing efficiency is greatly improved.
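Screening the first correspondence down to the second correspondence by the first geographic range can be sketched as a filter over the stored table. The `in_range` predicate (same city, administrative region, route area, etc.) is an assumption standing in for whatever geographic containment test an implementation would use:

```python
def screen_by_range(correspondence, in_range):
    # Keep only entries of the first correspondence whose second
    # address lies inside the first geographic range; the result plays
    # the role of the second correspondence in the text.
    return {img: (feat, addr)
            for img, (feat, addr) in correspondence.items()
            if in_range(addr)}
```

Image matching then runs only inside this smaller set, which is what shrinks the addressing range and improves efficiency.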
For example, in the above example 1, when a general user needs an address of a certain building, the user is already in a city where the building is located, for example, the user travels to the city and needs to address an address of a small scenic spot of the building a, then the method according to the embodiment of the present invention may query an actual physical address matching the building a in the second corresponding relationship in the city.
In the above example 2, within the geographic range of the route the user traveled on the day wallet B was lost, the second correspondence built from finders' uploads can be queried for a pickup address of wallet B, i.e., its lost address.
Optionally, in the above embodiment, when the step 102 is executed, the following method may also be implemented:
if a second target image matched with the first image is not obtained in the second corresponding relation, the first geographic range is expanded according to a preset condition, and a second geographic range is generated;
that is, after the second corresponding relationship is determined, if no second target image matching the first image is found within it, the query range may be appropriately enlarged. In other words, the first geographic range corresponding to the first address is enlarged according to the preset condition, for example by increasing the addressing radius (enlarging the first geographic range by a preset radius), or by enlarging the administrative area (for example, from Tiananmen West station on Beijing Subway Line 1 to the whole of Beijing Subway Line 1, or to Beijing city, or to the Beijing region, and so on).
Identifying, in the first correspondence, a third correspondence, wherein a second address in the third correspondence is within the second geographic range;
the principle of this step is similar to that of identifying the second corresponding relationship from the first corresponding relationship by using the first geographic range, and the description is omitted here.
And acquiring a second target image matched with the first image and a second target address matched with the second target image in the third corresponding relation.
The principle of this step is similar to the principle of using the second corresponding relationship to obtain the second target image and the second target address in the above embodiment, and details are not repeated here.
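The expand-and-retry fallback described above can be sketched as follows (the equality check on `query_id` stands in for real image matching, and the growth factor, radius cap, and all names are illustrative assumptions):

```python
def find_with_expansion(first_corr, query_id, center, radius_deg, max_radius_deg, grow=2.0):
    """If no match is found within the current geographic range, enlarge the
    range by a preset condition (here, a growth factor on the addressing
    radius) and query again over the first corresponding relationship."""
    while radius_deg <= max_radius_deg:
        in_range = [
            (image_id, address)
            for image_id, address, lat, lon in first_corr
            if abs(lat - center[0]) <= radius_deg and abs(lon - center[1]) <= radius_deg
        ]
        for image_id, address in in_range:
            if image_id == query_id:   # stand-in for image matching
                return address
        radius_deg *= grow             # preset condition: enlarge the range
    return None                        # no match even in the widest range
```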
Optionally, in another embodiment, when performing step 102, a first object type of the first image may be identified; a fourth corresponding relationship is identified in the first corresponding relationship between the pre-stored second images and second addresses, wherein the second object type of each second image in the fourth corresponding relationship matches the first object type; and in the fourth corresponding relationship, a second target image matched with the first image and a second target address matched with the second target image are obtained.
Specifically, since the range of the first corresponding relationship is large, matching against it entry by entry is inefficient. Therefore, in order to improve addressing efficiency and reduce addressing time, in the embodiment of the present invention the addressing range may be narrowed by the type of the object in the image, which also improves addressing accuracy: the addressing range is narrowed from the first corresponding relationship to the fourth corresponding relationship.
For example, in example 1 above the first object type is a building, and in example 2 the first object type is a wallet. When the server stores the first corresponding relationship in advance, it may store a second object type in association with each second image in the first corresponding relationship. To narrow the addressing range, the second images whose second object type matches the first object type (for example, image K), together with the second addresses corresponding to those images, may then be screened out of the first corresponding relationship to form the fourth corresponding relationship. Here, matching of the two object types includes, but is not limited to, one or more of the following: the types have a broader-narrower relationship (for example, perfume and XX-brand perfume); the types are identical (for example, both are wallets); or the semantic similarity of the types is greater than a preset threshold. A second target image and a second target address are then acquired from the fourth corresponding relationship.
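The type-based screening can be sketched as below. The hypernym table is a hypothetical stand-in for the broader-narrower relationship (a real system might use an ontology or a semantic-similarity model for the threshold case); all names are assumptions:

```python
# Hypothetical broader/narrower type pairs, e.g. "xx-brand perfume" is a kind of "perfume".
HYPERNYMS = {"xx-brand perfume": "perfume"}

def types_match(first_type: str, second_type: str) -> bool:
    """Exact match, or a broader-narrower relationship via the hypernym table."""
    if first_type == second_type:
        return True
    return HYPERNYMS.get(second_type) == first_type or HYPERNYMS.get(first_type) == second_type

def screen_fourth_correspondence(first_corr, first_type):
    """Keep only (image, address) pairs whose stored second object type matches."""
    return [(img, addr) for img, obj_type, addr in first_corr
            if types_match(first_type, obj_type)]
```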
Optionally, when the address query request further includes a first address, and the first address is address information generated according to the location of the first terminal, after step 103 the method of the embodiment of the present invention may further include:
if first preset information indicating that the second target image is confirmed (for example, information indicating that the building in the second target image is the building A that the user wants to find) is received from the first terminal, the server may generate, according to the second target address corresponding to the confirmed second target image and the location of the first terminal, at least one navigation route from that location to the second target address, and send the at least one navigation route to the first terminal;
the local map software of the first terminal may then present the at least one delivered navigation route, from which the user may select one to use.
Optionally, after the user arrives at the second target address corresponding to building A, the user may further take a picture of building A and upload the picture (carrying the location information of building A) to the server; the server then updates the locally stored first corresponding relationship with the picture, so as to help more people find the destination address.
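The crowd-sourced update above can be sketched as follows (names are assumptions; a real server would also associate the uploader's contact information with the pair, as described later):

```python
def update_first_correspondence(first_corr, image_id, address):
    """Add a newly uploaded, mutually associated (image, address) pair so that
    later address queries can match against it; exact duplicates are skipped."""
    pair = (image_id, address)
    if pair not in first_corr:
        first_corr.append(pair)
    return first_corr
```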
The embodiment of the invention can thus realize the function of finding a destination through a photo. By searching and matching according to the picture and the address information, the destination address can be found efficiently and accurately, and with only a simple operation the user can obtain location information that map navigation cannot find (applicable, for example, when the entered location name is not accurate enough for navigation to locate it, or when the location has no name at all), together with a navigation route. Compared with the prior art, this is a considerable improvement.
Optionally, in another embodiment, when the address query request further includes a first address, and when the first address is address information generated according to historical positioning of the first terminal, after step 103, the method according to an embodiment of the present invention may further include:
receiving second preset information sent by the first terminal and indicating that the second target image is confirmed (for example, indicating that the second target image shows the wallet B lost by the user of the first terminal);
and sending first user contact information corresponding to the first terminal to a second target terminal, and/or sending second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading a second target image and a second target address which are associated with each other.
Specifically, when the first terminal uploads the address query request, the contact information of the first-terminal user (i.e., the first user contact information, including, for example, a name and a phone number) may be carried in the address query request, or uploaded to the server separately. If the server stores the first user contact information, the server may then send it to the second target terminal that, in example 2 above, picked up wallet B and uploaded the photograph of wallet B.
In addition, when a second terminal uploads a mutually associated object image and object address, it may also report the contact information of its user to the server, so that each group in the first corresponding relationship stored on the server side may be associated with corresponding user contact information. In this embodiment, the server may obtain the second user contact information associated with the second target image and the second target address (that is, the personal contact information, for example the name and phone number, of the user who picked up wallet B and uploaded the photo), and then send the second user contact information to the first terminal.
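A minimal sketch of the mutual contact exchange described above (the `send` callback abstracts the delivery channel, and all names are illustrative assumptions):

```python
def exchange_contacts(first_contact, second_contact, send):
    """On confirmation of the second target image, push the first user's
    contact info to the second target terminal and/or the second user's
    contact info to the first terminal, whichever the server has stored."""
    delivered = []
    if first_contact is not None:
        send("second_target_terminal", first_contact)
        delivered.append("second_target_terminal")
    if second_contact is not None:
        send("first_terminal", second_contact)
        delivered.append("first_terminal")
    return delivered
```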
The user who lost wallet B may then actively contact the user who picked it up, and/or the user who picked up wallet B may actively contact its owner, thereby achieving the goal of retrieving the lost item.
The embodiment of the invention can thus realize the function of finding lost articles through photos. By searching and matching according to the picture and the location (address) information, articles lost by the user can be found efficiently, without the user having to leave home. The user who finds a lost article does not need to go to a police station or similar place, and can conveniently and quickly publish the information. The probability of retrieving lost articles is greatly improved, and the user's time cost is reduced.
Compared with the prior art, this is a considerable improvement.
The method of the embodiment of the present invention thus provides a lost-and-found platform based on a cloud photo album: a user can look for lost articles by uploading pictures of them to the cloud album platform, and the platform automatically searches for matches, so that the user can look for lost articles without leaving home and is no longer at a loss after losing something; lost articles, or their owners, can be found more quickly and accurately.
Optionally, in step 103, if multiple sets of second target images and second target addresses are sent to the first terminal, only one second target image is confirmed in the first preset information or the second preset information. Correspondingly, when the server in the above embodiments obtains the user contact information of the corresponding terminal, it may obtain the second user contact information corresponding to the second target image confirmed in the first preset information or the second preset information.
Optionally, after step 103, when the first terminal receives the second target image and the second target address, if the first terminal has a display screen, the second target image and the first image may be displayed in different areas of the display screen, so that the user can conveniently compare whether the object in the second target image is the first object to be addressed; and if the first terminal is a folding-screen terminal, it may automatically unfold the folding screen and display the second target image and the first image on the two screens, further facilitating the comparison.
Optionally, after step 103, the method according to the embodiment of the present invention further includes:
if the server receives third preset information sent by the first terminal and indicating that the second target image is rejected (for example, indicating that the wallet in the second target image is not the wallet B lost by the user of the first terminal, or that the building in it is not the building A the user wants to find), the server may filter the second target image and the second target address of step 103 out of the first corresponding relationship to generate a fourth corresponding relationship; obtain, according to the address query request and this fourth corresponding relationship, a new second target image matched with the first image and a new second target address matched with that image; and then send the second target image and the second target address to the first terminal.
Therefore, when the queried second target image is not the first object the user is addressing, the method of the embodiment of the present invention can delete the corresponding relationship between the second target image and the second target address from the first corresponding relationship before the next round of query, thereby avoiding meaningless queries and improving addressing efficiency and accuracy.
Optionally, after step 103, the method according to the embodiment of the present invention further includes:
if the server receives third preset information sent by the first terminal and indicating that the second target image is rejected, then after a preset time interval the second target image and second target address of step 103 are filtered out of the first corresponding relationship to generate a fifth corresponding relationship (because of the preset interval, other second terminals may have uploaded new object images and object addresses in the meantime, so the first corresponding relationship at this step may be updated compared with that at step 102). A second target image matched with the first image and a second target address matched with the second target image are then acquired according to the address query request and the fifth corresponding relationship, and sent to the first terminal.
In this way, when the queried second target image is not the first object the user is addressing, the method of the embodiment of the present invention can wait for the preset time period before executing the next round of query, so that the scope of the first corresponding relationship at the next round may have grown, improving the addressing hit rate; meanwhile, the rejected corresponding relationship is deleted from the first corresponding relationship for the next round of query, avoiding meaningless queries and improving addressing efficiency and accuracy.
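The rejection path above can be sketched as follows. The `matches` predicate stands in for image matching, the delay is omitted (a real server would wait the preset interval before re-querying), and all names are assumptions:

```python
def requery_after_rejection(first_corr, rejected, matches):
    """Filter the rejected (image, address) pairs out of the (possibly
    updated) first corresponding relationship, then run the query again
    over what remains and return the next candidate, or None."""
    filtered = [pair for pair in first_corr if pair not in rejected]
    for pair in filtered:
        if matches(pair):
            return pair
    return None
```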
Optionally, the method of the above embodiments of the present invention may be applied not only to the scenarios listed above (addressing a physical building, landmark or other destination, and finding the address where a lost article was lost so as to retrieve it), but also to social scenarios.
For example, in a social scenario, suppose user A wants to play badminton but has no friend nearby to accompany him, and user B is in the same situation. User A and user B each upload to the server a picture of a badminton shuttlecock together with playing-field information (i.e., address information). The server may match the address information and the picture information; when both match, the contact information of user B may be delivered to the terminal of user A, and/or the contact information of user A may be delivered to the terminal of user B, so that after obtaining each other's contact details the two can communicate and team up to play badminton.
Other embodiments of the social scene are similar to the principles of the above-listed embodiments, and are not repeated here.
Referring to FIG. 3, a block diagram of a server of one embodiment of the present invention is shown. The server of the embodiment of the invention can realize the details of the addressing method in the embodiment and achieve the same effect. The server shown in fig. 3 includes:
a first receiving module 31, configured to receive an address query request for a first object sent by a first terminal, where the address query request includes a first image of the first object;
an obtaining module 32, configured to obtain, according to the address query request and a first corresponding relationship between a pre-stored second image and a second address, a second target image matched with the first image and a second target address matched with the second target image, where the first corresponding relationship is generated according to a correlated object image and an object address uploaded by at least one second terminal;
a response module 33, configured to send the second target image and the second target address to the first terminal in response to the address query request.
Optionally, the address query request further includes: a first address, where the first address is address information generated according to a location where the first terminal is located, or the first address is address information generated according to a historical location of the first terminal;
the acquisition module 32 includes:
the first identification submodule is used for identifying a second corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein the second address in the second corresponding relation is within a first geographic range corresponding to the first address;
and the first acquisition sub-module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image in the second corresponding relation.
Optionally, the obtaining module 32 further includes:
the expansion sub-module is used for expanding the first geographic range according to a preset condition to generate a second geographic range if a second target image matched with the first image is not obtained in the second corresponding relation;
a second identification submodule, configured to identify a third corresponding relationship in the first corresponding relationship, where a second address in the third corresponding relationship is within the second geographic range;
and the second obtaining submodule is used for obtaining a second target image matched with the first image and a second target address matched with the second target image in the third corresponding relation.
Optionally, the obtaining module 32 further includes:
a third identifying sub-module for identifying a first object type of the first image;
the fourth identification submodule is used for identifying a fourth corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein a second object type of the second image in the fourth corresponding relation is matched with the first object type;
and the third obtaining submodule is used for obtaining a second target image matched with the first image and a second target address matched with the second target image in the fourth corresponding relation.
Optionally, the address query request further includes: a first address, when the first address is address information generated according to a historical location of the first terminal, the server further including:
the second receiving module is used for receiving preset information which is sent by the first terminal and used for confirming the second target image;
the sending module is used for sending first user contact information corresponding to the first terminal to a second target terminal and/or sending second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading a second target image and a second target address which are related to each other.
The server provided by the embodiment of the present invention can implement each process implemented by the server in the above method embodiments, and is not described herein again to avoid repetition.
Through the above modules, the server receives from a first terminal an address query request carrying a first image of the first object to be addressed, having previously received and stored the first corresponding relationship between mutually associated second images and second addresses uploaded by at least one second terminal. It can therefore obtain, within the first corresponding relationship, a second target image matched with the first image in the address query request and a second target address matched with the second target image; in this way, the first image of the first object to be addressed can be used to find the address of the first object, namely the second target address, which is sent together with the second target image to the first terminal. This improves the addressing efficiency of the first object and reduces the time consumed by addressing; the high accuracy of image matching improves addressing accuracy; and in the addressing process the first terminal only needs to upload one image containing the first object to be addressed, which greatly reduces the difficulty of addressing.
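The server's query path described above can be summarized in an end-to-end sketch. The scalar features and the `similarity` function are stand-ins for real image matching, and the threshold and all names are illustrative assumptions:

```python
def handle_address_query(first_corr, query_feature, similarity, threshold=0.8):
    """Match the first image's feature against every stored second image and
    return the best (second target image, second target address) pair whose
    similarity reaches a preset threshold, or None when nothing qualifies."""
    best, best_score = None, threshold
    for image_id, feature, address in first_corr:
        score = similarity(query_feature, feature)
        if score >= best_score:
            best, best_score = (image_id, address), score
    return best
```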
Fig. 4 is a schematic diagram of a hardware structure of a terminal device for implementing the functions of the terminal side in the above embodiments of the present invention.
The terminal device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, a processor 410, and a power supply 411. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 4 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The radio frequency unit 401 is configured to send an address query request for a first object, where the address query request includes a first image of the first object to a server; and/or, the radio frequency unit 401 is further configured to upload the correlated object image and the object address to the server;
the server is used for acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first corresponding relation between a pre-stored second image and a second address; the first corresponding relation is generated according to the object image and the object address which are uploaded by the at least one second terminal device and are related to each other.
The input unit 404 is configured to receive the second target image and the second target address.
In the embodiment of the present invention, a terminal device sends an address query request carrying a first image of a first object to be addressed, so that a second target image matched with the first image in the address query request and a second target address matched with the second target image, which are acquired according to a first corresponding relationship, can be received from a server, where the first corresponding relationship is generated according to at least one object image and object address uploaded by a second terminal device and associated with each other, so that the terminal device in the embodiment of the present invention can find an address of the first object, that is, a second target address, by using the first image of the first object to be addressed, thereby improving the addressing efficiency of the first object and reducing the time consumption for addressing; the accuracy of image matching is high, so that the addressing accuracy is improved; in the addressing process, the terminal equipment only needs to upload one image containing the first object to be addressed, so that the addressing purpose can be achieved, and the addressing difficulty is greatly reduced.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. Further, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 402, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the terminal apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of a still picture or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406, stored in the memory 409 (or other storage medium), or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401, and output accordingly.
The terminal device 400 further comprises at least one sensor 405, such as light sensors, motion sensors and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the terminal apparatus 400 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. Touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 410, receives a command from the processor 410, and executes the command. In addition, the touch panel 4071 can be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 4, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 408 is an interface for connecting an external device to the terminal apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 400 or may be used to transmit data between the terminal apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the terminal device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The terminal device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 400 includes some functional modules that are not shown, and are not described in detail herein.
An embodiment of the present invention further provides a terminal device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements the processes of the foregoing addressing method embodiment and achieves the same technical effects; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the processes of the foregoing addressing method embodiment and achieves the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An addressing method applied to a server, the method comprising:
receiving an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object;
acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first corresponding relation between a pre-stored second image and a second address, wherein the first corresponding relation is generated according to a mutually-associated object image and an object address uploaded by at least one second terminal;
and responding to the address query request, and sending the second target image and the second target address to the first terminal.
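The server-side flow of claim 1 can be sketched as a lookup over a stored image-to-address correspondence table. The sketch below is illustrative only, not the patented implementation: the `Correspondence` type, the `matches` predicate, and the string stand-ins for image features are all assumptions; a real system would compare extracted image features rather than strings.

```python
from dataclasses import dataclass

@dataclass
class Correspondence:
    image: str    # stand-in for stored image features of a second image
    address: str  # stand-in for the associated second address

def handle_address_query(first_image, correspondences, matches):
    """Return the stored (image, address) pair whose image matches the
    queried first image, or None when nothing in the table matches."""
    for c in correspondences:
        if matches(first_image, c.image):
            return c.image, c.address
    return None

# Illustrative usage: exact string equality stands in for real image matching.
stored = [Correspondence("img_shop", "1 Main St"),
          Correspondence("img_cafe", "2 Oak Ave")]
result = handle_address_query("img_cafe", stored, lambda a, b: a == b)
```

On a match, the returned pair corresponds to the second target image and second target address sent back to the first terminal.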
2. The method of claim 1, wherein the address query request further comprises: a first address, where the first address is address information generated according to a location where the first terminal is located, or the first address is address information generated according to a historical location of the first terminal;
the acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first corresponding relation between a pre-stored second image and the second address comprises:
identifying a second corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein the second address in the second corresponding relation is within a first geographical range corresponding to the first address;
and in the second corresponding relation, acquiring a second target image matched with the first image and a second target address matched with the second target image.
3. The method according to claim 2, wherein the obtaining a second target image matching the first image and a second target address matching the second target image according to the address query request and a first correspondence between a pre-stored second image and a second address further comprises:
if a second target image matched with the first image is not obtained in the second corresponding relation, the first geographic range is expanded according to a preset condition, and a second geographic range is generated;
identifying, in the first correspondence, a third correspondence, wherein a second address in the third correspondence is within the second geographic range;
and acquiring a second target image matched with the first image and a second target address matched with the second target image in the third corresponding relation.
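The expand-and-retry step of claims 2 and 3 amounts to searching progressively larger geographic ranges around the first address until a stored image matches. The following is a minimal sketch under stated assumptions: the `in_range` predicate, the one-dimensional numeric addresses, and the explicit list of candidate radii are all hypothetical simplifications of the "preset condition" for range expansion.

```python
def query_with_expanding_range(first_image, first_addr, correspondences,
                               matches, in_range, ranges):
    """Try successively larger geographic ranges around first_addr until a
    stored image matches, mirroring the expand-and-retry step."""
    for radius in ranges:
        # The "second/third correspondence": the subset of entries whose
        # second address falls within the current geographic range.
        subset = [c for c in correspondences
                  if in_range(first_addr, c[1], radius)]
        for img, addr in subset:
            if matches(first_image, img):
                return img, addr, radius
    return None

# Illustrative usage: addresses as 1-D coordinates; the match at distance
# 3.0 is only found once the range expands from 1.0 to 5.0.
stored = [("img_far", 3.0), ("img_other", 10.0)]
hit = query_with_expanding_range(
    "img_far", 0.0, stored,
    matches=lambda a, b: a == b,
    in_range=lambda origin, addr, r: abs(addr - origin) <= r,
    ranges=[1.0, 5.0])
```

Restricting the match to a range before comparing images keeps the candidate set small when the correspondence table is large.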
4. The method according to claim 1, wherein the obtaining a second target image matching the first image and a second target address matching the second target image according to the address query request and a first correspondence between a pre-stored second image and a second address further comprises:
identifying a first object type of the first image;
identifying a fourth corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein a second object type of the second image in the fourth corresponding relation is matched with the first object type;
and in the fourth corresponding relation, acquiring a second target image matched with the first image and a second target address matched with the second target image.
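Claim 4's object-type filtering can be sketched the same way: classify the queried image, keep only stored entries of the same type (the "fourth correspondence"), then match within that subset. The `classify` function below is a hypothetical stand-in; a real system would run an object classifier on the image itself.

```python
def query_by_object_type(first_image, correspondences, classify, matches):
    """Filter stored correspondences down to entries whose object type
    equals the queried image's type, then match within that subset."""
    first_type = classify(first_image)
    subset = [(img, addr) for img, addr in correspondences
              if classify(img) == first_type]
    for img, addr in subset:
        if matches(first_image, img):
            return img, addr
    return None

# Illustrative usage: the object "type" is a made-up label prefix.
stored = [("cafe:blue_door", "2 Oak Ave"), ("shop:red_sign", "1 Main St")]
found = query_by_object_type("shop:red_sign", stored,
                             classify=lambda s: s.split(":")[0],
                             matches=lambda a, b: a == b)
```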
5. The method of claim 1, wherein the address query request further comprises: a first address; when the first address is address information generated according to a historical location of the first terminal, after the sending the second target image and the second target address to the first terminal in response to the address query request, the method further comprises:
receiving preset information which is sent by the first terminal and used for confirming the second target image;
and sending first user contact information corresponding to the first terminal to a second target terminal, and/or sending second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading a second target image and a second target address which are associated with each other.
6. A server, characterized in that the server comprises:
a first receiving module, used for receiving an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object;
the acquisition module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first corresponding relation between a pre-stored second image and a second address, wherein the first corresponding relation is generated according to a mutually-associated object image and object address uploaded by at least one second terminal;
and the response module is used for responding to the address query request and sending the second target image and the second target address to the first terminal.
7. The server according to claim 6, wherein the address query request further comprises: a first address, where the first address is address information generated according to a location where the first terminal is located, or the first address is address information generated according to a historical location of the first terminal;
the acquisition module includes:
the first identification submodule is used for identifying a second corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein the second address in the second corresponding relation is within a first geographic range corresponding to the first address;
and the first acquisition sub-module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image in the second corresponding relation.
8. The server of claim 7, wherein the obtaining module further comprises:
the expansion sub-module is used for expanding the first geographic range according to a preset condition to generate a second geographic range if a second target image matched with the first image is not obtained in the second corresponding relation;
a second identification submodule, configured to identify a third corresponding relationship in the first corresponding relationship, where a second address in the third corresponding relationship is within the second geographic range;
and the second obtaining submodule is used for obtaining a second target image matched with the first image and a second target address matched with the second target image in the third corresponding relation.
9. The server according to claim 6, wherein the obtaining module further comprises:
a third identifying sub-module for identifying a first object type of the first image;
the fourth identification submodule is used for identifying a fourth corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein a second object type of the second image in the fourth corresponding relation is matched with the first object type;
and the third obtaining submodule is used for obtaining a second target image matched with the first image and a second target address matched with the second target image in the fourth corresponding relation.
10. The server according to claim 6, wherein the address query request further comprises: a first address, when the first address is address information generated according to a historical location of the first terminal, the server further including:
the second receiving module is used for receiving preset information which is sent by the first terminal and used for confirming the second target image;
the sending module is used for sending first user contact information corresponding to the first terminal to a second target terminal and/or sending second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading a second target image and a second target address which are related to each other.
CN201911268859.XA 2019-12-11 2019-12-11 Addressing method and server Active CN111008297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911268859.XA CN111008297B (en) 2019-12-11 2019-12-11 Addressing method and server

Publications (2)

Publication Number Publication Date
CN111008297A true CN111008297A (en) 2020-04-14
CN111008297B CN111008297B (en) 2023-12-15

Family

ID=70114540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911268859.XA Active CN111008297B (en) 2019-12-11 2019-12-11 Addressing method and server

Country Status (1)

Country Link
CN (1) CN111008297B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112414427A (en) * 2020-10-27 2021-02-26 维沃移动通信有限公司 Navigation information display method and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100260426A1 (en) * 2009-04-14 2010-10-14 Huang Joseph Jyh-Huei Systems and methods for image recognition using mobile devices
US20150227557A1 (en) * 2014-02-10 2015-08-13 Geenee Ug Systems and methods for image-feature-based recognition
CN107084736A (en) * 2017-04-27 2017-08-22 维沃移动通信有限公司 A kind of air navigation aid and mobile terminal
CN107315755A (en) * 2016-04-27 2017-11-03 杭州海康威视数字技术股份有限公司 The orbit generation method and device of query object
CN107533746A (en) * 2015-02-28 2018-01-02 华为技术有限公司 Information protecting method, server and terminal
CN108023924A (en) * 2016-10-31 2018-05-11 财付通支付科技有限公司 A kind of information processing method, terminal and server
CN108256100A (en) * 2018-01-31 2018-07-06 维沃移动通信有限公司 A kind of information search method, mobile terminal and Cloud Server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant