CN109752001B - Navigation system, method and device

Navigation system, method and device

Info

Publication number
CN109752001B
Authority
CN
China
Prior art keywords
user
image
identity
target
navigation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711081993.XA
Other languages
Chinese (zh)
Other versions
CN109752001A (en)
Inventor
谭傅伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201711081993.XA
Publication of CN109752001A
Application granted
Publication of CN109752001B
Legal status: Active
Anticipated expiration

Landscapes

  • Navigation (AREA)

Abstract

The embodiment of the application discloses a navigation system, a navigation method and a navigation device. One embodiment of the system comprises: a cloud server for receiving a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user; determining whether the user is currently located in a target location area; and, in response to determining that the user is currently located in the target location area, sending the navigation request to a server corresponding to the target location area; and the server, for selecting user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of the user; receiving an image currently acquired by a camera device located in the target location area; determining the current position of the user in the target location area according to the image and the selected appearance characteristics; and generating navigation data according to the current position and the destination position, and sending the navigation data to the user. This embodiment helps to improve the accuracy of positioning and navigation.

Description

Navigation system, method and device
Technical Field
The embodiments of the application relate to the field of computer technology, in particular to the field of navigation technology, and more particularly to a navigation system, method and device.
Background
Navigation is the process of guiding a vehicle or device along a given course from one point to another. Navigation is generally divided into two categories: (1) autonomous navigation, in which positioning and guidance are performed by equipment carried on the aircraft or ship itself, including inertial navigation, Doppler navigation, astronomical navigation and the like; (2) non-autonomous navigation, in which vehicles such as aircraft, ships and automobiles navigate in cooperation with related ground or space equipment, including radio navigation and satellite navigation.
Navigation maps, i.e., digital maps, are maps that are stored and consulted digitally using computer technology. They are mainly used for path planning and for realizing navigation functions.
Disclosure of Invention
The embodiment of the application provides a navigation system, a navigation method and a navigation device.
In a first aspect, an embodiment of the present application provides a navigation system, comprising: a cloud server, a server and a camera device; the cloud server is used for receiving a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user; determining whether the user is currently located in a target location area; and, in response to determining that the user is currently located in the target location area, sending the navigation request to a server corresponding to the target location area; the server is used for selecting user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of the user; receiving an image currently acquired by a camera device located in the target location area; determining the current position of the user in the target location area according to the image and the selected appearance characteristics; and generating navigation data according to the current position and the destination position, and sending the navigation data to the user.
In some embodiments, the server is further configured to: receiving a face image and a human-shaped image which are acquired by a camera device positioned at an entrance position of a target position area; and extracting the face features and the appearance features, and sending the face features to a cloud server.
In some embodiments, the cloud server is further configured to: matching the face features with a pre-stored sample face feature set; sending the sample face features and the identity of a target user to a server, wherein the target user is a user corresponding to the sample face features matched with the face features in the sample face feature set; and generating a corresponding relation between the identity of the target user and the information of the target position area corresponding to the server.
In some embodiments, determining whether the user is currently located in the target location area comprises: determining whether information of a target position area corresponding to the identity of the user exists at present; and if so, determining that the user is currently located in the target position area.
In some embodiments, the server is further configured to: obtaining appearance characteristics of a user corresponding to the face characteristics matched with the sample face characteristics of the target user; and generating a user information set according to the identity of the target user and the obtained appearance characteristics.
In some embodiments, the user information in the set of user information further comprises sample facial features of the user; and the server is further configured to: receiving a face image collected by a camera device positioned at an exit position of a target position area, and matching the face image with a user information set; removing the user information matched with the face image in the user information set; and sending the identity identifier in the removed user information to a cloud server.
In some embodiments, the cloud server is further configured to clear information of the target location area corresponding to the identity sent by the server.
In a second aspect, an embodiment of the present application provides a navigation method, the method comprising: receiving a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user; selecting user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of the user; receiving an image currently acquired by a camera device located in a target location area; determining the current position of the user in the target location area according to the image and the selected appearance characteristics; and generating navigation data according to the current position and the destination position, and sending the navigation data to the user.
In some embodiments, the method further comprises: receiving a face image and a human-shaped image which are collected by a camera device positioned at an entrance position of a target position area, and extracting face characteristics and appearance characteristics; acquiring an identity of a user corresponding to the face features; and generating a user information set according to the acquired identity and appearance characteristics.
In some embodiments, the user information in the set of user information further comprises sample facial features of the user; and the method further comprises: receiving a face image collected by a camera device positioned at an exit position of a target position area, and matching the face image with a user information set; and removing the user information matched with the face image in the user information set.
In a third aspect, an embodiment of the present application provides a navigation device, comprising: a first receiving unit configured to receive a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user; a selection unit configured to select user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of the user; a second receiving unit configured to receive an image currently acquired by a camera device located in a target location area; a determining unit configured to determine the current position of the user in the target location area according to the image and the selected appearance characteristics; and a generating unit configured to generate navigation data according to the current position and the destination position and to send the navigation data to the user.
In some embodiments, the apparatus further comprises: the fourth receiving unit is configured to receive a face image and a human-shaped image which are acquired by the camera device and located at the entrance position of the target position area, and extract face features and appearance features; the acquiring unit is configured to acquire the identity of a user corresponding to the face features; and the information set generating unit is configured to generate a user information set according to the acquired identity and appearance characteristics.
In some embodiments, the user information in the set of user information further comprises sample facial features of the user; and the apparatus further comprises: the third receiving unit is configured to receive a face image acquired by the camera device at the exit position of the target position area and match the face image with the user information set; and the clearing unit is configured for clearing the user information matched with the face image in the user information set.
In a fourth aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors; a camera device for capturing images; and a storage means storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method as described in any of the embodiments of the second aspect above.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method as described in any of the embodiments of the second aspect above.
According to the navigation system, the navigation method and the navigation device provided by the embodiments of the application, after the cloud server receives the navigation request sent by the user, it can first determine whether the user is located in the target location area. Then, if it is determined that the user is located in the target location area, the cloud server may send the navigation request to a server corresponding to that area. The server may first select, according to the identity in the navigation request, the user information corresponding to the identity from a pre-stored user information set, where the user information may include the identity and appearance characteristics of the user; then, the server can receive the image currently acquired by the camera device located in the target location area; next, the current position of the user in the target location area can be determined according to the image and the selected appearance characteristics; finally, navigation data can be generated according to the current position and the destination position and sent to the user. This helps to improve the accuracy of positioning and navigation.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a timing diagram of one embodiment of a navigation system according to the present application;
FIG. 3 is a timing diagram of yet another embodiment of a navigation system according to the present application;
FIG. 4 is a timing diagram of yet another embodiment of a navigation system according to the present application;
FIG. 5 is a flow diagram of one embodiment of a navigation method according to the present application;
FIG. 6 is a schematic structural diagram of one embodiment of a navigation device according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which the navigation system, the navigation method or the navigation apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminals 101, 102, networks 103, 106, a server 104, a cloud server 105, and image capture devices 107, 108. The network 103 is used to provide a medium of communication links between the terminals 101, 102, the server 104 and the cloud server 105. The network 106 is used to provide a medium for communication links between the server 104 and the camera devices 107, 108. The networks 103, 106 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminals 101, 102 to interact with the server 104, the cloud server 105, over the network 103, to receive or send messages, and the like. The terminals 101 and 102 may have various client applications installed thereon, such as a navigation application, a web browser application, a face recognition application, a shopping application, and the like.
The terminals 101, 102 may be various electronic devices having a display screen including, but not limited to, smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like.
The image capturing devices 107, 108 may be various devices for capturing images, such as a video camera, a still camera, and the like. The image capturing devices 107 and 108 can capture face images, human-shaped images, and the like. A human-shaped image may include not only the face of the user but also at least part of the user's body, such as the upper body or the whole body.
The server 104 may be a server that provides various services, such as an analysis server that analyzes and processes face images captured by the camera devices 107 and 108.
The cloud server 105 may also be a server that provides various services, such as a background server that provides support for the navigation applications displayed on the terminals 101, 102. The background server may analyze and otherwise process the navigation request transmitted by the terminal 101, 102, and may transmit a processing result (e.g., navigation route data corresponding to the navigation request) to the terminal 101, 102.
It should be noted that the navigation method provided by the embodiment of the present application is generally executed by the server 104, and accordingly, the navigation device is generally disposed in the server 104.
Note that when the cloud server 105 has the functions of the server 104, the system architecture 100 need not include the server 104.
It should be understood that the numbers of terminals, networks, servers, cloud servers, and cameras in fig. 1 are merely illustrative. There may be any number of terminals, networks, servers, cloud servers, and cameras, as required by the implementation.
With continued reference to FIG. 2, a timing diagram of one embodiment of a navigation system according to the present application is shown.
In this embodiment, the navigation system may include a cloud server, a server, and a camera device; the cloud server is used for receiving a navigation request sent by a user, wherein the navigation request may include a destination position and an identity of the user; determining whether the user is currently located in a target location area; and, in response to determining that the user is currently located in the target location area, sending the navigation request to a server corresponding to the target location area; the server is used for selecting user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information may include the identity and appearance characteristics of the user; receiving an image currently acquired by a camera device located in the target location area; determining the current position of the user in the target location area according to the image and the selected appearance characteristics; and generating navigation data according to the current position and the destination position, and sending the navigation data to the user.
As shown in fig. 2, in step 201, the cloud server receives a navigation request sent by a user.
In this embodiment, a cloud server (e.g., the cloud server 105 shown in fig. 1) may receive, through a wired or wireless connection, a navigation request sent by a user through a terminal (e.g., the terminals 101 and 102 shown in fig. 1). The navigation request may include the destination position and the identity of the user. Here, the destination position may be represented in various ways, such as the name of the destination location, a mailing address, or latitude and longitude coordinates. The identity of a user generally refers to an identifier that can uniquely identify the user. The identifier may comprise one or more characters such as numbers, letters, words and symbols. For example, the identity may include at least one of: a name, an identification number, a mobile phone number, a SID (System Identification Code), and the like.
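For illustration only, the navigation request described above can be thought of as a small structured payload. The following minimal Python sketch shows one possible representation; the field names and the use of JSON are assumptions made for this example and are not prescribed by this application.

    # Minimal sketch of a navigation request as described above.
    # Field names ("destination", "user_id") are illustrative assumptions.
    import json

    navigation_request = {
        "destination": {"name": "Store A, electronics counter",
                        "lat": 39.9042, "lng": 116.4074},
        "user_id": "13800000000",  # e.g. a mobile phone number used as the identity
    }

    # The terminal could send this to the cloud server, e.g. serialized as JSON.
    payload = json.dumps(navigation_request, ensure_ascii=False)
    print(payload)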
In step 202, the cloud server determines whether the user is currently located in the target location area.
In this embodiment, the cloud server may determine whether the user is currently located in the target location area in various ways. The target location area is not limited in this application; for example, it may be the location area where a certain store or shopping mall is located, or it may be a location area around the user in which camera devices are deployed, such as a road.
In some optional implementations of this embodiment, the cloud server may obtain the current location of the user through mobile phone positioning to determine whether the user is located in the target location area. Mobile phone positioning technologies may include positioning based on GPS (Global Positioning System), positioning based on the base stations of a mobile operator network, and positioning using WiFi (Wireless Fidelity). These positioning technologies are already widely used in daily life, and their detailed working principles are not described here.
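As an illustrative sketch of this check (and only a sketch: the geometry of a target location area is not limited by this application), the target location area can be approximated by a latitude/longitude bounding box and the positioning fix tested against it:

    # Sketch: decide whether a positioning fix (GPS / base station / WiFi)
    # falls inside a target location area approximated as a lat/lng bounding box.
    # The bounding-box approximation is an assumption made for this example.
    from dataclasses import dataclass

    @dataclass
    class BoundingBox:
        min_lat: float
        max_lat: float
        min_lng: float
        max_lng: float

        def contains(self, lat: float, lng: float) -> bool:
            return (self.min_lat <= lat <= self.max_lat
                    and self.min_lng <= lng <= self.max_lng)

    store_a_area = BoundingBox(39.9040, 39.9050, 116.4070, 116.4080)
    user_lat, user_lng = 39.9042, 116.4074  # reported by the terminal
    print(store_a_area.contains(user_lat, user_lng))  # True -> user is in the area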
Optionally, the user may acquire an image of the current position by using a camera installed on the terminal, and send the image to the cloud server. The cloud server may match the image in a pre-stored image database. If an image matched with the image exists in the image database, the position corresponding to the matched image can be used as the current position of the user, so that whether the user is located in the target position area currently is determined.
As an example, the cloud server may also determine whether the user is currently located in the target location area by determining whether information of the target location area corresponding to the identity of the user currently exists. If so, it may be determined that the user is currently located in the target location area. The process of generating the corresponding relationship between the user identity and the target location area may specifically refer to the embodiment shown in fig. 3.
It should be noted that when the cloud server determines that the user is not currently located in the target location area, a prompt message may be sent to the terminal to indicate that processing of the navigation request has been stopped.
In step 203, the cloud server sends a navigation request to a server corresponding to the target location area in response to determining that the user is currently located in the target location area.
In this embodiment, when determining that the user is currently located in the target location area, the cloud server may send the navigation request sent by the user to a server (e.g., the server 104 shown in fig. 1) corresponding to the target location area where the user is currently located in a wired connection manner or a wireless connection manner.
As an example, after determining that the user is currently located in the location area of store A, the cloud server may look up the identifier of store A in a pre-stored store information table, obtain the IP address (Internet Protocol address) of the server of store A, and transmit the navigation request to that server. The store information table is used to describe the correspondence between store identifiers and server IP addresses. The identifier of a store generally refers to an identifier that can uniquely identify the store, and may likewise comprise at least one of numbers, letters, words, symbols and the like.
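The store information table lookup can be illustrated with the following Python sketch; the table contents, the store identifiers and the forwarding call are assumptions made for the example.

    # Sketch of the store information table: store identifier -> server address.
    store_info_table = {
        "store_a": "10.0.0.12",   # IP address of the server deployed in store A (made up)
        "store_b": "10.0.0.37",
    }

    def forward_navigation_request(store_id: str, request: dict) -> None:
        server_ip = store_info_table.get(store_id)
        if server_ip is None:
            raise KeyError(f"no server registered for store {store_id!r}")
        # In a real deployment this would be an HTTP/RPC call to server_ip;
        # here we only print, to keep the sketch self-contained.
        print(f"forwarding {request} to server at {server_ip}")

    forward_navigation_request("store_a", {"destination": "checkout", "user_id": "u001"})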
In step 204, the server selects the user information corresponding to the identity from the pre-stored user information set according to the identity.
In this embodiment, a server (e.g., the server 104 shown in fig. 1) may obtain the identity of the user from the navigation request sent by the cloud server. According to the identity, the server can select the user information corresponding to the identity from a pre-stored user information set. The user information may include the identity and the appearance features of the user. The appearance features can include not only body shape features of the user (such as tall, short, fat, thin and the like), but also the user's dressing information, which facilitates identifying the user and thus improves recognition efficiency. The dressing information may include (but is not limited to) at least one of: clothing type (such as a skirt or trousers), clothing color, ornament type (such as a hat or a scarf), ornament color, shoe type (such as high boots, ankle boots, leather shoes or sports shoes), shoe color, and the like.
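A minimal sketch of the user information set and of the selection by identity follows; the field names and the way appearance features are encoded are illustrative assumptions, not part of this application.

    # Sketch of the user information set and selection by identity.
    from dataclasses import dataclass, field

    @dataclass
    class UserInfo:
        user_id: str
        body_shape: str                                 # e.g. "tall", "short"
        dressing: dict = field(default_factory=dict)    # clothing / ornament / shoe type and colour

    user_info_set = [
        UserInfo("u001", "tall", {"clothing": "red skirt", "shoes": "white sneakers"}),
        UserInfo("u002", "short", {"clothing": "blue trousers", "hat": "black"}),
    ]

    def select_user_info(user_id: str):
        # Return the stored record matching the identity, or None if absent.
        return next((u for u in user_info_set if u.user_id == user_id), None)

    print(select_user_info("u001"))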
In some optional implementations of this embodiment, the server may obtain the user data of all registered users in advance. The user data may typically include identification and standard face data. The standard face data generally refers to front face data. The user can upload at least one face image, such as a front face image and/or a side face image, during registration, so as to obtain standard face data of the user. In this way, when the user enters the target position area (such as store A), the camera device positioned at the entrance position of store A can acquire the face image and the human-shaped image of the user. The server may then match the facial image of the user with standard facial data in the user data. If the matching is successful, the server can generate a user information set according to the human-shaped image of the user, the successfully matched standard human face data and the corresponding identity.
It is to be understood that the storage location of the user data is not limited in this application. It may be stored locally on the server or on the cloud server.
Alternatively, the server may obtain the user information set in other ways. Reference may be made in particular to the embodiment shown in fig. 3.
In step 205, the server receives an image currently captured by a camera located in the target location area.
In this embodiment, the server may receive, through a wired connection manner or a wireless connection manner, an image currently captured by a camera (e.g., the camera 107 or 108 shown in fig. 1) located in the target location area. The camera device may be various devices for capturing images.
It is understood that, in order to facilitate identification of the user, the target location area may be provided with several camera devices, and the areas captured by these camera devices overlap with one another; that is, there is overlap between the images they capture. For example, camera devices can be installed at the four corners and in the central area of the ceiling of store A, so that a user in store A is captured by at least two camera devices at the same time.
In step 206, the server determines the current location of the user in the target location area according to the image and the selected appearance feature.
In this embodiment, the server may search the images currently acquired by each camera device for a human-shaped image matching the appearance features selected in step 204. Then, according to the position of the matched human-shaped image within each image and the positions of the camera devices that acquired those images at the same moment, the server can determine the current position, in the target location area, of the user corresponding to the appearance features.
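One way to realize this step, shown purely as an illustrative sketch, is to give each camera device a pre-calibrated mapping from image (pixel) coordinates to floor coordinates and to average the per-camera estimates; the simple affine mapping below stands in for such a calibration and is an assumption of this example, not a method fixed by this application.

    # Sketch: estimate the user's floor position from detections by several cameras.
    from typing import Callable, List, Tuple

    Point = Tuple[float, float]

    def make_affine_map(a, b, tx, c, d, ty) -> Callable[[float, float], Point]:
        """Return a pixel -> floor-plane mapping with fixed (made-up) coefficients."""
        return lambda u, v: (a * u + b * v + tx, c * u + d * v + ty)

    # One calibrated mapping per camera; coefficients are invented for the sketch.
    camera_maps = {
        "cam_ne": make_affine_map(0.01, 0.0, 2.0, 0.0, 0.01, 5.0),
        "cam_sw": make_affine_map(0.012, 0.0, 1.5, 0.0, 0.009, 4.8),
    }

    def estimate_position(detections: List[Tuple[str, float, float]]) -> Point:
        """detections: (camera_id, u, v) of the matched human-shaped image."""
        points = [camera_maps[cam](u, v) for cam, u, v in detections]
        xs, ys = zip(*points)
        # Average the per-camera estimates into one floor position.
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    print(estimate_position([("cam_ne", 320.0, 240.0), ("cam_sw", 300.0, 260.0)]))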
It can be understood that locating the current position of the user through the camera devices of the target location area can improve the accuracy of positioning, especially indoor positioning. It can also reduce or avoid reliance on positioning hardware and software on the terminal, which helps to lower the terminal's power consumption. In addition, because positioning of the user does not need to be implemented through the terminal's own configuration, the hardware configuration requirements of the terminal, and thus its production cost, can be reduced.
In step 207, the server generates navigation data according to the current location and the destination location, and sends the navigation data to the user.
In this embodiment, the server may obtain the destination position of the user from the navigation request in step 204. Then, according to the destination position and the current position of the user determined in step 206, the server may perform route planning, generate navigation data including route information, and transmit the navigation data to the user (i.e., the user who sent the navigation request). The route information may include at least one of: road names, travel distance, public transport information, and the like. Here, the navigation data may be two-dimensional or three-dimensional data.
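Route planning itself is not limited by this application; as an illustrative sketch, the target location area can be modelled as a small walkway graph and a shortest path computed between the current position and the destination position. The graph, node names and use of Dijkstra's algorithm are assumptions made for the example.

    # Sketch of route planning over a small walkway graph of the target location area.
    import heapq

    walkway_graph = {  # node -> {neighbour: distance in metres}
        "entrance": {"aisle_1": 5.0, "aisle_2": 7.0},
        "aisle_1": {"entrance": 5.0, "checkout": 6.0},
        "aisle_2": {"entrance": 7.0, "checkout": 3.0},
        "checkout": {"aisle_1": 6.0, "aisle_2": 3.0},
    }

    def plan_route(start: str, goal: str):
        # Dijkstra's shortest path; returns (total distance, node sequence).
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, dist in walkway_graph[node].items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + dist, neighbour, path + [neighbour]))
        return None

    # Navigation data: the route from the user's current node to the destination.
    print(plan_route("entrance", "checkout"))  # (10.0, ['entrance', 'aisle_2', 'checkout'])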
It should be noted that the server may send the navigation data to the user directly, or via the cloud server; that is, the server may send the navigation data to the cloud server, and the cloud server forwards it to the terminal used by the corresponding user. Since route planning is carried out by the server corresponding to the target location area where the user is currently located, the load on the cloud server can be reduced, the processing efficiency of navigation requests can be improved, the user's waiting time can be shortened, and the user experience can be improved.
According to the navigation system provided by the embodiments of the application, after the cloud server receives the navigation request sent by the user, it can first determine whether the user is located in the target location area. Then, if it is determined that the user is located in the target location area, the cloud server may send the navigation request to a server corresponding to that area. The server may first select, according to the identity in the navigation request, the user information corresponding to the identity from a pre-stored user information set, where the user information may include the identity and appearance characteristics of the user; then, the server can receive the image currently acquired by the camera device located in the target location area; next, the current position of the user in the target location area can be determined according to the image and the selected appearance characteristics; finally, navigation data can be generated according to the current position and the destination position and sent to the user. This helps to improve the accuracy of positioning and navigation.
With further reference to FIG. 3, a timing diagram of yet another embodiment of the navigation system provided herein is shown.
As shown in fig. 3, in step 301, the server receives a human face image and a human figure image captured by a camera at an entrance position of a target position area.
In this embodiment, a server (e.g., the server 104 shown in fig. 1) may receive, in real time or periodically, through a wired or wireless connection, the face image and the human-shaped image captured by camera devices (e.g., the camera devices 107 and 108 shown in fig. 1) located at the entrance of the target location area. For example, for store A, two cameras mounted at different heights may be provided at an entrance position outside or inside the store to capture the face image and the human-shaped image, respectively.
In step 302, the server extracts the face features and the appearance features and sends the face features to the cloud server.
In this embodiment, the server may extract facial features from the face image of the user by using existing facial feature point extraction techniques, and may extract the appearance features of the user from the human-shaped image. The appearance features may be the same as those described above and are not described again here.
In step 303, the cloud server matches the facial features with a pre-stored sample facial feature set.
In this embodiment, a cloud server (e.g., the cloud server 105 shown in fig. 1) may match the facial features sent by the server against the sample facial features in a pre-stored sample facial feature set. The sample facial features are mainly standard (frontal) facial features. Here, the cloud server may acquire in advance the user data (the same user data as described above) of all registered users, so that the sample facial feature set can be generated from the standard face data in the user data.
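The matching of facial features against the sample facial feature set can be illustrated as a nearest-neighbour search under a distance threshold; the feature length, the distance measure and the threshold below are assumptions made for this sketch, not requirements of this application.

    # Sketch of face-feature matching: nearest neighbour under Euclidean distance.
    import math

    sample_face_features = {
        "u001": [0.12, 0.80, 0.33, 0.54],
        "u002": [0.90, 0.10, 0.65, 0.20],
    }

    def match_face(feature, threshold=0.25):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        best_id, best_d = None, float("inf")
        for user_id, sample in sample_face_features.items():
            d = dist(feature, sample)
            if d < best_d:
                best_id, best_d = user_id, d
        # Only accept the match if it is close enough to the stored sample.
        return best_id if best_d <= threshold else None

    print(match_face([0.11, 0.82, 0.30, 0.55]))  # -> "u001"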
In step 304, the cloud server sends the sample facial features and the identity of the target user to the server.
In this embodiment, according to the matching result in step 303, the cloud server may send the sample facial features and the identity of the target user to the server. And the target user is a user corresponding to the sample face features matched with the face features in the sample face feature set.
In step 305, the cloud server generates a correspondence between the identification of the target user and the information of the target location area corresponding to the server.
In this embodiment, according to the matching result in step 303, the cloud server may establish a corresponding relationship between the identity of the target user and the information indicating the target location area. The target position area is a target position area corresponding to the server sending the face features.
For example, a user information table may be stored on the cloud server. The user information table represents the correspondence among a user's identity, sample facial features, and store information. In this way, when the cloud server sends the sample facial features and the identity of the target user to the server of store A, the cloud server can write the information of store A (such as its identifier, name, licence number, etc.) into the store information column corresponding to the identity of the target user in the user information table. This indicates that the user is currently located in store A.
In this way, the cloud server may determine whether the user is currently located in the target location area as follows: the cloud server can determine whether information of a target location area corresponding to the identity of the user currently exists; if so, it determines that the user is currently located in that target location area. That is, the cloud server may check whether the store information column of the user information table is currently empty (NULL); if it is not empty, this indicates that the user is currently located in the store indicated by the store information.
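As an illustrative sketch of this check, the user information table can be modelled as a mapping from identity to a record whose store information column is empty (NULL) when the user is not in any tracked target location area; the column names are assumptions of the example.

    # Sketch of the correspondence check on the user information table.
    user_information_table = {
        # identity -> record; "store_info" plays the role of the store information column
        "u001": {"store_info": "store_a"},
        "u002": {"store_info": None},   # NULL: not currently in any tracked store
    }

    def current_target_area(user_id: str):
        record = user_information_table.get(user_id, {})
        return record.get("store_info")  # None means "not in a target location area"

    print(current_target_area("u001"))  # 'store_a'
    print(current_target_area("u002"))  # None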
In step 306, the server obtains the appearance feature of the user corresponding to the face feature matched with the sample face feature of the target user, and generates a user information set according to the identity of the target user and the obtained appearance feature.
In this embodiment, after receiving the identity of the target user and the sample face features sent by the cloud server, the server may first obtain, according to the sample face features, appearance features of the user corresponding to the face features matched therewith; and then generating a user information set according to the identity of the target user and the obtained appearance characteristics.
It should be noted that, if the server does not receive the identity of the target user from the cloud server, this may indicate that the user corresponding to the facial features is an unregistered user. In that case, the server may delete the data related to that user, such as the face image and the appearance features.
According to the navigation system provided by this embodiment, when a user enters the target location area, the face image and the human-shaped image of the user can be acquired by the camera device located at the entrance position. The server corresponding to the target location area can then extract the facial features and appearance features of the user and, according to the facial features, obtain the user's identity from the cloud server and associate it with the user's appearance features. In this way, the current position of the user in the target location area can be located in real time by the camera devices in that area, which improves positioning accuracy. Meanwhile, after receiving the facial features sent by the server, the cloud server can confirm that the user is currently located in the target location area. That is, the current location of the user can be determined without relying on the terminal used by the user, so that the configuration requirements and energy consumption of the terminal can be reduced. In addition, the server mainly performs facial feature extraction and user positioning, while the cloud server mainly performs facial feature recognition. Such load balancing reduces the amount of data each of the server and the cloud server needs to process, thereby improving processing efficiency.
With continued reference to FIG. 4, a timing diagram of yet another embodiment of the navigation system provided herein is shown.
As shown in fig. 4, in step 401, the server receives a face image captured by a camera device located at an exit position of the target position area, and matches the face image with the user information set.
In this embodiment, the user information in the user information set may further include a sample facial feature of the user. In this way, when the user is about to leave the target location area, the server (e.g., the server 104 shown in fig. 1) may receive the face image captured by the camera (e.g., the cameras 107 and 108 shown in fig. 1) located at the exit position of the target location area. The facial features of the user may then be extracted to match them with sample facial features in the user information set.
In step 402, the server clears the user information in the user information set matching the facial image.
In this embodiment, according to the matching result in step 401, the server may remove the user information in the user information set that matches the extracted facial features. That is, when the user leaves the target location area, the server corresponding to the target location area can clear the information related to the user, thereby reducing the occupation of the storage space and improving the operation efficiency of the server.
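Steps 401 and 402 can be illustrated with the following sketch, in which the facial features extracted at the exit camera are matched against the user information set and the matching entry is removed; the data layout and the matching threshold are assumptions made for the example.

    # Sketch of exit handling: match the exit-camera face features and clear the entry.
    import math

    user_info_set = [
        {"user_id": "u001", "sample_face": [0.12, 0.80, 0.33], "appearance": "red skirt"},
        {"user_id": "u002", "sample_face": [0.90, 0.10, 0.65], "appearance": "blue trousers"},
    ]

    def clear_on_exit(exit_face, threshold=0.25):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        for info in list(user_info_set):
            if dist(exit_face, info["sample_face"]) <= threshold:
                user_info_set.remove(info)
                return info["user_id"]   # identity to report back to the cloud server
        return None

    print(clear_on_exit([0.11, 0.82, 0.31]))   # 'u001' removed from the set
    print(len(user_info_set))                  # 1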
In step 403, the server sends the identity identifier in the cleared user information to the cloud server.
In this embodiment, since the user has left the target location area, the server may send the identity in the cleared user information to a cloud server (e.g., cloud server 105 shown in fig. 1) so that the cloud server confirms that the user is not in the target location area.
It is understood that the server may also send the facial features extracted in step 401, or the sample facial features in the removed user information, to the cloud server, so that the cloud server can confirm that the corresponding user is no longer in the target location area.
In step 404, the cloud server clears the information of the target location area corresponding to the identity sent by the server.
In this embodiment, after receiving the identity identifier in the removed user information sent by the server, the cloud server may remove the information of the target location area stored thereon and corresponding to the identity identifier. For example, the cloud server may clear the relevant information in the column of the store information corresponding to the identity in the user information table.
According to the navigation system provided by this embodiment, when the user is about to leave the target location area, the camera device located at the exit position can acquire the face image of the user. Based on this face image, the server can clear the user information related to that user, so that the content stored on the server is cleaned up in time and operating efficiency is improved. In addition, after the identity of the user is sent to the cloud server, the cloud server knows that the user has left the target location area, so the content stored on the cloud server can also be updated in time.
Referring to fig. 5, a flow 500 of an embodiment of a navigation method provided by the present application is shown. The process 500 of the navigation method includes the following steps:
step 501, receiving a navigation request sent by a user.
In this embodiment, an electronic device (for example, the server 104 shown in fig. 1) on which the navigation method operates may receive a navigation request sent by a user through a wired connection or a wireless connection. The navigation request may include the destination location and the identity of the user, among other things. The identity here may be the same as the identity in the above embodiments, and is not described here again.
Alternatively, the user may send the navigation request using a navigation application presented on the terminal in use (e.g., the terminals 101, 102 shown in fig. 1). In this case, a cloud server (e.g., the cloud server 105 shown in fig. 1) providing support for the navigation application may, after receiving the navigation request, transmit it to the electronic device.
Further, after receiving the navigation request, the cloud server may first confirm whether the user sending the navigation request is currently located in the target location area. If it is confirmed that the user is currently located in the target location area, the navigation request can be sent to the electronic device corresponding to that target location area. For the specific process, reference may be made to the related description in the foregoing embodiments, which is not repeated here.
Step 502, according to the identity, selecting user information corresponding to the identity from a pre-stored user information set.
In this embodiment, the electronic device may select, according to the identity identifier in the navigation request, user information corresponding to the identity identifier from a pre-stored user information set. The user information comprises the identity and appearance characteristics of the user. For a specific process, reference may be made to the related description in the foregoing embodiments, and details are not described herein again.
Step 503, receiving the image currently acquired by the camera device located in the target position area.
In this embodiment, the electronic device may receive, by way of wired connection or wireless connection, an image currently captured by a camera (e.g., the cameras 107 and 108 shown in fig. 1) located in the target location area.
Step 504, determining the current position of the user in the target position area according to the image and the selected appearance characteristics.
In this embodiment, the electronic device may determine the current location of the user in the target location area according to the appearance feature selected in step 502 and the image received in step 503. For a specific process, reference may be made to the related description in the foregoing embodiments, and details are not described herein again.
And 505, generating navigation data according to the current position and the target position, and sending the navigation data to the user.
In this embodiment, the electronic device may perform path planning according to the destination location in the navigation request and the current location determined in step 504, so as to generate navigation data. And sends the navigation data to the user. For a specific process, reference may be made to the related description in the foregoing embodiments, and details are not described herein again.
In some optional implementations of this embodiment, the method may further include: receiving a face image and a human-shaped image which are collected by a camera device positioned at an entrance position of a target position area, and extracting face characteristics and appearance characteristics; acquiring an identity of a user corresponding to the face features; and generating a user information set according to the acquired identity and appearance characteristics.
Optionally, the user information in the user information set may further include sample facial features of the user; and the method may further comprise: receiving a face image collected by a camera device positioned at an exit position of a target position area, and matching the face image with a user information set; and removing the user information matched with the face image in the user information set.
According to the navigation method provided by the embodiments of the application, by obtaining the identity and the appearance features of the user, the current position of the user within the target location area can be determined using the images acquired by the camera devices of the target location area where the user is currently located. This can improve positioning accuracy. At the same time, the need to perform positioning with the terminal itself is reduced, so the energy consumption and hardware configuration requirements of the terminal can be reduced.
With further reference to fig. 6, the present application provides an embodiment of a navigation device as an implementation of the methods shown in the above figures. The embodiment of the device corresponds to the embodiment of the method shown in fig. 5, and the device can be applied to various electronic devices.
As shown in fig. 6, the navigation device 600 of the present embodiment may include: a first receiving unit 601, configured to receive a navigation request sent by a user, where the navigation request includes a destination location and an identity of the user; a selecting unit 602 configured to select, according to the identity, user information corresponding to the identity from a pre-stored user information set, where the user information includes the identity and appearance characteristics of the user; a second receiving unit 603 configured to receive an image currently acquired by the camera device located in the target position area; a determining unit 604 configured to determine a current location of the user in the target location area according to the image and the selected appearance feature; the generating unit 605 is configured to generate navigation data according to the current position and the destination position, and send the navigation data to the user.
In this embodiment, specific implementation manners and advantageous effects of the first receiving unit 601, the selecting unit 602, the second receiving unit 603, the determining unit 604 and the generating unit 605 may respectively refer to the descriptions of step 501, step 502, step 503, step 504 and step 505 in the embodiment shown in fig. 5, and are not described herein again.
In some optional implementations of this embodiment, the apparatus 600 may further include: a fourth receiving unit (not shown in the figure) configured to receive a face image and a human-shaped image collected by the camera device located at the entrance position of the target position area, and extract face features and appearance features; an obtaining unit (not shown in the figure) configured to obtain an identity of a user corresponding to the face feature; and an information set generating unit (not shown in the figure) configured to generate a user information set according to the obtained identity and appearance characteristics.
Optionally, the user information in the user information set may further include sample facial features of the user. And the apparatus 600 may further comprise: a third receiving unit (not shown in the figure) configured to receive a face image acquired by the camera device located at the exit position of the target position area, and match the face image with the user information set; and the clearing unit (not shown in the figure) is configured to clear the user information matched with the face image in the user information set.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a touch panel, a keyboard, a camera, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first receiving unit, a selecting unit, a second receiving unit, a determining unit, and a generating unit. Where the names of the units do not in some cases constitute a limitation of the unit itself, for example, the first receiving unit may also be described as a "unit that receives a navigation request sent by a user".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user; select user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of the user; receive an image currently acquired by a camera device located in the target location area; determine the current position of the user in the target location area according to the image and the selected appearance characteristics; and generate navigation data according to the current position and the destination position, and send the navigation data to the user.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A navigation system, comprising: a cloud server, a server and a camera device;
the cloud server is used for receiving a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user; determining whether the user is currently located in a target position area, comprising: receiving an image of the current position of the user sent by a terminal, wherein the image of the current position of the user is acquired by a camera of the terminal; searching an image library prestored in the cloud server for the image of the current position of the user; in response to there being an image in the image library that matches the image of the current position of the user, determining whether the user is located in a target position area based on the position corresponding to the matched image; and in response to determining that the user is currently located in a target position area, sending the navigation request to a server corresponding to the target position area;
the server is used for selecting user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of the user, and the appearance characteristics comprise dressing information; receiving an image currently acquired by a camera device positioned in the target position area; determining the current position of the user in the target position area according to the image currently acquired by the camera device in the target position area and the selected appearance characteristics; and generating navigation data according to the current position and the destination position, and sending the navigation data to the user.
2. The system of claim 1, wherein the server is further configured to:
receiving a face image and a human-shaped image which are acquired by a camera device positioned at an entrance position of the target position area;
and extracting the face features and the appearance characteristics, and sending the face features to the cloud server.
3. The system of claim 2, wherein the cloud server is further configured to:
matching the face features with a pre-stored sample face feature set;
sending the sample face features and the identity of a target user to the server, wherein the target user is the user corresponding to the sample face features in the sample face feature set that match the face features;
and generating a correspondence between the identity of the target user and the information of the target position area corresponding to the server.
4. The system of claim 3, wherein the determining whether the user is currently located in a target position area comprises:
determining whether information of a target position area corresponding to the identity of the user exists at present;
and if so, determining that the user is currently located in the target position area.
5. The system of claim 4, wherein the server is further to:
obtaining appearance characteristics of the user corresponding to the face features matched with the sample face features of the target user;
and generating a user information set according to the identity of the target user and the obtained appearance characteristics.
6. The system of claim 5, wherein the user information in the set of user information further comprises sample face features of the user; and
the server is further configured to:
receiving a face image collected by a camera device positioned at the exit position of the target position area, and matching the face image with the user information set;
removing the user information matched with the face image in the user information set;
and sending the identity identifier in the removed user information to the cloud server.
7. The system of claim 6, wherein the cloud server is further configured to:
clearing the information of the target position area corresponding to the identity sent by the server.
8. A navigation method, comprising:
receiving a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user;
selecting user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of a user;
receiving an image currently acquired by a camera device located in a target position area, wherein the target position area is determined through the following steps: receiving an image of the current position of a user sent by the user, wherein the image of the current position of the user is acquired by a camera of a terminal of the user; searching a pre-stored image library for the image of the current position of the user; and in response to there being an image in the image library that matches the image of the current position of the user, determining, as the target position area, the area where the position corresponding to the matched image is located;
determining the current position of the user in the target position area according to the image currently acquired by the camera device in the target position area and the selected appearance characteristics, wherein the appearance characteristics comprise dressing information;
and generating navigation data according to the current position and the destination position, and sending the navigation data to the user.
9. The method of claim 8, wherein the method further comprises:
receiving a face image and a human-shaped image which are collected by a camera device positioned at the entrance position of the target position area, and extracting face features and appearance characteristics;
acquiring the identity of a user corresponding to the face features;
and generating a user information set according to the acquired identity and the appearance characteristics.
10. The method of claim 9, wherein the user information in the set of user information further comprises sample face features of the user; and
the method further comprises the following steps:
receiving a face image collected by a camera device positioned at the exit position of the target position area, and matching the face image with the user information set;
and removing the user information matched with the face image in the user information set.
11. A navigation device, comprising:
the first receiving unit is configured to receive a navigation request sent by a user, wherein the navigation request comprises a destination position and an identity of the user;
the selecting unit is configured to select user information corresponding to the identity from a pre-stored user information set according to the identity, wherein the user information comprises the identity and appearance characteristics of a user, and the appearance characteristics comprise dressing information;
a second receiving unit configured to receive an image currently captured by a camera device located in a target position area, the target position area being determined by: receiving an image of the current position of a user sent by the user, wherein the image of the current position of the user is acquired by a camera of a terminal of the user; searching a pre-stored image library for the image of the current position of the user; and in response to there being an image in the image library that matches the image of the current position of the user, determining, as the target position area, the area where the position corresponding to the matched image is located;
the determining unit is configured to determine the current position of the user in the target position area according to the image currently acquired by the camera device in the target position area and the selected appearance characteristics;
and the generating unit is configured to generate navigation data according to the current position and the destination position and send the navigation data to the user.
12. The apparatus of claim 11, wherein the user information in the set of user information further comprises sample face features of the user; and
the device further comprises:
the third receiving unit is configured to receive a face image acquired by the camera device located at the exit position of the target position area, and match the face image with the user information set;
and the clearing unit is configured to clear the user information matched with the face image in the user information set.
13. An electronic device, comprising:
one or more processors;
a camera device for collecting images;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 8-10.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the method according to any one of claims 8-10.
CN201711081993.XA 2017-11-07 2017-11-07 Navigation system, method and device Active CN109752001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711081993.XA CN109752001B (en) 2017-11-07 2017-11-07 Navigation system, method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711081993.XA CN109752001B (en) 2017-11-07 2017-11-07 Navigation system, method and device

Publications (2)

Publication Number Publication Date
CN109752001A CN109752001A (en) 2019-05-14
CN109752001B true CN109752001B (en) 2021-07-06

Family

ID=66399601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711081993.XA Active CN109752001B (en) 2017-11-07 2017-11-07 Navigation system, method and device

Country Status (1)

Country Link
CN (1) CN109752001B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149454A (en) * 2019-06-26 2020-12-29 杭州海康威视数字技术股份有限公司 Behavior recognition method, device and equipment
CN111678519B (en) * 2020-06-05 2022-07-22 北京都是科技有限公司 Intelligent navigation method, device and storage medium
CN112135242B (en) * 2020-08-11 2023-05-02 科莱因(苏州)智能科技有限公司 Building visitor navigation method based on 5G and face recognition

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104422439B (en) * 2013-08-21 2017-12-19 希姆通信息技术(上海)有限公司 Air navigation aid, device, server, navigation system and its application method
CN106370174B (en) * 2015-07-23 2020-09-08 腾讯科技(深圳)有限公司 Position navigation method and device based on enterprise communication software
CN105371850B (en) * 2015-11-17 2018-12-11 广东欧珀移动通信有限公司 A kind of route navigation method and mobile terminal
CN105426476B (en) * 2015-11-17 2020-07-17 Oppo广东移动通信有限公司 Navigation route generation method and terminal
DE102016200706A1 (en) * 2016-01-20 2017-07-20 Robert Bosch Gmbh Pedestrian navigation in a parking garage
CN106871898A (en) * 2016-12-30 2017-06-20 山东中架工人信息技术股份有限公司 A kind of RIM solid 3D micro navigations systems and the method for forming navigation
CN107314769A (en) * 2017-06-19 2017-11-03 成都领创先科技有限公司 The strong indoor occupant locating system of security

Also Published As

Publication number Publication date
CN109752001A (en) 2019-05-14

Similar Documents

Publication Publication Date Title
EP2975555B1 (en) Method and apparatus for displaying a point of interest
JP4591353B2 (en) Character recognition device, mobile communication system, mobile terminal device, fixed station device, character recognition method, and character recognition program
EP3550479A1 (en) Augmented-reality-based offline interaction method and apparatus
US20230410500A1 (en) Image display system, terminal, method, and program for determining a difference between a first image and a second image
KR101147748B1 (en) A mobile telecommunication device having a geographic information providing function and the method thereof
KR101800890B1 (en) Location-based communication method and system
CN107124476B (en) Information pushing method and device
CN109752001B (en) Navigation system, method and device
KR101790655B1 (en) Feedback method for bus information inquiry, mobile terminal and server
KR20120024073A (en) Apparatus and method for providing augmented reality using object list
CN110555876B (en) Method and apparatus for determining position
US20190213790A1 (en) Method and System for Semantic Labeling of Point Clouds
EP3242225A1 (en) Method and apparatus for determining region of image to be superimposed, superimposing image and displaying image
CN110763250A (en) Method, device and system for processing positioning information
CN109302492B (en) Method, apparatus, and computer-readable storage medium for recommending service location
CN108985421B (en) Method for generating and identifying coded information
WO2015024465A1 (en) Argument reality content screening method, apparatus, and system
KR20210086834A (en) System and method for providing AR based tour information via smart glasses
CN107864501B (en) Method and equipment for acquiring and providing wireless access point connection information
CN108460470B (en) Service point reservation method and device
CN108107457B (en) Method and apparatus for acquiring location information
CN115408609A (en) Parking route recommendation method and device, electronic equipment and computer readable medium
CN110864683B (en) Service handling guiding method and device based on augmented reality
CN110719324A (en) Information pushing method and equipment
CN112287051B (en) Merchant navigation method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant