WO2019000461A1 - Positioning method and apparatus, storage medium, and server - Google Patents

Positioning method and apparatus, storage medium, and server

Info

Publication number
WO2019000461A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
user
image information
electronic device
determining
Prior art date
Application number
PCT/CN2017/091350
Other languages
English (en)
Chinese (zh)
Inventor
梁昆
Original Assignee
广东欧珀移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Priority to CN201780090739.8A (published as CN110870300A)
Priority to PCT/CN2017/091350 (published as WO2019000461A1)
Publication of WO2019000461A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a positioning method, device, storage medium, and server.
  • Existing location tracking technologies are often implemented based on the Global Positioning System (GPS) or cellular network base stations. Such methods can only determine a latitude and longitude position, so the positioning accuracy is low.
  • the embodiment of the invention provides a positioning method, a device, a storage medium and a server, which can improve positioning accuracy.
  • An embodiment of the present invention provides a positioning method, including:
  • acquiring target image information and a shooting time of the target image information, where the target image information includes at least one photographed user;
  • the embodiment of the invention further provides a positioning device, comprising:
  • a first acquiring module configured to acquire target image information, and a shooting time of the target image information, where the target image information includes at least one photographed user;
  • a second acquiring module configured to acquire, by the target network sharing device, all the electronic devices that are searched at the shooting time, and a geographic location of each electronic device;
  • a determining module configured to determine, according to the target image information and the geographic location, an electronic device used by each captured user;
  • a creating module configured to create a user database according to the target image information and the electronic device; and
  • a positioning module configured to locate a target user based on the user database.
  • the embodiment of the invention further provides a storage medium, wherein the storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the positioning method.
  • the embodiment of the present invention further provides a server, where the server includes a processor and a memory electrically connected to each other; the memory is used to store instructions and data, and the processor is configured to perform the following steps:
  • acquiring target image information and a shooting time of the target image information, where the target image information includes at least one photographed user;
  • acquiring, by the target network sharing device, all the electronic devices that are searched at the shooting time, and a geographic location of each electronic device;
  • determining, according to the target image information and the geographic locations, an electronic device used by each captured user;
  • creating a user database according to the target image information and the electronic devices; and
  • locating a target user based on the user database.
  • FIG. 1 is a schematic diagram of an application scenario of a positioning system according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a positioning method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a positioning method according to another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a three-dimensional coordinate system provided by an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a positioning device according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a determining module according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a positioning module according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • Embodiments of the present invention provide a positioning method, apparatus, storage medium, server, and system.
  • FIG. 1 is a schematic diagram of an application scenario of a positioning system.
  • the positioning system may include any server provided by embodiments of the present invention; the server may be, for example, the server of a large shopping venue.
  • the server may acquire target image information and a shooting time of the target image information, where the target image information includes at least one captured user. It may then acquire all electronic devices that the target network sharing device searches for at the shooting time, and the geographic location of each electronic device. After that, the electronic device used by each captured user is determined according to the target image information and the geographic locations, a user database is created according to the target image information and the electronic devices, and the target user is located based on the user database.
  • the server may determine the image information collected by the camera at the entrance of the shopping venue as the target image information, and determine the network sharing device (such as a wifi hotspot device) at the entrance as the target network sharing device; the area that the target network sharing device can search and the shooting area corresponding to the target image information need to be the same.
  • the camera can shoot each user entering the shopping place in real time or periodically.
  • the wifi hotspot device can search for the connected electronic devices and obtain the GPS (Global Positioning System) position from each connected electronic device. The users and the electronic devices can then be mapped one to one according to the user images and the GPS positions, and a user database is established. Then, when a user needs to be located, the location of that user can be obtained simply by sending a photo of the user to be located to the server.
  • the positioning device may be implemented as a separate entity, or may be integrated into another network device, such as a server, for example the server of a large shopping or leisure venue.
  • the specific process of the positioning method can be as follows:
  • the target image information can be acquired by an image capture device such as a camera. Since a plurality of image capturing devices may be installed in one building, in order to ensure that each user entering the building can be photographed, the image capturing device corresponding to the target image information may be installed at the entrance of the building.
  • the network sharing device mainly refers to a device that can provide a wireless network for a user, such as a router, a switch, or a WIFI hotspot device.
  • the area that the target network sharing device can search for and the corresponding shooting area in the target image information need to be substantially the same, that is, the network sharing device installed at the entrance can be determined as the target network sharing device.
  • the electronic device can include an internet device such as a smart phone, a tablet computer, and a personal computer.
  • the geographic location can be obtained by a positioning method such as a satellite positioning system (e.g., GPS, the Global Positioning System) or mobile base station positioning; it can be automatically sent by the electronic device to the network sharing device, or actively acquired by the network sharing device.
  • S103: Determine, according to the target image information and the geographic location, an electronic device used by each captured user.
  • step S103 may specifically include:
  • a classifier, such as a random forest or an SVM (Support Vector Machine), may be used to extract a person image from the image information.
  • a relatively obvious area or point, such as where each photographed user's head, neck, or center of gravity is located, may be selected as the display position of that user.
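As a concrete illustration of the above, a minimal Python sketch of picking a single display position for a detected person; the bounding-box representation here is a hypothetical stand-in, since the patent does not specify one:

```python
def display_position(bbox):
    """Pick the centre of a detected person's bounding box (x1, y1, x2, y2)
    as the user's display position, a simple stand-in for the head / neck /
    centre-of-gravity point described above."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```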
  • steps 1-3 may specifically include:
  • the electronic device used by each of the photographed users is determined based on the first relative position and the second relative position.
  • the relative position mainly refers to the relative orientation and relative distance between two photographed users or between two electronic devices.
  • the searched electronic devices are usually the devices carried by the photographed users in the target image information, so the second relative positions of the electronic devices to each other mirror the first relative positions of the photographed users to each other, and the electronic device used by each photographed user can be determined according to the first relative position and the second relative position.
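The patent does not prescribe a concrete matching algorithm. One simple way to realize "determine each user's device from the two sets of relative positions" is to try every pairing and keep the one whose normalized pairwise-distance maps agree best; the brute-force sketch below (practical only for small groups) is an assumption, not the claimed method:

```python
from itertools import permutations
import math

def pairwise(points):
    """Matrix of Euclidean distances between every pair of 2-D points."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def normalized_pairwise(points):
    """Pairwise distances scaled to [0, 1], so pixel and metre scales compare."""
    d = pairwise(points)
    m = max(max(row) for row in d) or 1.0
    return [[v / m for v in row] for row in d]

def match_users_to_devices(user_pos, dev_pos):
    """Return perm such that photographed user i is paired with device perm[i],
    chosen so the two relative-distance maps agree as closely as possible."""
    n = len(user_pos)
    du = normalized_pairwise(user_pos)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        dd = normalized_pairwise([dev_pos[p] for p in perm])
        cost = sum(abs(du[i][j] - dd[i][j]) for i in range(n) for j in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best
```

For example, three users in a line at pixel offsets 0, 1, 3 match three devices at metre offsets 0, 10, 30 regardless of the scale difference, because the normalized distance maps coincide.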
  • step S104 may specifically include:
  • the association is stored in the user database to create the user database.
  • the user database is mainly used to store the identity features of each captured user and the electronic device used.
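A minimal sketch of such a user database, assuming identity features are numeric vectors and devices are identified by an opaque string; both representations are assumptions, since the patent leaves them open:

```python
class UserDatabase:
    """Stores (identity feature, device id) associations for captured users."""

    def __init__(self):
        self.records = []   # list of (identity_feature, device_id) pairs

    def add(self, identity_feature, device_id):
        """Associate a captured user's identity feature with their device."""
        self.records.append((identity_feature, device_id))

    def find_device(self, target_feature, similarity):
        """Return the device paired with the stored identity feature most
        similar to target_feature (similarity: higher means more alike)."""
        best = max(self.records, key=lambda r: similarity(r[0], target_feature))
        return best[1]
```

A caller would supply a similarity function suited to the features, e.g. negative Euclidean distance for vector features.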
  • the identity features may include facial features, height features, gender features, age features, and clothing features, etc., which may be obtained by analyzing image features of the captured user in the target image information. The image features may include color features, texture features, shape features, and/or spatial relationship features; different image features can be extracted by different algorithms, for example color features by the color histogram method and texture features by geometric methods.
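For instance, the color histogram method mentioned above can be sketched as follows; this deliberately simple version quantizes each RGB channel into a few bins, whereas a real system would typically use a library such as OpenCV:

```python
def color_histogram(pixels, bins=4):
    """pixels: iterable of (r, g, b) ints in 0..255. Returns a normalised
    bins**3-dimensional histogram usable as a simple colour feature."""
    hist = [0] * (bins ** 3)
    step = 256 // bins          # width of each quantisation bin per channel
    n = 0
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
        n += 1
    return [h / n for h in hist] if n else hist
```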
  • the target image information may be analyzed and processed by a specified classification model to obtain the identity feature.
  • the classification model may be a trained deep neural network, such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network consisting of an input layer, convolution layers, pooling layers, fully connected layers, and an output layer. It supports inputting images as multi-dimensional input vectors directly into the network, avoiding the data reconstruction of separate feature extraction and classification, which greatly reduces the complexity of image processing.
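The layer sequence described above (input, convolution, pooling, fully connected, output) can be illustrated with a toy NumPy forward pass. This is a sketch of the structure only, not the patent's actual model: one random filter, one pooling step, and a single sigmoid output:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid convolution of an (H, W) input with a (kH, kW) kernel."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def max_pool(x, k=2):
    """Non-overlapping k x k max pooling."""
    H, W = x.shape
    return x[:H // k * k, :W // k * k].reshape(H // k, k, W // k, k).max(axis=(1, 3))

def forward(img, kernel, fc_w):
    h = np.maximum(conv2d(img, kernel), 0)   # convolution layer + ReLU
    h = max_pool(h)                          # pooling layer
    h = h.ravel()                            # flatten for the fully connected layer
    return 1 / (1 + np.exp(-(h @ fc_w)))     # sigmoid output layer

img = rng.random((8, 8))      # toy input layer: an 8x8 grayscale image
kernel = rng.random((3, 3))   # one 3x3 convolution filter
fc_w = rng.random(9)          # fully connected weights: 3x3 pooled map -> 1 output
p = forward(img, kernel, fc_w)
```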
  • the target image information is input into the CNN, and the information is transformed layer by layer from the input layer to the output layer.
  • the CNN model needs to be trained in advance according to samples and classification information. For example, a large number of sample images can be collected in advance, and the age, gender, height, and other information of the persons in each sample image can be manually annotated; the sample pictures are then input into the CNN for training.
  • the training process mainly includes two stages: a forward propagation phase and a backward propagation phase.
  • in the forward propagation phase, each sample Xi (i.e., a sample picture) is input into the convolutional neural network, and the actual output is computed as Oi = Fn(...(F2(F1(Xi W(1)) W(2)))... W(n)), where i is a positive integer, W(n) is the weight matrix of the n-th layer, and F is an activation function (such as a sigmoid function or a hyperbolic tangent function). By inputting the sample pictures into the convolutional neural network, the weight matrices can be obtained.
  • in the backward propagation phase, the difference between each actual output Oi and the ideal output Yi can be calculated, and the weight matrices are adjusted by back-propagation according to the method of minimizing the error, where Yi is obtained from the annotation information of sample Xi. For example, if the gender of the sample image Xi is female, Yi can be set to 1. The final weight matrices determine the trained convolutional neural network, so that each picture can be analyzed by the trained network to accurately obtain the gender, age, height, and other information of the persons in the picture.
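The two training phases can be illustrated on a toy single-layer model: forward propagation computes the actual outputs Oi, and the weights are adjusted to reduce the error against the ideal labels Yi. The data and model here are stand-ins, not the patent's CNN:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set standing in for the annotated sample pictures: 4 random
# features plus a constant bias column; the label Y_i (e.g. "female" = 1)
# is derived from the first feature of each sample X_i.
X = np.hstack([rng.random((20, 4)), np.ones((20, 1))])
Y = (X[:, 0] > 0.5).astype(float)

w = np.zeros(5)   # weights to be learned
lr = 0.5
for _ in range(2000):
    O = 1 / (1 + np.exp(-(X @ w)))   # forward propagation: actual outputs O_i
    grad = X.T @ (O - Y) / len(Y)    # backward: error between O_i and ideal Y_i
    w -= lr * grad                   # adjust the weights to minimise the error
```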
  • step S105 may specifically include:
  • the positioning request may be sent by an electronic device.
  • the search can be performed by sending a photo of the person being searched for (i.e., the target user) to the server. It should be pointed out that, since a person's appearance may change greatly over time, users should provide as recent a photo of the target user as possible; generally, the more recent the provided photo, the better the efficiency and accuracy of the search.
  • the server can use the trained CNN model to obtain the target identity feature of the target user in the photo.
  • steps 2-3 may specifically include:
  • step 2-3-1 may specifically include:
  • the found electronic device is the target electronic device.
  • since the identity features of each user entering the building are stored in the user database, and each identity feature is associated with the electronic device used by that user, the corresponding electronic device can subsequently be found directly from the identity feature of the target user.
  • the device identifier is a unique identifier of the network sharing device; it may be an identification number set by the user, or a device identification code set by the manufacturer when the device leaves the factory. Usually, multiple network sharing devices can be set up on each floor of a building. Since each user entering the building is constantly moving, the network sharing device connected to the electronic device that the user carries also changes constantly; each time the electronic device connects to a new network sharing device, the server can obtain its geographic location.
  • step 2-3-3 may specifically include:
  • the three-dimensional coordinates are used as coordinate positions of the target user to locate the target user.
  • the device identifier of the network sharing device may be set according to the floor where the device is located, so the exact floor information can be obtained from the device identifier.
  • the plane of the ground can be used as the plane of the X and Y axes, and the height direction of the building as the Z axis, to establish a three-dimensional coordinate system.
  • the X and Y coordinate values of each electronic device may be latitude and longitude coordinates (i.e., the geographic location), or a position relative to a fixed point of the building, and the Z-axis coordinate value may be the floor height, such as the first floor or the second floor. By converting the geographic location of each electronic device into the corresponding three-dimensional coordinates, the exact location of the corresponding user in the building can be determined; the server can then send the located position, for example via WiFi or Bluetooth, to the electronic device of the user who requested the positioning, which is simple and convenient, so that the requesting user can find the lost friend or relative as soon as possible.
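A minimal sketch of the coordinate conversion, assuming a hypothetical floor-encoding scheme for the device identifiers and a fixed storey height; neither is specified by the patent:

```python
FLOOR_HEIGHT_M = 4.0   # assumed storey height in metres

# Hypothetical mapping from network-device identifier to floor number.
DEVICE_FLOOR = {"AP-1F": 1, "AP-2F": 2, "AP-3F": 3}

def to_3d(xy_m, device_id):
    """xy_m: (x, y) offset in metres from a fixed building reference point.
    The Z value is derived from the floor encoded in the device identifier,
    with the ground floor at Z = 0."""
    floor = DEVICE_FLOOR[device_id]
    return (xy_m[0], xy_m[1], (floor - 1) * FLOOR_HEIGHT_M)
```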
  • the positioning method provided by this embodiment obtains target image information and its shooting time, where the target image information includes at least one captured user; acquires all the electronic devices that the target network sharing device searches for at the shooting time, and the geographic location of each electronic device; determines the electronic device used by each captured user according to the target image information and the geographic locations; creates a user database according to the target image information and the electronic devices; and then locates the target user based on the user database. In this way, the user's position in three-dimensional space can be determined accurately; the method is simple, the positioning accuracy is high, and the practicality is strong.
  • a positioning method is provided; the specific process can be as follows:
  • the server acquires target image information and a shooting time of the target image information, where the target image information includes at least one photographed user.
  • an image taken by a camera at the entrance of a building may be determined as target image information, and the photographed user mainly refers to a user who enters the building.
  • the camera can acquire target image information in real time or periodically, for example, every five seconds.
  • the server acquires, by the target network sharing device, all the electronic devices that are searched at the shooting time, and a geographic location of each electronic device.
  • the network sharing device at the entrance of the building may be determined as a target network sharing device, which may include a router, a switch, or a WIFI hotspot device.
  • the geographic location may be obtained by using a positioning method such as a satellite positioning system or a mobile base station, and may be automatically sent by the electronic device to the network sharing device, or may be actively acquired by the network sharing device.
  • the server extracts a character image of the photographed user from the target image information, and determines a display position of each photographed user in the character image.
  • a classifier such as a random forest, SVM, or the like can be used to extract a person image from the image information, and the position of each user's center of gravity can be selected as the display position of that user.
  • the server determines a relative position of two adjacent shooting users according to the display position, obtains a first relative position, and determines a relative position of the two adjacent electronic devices according to the geographic location, to obtain a second relative position.
  • the relative distance between two adjacent center of gravity points can be regarded as the relative position of the user
  • the relative distance between the geographical positions of two adjacent electronic devices can be regarded as the relative position of the device.
  • the server determines, according to the first relative position and the second relative position, an electronic device used by each photographed user.
  • the electronic device used by each of the photographed users can be determined according to the relative positions of each other.
  • the server determines, according to the target image information, an identity feature of each captured user.
  • the identity feature may include facial features, height features, gender features, age features, and clothing features, etc., which may be obtained by analyzing image features of the captured user in the target image information. The image features may include color features, texture features, shape features, and/or spatial relationship features; different image features can be extracted by different algorithms, for example color features by the color histogram method and texture features by geometric methods.
  • the target image information may be analyzed and processed by a specified classification model to obtain the identity feature, wherein the classification model may be a trained deep neural network, such as a CNN model.
  • the server establishes, for each captured user, an association relationship between the corresponding identity feature and the electronic device, and stores the association relationship in the user database to create the user database.
  • the identity information of each user entering the building and the related information of the user's electronic device can be stored in the user database.
  • the server receives a positioning request, where the positioning request carries image information of the target user.
  • the electronic device can generate a positioning request based on a recent photo of user B, and send the positioning request to the server through the network sharing device.
  • the server determines, according to the image information, a target identity feature of the target user, and uses the association relationship to search for an electronic device corresponding to the target identity feature from the user database as the target electronic device.
  • the server may process the image information by using the CNN model to obtain the target identity feature, and then find the electronic device corresponding to the target identity feature from the user database.
  • the server determines a device identifier of the network sharing device that currently searches for the target electronic device, and acquires a current geographic location of the target electronic device.
  • the network sharing device connected to the electronic device that the user carries also changes constantly. To locate the target electronic device, it is necessary to determine, in real time, the network sharing device to which the target electronic device is currently connected, and its geographic location.
  • the server determines a three-dimensional coordinate of the target electronic device according to the device identifier and the current geographic location, and uses the three-dimensional coordinate as a coordinate position of the target user to locate the target user.
  • the plane of the ground can be used as the plane of the X and Y axes, and the height direction of the building as the Z axis.
  • the X and Y coordinate values of each electronic device can be latitude and longitude coordinates (i.e., the geographic location), or a distance value relative to a fixed point of the building, and the Z-axis coordinate value can be the floor height, such as the first floor or the second floor; the floor height can be determined from the device identifier, since the network sharing devices on each floor have different device identifiers.
  • in this way, the exact location of the corresponding user (such as user B) can be determined.
  • the server can send the located position to the electronic device of the user who requested the positioning.
  • the server can acquire the target image information and the shooting time of the target image information, where the target image information includes at least one captured user, and acquire all the electronic devices that the target network sharing device searches for at the shooting time, together with the geographic location of each electronic device. It then extracts the person images of the captured users from the target image information, determines the display position of each captured user in the person image, determines the relative position of two adjacent captured users according to the display positions to obtain a first relative position, and determines the relative position of two adjacent electronic devices according to the geographic locations to obtain a second relative position. According to the first and second relative positions, the electronic device used by each captured user is determined, and the identity feature of each captured user is determined according to the target image information; an association between each captured user's identity feature and electronic device is then established and stored to create the user database, so that the identity features of every user entering the building, and the electronic devices they use, are recorded. After that, the server can receive a positioning request carrying image information of the target user, determine the target identity feature of the target user according to the image information, use the association relationship to find the electronic device corresponding to the target identity feature in the user database as the target electronic device, determine the device identifier of the network sharing device that currently searches for the target electronic device, acquire the current geographic location of the target electronic device, determine the three-dimensional coordinates of the target electronic device according to the device identifier and the current geographic location, and use the three-dimensional coordinates as the coordinate position of the target user to locate the target user. In this way, the accurate position of a user in the building can be quickly found from a recent photo of the user; the method is simple and the positioning accuracy is high.
  • a positioning device which may be integrated in a server, which may be a server of a large shopping place or a leisure place.
  • FIG. 5 specifically describes a positioning device according to an embodiment of the present invention, which may include a first acquiring module 10, a second acquiring module 20, a determining module 30, a creating module 40, and a positioning module 50, where:
  • the first obtaining module 10 is configured to acquire target image information and a shooting time of the target image information, where the target image information includes at least one photographed user.
  • the first acquiring module 10 may collect the target image information by using an image capturing device such as a camera. Since a plurality of image capturing devices may be installed in one building, in order to ensure that each user entering the building can be photographed, the image capturing device corresponding to the target image information may be installed at the entrance of the building.
  • the second obtaining module 20 is configured to acquire all the electronic devices that are searched by the target network sharing device at the shooting time, and the geographic location of each electronic device.
  • the network sharing device mainly refers to a device that can provide a wireless network for a user, such as a router, a switch, or a WIFI hotspot device.
  • the area that the target network sharing device can search for and the corresponding shooting area in the target image information need to be substantially the same, that is, the network sharing device installed at the entrance can be determined as the target network sharing device.
  • the electronic device can include an internet device such as a smart phone, a tablet computer, and a personal computer.
  • the geographic location can be obtained by a positioning method such as a satellite positioning system (e.g., GPS, the Global Positioning System) or mobile base station positioning; it can be automatically sent by the electronic device to the network sharing device, or actively acquired by the network sharing device.
  • the determining module 30 is configured to determine an electronic device used by each captured user according to the target image information and the geographic location.
  • the determining module 30 may specifically include an extracting submodule 31, a first determining submodule 32, and a second determining submodule 33, where:
  • the extraction sub-module 31 is configured to extract a person image of the photographed user from the target image information.
  • the extraction submodule 31 can extract a person image from the image information by using a classifier such as a random forest or an SVM (Support Vector Machine).
  • the first determining sub-module 32 is configured to determine a display position of each captured user in the character image.
  • the first determining sub-module 32 can select a relatively obvious area or point, such as where each photographed user's head, neck, or center of gravity is located, as the display position of that user.
  • the second determining sub-module 33 is configured to determine an electronic device used by each captured user according to the display location and the geographic location.
  • the second determining submodule 33 can be specifically used to:
  • the electronic device used by each of the photographed users is determined based on the first relative position and the second relative position.
  • the relative position mainly refers to the relative orientation and relative distance between two photographed users or between two electronic devices. Since the shooting time of the target image information is the same as the search time of the electronic devices, and the area that the target network sharing device can search is substantially the same as the shooting area corresponding to the target image information, the searched electronic devices are usually those carried by the users photographed in the target image information. The second relative position of the electronic devices to each other therefore mirrors the first relative position of the photographed users to each other, so the electronic device carried by each photographed user can be determined by matching the first relative position against the second relative position.
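The matching just described can be sketched as a minimal assignment problem. This toy example assumes both sets of positions have already been brought into a common coordinate frame (in practice that projection requires camera calibration, which is omitted), and it simply picks the device-to-user assignment whose layouts agree best:

```python
from itertools import permutations
import math

def match_users_to_devices(user_pts, device_pts):
    """Brute-force matching of photographed users to searched devices.

    Both inputs are lists of 2-D points in a shared reference frame.
    The assignment of devices to users with the smallest total
    point-to-point distance is returned as a list of device indices,
    one per user. Brute force is fine for the handful of people a
    single photo typically contains.
    """
    best, best_cost = None, math.inf
    for perm in permutations(range(len(device_pts)), len(user_pts)):
        cost = sum(math.dist(user_pts[i], device_pts[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best

users = [(0, 0), (5, 0)]
devices = [(5.2, 0.1), (0.1, -0.2)]  # device 1 lies nearest user 0
print(match_users_to_devices(users, devices))  # → [1, 0]
```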
  • the creating module 40 is configured to create a user database according to the target image information and the electronic device.
  • the creation module 40 can be specifically used to:
  • the association is stored in the user database to create the user database.
  • the user database is mainly used to store the identity features of each captured user and the electronic device used.
  • the identity features may include facial features, height features, gender features, age features, clothing features, and the like, which may be obtained by analyzing the image features of the photographed user in the target image information. The image features may include color features, texture features, shape features, and/or spatial relationship features, and different image features can be extracted by different algorithms; for example, color features can be extracted by a color histogram method, texture features by geometric methods, and so on.
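As an illustration of the color histogram method named above, here is a minimal sketch (not the patent's actual extractor; bin count and normalization are arbitrary choices):

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB color histogram as a simple appearance feature.

    pixels is an iterable of (r, g, b) tuples with channels in 0..255.
    Each channel is quantized into `bins` levels, giving a bins**3-long
    normalized histogram that can serve as a color feature vector.
    """
    hist = [0] * (bins ** 3)
    step = 256 // bins
    n = 0
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
        n += 1
    return [c / n for c in hist] if n else hist

# Two red-ish pixels and two blue-ish pixels.
h = color_histogram([(255, 0, 0), (250, 5, 3), (0, 0, 255), (2, 1, 250)])
print(h[3 * 16], h[3])  # red bin and blue bin → 0.5 0.5
```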
  • the target image information may be analyzed and processed by using a specified classification model to obtain the identity features, wherein the classification model may be a trained deep neural network, such as a CNN.
  • a CNN (Convolutional Neural Network) is a multi-layer neural network consisting of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer, and it supports the direct input of images as multidimensional vectors.
  • the network avoids the reconstruction of data during feature extraction and classification, which greatly reduces the complexity of image processing.
  • when the target image information is input into the CNN, the information is transformed layer by layer from the input layer to the output layer; the calculation performed by the CNN is essentially the process of multiplying the input (the target image information) by the weight matrix of each layer to obtain the final output (that is, the height feature, gender feature, age feature, etc.).
  • each sample picture can be manually annotated with the age, gender, height, and other information of the sample person, and the sample pictures are then input into the CNN for training.
  • the training process mainly includes two stages: a forward propagation phase and a backward propagation phase. In the forward propagation phase, each sample Xi (i.e., a sample picture) is input into the n-layer convolutional network, and an actual output Oi is obtained from the current weight matrices. In the backward propagation phase, the difference between each actual output Oi and the ideal output Yi is calculated, and the weight matrices are adjusted by back-propagation according to the method of minimizing the error, where Yi is obtained from the annotation information of the sample Xi; for example, if the gender of the sample picture Xi is annotated as female, Yi can be set to 1, and if it is annotated as male, Yi can be set to 0. Finally, the trained convolutional neural network is determined according to the adjusted weight matrices, so that it can subsequently analyze each picture intelligently and more accurately obtain the gender, age, height, and other information of the persons in the picture.
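The two training phases can be illustrated with a deliberately tiny stand-in model. The patent describes an n-layer CNN; the one-layer logistic unit below is only a sketch showing the same loop of forward propagation (compute the actual output Oi) and backward propagation (adjust the weights against the ideal output Yi, with Yi = 1 for "female" and 0 for "male" as in the text's example):

```python
import math, random

def train(samples, labels, lr=0.5, epochs=2000):
    """Forward/backward training sketch on a one-layer stand-in model.

    samples are feature vectors standing in for image features; labels
    are the ideal outputs Yi (1 = female, 0 = male in the text's
    example). Each epoch runs forward propagation to get the actual
    output Oi, then backward propagation to shrink the error Oi - Yi.
    """
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in samples[0]]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # forward propagation: actual output Oi
            o = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            # backward propagation: adjust weights by the error Oi - Yi
            err = o - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy 2-D features standing in for image features.
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
Y = [1, 1, 0, 0]  # ideal outputs Yi: 1 = female, 0 = male
w, b = train(X, Y)
o = 1 / (1 + math.exp(-(w[0] * 0.85 + w[1] * 0.15 + b)))
print(o > 0.5)  # → True (classified as "female")
```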
  • the positioning module 50 is configured to locate the target user based on the user database.
  • the positioning module 50 may specifically include a receiving submodule 51, a third determining submodule 52, and a positioning submodule 53, wherein:
  • the receiving sub-module 51 is configured to receive a positioning request, where the positioning request carries image information of the target user.
  • the positioning request may be sent by an electronic device.
  • the search and location can be achieved by sending a photo of the person being searched for (i.e., the target user) to the server. It should be noted that, since each person's appearance may vary greatly over different periods, users should provide a recent photo of the target user; generally, the more recent the photo provided, the better the efficiency and accuracy of the search.
  • the third determining submodule 52 is configured to determine a target identity feature of the target user according to the image information.
  • the third determining submodule 52 can use the trained CNN model to obtain the target identity feature of the target user in the photo.
  • the locating sub-module 53 is configured to locate the target user according to the association relationship, the target identity feature, and the user database.
  • the positioning sub-module 53 may specifically include a determining unit 531, an obtaining unit 532, and a positioning unit 533, where:
  • the determining unit 531 is configured to determine the target electronic device from the user database according to the association relationship and the target identity feature.
  • the determining unit 531 can be specifically configured to:
  • the found electronic device is the target electronic device.
  • since the identity information of each user entering the building is stored in the user database, and each piece of identity information is associated with the electronic device used by that user, the corresponding electronic device can subsequently be found simply by identifying the identity features of the target user.
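The lookup can be sketched as follows; the database schema (device identifier mapped to a stored feature vector) and the squared-distance metric are illustrative assumptions, not the patent's specification:

```python
def find_target_device(user_db, target_feature):
    """Look up the target electronic device by identity-feature similarity.

    user_db maps a device identifier to the stored identity-feature
    vector of the user carrying that device; target_feature is the
    feature extracted from the photo in the positioning request. The
    device whose stored feature is closest (smallest squared distance)
    to the target feature is returned.
    """
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(user_db, key=lambda dev: sqdist(user_db[dev], target_feature))

db = {
    "device-A": [0.9, 0.1, 0.3],
    "device-B": [0.2, 0.8, 0.5],
}
print(find_target_device(db, [0.85, 0.15, 0.25]))  # → device-A
```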
  • the obtaining unit 532 is configured to determine a device identifier of the network sharing device that is currently searching for the target electronic device, and obtain a current geographic location of the target electronic device.
  • the device identifier is a unique identifier of the network sharing device, and may be an identification number set by the user or a device identification code set by the manufacturer when the device is shipped. Usually, multiple network sharing devices can be set up on each floor of a building. Since each user entering the building is constantly moving, the network sharing device connected to the electronic device he or she carries also changes constantly; each time the electronic device connects to a new network sharing device, the server can obtain its geographic location.
  • the positioning unit 533 is configured to locate the target user according to the device identifier and the current geographic location.
  • the positioning unit can be specifically used to:
  • the three-dimensional coordinates are used as coordinate positions of the target user to locate the target user.
  • the device identifier of the network sharing device may be assigned according to the floor where the device is located, so the exact floor information can be obtained from the device identifier.
  • the plane of the ground can be taken as the XY plane, and the height direction of the building as the Z axis, to establish a three-dimensional coordinate system.
  • the XY coordinate value of each electronic device may be a latitude and longitude coordinate (i.e., the geographic location), or a position relative to some fixed point of the building, and the Z-axis coordinate value may be the floor height, such as the first floor or the second floor. By converting the geographic location of each electronic device into the corresponding three-dimensional coordinates, the exact location of the corresponding user in the building can be determined, and the server can then send the located position via WiFi or Bluetooth to the electronic device of the user who made the positioning request; this is simple and convenient, and allows the requesting user to find the lost friend or relative as soon as possible according to the located position.
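The conversion from a device's identifier and geographic location to three-dimensional building coordinates can be sketched as follows; the storey height, the identifier-to-floor mapping, and the identifier naming are all assumptions for illustration:

```python
FLOOR_HEIGHT_M = 3.0  # assumed storey height

def device_to_xyz(device_id, xy, floor_of):
    """Convert a device's geographic fix into building 3-D coordinates.

    xy is the device's position in the ground (XY) plane, e.g. metres
    from a fixed corner of the building; floor_of maps a network
    sharing device's identifier to the floor it is installed on, since
    the text assumes identifiers encode the floor. The Z value is the
    floor number (ground floor = 1) times an assumed storey height.
    """
    floor = floor_of[device_id]
    return (xy[0], xy[1], (floor - 1) * FLOOR_HEIGHT_M)

floors = {"ap-2f-03": 2, "ap-1f-01": 1}
print(device_to_xyz("ap-2f-03", (12.5, 4.0), floors))  # → (12.5, 4.0, 3.0)
```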
  • the foregoing units may each be implemented as a separate entity, or may be combined arbitrarily and implemented as the same entity or several entities.
  • for specific implementations, refer to the foregoing method embodiments; details are not described herein again.
  • in the positioning device, the first acquiring module 10 acquires the target image information and the shooting time of the target image information, the target image information including at least one photographed user; the second acquiring module 20 acquires all the electronic devices searched by the target network sharing device at the shooting time, and the geographic location of each electronic device; the determining module 30 determines, according to the target image information and the geographic location, the electronic device used by each photographed user; the creating module 40 creates a user database according to the target image information and the electronic devices; and the positioning module 50 then locates the target user based on the user database, thereby achieving accurate positioning of the user in three-dimensional space. The method is simple, the positioning accuracy is high, and the practicability is strong.
  • the embodiment of the present invention further provides a server, as shown in FIG. 8, which shows a schematic structural diagram of a server according to an embodiment of the present invention, specifically:
  • the server may include a processor 601 with one or more processing cores, a memory 602 with one or more computer-readable storage media, a power supply 603, an input unit 604, and the like. It will be understood by those skilled in the art that the server structure illustrated in FIG. 8 does not constitute a limitation on the server, which may include more or fewer components than those illustrated, combine some components, or use a different component arrangement, wherein:
  • the processor 601 is the control center of the server; it connects the various portions of the entire server by using various interfaces and lines, and performs overall monitoring of the server by running or executing the software programs and/or modules stored in the memory 602 and invoking the data stored in the memory 602, thereby executing the various functions of the server and processing its data.
  • the processor 601 may include one or more processing cores; preferably, the processor 601 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 601.
  • the memory 602 can be used to store software programs and modules, and the processor 601 executes various functional applications and data processing by running software programs and modules stored in the memory 602.
  • the memory 602 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the server, etc.
  • the memory 602 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 602 can also include a memory controller to provide the processor 601 with access to the memory 602.
  • the server also includes a power source 603 for powering various components.
  • the power source 603 can be logically coupled to the processor 601 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the power supply 603 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the server can also include an input unit 604 that can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • the server may further include a display unit or the like, and details are not described herein again.
  • the processor 601 in the server loads the executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and runs the application programs stored in the memory 602, thereby implementing various functions, as follows:
  • obtaining target image information, and a shooting time of the target image information, the target image information including at least one photographed user;
  • Target users are located based on the user database.
  • the processor when determining the electronic device used by each of the photographed users according to the target image information and the geographic location, the processor may further be configured to:
  • the electronic device used by each of the photographed users is determined according to the display position and the geographical position.
  • the processor is further operable to: when determining an electronic device used by each of the photographed users based on the display location and the geographic location:
  • the electronic device used by each of the photographed users is determined based on the first relative position and the second relative position.
  • the processor when creating a user database based on the target image information and the electronic device, the processor is further operable to:
  • the association is stored in the user database to create the user database.
  • the processor is further operable to: when locating the target user based on the user database:
  • the positioning request carrying image information of the target user
  • the target user is located according to the association relationship, the target identity feature, and the user database.
  • the processor is further operable to: when locating the target user based on the association relationship, the target identity feature, and the user database:
  • the target user is located according to the device identifier and the current geographic location.
  • the processor is further operable to: when locating the target user based on the device identification and the current geographic location:
  • the three-dimensional coordinates are used as coordinate positions of the target user to locate the target user.
  • the server can achieve the beneficial effects of any of the positioning devices provided by the embodiments of the present invention. For details, refer to the previous embodiments; details are not described herein again.
  • the embodiment of the invention further provides a storage medium, wherein the storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the positioning method described in any of the above embodiments.
  • the medium may include, but is not limited to, a read only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

The invention relates to a positioning method and apparatus, a storage medium, and a server. The positioning method comprises: obtaining target image information and a shooting time of the target image information, the target image information including at least one photographed user; obtaining all the electronic devices searched by a target network sharing device at the shooting time, and the geographic location of each electronic device; determining the electronic device used by each photographed user according to the target image information and the geographic location; creating a user database according to the target image information and the electronic devices; and locating a target user based on the user database.
PCT/CN2017/091350 2017-06-30 2017-06-30 Procédé et appareil de positionnement, support de stockage et serveur WO2019000461A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780090739.8A CN110870300A (zh) 2017-06-30 2017-06-30 定位方法、装置、存储介质及服务器
PCT/CN2017/091350 WO2019000461A1 (fr) 2017-06-30 2017-06-30 Procédé et appareil de positionnement, support de stockage et serveur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/091350 WO2019000461A1 (fr) 2017-06-30 2017-06-30 Procédé et appareil de positionnement, support de stockage et serveur

Publications (1)

Publication Number Publication Date
WO2019000461A1 true WO2019000461A1 (fr) 2019-01-03

Family

ID=64740889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091350 WO2019000461A1 (fr) 2017-06-30 2017-06-30 Procédé et appareil de positionnement, support de stockage et serveur

Country Status (2)

Country Link
CN (1) CN110870300A (fr)
WO (1) WO2019000461A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046831A (zh) * 2019-12-20 2020-04-21 上海中信信息发展股份有限公司 家禽识别方法、装置及服务器
CN111723616A (zh) * 2019-03-20 2020-09-29 杭州海康威视系统技术有限公司 一种人员相关性度量方法及装置
CN111753578A (zh) * 2019-03-27 2020-10-09 北京外号信息技术有限公司 光通信装置的识别方法和相应的电子设备
CN112528699A (zh) * 2020-12-08 2021-03-19 北京外号信息技术有限公司 用于获得场景中的设备或其用户的标识信息的方法和系统
CN112561953A (zh) * 2019-09-26 2021-03-26 北京外号信息技术有限公司 用于现实场景中的目标识别与跟踪的方法和系统
CN112561952A (zh) * 2019-09-26 2021-03-26 北京外号信息技术有限公司 用于为目标设置可呈现的虚拟对象的方法和系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973738A (zh) * 2013-01-30 2014-08-06 中国电信股份有限公司 人员位置的定位方法、装置及系统
CN104378735A (zh) * 2014-11-13 2015-02-25 无锡儒安科技有限公司 室内定位方法、客户端及服务器
CN105606086A (zh) * 2015-08-28 2016-05-25 宇龙计算机通信科技(深圳)有限公司 一种定位方法及终端
CN106027959A (zh) * 2016-05-13 2016-10-12 深圳先进技术研究院 基于位置线状拟合的视频识别追踪定位系统
CN106878666A (zh) * 2015-12-10 2017-06-20 杭州海康威视数字技术股份有限公司 基于监控摄像机来查找目标对象的方法、装置和系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1757087A4 (fr) * 2004-04-16 2009-08-19 James A Aman Systeme automatique permettant de filmer en video, de suivre un evenement et de generer un contenu
CN101226664B (zh) * 2008-02-02 2011-01-26 北京海鑫科金高科技股份有限公司 一种用于自助银行与自动柜员机的智能监控系统和方法
US8531523B2 (en) * 2009-12-08 2013-09-10 Trueposition, Inc. Multi-sensor location and identification
WO2012024516A2 (fr) * 2010-08-18 2012-02-23 Nearbuy Systems, Inc. Localisation de cible utilisant fusion de capteurs sans fil et de caméra
CN104902004B (zh) * 2015-04-13 2016-04-27 深圳位置网科技有限公司 一种失踪人口的紧急救助系统及方法
CN105872979B (zh) * 2016-05-31 2019-11-26 王方松 一种取得设定场所中人群信息的方法及装置
CN106899935B (zh) * 2017-01-18 2018-08-14 深圳大学 一种基于无线接收设备和摄像头的室内定位方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973738A (zh) * 2013-01-30 2014-08-06 中国电信股份有限公司 人员位置的定位方法、装置及系统
CN104378735A (zh) * 2014-11-13 2015-02-25 无锡儒安科技有限公司 室内定位方法、客户端及服务器
CN105606086A (zh) * 2015-08-28 2016-05-25 宇龙计算机通信科技(深圳)有限公司 一种定位方法及终端
CN106878666A (zh) * 2015-12-10 2017-06-20 杭州海康威视数字技术股份有限公司 基于监控摄像机来查找目标对象的方法、装置和系统
CN106027959A (zh) * 2016-05-13 2016-10-12 深圳先进技术研究院 基于位置线状拟合的视频识别追踪定位系统

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723616A (zh) * 2019-03-20 2020-09-29 杭州海康威视系统技术有限公司 一种人员相关性度量方法及装置
CN111723616B (zh) * 2019-03-20 2023-06-02 杭州海康威视系统技术有限公司 一种人员相关性度量方法及装置
CN111753578A (zh) * 2019-03-27 2020-10-09 北京外号信息技术有限公司 光通信装置的识别方法和相应的电子设备
CN112561953A (zh) * 2019-09-26 2021-03-26 北京外号信息技术有限公司 用于现实场景中的目标识别与跟踪的方法和系统
CN112561952A (zh) * 2019-09-26 2021-03-26 北京外号信息技术有限公司 用于为目标设置可呈现的虚拟对象的方法和系统
CN112561953B (zh) * 2019-09-26 2024-09-10 北京移目科技有限公司 用于现实场景中的目标识别与跟踪的方法和系统
CN111046831A (zh) * 2019-12-20 2020-04-21 上海中信信息发展股份有限公司 家禽识别方法、装置及服务器
CN111046831B (zh) * 2019-12-20 2023-06-30 上海信联信息发展股份有限公司 家禽识别方法、装置及服务器
CN112528699A (zh) * 2020-12-08 2021-03-19 北京外号信息技术有限公司 用于获得场景中的设备或其用户的标识信息的方法和系统
CN112528699B (zh) * 2020-12-08 2024-03-19 北京外号信息技术有限公司 用于获得场景中的设备或其用户的标识信息的方法和系统

Also Published As

Publication number Publication date
CN110870300A (zh) 2020-03-06

Similar Documents

Publication Publication Date Title
WO2019000461A1 (fr) Procédé et appareil de positionnement, support de stockage et serveur
JP7091504B2 (ja) 顔認識アプリケーションにおけるフォールスポジティブの最小化のための方法および装置
Chen et al. Crowd map: Accurate reconstruction of indoor floor plans from crowdsourced sensor-rich videos
WO2021057744A1 (fr) Procédé et appareil de positionnement, et dispositif et support d'informations
US9805065B2 (en) Computer-vision-assisted location accuracy augmentation
CN110645986B (zh) 定位方法及装置、终端、存储介质
US9830337B2 (en) Computer-vision-assisted location check-in
WO2019196403A1 (fr) Procédé de positionnement, serveur de positionnement et système de positionnement
Kawaji et al. Image-based indoor positioning system: fast image matching using omnidirectional panoramic images
WO2020108234A1 (fr) Procédé de génération d'index d'image, procédé et appareil de recherche d'image, terminal et support
WO2015018233A1 (fr) Procédé de détermination de la position d'un dispositif de terminal et dispositif de terminal
US9288636B2 (en) Feature selection for image based location determination
Liang et al. Image-based positioning of mobile devices in indoor environments
US11727605B2 (en) Method and system for creating virtual image based deep-learning
Feng et al. Visual Map Construction Using RGB‐D Sensors for Image‐Based Localization in Indoor Environments
Debnath et al. Tagpix: Automatic real-time landscape photo tagging for smartphones
CN108307357A (zh) 基于Beacon三点定位的楼层定位方法
Sui et al. An accurate indoor localization approach using cellphone camera
CN110263800B (zh) 基于图像的位置确定
Mukherjee et al. Energy efficient face recognition in mobile-fog environment
WO2019127320A1 (fr) Procédé et appareil de traitement d'informations, dispositif de traitement en nuage, et produit-programme d'ordinateur
CN107016351A (zh) 拍摄指导信息的获取方法以及装置
Yong-Xu et al. Campus navigation system based on mobile augmented reality
Yin et al. A SOCP-based automatic visual fingerprinting method for indoor localization system
Yu et al. A mobile location search system with active query sensing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17915502

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17915502

Country of ref document: EP

Kind code of ref document: A1