CN107273907B - Indoor positioning method, commodity information recommendation method and device and electronic equipment - Google Patents


Info

Publication number
CN107273907B
CN107273907B (application CN201710526796.8A)
Authority
CN
China
Prior art keywords: boundary, determining, area, node device, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710526796.8A
Other languages
Chinese (zh)
Other versions
CN107273907A (en
Inventor
孙凯
孙小雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201710526796.8A
Publication of CN107273907A
Application granted
Publication of CN107273907B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241: Advertisements
    • G06Q30/0251: Targeted advertisements
    • G06Q30/0261: Targeted advertisements based on user location
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The application provides an indoor positioning method, a commodity information recommendation method and apparatus, and an electronic device. The indoor positioning method comprises the following steps: determining a first initial position coordinate of a node device when the node device enters an area; determining, based on the same feature point in two adjacent frames of images, a first relative position of the node device with respect to the first initial position coordinate when the node device moves in the area; and determining the position coordinates of the node device in the area according to the first initial position coordinate and the first relative position. With this technical scheme, the spatial distance covered by the node device between two adjacent frames captured by the camera module during movement is short, so the camera module does not need a lens with a large field angle, and the deployment cost of the node device can be reduced.

Description

Indoor positioning method, commodity information recommendation method and device and electronic equipment
Technical Field
The present application relates to the field of positioning technologies, and in particular, to an indoor positioning method, a commodity information recommendation method and apparatus, and an electronic device.
Background
In indoor positioning technology, lighting-fixture positioning fixes a camera module on the carrier to be positioned; as the carrier moves, the camera module captures the features of lighting fixtures overhead. If two adjacent lighting fixtures are far apart, however, the gap between them may exceed the field of view of the camera module, so a lens with a large field angle is required, which increases cost.
Disclosure of Invention
In view of the above, the present application provides an indoor positioning method, a commodity information recommendation method and apparatus, and an electronic device.
In order to achieve the above purpose, the present application provides the following technical solutions:
according to a first aspect of the present application, an indoor positioning method is provided, including:
determining a first initial position coordinate of the node equipment when the node equipment enters the area;
determining a first relative position of the node equipment relative to the first initial position coordinate when the node equipment moves in the area based on the same feature point in two adjacent frames of images;
and determining the position coordinates of the node equipment in the area according to the first initial position coordinates and the first relative position.
According to a second aspect of the present application, there is provided a commodity information recommendation method including:
determining the position coordinates of a node device in an area by the indoor positioning method provided by the first aspect;
determining commodity information to be recommended based on the position coordinates in the area;
and displaying the information of the commodities needing to be recommended.
According to a third aspect of the present application, there is provided an indoor positioning device, comprising:
the first determining module is used for determining a first initial position coordinate of the node equipment when the node equipment enters the area;
a second determining module, configured to determine, based on the same feature point in two adjacent frames of images, a first relative position of the node device with respect to the first initial position coordinate when moving in the area;
a third determining module, configured to determine, according to the first initial position coordinate determined by the first determining module and the first relative position determined by the second determining module, a position coordinate of the node device in the area.
According to a fourth aspect of the present application, there is provided a commodity information recommending apparatus including:
a fourth determining module, configured to determine location coordinates of the node device in the area by using the indoor positioning method provided in the first aspect;
the fifth determining module is used for determining commodity information needing to be recommended based on the position coordinates determined by the fourth determining module;
and the display module is used for displaying the information of the commodities which need to be recommended and are determined by the fifth determination module.
According to a fifth aspect of the present application, a computer-readable storage medium is provided, wherein the storage medium stores a computer program for executing the indoor positioning method provided by the first aspect or executing the commodity information recommendation method provided by the second aspect.
According to a sixth aspect of the present application, there is provided an electronic device comprising:
a processor; a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the indoor positioning method provided by the first aspect.
According to a seventh aspect of the present application, there is provided an electronic device comprising:
a processor; a memory for storing the processor-executable instructions;
the processor is configured to execute the commodity information recommendation method provided by the second aspect.
According to the technical scheme, while the node device moves, the spatial distance between the positions at which the camera module shoots two adjacent frames of images is short, so the camera module does not need a lens with a large field angle, and the deployment cost of the node device can be reduced.
Drawings
Fig. 1 is a schematic view of a scenario of an indoor positioning system to which the present application is applied;
fig. 2A is a schematic flow chart of an indoor positioning method according to an exemplary embodiment of the present application;
FIG. 2B is a top view of the area in the embodiment of FIG. 2A;
FIG. 3A is a schematic flow chart diagram illustrating yet another indoor positioning method according to an exemplary embodiment of the present application;
FIG. 3B is a schematic illustration of the positioning pattern at the entrance of the region in the embodiment shown in FIG. 3A;
FIG. 3C is a schematic diagram of the previous image of the two adjacent frames of images;
FIG. 3D is a schematic diagram of the next image of the two adjacent frames of images;
FIG. 3E is a schematic diagram illustrating how the position offset between two adjacent images is determined in the embodiment shown in FIG. 3A;
FIG. 4A is a schematic illustration of the positions of the first boundary strip and the second boundary strip at the entrance of the area as shown in an exemplary embodiment of the present application;
FIG. 4B is a schematic illustration of the location of a first boundary and a second boundary at an entrance to a zone as shown in an exemplary embodiment of the present application;
fig. 5A is a schematic flow chart diagram illustrating another indoor positioning method according to an exemplary embodiment of the present application;
FIG. 5B is a top view of the area and its sub-areas in the embodiment shown in FIG. 5A;
FIG. 5C is a partial perspective view of the area and its sub-areas in the embodiment shown in FIG. 5A;
fig. 6A is a schematic flow chart illustrating yet another indoor positioning method according to an exemplary embodiment of the present application;
FIG. 6B is a schematic view of the shapes of the virtual boundary, the third boundary strip and the fourth boundary strip in the embodiment shown in FIG. 6A;
fig. 7A is a schematic flow chart illustrating yet another indoor positioning method according to an exemplary embodiment of the present application;
FIG. 7B is a schematic diagram showing the positional relationship of the virtual boundary, the third boundary and the fourth boundary in the embodiment shown in FIG. 7A;
fig. 8A is a schematic flow chart illustrating yet another indoor positioning method according to an exemplary embodiment of the present application;
FIG. 8B is a schematic view of the second positioning pattern at a virtual boundary in the embodiment of FIG. 8A;
fig. 9 is a flowchart illustrating a method for recommending merchandise information according to an exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of an electronic device according to an exemplary embodiment of the present application;
FIG. 11 is a schematic diagram illustrating another electronic device according to an exemplary embodiment of the present application;
FIG. 12 is a schematic diagram illustrating an indoor positioning apparatus according to an exemplary embodiment of the present application;
FIG. 13 is a schematic diagram of another indoor positioning apparatus shown in an exemplary embodiment of the present application;
fig. 14 is a schematic structural diagram of a commodity information recommending apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Fig. 1 is a schematic view of a scenario of an indoor positioning system to which the present application is applied; as shown in fig. 1, the indoor positioning system may include a plurality of node devices (e.g., carts 11, 12, …, 1N, N being the number of carts in the indoor positioning system) which are movable indoors and a computing device 10, wherein the computing device 10 may be a Personal Computer (PC) or a server.
It should be noted that, when the node device executes the embodiments of the present application, it may start a positioning process upon entering the entrance of an area and execute the indoor positioning method described in the present application: the node device itself determines a first initial position coordinate when entering the area, shoots the scene in the area with a camera module installed on the node device, performs feature point identification, and then positions itself in the area. When the computing device executes the embodiments of the present application, it may detect the position of the node device in real time, determine the first initial position coordinate of the node device when entering the area, receive the images shot by the node device in the area, perform feature point identification, and then position the node device in the area.
FIG. 2A is a schematic flow chart of an indoor positioning method according to an exemplary embodiment of the present application, and FIG. 2B is a top view of the area in the embodiment of FIG. 2A. The embodiment may be applied to a computing device such as a personal computer or a server, or to a node device that needs indoor positioning (for example, a shopping cart in a supermarket). As shown in fig. 2A, the method includes the following steps:
in step 201, a first initial position coordinate of a node device when entering an area is determined.
In an embodiment, the node device may be a device that needs to be positioned indoors, for example, a shopping cart in a supermarket. In an embodiment, as shown in fig. 2B, an area 20 may include a physical boundary 211, a physical boundary 212, and a physical boundary 213, where the physical boundary 212 and the physical boundary 213 are shelf containers and the physical boundary 211 is a wall. A plane coordinate system XOY may be set in advance on the plane of the area, with its origin serving as the reference point; the first initial position coordinate is then a coordinate point in the plane coordinate system XOY.
At the entrance of the area, calibrated reference objects can be painted on the ground in advance, and the coordinate information of the reference objects can be recorded in a preset list. When the node device is detected passing over a reference object, the first initial position coordinate of the node device when entering the area can be determined.
Step 202, determining a first relative position of the node device relative to the first initial position coordinate when the node device moves in the area based on the same feature point in two adjacent frames of images acquired by the camera module.
In one embodiment, the feature points may be pixel points, such as stains or scraps of paper on the ground, that can be identified by an image recognition method. When the preset frame rate of the captured images is high enough, the same stain or paper scrap appears in two adjacent frames. As the node device moves, new feature points are continuously identified in the images by the image recognition method, and the first relative position of the node device with respect to the first initial position coordinate while moving in the area is determined based on these feature points.
And step 203, determining the position coordinates of the node equipment in the area according to the first initial position coordinates and the first relative position.
In an embodiment, the position coordinates of the node device in the area may be obtained based on a sum of the first initial position coordinate and the first relative position, for example, the first initial position coordinate is [ x0, y0], the first relative position is [ Δ x, Δ y ], and the position coordinates of the node device in the area are [ x0+ Δ x, y0+ Δ y ].
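The arithmetic of steps 201 through 203 can be summarized in a short sketch. The sketch below is illustrative only and assumes the per-frame displacements [Δx, Δy] have already been computed elsewhere; the function name and example values are not from the patent.

```python
from typing import Iterable, Tuple

Coordinate = Tuple[float, float]

def track_position(initial: Coordinate,
                   frame_offsets: Iterable[Coordinate]) -> Coordinate:
    """Accumulate per-frame displacements onto the first initial
    position coordinate [x0, y0] (steps 201-203)."""
    x, y = initial
    for dx, dy in frame_offsets:
        # Position coordinate of the node device within the area
        x, y = x + dx, y + dy
    return (x, y)

# Example: entering the area at [x0, y0] = (2.0, 0.5) and moving twice
print(track_position((2.0, 0.5), [(0.1, 0.0), (0.05, 0.12)]))  # (2.15, 0.62)
```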
In this embodiment, during the movement of the node device, the spatial distance between the positions at which the camera module shoots two adjacent frames of images is short, so the camera module does not require a lens with a large field angle, and the deployment cost of the node device can be reduced.
Fig. 3A is a schematic flow chart of another indoor positioning method according to an exemplary embodiment of the present application, fig. 3B is a schematic diagram of a positioning pattern at an entrance of an area in the embodiment shown in fig. 3A, an image shown in fig. 3C represents a previous image in two adjacent images, an image shown in fig. 3D represents a next image in two adjacent images, and fig. 3E is a schematic diagram of how a position offset amount between two adjacent images is determined in the embodiment shown in fig. 3A; based on the above embodiments, the present embodiment takes an example of how to determine the first initial position coordinate and the first relative position, as shown in fig. 3A, and includes the following steps:
in step 301, the position coordinates of a first positioning pattern in an area, which a node device passes through when entering the area, are determined.
As shown in fig. 3B, a plurality of first positioning patterns, for example the first positioning pattern 31, may be painted at the entrance of the region. The first positioning patterns may differ in outline (e.g., triangle, circle, square), in the graphic filled inside each pattern, or in fill color; that is, the plurality of first positioning patterns may be distinguished by shape, by color, or by a combination of both. In one embodiment, the position coordinates of the plurality of first positioning patterns in the area may be calibrated in advance, and the position coordinates of each first positioning pattern in the area may be stored after calibration.
In another embodiment, as shown in fig. 3B, a first reference point 32 may be provided at the entrance, the position coordinates of the first reference point 32 in the area may be calibrated in advance, the relative positions of the plurality of first positioning patterns with respect to the first reference point 32 may be calibrated in advance, and accordingly, the position coordinates of the first positioning pattern in the area may be determined based on the relative positions of the first positioning patterns with respect to the first reference point and the first position coordinates of the first reference point in the area.
In one embodiment, a camera module may be installed at the bottom of the node device, an image of the ground is captured by the camera module, and a first positioning pattern, which may be a shape, a color, or a combination of a shape and a color, located at an intermediate position on the image in the image is identified based on an image recognition method.
Step 302, determining a first initial position coordinate of the node device when entering the area based on the position coordinate of the first positioning pattern in the area.
Corresponding to step 301, the position coordinates of the first positioning pattern in the area may be regarded as the first initial position coordinates of the node device when entering the area.
Step 303, determining coordinate offset of the same feature point in two adjacent frames of images in an image plane coordinate system.
As shown in fig. 3C and fig. 3D, which respectively show the previous and the next of two adjacent frames, the feature point 35 is the same feature point in both frames. If the coordinates of the feature point 35 in the image plane coordinate system are detected to be [a1, b1] in the previous frame and [a2, b2] in the next frame, then the coordinate offset of this same feature point between the two adjacent frames in the image plane coordinate system is [a2-a1, b2-b1].
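The patent does not name a particular feature detector, so the following sketch uses ORB from OpenCV purely as one plausible way to obtain the coordinate offset [a2-a1, b2-b1] between two adjacent frames; treat the detector choice as an assumption, not the claimed method.

```python
import cv2
import numpy as np

def coordinate_offset(prev_img, next_img):
    """Match the same feature points across two adjacent frames and
    return a single robust image-plane offset [a2-a1, b2-b1]."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(next_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # Offset of each matched feature point in the image plane coordinate system
    offsets = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
               for m in matches]
    return np.median(offsets, axis=0)  # median suppresses mismatched outliers
```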
And step 304, determining the position offset of the node equipment when shooting two adjacent frames of images based on the coordinate offset and the lens imaging parameters of the camera module.
Let the second relative position of the node device in the region with respect to the first initial position coordinate when shooting the previous frame be [A1, B1]; this can be calculated in advance by the method described in this embodiment. Let the relative position when shooting the next frame be [A2, B2], which is the quantity to be solved. Since [A1, B1] and [A2, B2] are both relative to the first initial position coordinate, they can equivalently be regarded as position coordinates in the region.
As shown in FIG. 3E, on the premise that the pixel array of the camera module and the ground are both horizontal, the coordinate offset in the image plane coordinate system is [a2-a1, b2-b1]. If the physical width of a unit pixel is m, then based on the imaging theory of the camera module and the similar-triangle relation:

ΔS / (m · √((a2-a1)² + (b2-b1)²)) = H / f, i.e., ΔS = (H / f) · m · √((a2-a1)² + (b2-b1)²)

where f is the focal length of the lens of the camera module and H is the object distance, i.e., the distance from the ground to the lens. Since the camera module is fixed on the node device, the object distance H can be regarded as a known quantity, and the position offset ΔS can be obtained from the above formula.
In one embodiment, the offset direction angle α of the camera module when shooting the two adjacent frames can be determined from the coordinate offset in the image plane coordinate system, i.e.,

α = arctan((b2 - b1) / (a2 - a1))
in another embodiment, it can be calculated by a gyroscope that the node device moves from [ A1, B1]]To [ A2, B2]]α from the above equation
Figure BDA0001338639880000083
After the position offset ΔS is obtained, the position offset of the node device in the area between the two adjacent frames can be calculated from ΔS and α, namely [ΔSx, ΔSy] = [ΔS·cos α, ΔS·sin α].
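Combining the similar-triangle formula with the direction angle α gives the per-frame ground displacement. A minimal numeric sketch, assuming example values for the pixel pitch m, focal length f, and object distance H (none of these values appear in the patent):

```python
import math

def ground_displacement(a1, b1, a2, b2, m, f, H):
    """delta_S = (H / f) * m * sqrt((a2-a1)^2 + (b2-b1)^2);
    returned as [delta_S*cos(alpha), delta_S*sin(alpha)]."""
    pixel_offset = math.hypot(a2 - a1, b2 - b1)     # offset in pixels
    delta_s = (H / f) * m * pixel_offset            # magnitude on the ground
    alpha = math.atan2(b2 - b1, a2 - a1)            # offset direction angle
    return (delta_s * math.cos(alpha), delta_s * math.sin(alpha))

# Assumed parameters: 3 um pixel pitch, 4 mm focal length, lens 0.3 m above ground
print(ground_displacement(100, 120, 112, 129, m=3e-6, f=4e-3, H=0.3))
# -> approximately (0.0027, 0.0020) metres
```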
Step 305, determining a second relative position of the node device with respect to the first initial position coordinate when the previous image in the two adjacent images is taken.
In an embodiment, the second relative position [A1, B1] of the node device in the area with respect to the first initial position coordinate when capturing the previous of the two adjacent images may be obtained by iterative accumulation starting from the first initial position coordinate, using a method similar to step 304 above, and will not be described in detail here.
And step 306, determining a first relative position of the node device relative to the first initial position coordinate when the node device moves in the area based on the position offset and the second relative position.
For example, the second relative position is [ a1, B1], the positional offset is [ Δ S cos α, Δ S sin α ], and the first relative position of the node device with respect to the first initial position coordinates is [ a1+ Δ S cos α, B1+ Δ S sin α ].
And 307, determining the position coordinates of the node equipment in the area according to the first initial position coordinates and the first relative position.
For example, the first initial position coordinate is [ x0, y0], the first relative position is [ a1+ Δ S cos α, B1+ Δ S sin α ], and the position coordinates of the node device within the region are [ x0+ a1+ Δ S cos α, y0+ B1+ Δ S sin α ].
In this embodiment, the first positioning patterns can be quickly deployed at the entrance of the area by painting, and deployment has little impact on the area as a whole, so deployment is flexible and the maintenance cost of the whole area can be greatly reduced. The position offset of the node device between two adjacent frames is determined based on the coordinate offset and the lens imaging parameters of the camera module, and the first relative position with respect to the first initial position coordinate is determined from that offset, which can greatly improve the positioning accuracy of the node device.
FIG. 4A is a schematic illustration of the positions of the first boundary strip and the second boundary strip at the entrance of the area as shown in an exemplary embodiment of the present application; on the basis of the above embodiment, described in conjunction with fig. 3B, as shown in fig. 4A, a first boundary strip 41 and a second boundary strip 42 may additionally be painted at the entrance of the region, wherein the first boundary strip 41 is close to the region 30 shown in fig. 3B, and the second boundary strip 42 is far from the region 30; based on the first boundary strip 41 and the second boundary strip 42, the in-and-out state of the node device in the area can be determined, which specifically includes the following steps:
acquiring first boundary characteristic information of a first boundary strip and second boundary characteristic information of a second boundary strip;
determining a first time order in which the first boundary strip and the second boundary strip are identified;
and determining whether the node equipment enters the area or leaves the area based on the first time sequence and the first boundary characteristic information and the second boundary characteristic information.
For example, if the first boundary strip 41 is detected first and then the second boundary strip 42, it may be determined that the node device is leaving the area 30; if the second boundary strip 42 is detected first and then the first boundary strip 41, it may be determined that the node device is entering the area 30.
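As a sketch of this decision, the function below encodes only the ordering rule above; the string labels are illustrative and do not come from the patent:

```python
def in_out_state(first_seen: str, then_seen: str) -> str:
    """first_seen/then_seen are 'near_strip' (boundary strip close to the
    area) or 'far_strip' (strip far from the area), in recognition order."""
    if (first_seen, then_seen) == ("far_strip", "near_strip"):
        return "entering area"   # far strip crossed before the near strip
    if (first_seen, then_seen) == ("near_strip", "far_strip"):
        return "leaving area"    # near strip crossed before the far strip
    return "unknown"

print(in_out_state("far_strip", "near_strip"))  # entering area
```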
FIG. 4B is a schematic illustration of the location of a first boundary and a second boundary at an entrance to a zone as shown in an exemplary embodiment of the present application; on the basis of the above embodiment, as shown in fig. 4B, a first boundary 43 and a second boundary 44 may also be buried at the entrance of the area, wherein the first boundary 43 is close to the area 30 and the second boundary 44 is far from the area 30; based on the first boundary 43 and the second boundary 44, the in-and-out state of the node device in the area may also be determined, which specifically includes the following steps:
acquiring a first boundary identifier of the first boundary and a second boundary identifier of the second boundary;
determining a second time sequence when the first boundary identifier and the second boundary identifier are obtained;
determining whether the node device enters the area or leaves the area based on the second time order, the first boundary identification and the second boundary identification.
Similar to the painted boundary strips, if the first boundary 43 is detected first and then the second boundary 44, the node device may be determined to be leaving the area 30; if the second boundary 44 is detected first and then the first boundary 43, the node device may be determined to be entering the area 30.
As shown in fig. 4B, a first group of signal emitters (e.g., the signal emitters are circles in the first boundary 43 shown in fig. 4B) may be embedded under the floor corresponding to the first boundary 43, and a second group of signal emitters (e.g., the signal emitters are circles in the second boundary 44 shown in fig. 4B) may be embedded under the floor corresponding to the second boundary 44, so that the visual effect of the entire scene is not affected because the signal emitters are embedded at the entrance of the area 30 shown in fig. 3B. The interval between two adjacent signal transmitters of the first group of signal transmitters and the second group of signal transmitters may be determined according to the width of the node device, as long as it is ensured that the node device avoids interference of broadcast signals of adjacent boundaries when moving to the boundary.
In one embodiment, the signal transmitter may continuously transmit the broadcast signal according to the actual time requirement of the area (e.g., during the business hours of the supermarket), and the broadcast signal carries the boundary identifier of the boundary where the signal transmitter is located. In one embodiment, the signal emitter may be an infrared, ultrasonic, or other signal emitter. In an embodiment, after receiving the broadcast signal, the node device may forward the broadcast signal to a computing device, and the computing device parses the boundary identifier from the broadcast signal; in another embodiment, the node device may also parse the boundary identifier directly from the broadcast signal.
The above-mentioned embodiments of fig. 4A and 4B can determine whether to start or stop the positioning of the node device by detecting the in-and-out state of the node device in the area, thereby avoiding unnecessary positioning calculation.
Fig. 5A is a schematic flow chart illustrating another indoor positioning method according to an exemplary embodiment of the present application, fig. 5B is a top view of the area and its sub-areas in the embodiment shown in fig. 5A, and fig. 5C is a partial perspective view of the area and its sub-areas in the embodiment shown in fig. 5A; the present embodiment may be applied to a personal computer, a server, and a node device that needs to be located indoors (for example, a shopping cart in a supermarket), and as shown in fig. 5A, the method includes the following steps:
step 501, if it is detected that the node device moves to a sub-area in the area, determining a boundary identifier of the sub-area.
In one embodiment, as shown in fig. 5B and 5C, a region 50 may include a plurality of sub-regions, for example the 9 sub-regions shown in fig. 5B. The sub-region 51 includes a physical boundary 511, a physical boundary 512, and a physical boundary 514, where the physical boundary 512 and the physical boundary 514 are shelf containers and the physical boundary 511 is a wall. Since physical boundaries cannot be traversed, in the present application the sub-region 51 further includes a virtual boundary 513. A virtual boundary represents a boundary through which a user may enter the sub-region; the virtual boundary 513 may be formed by the line between the ends of two opposing physical boundaries, e.g., the line between the left end of the physical boundary 512 and the left end of the physical boundary 514.
In an embodiment, a boundary identifier may be used to identify a sub-region. Within a region, one sub-region may correspond to one or two boundary identifiers: for example, the sub-region 51 has a single accessible boundary line (the virtual boundary 513) and therefore corresponds to one boundary identifier, whereas the sub-region 52 is accessible from both its left and right sides and therefore corresponds to two boundary identifiers. The description of the other sub-regions shown in fig. 5B may refer to that of sub-region 51 or sub-region 52 and is not repeated here.
Step 502, determining a second position coordinate of a second reference point of the sub-area in the area based on the boundary identifier.
In an embodiment, a reference point identifier of the sub-region corresponding to the boundary identifier is looked up in a first preset list, where the first preset list records the correspondence between boundary identifiers and reference point identifiers in the region; a second position coordinate corresponding to the reference point identifier is then determined from a second preset list, where the second preset list records the correspondence between reference point identifiers and the position coordinates of the reference points in the region. The first preset list is shown in Table 1 and the second preset list in Table 2. In an embodiment, a second reference point, such as a black dot shown in fig. 5B, may be set at each virtual boundary. The position coordinates of the second reference point 515 in the area 50 may be obtained by pre-calibration; a reference point identifier may be set for each second reference point and recorded in the first preset list, and the correspondence between each second reference point's position coordinates and its reference point identifier may be recorded in the second preset list. For example, after the boundary identifier of the sub-region 51 is identified, the first preset list may be searched based on that boundary identifier to find the identifier of the second reference point 515, and then the second position coordinate [x1, y1] corresponding to that identifier may be looked up in the second preset list.
TABLE 1
Boundary identifier    Reference point identifier
ABC1                   BCD1
ABC2                   BCD2
TABLE 2
Reference point identifier    Second position coordinate
BCD1                          [x1, y1]
BCD2                          [x2, y2]
In another embodiment, the first preset list and the second preset list may be merged into a single list containing three columns, i.e., the boundary identifier, the reference point identifier, and the second position coordinate, as shown in Table 3.
TABLE 3
Boundary identifier    Reference point identifier    Second position coordinate
ABC1                   BCD1                          [x1, y1]
ABC2                   BCD2                          [x2, y2]
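A dictionary rendering of the two-step lookup in Tables 1 and 2 (equivalently the merged Table 3); the coordinate values stand in for the calibrated [x1, y1] and [x2, y2] and are assumptions:

```python
# First preset list: boundary identifier -> reference point identifier
BOUNDARY_TO_REF = {"ABC1": "BCD1", "ABC2": "BCD2"}
# Second preset list: reference point identifier -> second position coordinate
REF_TO_COORD = {"BCD1": (1.0, 1.0), "BCD2": (2.0, 2.0)}  # placeholder values

def second_position(boundary_id: str):
    """Step 502: resolve a sub-area's boundary identifier to the second
    position coordinate of its second reference point."""
    return REF_TO_COORD[BOUNDARY_TO_REF[boundary_id]]

print(second_position("ABC1"))  # (1.0, 1.0)
```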
In step 503, a third relative position of the node device in the sub-area with respect to the second reference point is determined.
In one embodiment, the coordinates of the reference point in the sub-region may be regarded as the origin coordinates of the sub-region, or as coordinates with a calibrated offset from that origin. In one embodiment, images of the ground can be acquired by the camera module arranged on the node device, and the relative displacement of the node device with respect to the reference point while moving in the sub-area is determined from the same feature point in the acquired images. For example, the second position coordinate of the second reference point 515 in the area is [M0, N0]; the camera module on the node device continuously shoots images of the ground, the coordinate offset of a feature point between two adjacent frames is calculated, and the third relative position [ΔM, ΔN] of the node device with respect to the second reference point 515 while moving is computed by a method similar to the embodiment shown in fig. 3A.
And step 504, updating the position coordinates of the node equipment in the area according to the third relative position and the second position coordinates.
In an embodiment, based on the sum of the third relative position and the second position coordinate, the position coordinate of the node device in the sub-area may be calculated, for example, the third relative position is [ Δ M, Δ N ], the second position coordinate is [ M0, N0], the position coordinate of the node device in the sub-area is [ M0+ Δ M, N0+ Δ N ], and thus the position coordinate of the node device in the area may be updated.
In this embodiment, the two-dimensional bounded space is divided into a plurality of sub-regions. Because the second position coordinate of each sub-region's second reference point in the region is a calibrated value, it accurately represents the position of that reference point in the region. Positioning the node device by its relative displacement with respect to the second reference point of the current sub-region therefore resets the accumulated error in stages, each time a sub-region boundary is crossed, which improves the positioning accuracy of the node device in the region.
FIG. 6A is a schematic flow chart illustrating yet another indoor positioning method according to an exemplary embodiment of the present application, and FIG. 6B is a schematic shape diagram illustrating a virtual boundary, a third boundary strip and a fourth boundary strip according to the embodiment of FIG. 6A; based on the above embodiments, the present embodiment takes how to determine the boundary identifier when the node device moves to the boundary of the sub-area as an example and is exemplarily described with reference to fig. 5B, as shown in fig. 6A, including the following steps:
Step 601, obtaining third boundary feature information of a third boundary strip and fourth boundary feature information of a fourth boundary strip.
In an embodiment, the third boundary strip and the fourth boundary strip may be disposed at a virtual boundary 513 as shown in fig. 5B; for example, as shown in fig. 6B, at the virtual boundary 513, the third boundary strip 61 is disposed on one side and the fourth boundary strip 62 on the other side. The width of a boundary strip is not limited, as long as its feature information can be recognized by an image recognition method. In an embodiment, the third boundary feature information and the fourth boundary feature information may be the color information of the boundary strip, the shape information of the boundary strip, or a combination of the two. In an embodiment, the third and fourth boundary strips may be shaped as two-dimensional codes, or as any shape that is unique within the area; for example, the third boundary strip 61 shown in fig. 6B is a transverse bar and the fourth boundary strip 62 is an oblique bar.
Step 602, determining a third time order in which the third boundary strip and the fourth boundary strip are identified.
For example, when a node device moves from the third boundary strip 61 to the fourth boundary strip 62, the user is pushing the node device into the sub-area 51; when it moves from the fourth boundary strip 62 to the third boundary strip 61, the user is pushing the node device out of the sub-area 51. During the movement of the node device, if the third boundary strip 61 is recognized first in the images collected by the camera module and then the fourth boundary strip 62, the node device is entering the sub-area 51; if the fourth boundary strip 62 is recognized first and then the third boundary strip 61, the node device is leaving the sub-area 51.
Step 603, determining the boundary identifier of the sub-area based on the third time sequence, the third boundary feature information and the fourth boundary feature information.
For example, if the third boundary strip 61 is recognized first and then the fourth boundary strip 62 in the images captured by the camera module, indicating that the node device enters the sub-region 51, the boundary strip close to the sub-region 51 (i.e., the fourth boundary strip 62) may be regarded as the boundary of the sub-region. The boundary identifier corresponding to the fourth boundary strip 62 can then be found by searching the list recording the correspondence between boundary strip shape information and boundary identifiers, and is determined as the boundary identifier of the sub-region 51.
In this embodiment, since the boundary strips can be printed on the ground, and their shapes are designed to be easily recognized and unique within the whole area, the possibility of interference or error when recognizing a boundary strip is extremely small, and the later maintenance cost of the boundary strips is low, so there is no maintenance-cost pressure in a practical application scenario (for example, a supermarket). In addition, the boundary strips partition the whole area in physical space into a plurality of sub-areas, so that positioning the node device only requires positioning within the current sub-area, which effectively reduces the positioning error of the node device in the area.
Fig. 7A is a schematic flowchart illustrating a further indoor positioning method according to an exemplary embodiment of the present application, and fig. 7B is a schematic diagram illustrating a position relationship among the virtual boundary, the third boundary, and the fourth boundary in the embodiment illustrated in fig. 7A; based on the above embodiments, the present embodiment takes how to determine the boundary identifier when the node device moves to the boundary of the sub-area as an example and is exemplarily described with reference to fig. 5B, as shown in fig. 7A, including the following steps:
step 701, a third boundary identifier of the third boundary and a fourth boundary identifier of the fourth boundary are obtained.
In one embodiment, as shown in fig. 7B, a third set of signal emitters (e.g., the signal emitters are circles in the third boundary 71 shown in fig. 7B) may be embedded under the floor corresponding to the third boundary 71, and a fourth set of signal emitters (e.g., the signal emitters are circles in the fourth boundary 72 shown in fig. 7B) may be embedded under the floor corresponding to the fourth boundary 72, so that the visual effect of the entire scene is not affected because the signal emitters are embedded under the floor of the area 30 shown in fig. 3B or the area 50 shown in fig. 5B. The third boundary 71 and the fourth boundary 72 may be located at both sides of the virtual boundary 513, wherein the interval between two adjacent signal transmitters in the third group of signal transmitters and the fourth group of signal transmitters may be determined according to the width of the node device, as long as it is ensured that the node device avoids interference of the broadcast signals of the adjacent boundaries when moving to the boundary.
Step 702 determines a fourth time sequence when the third boundary identifier and the fourth boundary identifier are obtained.
In an embodiment, the computing device may record the time at which each boundary identifier is received. For example, when the node device moves from the third boundary 71 to the fourth boundary 72, the user is pushing the node device into the sub-area 51; when it moves from the fourth boundary 72 to the third boundary 71, the user is pushing the node device out of the sub-area 51. During the movement of the node device, if the computing device receives the boundary identifier of the third boundary 71 first and then that of the fourth boundary 72, the node device is entering the sub-area 51; if it receives the boundary identifier of the fourth boundary 72 first and then that of the third boundary 71, the node device is leaving the sub-area 51.
And 703, determining the boundary identifier of the sub-region from the third boundary identifier and the fourth boundary identifier based on the fourth time sequence.
For example, if the computing device receives the boundary identifier of the third boundary 71 first and then receives the boundary identifier of the fourth boundary 72, which indicates that the node device enters the sub-area 51, at this time, the boundary close to the sub-area 51 (i.e., the fourth boundary 72) may be regarded as the boundary of the sub-area 51, and the boundary identifier of the fourth boundary 72 may be regarded as the boundary identifier corresponding to the sub-area 51; if the computing device receives the boundary identifier of the fourth boundary 72 first and then receives the boundary identifier of the third boundary 71, it indicates that the node device is about to leave the sub-area 51.
In the embodiment, the whole area is divided from the physical space through the boundary, so that the whole area is divided into a plurality of sub-areas, when the node equipment is positioned, only the positioning in the sub-areas is needed, and the positioning error of the node equipment in the sub-areas is effectively reduced.
FIG. 8A is a schematic flow chart illustrating yet another indoor positioning method according to an exemplary embodiment of the present application, and FIG. 8B is a schematic diagram of the second positioning patterns at a virtual boundary in the embodiment of FIG. 8A; based on the above embodiments, the present embodiment takes how to determine the relative displacement of the node device from the reference point when the node device moves in the sub-area as an example, described with reference to fig. 5B, and as shown in fig. 8A includes the following steps:
step 801, determining a second initial position coordinate of the node device when the node device enters the sub-area.
In one embodiment, a second positioning pattern acquired by the camera module is identified, the position coordinate of the second positioning pattern relative to the second reference point is determined, and the second initial position coordinate of the node device when entering the sub-area is determined from that relative position. As shown in fig. 8B, second positioning patterns with different fill colors or different filled graphics and shapes may be pre-painted on the virtual boundary 513, and the position coordinate of each second positioning pattern relative to the second reference point may be pre-calibrated. The camera module collects the feature information of the second positioning patterns while the node device moves. For example, with the camera module installed at the center point of the node device, the second positioning pattern located in the middle of the image can be identified, and the second initial position coordinate of the node device when entering the sub-area 51 can be determined from that pattern's position relative to the second reference point. As shown in fig. 8B, the rectangular coordinate system of the sub-area 51 may take the second reference point (indicated by a black dot) as its origin, with the x-axis and y-axis parallel to two perpendicular sides of the sub-area 51. If the triangle closest to the reference point is identified as the second positioning pattern and the center of the triangle lies at a distance Δx0 from the reference point along the x-axis direction, the second initial position coordinate of the node device in the sub-area 51 can be determined accordingly, e.g., as [p0, q0].
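A minimal sketch of this lookup, assuming a pre-calibrated table mapping each second positioning pattern to its offset from the second reference point (the pattern names and offsets are invented for illustration):

```python
# Assumed calibration: offset of each second positioning pattern's centre
# from the second reference point, along the sub-area's axes
PATTERN_OFFSETS = {"triangle": (0.40, 0.0), "circle": (0.80, 0.0)}

def second_initial_coordinate(pattern: str, ref_point=(0.0, 0.0)):
    """Step 801: second initial position coordinate when entering the
    sub-area, from the identified middle positioning pattern."""
    dx, dy = PATTERN_OFFSETS[pattern]
    return (ref_point[0] + dx, ref_point[1] + dy)

print(second_initial_coordinate("triangle"))  # (0.4, 0.0)
```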
And step 802, determining a fourth relative position of the node equipment in the sub-area relative to the second initial position coordinate based on the same characteristic point in the two adjacent frames of images acquired by the camera module.
In an embodiment, the preset frequency at which the camera module acquires images may be adjusted according to the required positioning accuracy; for example, when higher positioning accuracy is required, the preset frequency may be increased, which keeps the position offset of the same feature point between two adjacent frames small, so that the motion trajectory of the node device in the sub-area can be accurately tracked from the images captured by the camera module.
And step 803, determining a third relative position of the node equipment in the sub-area relative to the second reference point based on the position relation between the second initial position coordinates and the second reference point and the fourth relative position.
In an embodiment, for how to determine the third relative position of the node device in the sub-area with respect to the second reference point, reference may be made to the related description of the embodiment shown in fig. 5A, which is not repeated here.
In this embodiment, the second positioning patterns and the second reference point involve no hardware upgrade or modification, so the cost is low; they can be quickly deployed on the indoor floor by painting, and deployment has little impact on the area as a whole, so deployment is flexible and the maintenance cost of the whole area can be greatly reduced.
Fig. 9 is a flowchart illustrating a method for recommending merchandise information according to an exemplary embodiment of the present application; the present embodiment may be applied to a node device, and may also be applied to a computing device, and the present embodiment is exemplarily described in combination with the foregoing embodiments, as shown in fig. 9, including the following steps:
step 901, determining the position coordinates of the node device in the area.
The position coordinates of the node device in the area can be obtained by the indoor positioning method provided by the embodiments shown in fig. 2A to fig. 8A.
And step 902, determining commodity information needing to be recommended based on the position coordinates of the node equipment in the area.
In an embodiment, the node device may record in advance the position of each container in the area 30 shown in fig. 3B or the area 50 shown in fig. 5B together with the commodity information sold at each container, determine the container identifier closest to the position coordinates of the node device, determine the commodity information associated with that container identifier, and select from it the commodities whose prices have been adjusted as the commodity information to be recommended.
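The nearest-container selection of step 902 could look like the following sketch; the container layout, goods, and the "discounted" flag are all invented for illustration:

```python
import math

CONTAINERS = {
    "shelf_A": {"pos": (1.0, 2.0), "goods": [{"name": "tea", "discounted": True}]},
    "shelf_B": {"pos": (5.0, 1.0), "goods": [{"name": "rice", "discounted": False}]},
}

def recommend(node_pos):
    """Pick the container nearest the node device's position coordinate
    and return its price-adjusted goods as the recommendations."""
    nearest = min(CONTAINERS,
                  key=lambda c: math.dist(node_pos, CONTAINERS[c]["pos"]))
    return [g["name"] for g in CONTAINERS[nearest]["goods"] if g["discounted"]]

print(recommend((1.5, 1.8)))  # ['tea']
```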
And step 903, pushing the information of the commodities needing to be recommended.
In an embodiment, the commodity information to be recommended may be displayed on a display screen of the node device, or announced to the user by voice playback.
In this embodiment, based on the positioning result of the node device, the user can conveniently obtain the commodity information most worth viewing nearby in the area, so commodity information can be recommended in time and the user can purchase inexpensive, high-quality goods.
Corresponding to the embodiment of the indoor positioning method, the application also provides an embodiment of the indoor positioning device.
The embodiment of the indoor positioning device can be applied to electronic equipment, and the electronic equipment can be node equipment or computing equipment in the application. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking a software implementation as an example, as a logical device, the device is formed by reading, by a processor of the electronic device where the device is located, a corresponding computer program instruction in the nonvolatile memory into the memory for operation. From a hardware aspect, as shown in fig. 10, the present application is a hardware structure diagram of an electronic device in which an indoor positioning apparatus is located, where the electronic device in which the apparatus is located in the embodiment may further include other hardware according to an actual function of the electronic device, in addition to the processor, the memory, the network interface, and the nonvolatile memory shown in fig. 10, and details of this are not repeated.
The embodiment of the commodity information recommendation device can be applied to electronic equipment, wherein the electronic equipment can be node equipment or computing equipment in the application. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking a software implementation as an example, as a logical device, the device is formed by reading, by a processor of the electronic device where the device is located, a corresponding computer program instruction in the nonvolatile memory into the memory for operation. In terms of hardware, as shown in fig. 11, the electronic device in which the commodity information recommendation apparatus is located in the present application is a hardware structure diagram, except for the processor, the memory, the network interface, and the nonvolatile memory shown in fig. 11, the electronic device in which the apparatus is located in the embodiment may also include other hardware according to the actual function of the electronic device, which is not described again.
The present application further provides a computer-readable storage medium, where the storage medium stores a computer program, where the computer program is used to execute the indoor positioning method provided in the embodiment shown in fig. 2A to 8A, or execute the commodity information recommendation method provided in the embodiment shown in fig. 9.
Fig. 12 is a schematic structural diagram of an indoor positioning apparatus according to an exemplary embodiment of the present application, where as shown in fig. 12, the indoor positioning apparatus includes: a first determining module 121, a second determining module 122, and a third determining module 123.
A first determining module 121, configured to determine a first initial position coordinate of the node device when entering the area;
a second determining module 122, configured to determine, based on the same feature point in two adjacent frames of images, a first relative position of the node device with respect to the first initial position coordinate when moving in the area;
and a third determining module 123, configured to determine the position coordinates of the node device within the area according to the first initial position coordinates determined by the first determining module 121 and the first relative position determined by the second determining module 122.
Fig. 13 is a schematic structural diagram of another indoor positioning apparatus according to an exemplary embodiment of the present application, and as shown in fig. 13, on the basis of the embodiment shown in fig. 12, the first determining module 121 includes:
a first determining unit 1211, configured to determine a first positioning pattern that the node device passes through when entering the area;
a second determining unit 1212, configured to determine a first initial position coordinate of the node device when entering the area, based on the position coordinate of the first positioning pattern in the area determined by the first determining unit 1211.
In an embodiment, the second determining unit 1212 may be configured to:
determining a relative position of the first positioning pattern with respect to the first reference point;
based on the relative position of the first positioning pattern with respect to the first reference point and the first position coordinates of the first reference point in the area, first initial position coordinates of the node device when entering the area are determined.
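For illustration, the computation described for the second determining unit 1212 amounts to a vector addition; a minimal sketch, assuming two-dimensional coordinates in the area's coordinate system (the function and variable names are illustrative, not from the patent):

```python
def initial_position(first_ref_xy, pattern_offset_xy):
    # The coordinates of the first positioning pattern, i.e. the first
    # reference point's coordinates plus the pattern's relative position,
    # serve as the node device's first initial position coordinates.
    return (first_ref_xy[0] + pattern_offset_xy[0],
            first_ref_xy[1] + pattern_offset_xy[1])

# e.g. a reference point at (5.0, 2.0) and a pattern 1.5 m east of it
print(initial_position((5.0, 2.0), (1.5, 0.0)))  # -> (6.5, 2.0)
```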
In an embodiment, the second determining module 122 may include:
a third determining unit 1221, configured to determine a coordinate offset of the same feature point in two adjacent frames of images in an image plane coordinate system;
a fourth determining unit 1222, configured to determine the position offset between the capture of the two adjacent frames of images, based on the coordinate offset determined by the third determining unit 1221 and the lens imaging parameters of the camera module;
a fifth determining unit 1223, configured to determine a second relative position of the node device with respect to the first initial position coordinates at the time the previous frame of the two adjacent frames of images was captured;
a sixth determining unit 1224, configured to determine a first relative position of the node device with respect to the first initial position coordinate based on the position offset determined by the fourth determining unit 1222 and the second relative position determined by the fifth determining unit 1223.
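For illustration, units 1221 to 1224 together form a simple visual dead-reckoning loop. Below is a minimal sketch under the assumptions of a pinhole camera model, a downward-facing camera at a known height above the ground, and image axes aligned with the area's axes; the class, parameter names, and sign convention are illustrative, not taken from the patent:

```python
class OffsetTracker:
    def __init__(self, focal_length_px, camera_height_m):
        # Pinhole-model scale: metres on the ground per pixel in the image.
        self.metres_per_pixel = camera_height_m / focal_length_px
        # Second relative position: the offset from the first initial
        # position at the time the previous frame was captured.
        self.relative_x, self.relative_y = 0.0, 0.0

    def update(self, feature_prev, feature_curr):
        """feature_prev / feature_curr: (u, v) pixel coordinates of the
        same feature point in two adjacent frames."""
        # Coordinate offset in the image plane coordinate system (unit 1221).
        du = feature_curr[0] - feature_prev[0]
        dv = feature_curr[1] - feature_prev[1]
        # Position offset between the two shots via the lens imaging
        # parameters (unit 1222); the camera moves opposite to the
        # apparent motion of ground features.
        dx = -du * self.metres_per_pixel
        dy = -dv * self.metres_per_pixel
        # First relative position = second relative position + offset
        # (units 1223 and 1224).
        self.relative_x += dx
        self.relative_y += dy
        return self.relative_x, self.relative_y
```

The position coordinates of the node device in the area would then be the first initial position coordinates plus the first relative position returned by update().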
In an embodiment, the apparatus may further comprise:
a first obtaining module 124, configured to acquire first boundary feature information of a first boundary strip and second boundary feature information of a second boundary strip;
a fourth determining module 125, configured to determine a first time sequence in which the first obtaining module 124 identifies the first boundary strip and the second boundary strip;
a fifth determining module 126, configured to determine, based on the first time sequence determined by the fourth determining module 125 and the first boundary feature information and the second boundary feature information acquired by the first obtaining module 124, whether the node device enters the area or leaves the area, wherein, if the node device enters the area, the first determining module 121 is triggered to determine the first initial position coordinates of the node device upon entering the area.
In an embodiment, the apparatus may further comprise:
a second obtaining module 127, configured to obtain a first boundary identifier of the first boundary and a second boundary identifier of the second boundary;
a sixth determining module 128, configured to determine a second time sequence when the second obtaining module 127 obtains the first boundary identifier and the second boundary identifier;
a seventh determining module 129, configured to determine, based on the second time sequence determined by the sixth determining module 128 and the first boundary identifier and the second boundary identifier acquired by the second obtaining module 127, whether the node device enters the area or leaves the area, wherein, if the node device enters the area, the first determining module 121 is triggered to determine the first initial position coordinates of the node device upon entering the area.
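For illustration, the enter/leave decision from the time order of the two boundary identifiers might look as follows, assuming (as one possible layout) that the first boundary lies on the outside and the second on the inside of the area boundary; the convention and all names are illustrative, not from the patent:

```python
def crossing_direction(seen_order, outer_id="B1", inner_id="B2"):
    # seen_order: the two boundary identifiers in the time order in
    # which they were acquired while the node device crossed the boundary.
    if seen_order == [outer_id, inner_id]:
        return "enter"  # outside-in crossing
    if seen_order == [inner_id, outer_id]:
        return "leave"  # inside-out crossing
    return "unknown"

print(crossing_direction(["B1", "B2"]))  # -> 'enter'
```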
In an embodiment, the apparatus may further comprise:
an eighth determining module 1210, configured to determine, if it is detected that the node device moves to a sub-area in the area, a boundary identifier of the sub-area;
a ninth determining module 1211, configured to determine a second position coordinate of the second reference point of the sub-area in the area based on the boundary identifier determined by the eighth determining module 1210;
a tenth determining module 1212, configured to determine a third relative position of the node device in the sub-area with respect to the second reference point;
a location updating module 1213, configured to update the location coordinates of the node device in the area determined by the third determining module 123 according to the third relative location determined by the tenth determining module 1212 and the second location coordinates determined by the ninth determining module 1211.
In an embodiment, the eighth determining module 1210 is specifically configured to:
acquiring third boundary feature information of a third boundary strip and fourth boundary feature information of a fourth boundary strip;
determining a third time sequence in which the third boundary strip and the fourth boundary strip are identified;
and determining the boundary identifier of the sub-area based on the third time sequence, the third boundary feature information and the fourth boundary feature information.
In an embodiment, the eighth determining module 1210 is specifically configured to:
acquiring a third boundary identifier of the third boundary and a fourth boundary identifier of the fourth boundary;
determining a fourth time sequence when the third boundary identifier and the fourth boundary identifier are obtained;
and determining the boundary identifier of the sub-area from the third boundary identifier and the fourth boundary identifier based on the fourth time sequence.
In an embodiment, the ninth determining module 1211 is specifically configured to:
searching, from a first preset list, for the reference point identifier of the sub-area corresponding to the boundary identifier;
and determining, from a second preset list, the second position coordinate corresponding to the identifier of the second reference point, wherein the second preset list records the correspondence between reference point identifiers and the position coordinates of the reference points in the area.
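For illustration, the two preset lists behave as a chained key-value lookup; a minimal sketch with hypothetical list contents (the identifiers and coordinates are placeholders, not from the patent):

```python
# boundary identifier -> reference point identifier of the sub-area
FIRST_PRESET_LIST = {"sub-boundary-A": "ref-A"}
# reference point identifier -> position coordinates of the reference
# point in the area
SECOND_PRESET_LIST = {"ref-A": (10.0, 4.0)}

def second_position_coordinate(boundary_id):
    ref_id = FIRST_PRESET_LIST[boundary_id]
    return SECOND_PRESET_LIST[ref_id]

print(second_position_coordinate("sub-boundary-A"))  # -> (10.0, 4.0)
```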
In an embodiment, the tenth determining module 1212 is specifically configured to:
determining a second initial position coordinate of the node device when it enters the sub-area;
determining a fourth relative position of the node device in the sub-area with respect to the second initial position coordinate, based on the same feature point in two adjacent frames of images;
and determining a third relative position of the node device in the sub-area with respect to the second reference point, based on the positional relationship between the second initial position coordinate and the second reference point, and the fourth relative position.
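For illustration, the third relative position is obtained by chaining two offsets; a minimal sketch, assuming two-dimensional coordinates (the names are illustrative, not from the patent):

```python
def third_relative_position(second_initial_xy, second_ref_xy, fourth_rel_xy):
    # Offset of the second initial position from the second reference point.
    base_x = second_initial_xy[0] - second_ref_xy[0]
    base_y = second_initial_xy[1] - second_ref_xy[1]
    # Add the node device's fourth relative position (its offset from the
    # second initial position) to obtain its offset from the reference point.
    return (base_x + fourth_rel_xy[0], base_y + fourth_rel_xy[1])
```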
In an embodiment, the two adjacent frames of images are images of the ground captured by the camera module at a preset shooting frequency, and the camera module is located on the node device.
Fig. 14 is a schematic structural diagram of a product information recommendation device according to an exemplary embodiment of the present application, and as shown in fig. 14, the product information recommendation device includes: a position coordinate determination module 141, a commodity information determination module 142, and a commodity information push module 143.
A location coordinate determining module 141, configured to determine location coordinates of the node device in the area by using the indoor positioning method provided in any one of the embodiments of fig. 2A to 8A;
a commodity information determining module 142, configured to determine, based on the position coordinates determined by the position coordinate determining module 141, commodity information that needs to be recommended;
and the commodity information pushing module 143 is configured to push the commodity information that needs to be recommended and is determined by the commodity information determining module 142.
In an embodiment, the product information determining module 142 is specifically configured to:
determining the container identifier adjacent to the position coordinates of the node device in the area;
determining the commodity information associated with the container identifier;
and determining, from the associated commodity information, the commodity information whose price has been reduced as the commodity information to be recommended.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (17)

1. An indoor positioning method, characterized in that the method comprises:
determining a first initial position coordinate of a node device when the node device enters an area, wherein a camera module is arranged at the bottom of the node device and is configured to acquire images of the ground;
determining a coordinate offset, in an image plane coordinate system, of a same feature point in two adjacent frames of images, wherein the feature point is a pixel point identifiable by an image identification method;
determining a position offset between the capture of the two adjacent frames of images based on the coordinate offset and lens imaging parameters of the camera module;
determining a second relative position of the node device with respect to the first initial position coordinate at the time the previous frame of the two adjacent frames of images was captured;
determining, based on the position offset and the second relative position, a first relative position of the node device with respect to the first initial position coordinate as the node device moves in the area;
determining position coordinates of the node device within the area according to the first initial position coordinate and the first relative position;
the method further comprising:
if it is detected that the node device moves to a sub-area in the area, determining a boundary identifier of the sub-area;
determining a second position coordinate of a second reference point of the sub-area in the area based on the boundary identifier;
determining a third relative position of the node device in the sub-area with respect to the second reference point;
and updating the position coordinates of the node device in the area according to the third relative position and the second position coordinate.
2. The method of claim 1, wherein determining a first initial location coordinate of a node device upon entering an area comprises:
determining position coordinates of a first positioning pattern in an area, which a node device passes through when entering the area;
determining a first initial position coordinate of the node device upon entering the area based on the position coordinates of the first positioning pattern in the area.
3. The method of claim 2, wherein determining location coordinates in the area of a first positioning pattern traversed by a node device upon entering the area comprises:
determining a relative position, with respect to a first reference point, of the first positioning pattern that the node device passes through when entering the area;
determining the position coordinates of the first positioning pattern in the area based on the relative position of the first positioning pattern with respect to the first reference point and first position coordinates of the first reference point in the area.
4. The method of claim 1, further comprising:
acquiring first boundary characteristic information of a first boundary strip and second boundary characteristic information of a second boundary strip;
determining a first time sequence in which the first boundary strip and the second boundary strip are identified;
determining whether the node device enters the area or leaves the area based on the first time sequence, the first boundary feature information, and the second boundary feature information.
5. The method of claim 1, further comprising:
acquiring a first boundary identifier of a first boundary and a second boundary identifier of a second boundary;
determining a second time sequence when the first boundary identifier and the second boundary identifier are obtained;
determining whether the node device enters the area or leaves the area based on the second time sequence, the first boundary identifier, and the second boundary identifier.
6. The method of claim 1, wherein the determining the boundary identifier of the sub-area comprises:
acquiring third boundary feature information of a third boundary strip and fourth boundary feature information of a fourth boundary strip;
determining a third time sequence in which the third boundary strip and the fourth boundary strip are identified;
and determining the boundary identifier of the sub-area based on the third time sequence, the third boundary feature information and the fourth boundary feature information.
7. The method of claim 1, wherein the determining the boundary identifier of the sub-area comprises:
acquiring a third boundary identifier of the third boundary and a fourth boundary identifier of the fourth boundary;
determining a fourth time sequence when the third boundary identifier and the fourth boundary identifier are obtained;
determining the boundary identifier of the sub-area from the third boundary identifier and the fourth boundary identifier based on the fourth time sequence.
8. The method of claim 1, wherein the determining second position coordinates of a second reference point of the sub-area in the area based on the boundary identifier comprises:
searching, from a first preset list, for the reference point identifier of the sub-area corresponding to the boundary identifier;
and determining, from a second preset list, the second position coordinate corresponding to the identifier of the second reference point, wherein the second preset list records the correspondence between reference point identifiers and the position coordinates of the reference points in the area.
9. The method of claim 1, wherein the determining a third relative position of the node device in the sub-area with respect to the second reference point comprises:
determining a second initial position coordinate of the node device when it enters the sub-area;
determining a fourth relative position of the node device in the sub-area with respect to the second initial position coordinate, based on the same feature point in two adjacent frames of images;
determining a third relative position of the node device in the sub-area with respect to the second reference point based on the positional relationship between the second initial position coordinate and the second reference point, and the fourth relative position.
10. The method according to any one of claims 1 to 9, wherein the two adjacent frames of images are images of the ground captured by a camera module at a preset shooting frequency, and the camera module is located on the node device.
11. A commodity information recommendation method, characterized in that the method comprises:
determining the position coordinates of the node device in the area by the indoor positioning method according to any one of claims 1 to 10;
determining commodity information needing to be recommended based on the position coordinates;
and pushing the information of the commodities needing to be recommended.
12. The method of claim 11, wherein the determining, based on the position coordinates, the commodity information that needs to be recommended comprises:
determining the container identifier adjacent to the position coordinates of the node device in the area;
determining the commodity information associated with the container identifier;
and determining, from the associated commodity information, the commodity information whose price has been reduced as the commodity information to be recommended.
13. An indoor positioning device, the device comprising:
a first determining module, configured to determine a first initial position coordinate of a node device when the node device enters an area, wherein a camera module is arranged at the bottom of the node device and is configured to acquire images of the ground;
a second determining module, configured to: determine a coordinate offset, in an image plane coordinate system, of a same feature point in two adjacent frames of images, wherein the feature point is a pixel point identifiable by an image identification method; determine a position offset between the capture of the two adjacent frames of images based on the coordinate offset and lens imaging parameters of the camera module; determine a second relative position of the node device with respect to the first initial position coordinate at the time the previous frame of the two adjacent frames of images was captured; and determine, based on the position offset and the second relative position, a first relative position of the node device with respect to the first initial position coordinate as the node device moves in the area;
a third determining module, configured to determine, according to the first initial position coordinate determined by the first determining module and the first relative position determined by the second determining module, a position coordinate of the node device within the area;
an eighth determining module, configured to determine, if it is detected that the node device moves to a sub-area in the area, a boundary identifier of the sub-area;
a ninth determining module, configured to determine, based on the boundary identifier determined by the eighth determining module, a second position coordinate of a second reference point of the sub-area in the area;
a tenth determining module, configured to determine a third relative position of the node device in the sub-area with respect to the second reference point;
and the position updating module is used for updating the position coordinates of the node equipment in the area determined by the third determining module according to the third relative position determined by the tenth determining module and the second position coordinates determined by the ninth determining module.
14. An article information recommendation apparatus characterized in that the apparatus comprises:
a location coordinate determination module, configured to determine location coordinates of the node device in the area through the indoor positioning method according to any one of claims 1 to 10;
the commodity information determining module is used for determining commodity information to be recommended based on the position coordinates determined by the position coordinate determining module;
and the commodity information display module is used for displaying the commodity information which is determined by the commodity information determination module and needs to be recommended.
15. A computer-readable storage medium storing a computer program for executing the indoor positioning method according to any one of claims 1 to 10 or the commodity information recommendation method according to any one of claims 11 to 12.
16. An electronic device, characterized in that the electronic device comprises:
a processor; a memory for storing processor-executable instructions;
wherein the processor is configured to perform the indoor positioning method according to any one of claims 1 to 10.
17. An electronic device, characterized in that the electronic device comprises:
a processor; a memory for storing processor-executable instructions;
wherein the processor is configured to execute the merchandise information recommendation method according to any one of claims 11 to 12.
CN201710526796.8A 2017-06-30 2017-06-30 Indoor positioning method, commodity information recommendation method and device and electronic equipment Active CN107273907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710526796.8A CN107273907B (en) 2017-06-30 2017-06-30 Indoor positioning method, commodity information recommendation method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN107273907A CN107273907A (en) 2017-10-20
CN107273907B true CN107273907B (en) 2020-08-07

Family

ID=60070539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710526796.8A Active CN107273907B (en) 2017-06-30 2017-06-30 Indoor positioning method, commodity information recommendation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN107273907B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948479B (en) * 2019-03-06 2021-11-02 百度在线网络技术(北京)有限公司 Factory monitoring method, device and equipment
CN114111780A (en) * 2020-08-26 2022-03-01 深圳市杉川机器人有限公司 Positioning error correction method, device, self-moving equipment and system
CN112437487B (en) * 2021-01-26 2021-04-16 北京深蓝长盛科技有限公司 Position positioning method, event identification method, device and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639345A (en) * 2009-08-03 2010-02-03 塔米智能科技(北京)有限公司 Indoor locating method
CN101854384A (en) * 2010-04-29 2010-10-06 浙江大学城市学院 Supermarket shopping guiding and advertising system and control method based on wireless sensor network
CN102749072A (en) * 2012-06-15 2012-10-24 易程科技股份有限公司 Indoor positioning method, indoor positioning apparatus and indoor positioning system
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN106708048A (en) * 2016-12-22 2017-05-24 清华大学 Ceiling image positioning method of robot and ceiling image positioning system thereof
CN106780553A (en) * 2016-11-18 2017-05-31 腾讯科技(深圳)有限公司 A kind of shift position of aircraft determines method, device and aircraft

Also Published As

Publication number Publication date
CN107273907A (en) 2017-10-20

Similar Documents

Publication Publication Date Title
Wasenmüller et al. Comparison of kinect v1 and v2 depth images in terms of accuracy and precision
CN107273907B (en) Indoor positioning method, commodity information recommendation method and device and electronic equipment
CN110411441B (en) System and method for multi-modal mapping and localization
US9928438B2 (en) High accuracy localization system and method for retail store profiling via product image recognition and its corresponding dimension database
US10147192B2 (en) Coordinate-conversion-parameter determination apparatus, coordinate-conversion-parameter determination method, and non-transitory computer readable recording medium having therein program for coordinate-conversion-parameter determination
CN108700947A (en) For concurrent ranging and the system and method for building figure
Lee et al. Low-cost 3D motion capture system using passive optical markers and monocular vision
CN102763132A (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
CN108700946A (en) System and method for parallel ranging and fault detect and the recovery of building figure
JP2017526082A (en) Non-transitory computer-readable medium encoded with computer program code for causing a motion estimation method, a moving body, and a processor to execute the motion estimation method
US11915192B2 (en) Systems, devices, and methods for scanning a shopping space
JP2018115072A (en) Information collection device and information collection system
JP6969668B2 (en) Video monitoring device, its control method, and program
US10991105B2 (en) Image processing device
WO2015125300A1 (en) Local location computation device and local location computation method
WO2013135968A1 (en) Method, arrangement, and computer program product for coordinating video information with other measurements
JP2018205870A (en) Object tracking method and device
CN111429194B (en) User track determination system, method, device and server
CN109313822B (en) Virtual wall construction method and device based on machine vision, map construction method and movable electronic equipment
CN107438863B (en) Flight positioning method and device
CN109416251B (en) Virtual wall construction method and device based on color block labels, map construction method and movable electronic equipment
Neves et al. A calibration algorithm for multi-camera visual surveillance systems based on single-view metrology
CN104937608A (en) Road region detection
Jackson et al. Registering aerial video images using the projective constraint
Godil et al. 3D ground-truth systems for object/human recognition and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant