The invention relates to a mobile indoor navigation system.
Background to the invention

Prior art embodiments known to the applicants are wireless local area network (Wi-Fi) and Bluetooth triangulation systems. However, both of these systems require indoor electronic hardware to function.
Also known to the applicant is satellite navigation technology which uses global positioning systems in order to navigate roads or the like. The disadvantage of this technology is that it has not been adapted to work in an indoor environment in order to guide a user to a target destination within premises.
The following prior art documents are acknowledged: US2013/0045751, US2010/0092034, US2008/0137912, JP2009222682, KR 10-2012-0116202. None of the prior art documents identified addresses the problems which the invention tackles, let alone envisages the solutions proposed by the applicant.
US 2013/0045751 A1 relies on the detection of ordinary, irregularly disposed logos such as the brands on a store's façade. It provides a very rough estimate of location, but specific real time guidance cannot be achieved in this prior art configuration.
US 2010/0092034 concerns tags disposed on the wall of a building and the determination of an individual's position through the analysis of deformation of a tag. It does not provide real time guidance or navigation to an individual.
US 2008/0137912 concerns a barcode-based system for determining a user's position. The configuration relies on barcodes mounted on a wall and a camera configured to simulate a user's view. It does not provide readable real time navigation to a user.
JP 2009-222682 provides visual indicators which may be detected by a camera for the identification of objects, in very much the same way as Braille provides a means to communicate an object's purpose to the visually impaired.
In a first broad independent aspect, the invention provides a mobile indoor navigation system comprising:
• a mobile communication device incorporating a user interface; a display for navigating a user from a first location to a second location; and a camera;
• a number of passive navigation markers disposed at a number of locations, at least one of which in use is viewed by said camera as said marker is in the field of view of said camera; and
• a processor which compares signals representative of said viewed passive navigation markers with signals stored on a database and displays navigation information dependent upon said comparison.
This configuration provides a system by which a user may navigate premises such as a supermarket in order to efficiently locate desired products. It particularly allows a user to hold a mobile communication device in an ordinary interfacing position, i.e. with the user holding the device in the palm of his/her hand, whilst the camera captures, in preferred embodiments, ceiling or floor images where passive markers are disposed. Preferably the passive navigation markers are disposed at regular intervals which provide real time updates of a map displayed for the user's guidance. In other words, indoor navigation has been simplified when compared to the prior art embodiments discussed above, where the user would be required to identify the logos or barcodes and point in effect to them in a manner rendering the simultaneous display of navigation information impractical. Whilst simplifying the user's interaction with the navigation system, the only electronic device required may be the communication device. This may be a phone, tablet or an electronic trolley. No other hardware may be required within the premises to allow the system to function. In a preferred embodiment, the system incorporates no additional hardware.
Preferably, said mobile communication device may be configured to scan for prominent features and/or one or more particular highlights in the front or rear facing cameras' field of view and may be configured to track their movement across the field of view, thus identifying lateral movement of the communication device in relation to the environment.
In a preferred embodiment, the device will be configured to scan to identify right-angles.
In a further subsidiary aspect, the device will be configured to scan to identify markers which may for example comprise geometric shapes.
In a further subsidiary aspect, the device comprises means for identifying lights and/or light fittings. This may be achieved by identifying an area with a characteristic corresponding to a light. In a further subsidiary aspect, the device is configured to identify a static line, pipe, or cabling in order to determine a direction of travel.
Preferably, said processor determines and assesses the distance between a passive navigation marker in the field of view and the mobile communication device; and displays navigation information dependent upon said assessment. This configuration allows accurate positioning of a user within the premises and therefore accurate navigation towards a destination. Preferably, the mobile communication device incorporates a screen and a camera in substantially the same plane as the screen; and the device's screen remains in a plane facilitating the user's viewing whilst capturing images of a passive marker located on a substantially horizontal plane such as a ceiling or a floor.
Preferably, said processor determines and assesses one or more angles between a passive navigation marker in the field of view and the mobile communication device; and displays navigation information dependent upon said assessment. This configuration allows accurate positioning, and preferably orientation, of a user within the premises and therefore accurate navigation towards a destination.

Preferably, said mobile communication device incorporates an accelerometer and the selection of the navigation information is at least partly dependent on values derived from said accelerometer. This provides the advantage of more accurately determining the position of a user in comparison to the use of the processor alone.

Preferably, said mobile communication device locks the exposure of said camera in order to most effectively read said markers. This configuration allows the processor to effectively compare signals representative of the viewed passive navigation markers with signals stored on a database in variable light conditions. Preferably, said mobile communication device will automatically lock the exposure of the camera to best suit the reading of the markers in variable lighting conditions.
Preferably, said communication device utilises both front and rear facing cameras. This allows markers positioned on the ceiling, floor or wall to be identified by the mobile communication device.
Preferably, said front and rear facing cameras are utilised simultaneously. This
configuration provides the advantage of allowing the mobile communication device to identify navigational markers on the ceiling, floor or wall of premises at the same time.
Preferably, said mobile communication device incorporates a compass and the selection of the navigation information is at least partly dependent on values derived from said compass. This provides the advantage of more accurately determining the position of a user in comparison to the use of the processor alone.
Preferably, said mobile communication device incorporates an accelerometer and/or a gyroscope which are configured to identify displacement characteristics and compare displacement characteristics against pre-determined displacement characteristics; whereby a mode of displacement may be determined. Embodiments of this configuration are particularly advantageous since the system avoids any reliance on a compass, which may be prone to errors due to electro-magnetic interference.
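As a sketch of how displacement characteristics might be compared against pre-determined profiles, the following minimal example classifies accelerometer samples by the variance of their magnitude. The profile names and variance bands are illustrative assumptions, not values from the specification.

```python
# Sketch: determining a mode of displacement by comparing a simple
# displacement characteristic (variance of acceleration magnitude)
# against pre-determined profiles. All band values are hypothetical.
from statistics import pvariance

# Hypothetical pre-determined profiles: variance bands in (m/s^2)^2.
PROFILES = {
    "stationary": (0.0, 0.05),
    "trolley":    (0.05, 0.5),
    "walking":    (0.5, 4.0),
}

def classify_displacement(magnitudes):
    """Return the profile whose variance band contains the samples' variance."""
    v = pvariance(magnitudes)
    for mode, (lo, hi) in PROFILES.items():
        if lo <= v < hi:
            return mode
    return "unknown"

samples = [9.8, 10.9, 8.7, 11.2, 8.5, 10.6, 9.1]  # oscillation typical of walking
print(classify_displacement(samples))  # walking
```

A real implementation would use richer characteristics (step frequency, gyroscope rotation rates) rather than variance alone, but the comparison-against-profiles structure would be the same.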
Preferably, said mobile communication device incorporates an accelerometer and/or a gyroscope which are configured to determine the orientation of the device and display navigation information in accordance with said determined orientation.
Preferably, said mobile communication device determines its orientation dependent upon whether a passive navigation marker is in the field of view of said camera; and dependent solely upon data obtained from said gyroscope and/or accelerometer if no passive navigation marker is present in the field of view.
Preferably, said mobile communication device incorporates a pedometer. This
configuration coupled with the orientation detection can allow map details to be accurately updated in real time to provide improved indoor navigation.
Preferably, said mobile communication device records a user's walking movement in order to determine the user's walking profile and compares this profile against pre-determined profiles.
Preferably, at least one of said passive navigation markers is asymmetric; whereby the position of the user relative to the marker is determined. This configuration provides the advantage of determining in which direction a user is travelling. As a navigation marker is asymmetric, at any point when it is in the field of view of said camera, the processor may determine in which direction the user is travelling, towards or away from the marker.
Preferably, said asymmetric navigation markers are located on the ceiling, floor or wall of a building. This enables the front or back facing camera of the mobile communication device to position the user accurately within the premises.
Preferably, said navigation markers are held apart from said ceiling or floor by one or more fasteners. Preferably, said navigation markers can be visible solely in the infra-red spectrum. This configuration allows accurate detection by the camera of the mobile communication device, where detection in other parts of the spectrum may be difficult.
Preferably, the initial location of said user can be selected from a menu in said user interface in order to determine the location of said user. This provides the advantage of allowing the user to select their initial location if this cannot be otherwise determined using the navigation system.
Preferably, global positioning system (GPS) data is used for positioning said user in an outdoor environment.
Preferably, a predetermined list of product destinations may be stored within said navigation system. This allows a user to compile a list of products, such as a shopping list, that upon entering the premises can be mapped using said navigation system in order to formulate the optimum path by which a user can travel to efficiently locate all sought after products.
Preferably, said processor launches said navigation system dependent on the detection of a Quick Response (QR) code. This configuration allows the launching of the processor without the requirement for additional hardware within the building. The QR code can also direct the user online, if they have not already done so, to download and install the navigation system. Additionally, said QR codes may generate the latest special offers to be shown on a user's mobile device each time they launch the system. Optionally, said user interface overlays said navigation information display over the onscreen camera view of said mobile communication device. This configuration allows the user to see the optimum path generated laid over the image seen by the mobile device so that they may more easily be able to locate a product.
Preferably, the navigation information may be overlaid on a floor map rather than the camera view. This will take the shape of a line to follow, which updates as the user moves to new detectable locations.
Figure 1 shows the interaction between a mobile device and said asymmetric navigation markers. Figure 2 shows the measurement of pitch, roll and yaw angles of the communication device and the marker position in the field of view, to calculate the lateral distance of the device from the marker as a polar co-ordinate.
Figure 3 shows a user holding a communication device.
Figure 1 shows a mobile communication device, indicated by 1, that interacts with asymmetric navigation markers placed below 2 or above 3, horizontally spaced between 3 to 10 metres from each other, within premises. Each marker incorporates a number of shapes spatially arranged so that the marker may not be superimposed on its mirror image. Thus from any direction the view of the marker is different, therefore allowing the navigation system to determine in which direction the user is travelling. In a preferred embodiment, the marker incorporates two or more rectangles. In a further preferred embodiment, the marker incorporates a first rectangle at 90° from a second rectangle. In a further preferred embodiment the markers are infrared markers and the camera is adapted to operate in the infrared spectrum. The markers are located on either the floor, ceiling or other high level fixing point such as structural beams or lighting rigs in said premises. The view frustum (field of view) of the camera is shown at 4.
Figure 2 shows the process of calculating the lateral distance and angle as a polar co-ordinate 5 from the marker 3, by comparing the pitch, roll, and yaw angle of the communication device 1 with the height 6 and the position of the marker 3 in the communication device camera view frustum (field of view) 4.
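The calculation illustrated in Figure 2 might be sketched as follows, assuming for simplicity a level device (the accelerometer tilt correction described later is omitted). The function name, field-of-view parameters, and the convention of measuring the offset from the frame centre are illustrative assumptions.

```python
# Sketch: lateral offset of the device from an overhead marker, expressed
# as a polar co-ordinate (range, bearing). Assumes a flat, level device.
import math

def lateral_polar(marker_pos_x, marker_pos_y, angle_of_view_x, angle_of_view_y,
                  marker_height, phone_height):
    """marker_pos_* : marker centre in the frame as percentages
    (0%,0% = top left of frame); angles in radians; heights in metres."""
    h = marker_height - phone_height
    fov_x = 2 * math.tan(angle_of_view_x / 2) * h   # viewable ceiling width
    fov_y = 2 * math.tan(angle_of_view_y / 2) * h   # viewable ceiling depth
    # Offset from the frame centre (50%, 50%) in metres.
    dx = fov_x * (marker_pos_x / 100 - 0.5)
    dy = fov_y * (marker_pos_y / 100 - 0.5)
    r = math.hypot(dx, dy)      # lateral distance
    theta = math.atan2(dy, dx)  # bearing in radians
    return r, theta

# A marker dead-centre in the frame gives zero lateral offset.
r, theta = lateral_polar(50, 50, math.radians(60), math.radians(45), 3.0, 1.2)
```

The specification's later Stage 3 formulae express the same geometry as Cartesian offsets; the polar form here simply packages them as the (range, bearing) co-ordinate 5 of Figure 2.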
Figure 3 shows the user holding their communication device 1, the camera field of view 4 and the live map interface 7. This position allows the user to comfortably view the screen in order to readily follow navigation instructions whilst at the same time allowing the camera to capture the requisite images. In that sense, the operation of navigation is simplified to the extent that the user needn't be aware of the position of the markers.
As a user enters the premises, the mobile indoor navigation system is launched on the mobile communication device 1. Initially, the navigation system can be installed on the mobile communication device 1 by scanning a QR code upon entry to the premises.
Upon launching of the navigation system, the mobile device 1 will determine the location of the user based on GPS positioning. This may be used to determine which store/building they are in, and which map and database to use. If this fails, however, the user can select their location through a menu contained in the navigation system. Optionally, the user may sign into an account associated with the premises, such as an online shopping account, or loyalty card account.
Once the navigation system is launched, the user may select a desired item that is located within the premises. The navigation system will subsequently search for the location of the chosen item within a database of item locations for the premises. The navigation system will plot the location of the chosen item on an on-screen map 7 of the premises, generated on the mobile device 1. The navigation system will provide a path for the user to follow from their current position to the location of the desired item. The navigation system will track the user's location as they travel towards the item destination, providing feedback related to the user's position relative to the target location of the item, using the asymmetric navigation markers 2, 3 and the field of view 4 of the front or rear camera of the mobile device 1. Alternatively, the navigation system may indicate useful information, such as special offer information, to the user as they travel through the premises. This useful information may be based on known purchasing patterns of the user, obtained through the account associated with the premises.
Alternatively, the user may pre-load a list of desired items located within the premises. The navigation system will determine an optimised route, by locating the next nearest item on the user's list, for the user to follow in order to efficiently locate all desired items. Alternatively, the user may scan a QR code as they enter the premises which is associated with a special offer or other points of interest. The navigation system will then guide the user to the location of the special offer or point of interest in the same manner as described above. If the navigation system is not yet installed on the mobile device, scanning of the QR code will direct the user to an online location where they will be able to download and install the navigation system.
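The "next nearest item" route optimisation described above can be sketched as a greedy ordering over the pre-loaded list. The item names, co-ordinates, and the straight-line distance metric are illustrative assumptions; a deployed system would route along aisles.

```python
# Sketch: greedy route over a shopping list, repeatedly visiting the
# nearest remaining item from the current position on the store map.
import math

def plan_route(start, item_locations):
    """start: (x, y) in metres; item_locations: {item_name: (x, y)}.
    Returns the visiting order of the items."""
    route, pos = [], start
    remaining = dict(item_locations)
    while remaining:
        name = min(remaining, key=lambda n: math.dist(pos, remaining[n]))
        pos = remaining.pop(name)
        route.append(name)
    return route

items = {"milk": (20, 5), "bread": (2, 3), "tea": (8, 12)}
print(plan_route((0, 0), items))  # ['bread', 'tea', 'milk']
```

Greedy nearest-neighbour ordering is not guaranteed optimal over the whole list, but it is cheap to compute on a phone and matches the "next nearest item" behaviour the specification describes.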
In a preferred embodiment, detection of the user's location is determined through the detection of visual asymmetric navigation markers 2, 3 that are mounted at ceiling level, on the floor, or at another high level fixing point of the premises. Each asymmetric navigation marker represents a unique location within the premises and may be detected by the front or rear camera of the mobile device. When an asymmetric navigation marker passes into the field of view of the front or rear camera of the mobile device 1, its unique location is determined and compared against a map and an associated database of each unique location held internally within the navigation system. From this, the user's location can be determined and plotted on a schematic map of the premises. As the asymmetric marker moves through the field of view 4 of the front or rear camera, the user's speed and direction can be determined. The accuracy of this may be increased by employing the accelerometer and/or magnetic compass of the mobile device in conjunction with the navigation system.
In a further embodiment, a pedometer might be employed to measure the number of steps taken by the user in order to better predict the location of a user.
In a preferred embodiment, the user's position relative to the target item destination would be presented as a schematic map of the premises with the optimum route plotted on the map. Alternatively, navigation information can be presented as a compass-style arrow which points in the direction of the target item destination. Alternatively, navigation information can be presented over the on-screen view of the camera, in a 3-dimensional presentation, known as Augmented Reality (AR).
Additionally, when the user is travelling, navigation information, in the form of a map or otherwise, would rotate and move in relation to the direction in which the user is facing. In a preferred embodiment, the following processing steps are followed:
Stage 1: Marker Detection
The front or rear facing cameras of the communication device are initialised and may preferably be set to constantly scan for markers using the ArUco augmented reality library.
The optical processing via the ArUco system is as follows:

• Marker detection
• Apply adaptive thresholding so as to obtain the borders of markers.
• Find contours. After this step, not only the real markers are detected but also a lot of undesired borders. The process then aims to filter out the unwanted borders.
• Remove borders with a small number of points.
• Polygonal approximation of the contour, preferably keeping the convex contours with exactly 4 corners (i.e., rectangles).
• Sort corners, preferably in an anti-clockwise direction.
• Remove rectangles which are too close to one another. This is required because the adaptive threshold normally detects both the internal and external parts of the marker's border. At this stage, the system preferably keeps the most external border.

Stage 2: Marker Identification
• Marker identification
• Remove the projection perspective so as to preferably obtain a frontal view of the rectangle area using homography.
• Threshold the area, preferably using Otsu's method. Otsu's algorithm assumes a bimodal distribution and finds the threshold that maximises the inter-class variance while keeping a low intra-class variance.
• Identification of the internal code. If it is a marker, then it has an internal code in preferred embodiments. The marker is preferably divided into a 7x7 grid, of which the internal 5x5 cells contain ID information; the rest corresponds to the external black border. Here, we first check that the external black border is present. Afterwards, we read the internal 5x5 cells and check whether they provide a valid code (it might be required to rotate the code to obtain the valid one).
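The rotation check in the final step might be sketched as follows: read the inner 5x5 cells as a bit grid and test its four 90° rotations against a codebook of valid IDs. The bit pattern and the codebook below are illustrative, not genuine ArUco data.

```python
# Sketch: validating a marker's internal 5x5 code, allowing for the
# marker appearing in any of four orientations in the camera frame.
def rotations(grid):
    """Yield the 5x5 bit grid in its four 90-degree rotations."""
    g = [row[:] for row in grid]
    for _ in range(4):
        yield g
        g = [list(row) for row in zip(*g[::-1])]  # rotate 90 deg clockwise

def identify(grid, codebook):
    """Return the marker ID if any rotation matches a valid code, else None."""
    for g in rotations(grid):
        key = tuple(tuple(row) for row in g)
        if key in codebook:
            return codebook[key]
    return None

# Hypothetical codebook with one valid, rotationally asymmetric code -> ID 7.
code = ((1, 0, 1, 1, 0),
        (0, 1, 0, 0, 1),
        (1, 1, 0, 1, 0),
        (0, 0, 1, 0, 1),
        (1, 0, 0, 1, 1))
codebook = {code: 7}

# A grid seen rotated by 90 degrees still identifies as marker 7.
rotated_once = [list(row) for row in zip(*[list(r) for r in code][::-1])]
print(identify(rotated_once, codebook))  # 7
```

Because a valid code matches in exactly one rotation, the matching rotation also reveals the marker's orientation, which is what makes the asymmetric markers directional.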
The marker identification value is preferably stored in memory and the centre of the marker is preferably identified in relation to the pixel dimensions of the camera frame, as a percentage in X and Y referred to as MarkerPos_X and MarkerPos_Y, 0%,0% being the top left of the frame.

Stage 3: Relative Position Calculation
If the communication device is flat and level with the camera facing straight up or down, then the lateral position of the communication device relative to the marker can be identified as follows.
The frustum of the camera field of view is calculated for both X and Y by:

FOV_X = 2 * tan(AngleOfView_X / 2) * (MarkerHeightFromGround - AveragePhoneHeightFromGround)
FOV_Y = 2 * tan(AngleOfView_Y / 2) * (MarkerHeightFromGround - AveragePhoneHeightFromGround)
Once the frustum is obtained, the total cross-sectional area of the viewable ceiling or floor is determined.
The percentage positioning of the marker in the camera frame is combined with the FOV values to calculate the lateral position:
LateralPosition_Y = (FOV_Y * (MarkerPos_Y / 100)) / 2
LateralPosition_X = (FOV_X * (MarkerPos_X / 100)) / 2
As the communication device will likely not be completely flat, the accelerometer values are used to calculate the angle of the phone to the ground in Pitch (Y) and Roll (X).
These Pitch (Ay) and Roll (Ax) values are used to compensate for the angle at which the communication device is being held, by a gain factor calibrated during installation:

LateralPosition_X_corrected = LateralPosition_X - (Ax * gainfactor)
LateralPosition_Y_corrected = LateralPosition_Y - (Ay * gainfactor)
This helps in differentiating the preferred embodiment from prior art embodiments, which tend to use the deformation and extrinsics of the marker to calculate lateral position rather than the accelerometer data from the mobile communication device.

Stage 4: Real World Position Calculation
With the Marker ID identified, the system preferably looks up its real world position via a lookup table and then uses the LateralPosition_X_corrected and LateralPosition_Y_corrected values to calculate the offset value, and thus the communication device's real world position in X, Y, and Z (floor):
• MarkerRealWorldPosition_X + LateralPosition_X_corrected
• MarkerRealWorldPosition_Y - LateralPosition_Y_corrected
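Pulling Stages 3 and 4 together, the position calculation might be sketched as follows. The field-of-view angles, heights, gain factor, and marker look-up table are illustrative assumptions; the formulae and the sign convention follow the specification's Stage 3 and Stage 4 expressions.

```python
# Sketch: Stage 3 (frustum, lateral position, tilt compensation) followed
# by Stage 4 (real-world look-up). All numeric parameters are hypothetical.
import math

MARKER_TABLE = {42: (12.0, 30.0, 0)}   # hypothetical: Marker ID -> (X, Y, floor)
GAIN = 0.01                            # tilt gain factor, set at installation

def real_world_position(marker_id, pos_x, pos_y, ax, ay,
                        aov_x=math.radians(60), aov_y=math.radians(45),
                        marker_h=3.0, phone_h=1.2):
    """pos_x/pos_y: marker centre in the frame as percentages (0,0 = top left);
    ax/ay: Roll and Pitch angles from the accelerometer, in degrees."""
    h = marker_h - phone_h
    fov_x = 2 * math.tan(aov_x / 2) * h          # viewable ceiling width
    fov_y = 2 * math.tan(aov_y / 2) * h          # viewable ceiling depth
    lat_x = (fov_x * (pos_x / 100)) / 2          # Stage 3 lateral position
    lat_y = (fov_y * (pos_y / 100)) / 2
    lat_x_c = lat_x - ax * GAIN                  # compensate for device tilt
    lat_y_c = lat_y - ay * GAIN
    mx, my, floor = MARKER_TABLE[marker_id]      # Stage 4 look-up
    return mx + lat_x_c, my - lat_y_c, floor     # signs as in the specification

x, y, floor = real_world_position(42, 50.0, 50.0, 0.0, 0.0)
```

With a level device and the marker at mid-frame, the result is the marker's tabled position offset by half the half-frustum in each axis, illustrating how the percentage frame position maps onto metres on the shop floor.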