CN113674362B - Indoor imaging positioning method and system based on spatial modulation - Google Patents


Info

Publication number
CN113674362B
CN113674362B (application CN202110972401.3A)
Authority
CN
China
Prior art keywords
feature
positioning
image
features
rear camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110972401.3A
Other languages
Chinese (zh)
Other versions
CN113674362A (en)
Inventor
冯立辉
杨景宏
杨爱英
陈威
卢继华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110972401.3A priority Critical patent/CN113674362B/en
Publication of CN113674362A publication Critical patent/CN113674362A/en
Application granted granted Critical
Publication of CN113674362B publication Critical patent/CN113674362B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G03 - PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 15/00 - Special procedures for taking photographs; Apparatus therefor
    • G03B 15/02 - Illuminating scene
    • G03B 15/03 - Combinations of cameras with lighting apparatus; Flash units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition
    • G06K 7/1408 - Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K 7/1417 - 2D bar codes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30204 - Marker

Abstract

The invention relates to an indoor imaging positioning method and system based on spatial modulation, and belongs to the technical field of indoor positioning and position identification. The system comprises a feature code, an LED lamp, a positioning terminal and a positioning module; the feature code is attached to the LED lamp, and the positioning terminal is connected with the positioning module. The feature code comprises features and a two-dimensional code; each feature is a sector with a gradient fill, and the features are distinguished by different angles; each row of the two-dimensional code is encoded with black and white stripes. The method comprises the following steps: setting the working distance between the LED lamp and the rear camera; determining the optimal exposure time and ISO value; determining the optimal transparency of the feature code; calibrating the rear camera and determining its intrinsic matrix and distortion coefficients; initializing the system; acquiring image information; and processing the image information to solve the 2D coordinates of the rear camera. The method avoids modification of the LEDs and reduces cost; it can provide a large number of IDs, meeting the positioning requirements of large environments, and solves the problem that artificial features are easily affected by illumination conditions.

Description

Indoor imaging positioning method and system based on spatial modulation
Technical Field
The invention relates to an indoor imaging positioning method and system based on spatial modulation, and belongs to the technical field of indoor positioning and position identification.
Background
With the development of 5G communication, the goal of the Internet of Everything is becoming achievable, and demand for location-based services (LBS) is growing accordingly. Many indoor positioning systems based on different principles already exist, such as Wireless Local Area Network (WLAN) based schemes, Ultra-Wideband (UWB) techniques, ultrasound techniques, infrared (IR) techniques, inertial-navigation techniques and indoor visible light positioning. Compared with other indoor positioning technologies, indoor visible light positioning offers clear advantages in high precision, low energy consumption, low cost and easy integration with commercial lamps, and has therefore attracted increasing attention from researchers.
There are a number of mature indoor positioning schemes with high accuracy and high robustness, but most current solutions require modification of the LED light source, which increases cost. The number of distinguishable LED IDs is also limited by the modulation frequency: a single modulation frequency can provide only 20-30 different IDs, which severely limits application in large environments such as supermarkets. Some methods can provide thousands of IDs, but decoding one ID requires taking multiple pictures, which increases power consumption. Other methods attach an external feature map, avoiding modification of the LEDs, and rely on feature-point extraction; these methods leave considerable room for improvement in computation time, and their positioning accuracy is susceptible to illumination.
Disclosure of Invention
Aiming at the problems of existing indoor imaging positioning methods and systems, namely that the light source must be modified, that the ID space is limited or that providing long IDs increases energy consumption, that feature-map based schemes are easily affected by illumination, and that existing indoor positioning systems require a special number and a corresponding output signal to be set for each LED lamp during LED production and installation, with LED lamp information recorded one by one during use, making system construction cumbersome and positioning error-prone, the invention proposes an indoor imaging positioning method and system based on spatial modulation.
The indoor imaging positioning system based on spatial modulation comprises a feature code, an LED lamp, a positioning terminal and a positioning module;
the feature codes are attached to the LED lamp and comprise K features and two-dimensional codes, and the feature codes are obtained by modulating the position information of the LEDs; the LED lamp provides a stable background for the artificial feature in the feature code, so that the feature is not easily affected by the change of the illumination condition;
wherein K is greater than or equal to 4, each feature is a sector with gradient gradual change, and the differences among the features are reflected by different angles; each feature is used for representing the real position of the corresponding feature;
the decoding method adopts a uniform pooling method, and decoding information is obtained by comparing the average value of total pixels in a pooling window with a preset threshold value;
the two-dimensional code comprises ID number, X coordinate, Y coordinate and Z coordinate information of the LED lamp, wherein XYZ is three-dimensional information of a GPS or a self-defined reference system;
each row of the two-dimensional code is encoded by using T black and white stripes, wherein a black pixel block represents logic '0' and a white pixel block represents logic '1'; T bits can provide the system with 2^T IDs, and if S rows carry the ID number, the ID can be encoded in S×T bits, giving 2^(S×T) available IDs;
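The capacity claim above can be sketched in a few lines of Python (a minimal illustration of the row encoding; the function names are ours, not from the patent):

```python
# Sketch of the row-wise two-dimensional code capacity described above.
# T black/white blocks per row, S rows reserved for the ID.

def ids_per_row(T: int) -> int:
    """A row of T binary (black/white) blocks encodes 2**T distinct values."""
    return 2 ** T

def total_ids(S: int, T: int) -> int:
    """S rows of T blocks give S*T bits, i.e. 2**(S*T) distinct IDs."""
    return 2 ** (S * T)

def encode_row(value: int, T: int) -> str:
    """Encode an integer as T blocks: '0' = black block, '1' = white block."""
    return format(value, f"0{T}b")

print(ids_per_row(8))     # 256 IDs from one 8-block row
print(total_ids(4, 8))    # 2**32 IDs if all four rows carry the ID
print(encode_row(73, 8))  # 73 -> '01001001', the example ID in Embodiment 1
```

This matches the worked numbers in Embodiment 2: 8 blocks per row give 256 IDs, and four ID rows give 2^32.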
The positioning terminal comprises a rear camera; the rear camera acquires image information of the LED lamp. The positioning module relies on a standard template, which is used to determine feature order and orientation, and performs information calculation on the image acquired by the positioning terminal to obtain the current position of the positioning terminal;
the connection relation of each component in the imaging positioning system is as follows: the feature code is connected with the LED lamp, and the positioning terminal is connected with the positioning module; a rear camera in the positioning terminal acquires LED lamp information;
the indoor imaging positioning method based on spatial modulation comprises the following steps:
step 1, setting the working distance between an LED lamp and a rear camera;
wherein the range of the working distance is 1 to 3 meters;
step 2, determining the optimal exposure time and ISO value of the rear camera;
step 3, determining the optimal transparency of the feature code;
step 4, calibrating the rear camera, and determining an internal reference matrix and a distortion coefficient of the rear camera;
step 5, initializing a system, which specifically comprises initializing parameters and loading parameters;
the loading parameters comprise loading standard templates, and reading the internal reference matrix and the distortion coefficient determined in the step 4;
initializing parameters, including: initializing the feature diameter in the feature code, the length and width of the two-dimensional code, the ORB (Oriented FAST and Rotated BRIEF) detection and extraction parameters, the uniform pooling threshold and the binarization threshold;
step 6, acquiring image information by a rear camera;
and 7, processing the image information obtained in the step 6, and calculating the 2D coordinate information of the rear camera, wherein the method specifically comprises the following sub-steps:
step 7.a), respectively preprocessing the standard template loaded in step 5 and the image obtained in step 6 to obtain a preprocessed standard template and a preprocessed image;
the preprocessing comprises denoising and sharpening;
step 7. B), respectively extracting the characteristics of the standard template and the image preprocessed in the step 7.a) by using an ORB corner point detector to obtain standard template characteristics and image characteristics;
step 7. C), matching the standard template features and the image features extracted in the step 7. B), and obtaining image two-dimensional coordinates of K features through a matching relationship;
k is the number of standard template features, and K is more than or equal to 4;
step 7. D) correcting the image preprocessed in step 7.a) and intercepting the two-dimensional code, specifically:
step 7.d1), using the feature order defined in the standard template, determining the slope of the line connecting two image features;
step 7.d2), from that slope and the corresponding standard-template features, computing the deflection angle and rotating the image to correct it;
step 7.d3) determining a two-dimensional code range by utilizing the geometric proportion relation between the features and the two-dimensional code and intercepting the two-dimensional code range;
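Steps 7.d1)-7.d2) can be sketched as follows (a hedged illustration: the slope-to-angle convention and the pure-Python rotation matrix are our assumptions, and the feature pixels are taken from Embodiment 1):

```python
import math

def deflection_angle(p1, p2):
    """Angle (degrees) of the line joining two matched features with respect
    to the image x-axis, from the slope of the connecting line (step 7.d1).
    Reduced modulo 180 so only the line direction matters."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx)) % 180.0

def rotation_matrix(angle_deg):
    """2x2 rotation that undoes the measured deflection (step 7.d2); applying
    it to pixel coordinates about the image centre levels the feature code."""
    a = math.radians(-angle_deg)
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

# Feature pixels (1) and (2) from Embodiment 1:
angle = deflection_angle((1564.0, 1004.0), (1365.0, 867.0))
print(angle)  # close to the 34.55-degree correction angle in Embodiment 1
```

The actual correction in the patent rotates the whole image; here only the angle computation and the corresponding 2x2 rotation are shown.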
step 7.e) binarizing the two-dimensional code intercepted in the step 7. D) to obtain a binarized image;
step 7.f) uniformly pooling the binarized image obtained in step 7.e) to obtain three-dimensional coordinate information of the LED lamp, specifically: uniformly pooling the binarized image from left to right and from top to bottom based on the two-dimensional code to obtain binary sequence information, and performing binary conversion on the binary sequence information to obtain three-dimensional coordinate information of the LED lamp;
step 7.g) obtaining three-dimensional coordinate information corresponding to K features of the standard template by using the three-dimensional coordinate information obtained in step 7.f) and through the geometric relationship between the center of the LED lamp and the features;
step 7.h) iterates based on nonlinear optimization, specifically: solving by using the two-dimensional coordinate information of the K features and the corresponding three-dimensional coordinate information, the camera internal reference matrix and the distortion coefficient to obtain the pose of the rear camera;
the pose comprises a rotation matrix and a translation matrix of the positioning terminal relative to a coordinate system where XYZ is located, and the rotation matrix and the translation matrix are respectively marked as R and t;
step 7.i) carrying out inverse transformation on the pose obtained by solving in the step 7.h) to obtain coordinate information of a rear camera, namely the position of the positioning terminal;
wherein the inverse transformation is: P = -R^(-1)·t;
The position information of the positioning terminal is denoted as P, namely, the 2D coordinate information of the rear camera.
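Step 7.i) amounts to one line of linear algebra: since R is orthonormal, R^(-1) = R^T, so P = -R^T·t. A minimal pure-Python sketch (the sample pose is illustrative, not from the patent):

```python
# Sketch of step 7.i: recovering the camera position from the pose (R, t)
# returned by a PnP solver. For a rotation matrix R^-1 = R^T, so
# P = -R^-1 . t = -R^T . t.

def camera_position(R, t):
    """P = -R^T . t (valid because R is orthonormal)."""
    Rt_t = [sum(R[i][k] * t[i] for i in range(3)) for k in range(3)]  # R^T . t
    return [-x for x in Rt_t]

# Illustrative pose: identity rotation, translation (0.4, 0.4, 2.0) m.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.4, 0.4, 2.0]
print(camera_position(R, t))  # [-0.4, -0.4, -2.0]
```

With a non-trivial R from the nonlinear-optimization solve, the same formula yields the terminal position P directly.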
Advantageous effects
Compared with the existing visible light positioning method and system, the indoor imaging positioning method and system based on spatial modulation have the following beneficial effects:
1. the method avoids modification of the LEDs and reduces the cost;
2. the method simplifies the installation flow of the indoor visible light positioning system, reduces the possibility of installation errors, and improves the expansibility of the system;
3. the method can provide a large number of IDs and meet the positioning requirement of a large environment;
4. compared with the common feature map positioning scheme, the method solves the problem that the artificial feature is easily affected by illumination conditions.
Drawings
FIG. 1 is a feature code diagram used in an indoor imaging positioning method and system based on spatial modulation of the present invention;
FIG. 2 is a schematic diagram of a connection mode between a feature code and an LED lamp in an indoor imaging positioning system based on spatial modulation;
FIG. 3 is a schematic diagram illustrating the processing of feature codes by an indoor imaging positioning method based on spatial modulation;
FIG. 4 is a schematic diagram of a decoding method of a feature code in an indoor imaging positioning method based on spatial modulation;
FIG. 5 is a system diagram of an embodiment of the indoor imaging positioning method and system based on spatial modulation in the present invention.
Detailed Description
The implementation of a spatially modulated indoor imaging positioning method and system of the present invention is further described in detail below with reference to the drawings and examples.
Example 1
The present embodiment describes specific cases of a configuration method of the feature code, an encoding method of the feature code, a combination method of the feature code and the illumination apparatus, and a decoding method of the feature code. As shown in fig. 1 (a), the feature code is composed of 4 features and a two-dimensional code, the four features are respectively distributed on the upper left, the upper right, the lower left and the lower right of the two-dimensional code, the first feature on the upper right is denoted as (1), and the remaining features are respectively denoted as (2), (3) and (4) in a counterclockwise order. The two-dimensional code consists of four rows and eight columns and 32 black-and-white pixel blocks in total; wherein a black pixel block characterizes a logic 0 and a white pixel block characterizes a logic 1. The first row is LED ID information, in this example, the LED ID is a binary sequence 01001001, and the second, third, and fourth row sequences are binary sequences 00000000, and the corresponding coordinates are (0.00,0.00,0.00).
The diameter of each of the four features is 0.026 m, and the side length of the two-dimensional code is 0.09 m. The four features allow an algorithm to extract their positions in the image and to lock onto the two-dimensional code from those four positions.
To reduce the influence of the feature code on the LED illumination, the transparency of the feature code is adjusted. As shown in fig. 2, the feature code is attached to the LED, so as to complete the arrangement of the transmitting end in the positioning system.
Fig. 3 shows the main decoding process, which comprises the following steps: (1) feature matching: the image is matched against the standard template; the image resolution is 4000×3000 and the shooting distance is 1.6 m. (2) Locking the two-dimensional code: using the matching relationship obtained in step (1), the pixel coordinates of features (1)-(4) are obtained as (1564.0, 1004.0), (1365.0, 867.0), (1233.0, 1073.0) and (1423.5, 1204.5); the deflection angle of the feature code is calculated, giving a correction angle of 34.55 degrees; the image is corrected by this angle, yielding the corrected image and the corrected feature pixel coordinates (1515, 914), (1274, 914), (1282, 1159) and (1513, 1159), which are used to intercept the two-dimensional code. (3) Two-dimensional code interception: the complete two-dimensional code is obtained from the corrected image and the geometric relationship between the features and the two-dimensional code. (4) Binarization: the complete two-dimensional code obtained in step (3) is binarized to obtain a binarized image.
The specific decoding method is shown in fig. 4. The binarized image obtained in step (4) is 152×150 pixels; given the four-row, eight-column layout of the two-dimensional code, the pooling window size is 37×19. When the pooling window slides to a position, the average value of the pixels in the window is calculated and compared with a threshold: if the average exceeds the threshold, the bit is judged to be 1, otherwise 0.
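The uniform-pooling decoder described above can be sketched as follows (a minimal illustration: the synthetic 148×152 image is our assumption, chosen so the window divides evenly, slightly different from the 152×150 capture in the text):

```python
# Uniform-pooling decode: slide a window over the binarized code image,
# average the pixels in each window, and emit 1 if the mean exceeds a
# threshold (128 here), else 0.

def pool_decode(img, n_rows, n_cols, threshold=128):
    """img: 2D list of 0/255 pixels. Returns the bit string, row-major."""
    h, w = len(img), len(img[0])
    wh, ww = h // n_rows, w // n_cols  # pooling window size
    bits = ""
    for r in range(n_rows):
        for c in range(n_cols):
            block = [img[y][x]
                     for y in range(r * wh, (r + 1) * wh)
                     for x in range(c * ww, (c + 1) * ww)]
            bits += "1" if sum(block) / len(block) > threshold else "0"
    return bits

# Synthetic 148x152 binarized image of the Embodiment-1 code
# (row 1 = ID 01001001, rows 2-4 = all-zero coordinate fields):
rows = ["01001001", "00000000", "00000000", "00000000"]
img = [[255 if rows[y // 37][x // 19] == "1" else 0
        for x in range(152)] for y in range(148)]

bits = pool_decode(img, 4, 8)
print(bits)            # '01001001' followed by 24 zeros
print(int(bits[:8], 2))  # 73, the decoded LED ID
```

On a real capture the window is not an exact divisor of the image, as in the 152×150 example, and the same averaging logic is applied over the slightly uneven windows.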
Example 2
This embodiment illustrates that the coding scheme involved can provide a large number of IDs. The feature code consists of K features and a two-dimensional code, where K is greater than or equal to 4; each feature is a sector with a gradient fill, and the features are distinguished by different angles; each feature marks the true position of a corresponding reference point. The two-dimensional code comprises the ID number, X coordinate, Y coordinate and Z coordinate of the LED lamp. Each row of the two-dimensional code is encoded with T black and white pixel blocks, where a black pixel block represents logic 0 and a white pixel block represents logic 1; T bits can provide the system with 2^T IDs, and if S rows carry the ID number, the ID can be encoded in S×T bits, giving 2^(S×T) available IDs.
As shown in fig. 1(a), the feature code is composed of 4 features and a two-dimensional code, organized as follows: from top to bottom, the rows respectively encode the ID number, X coordinate, Y coordinate and Z coordinate of the LED lamp. Each row is encoded with 8 black and white pixel blocks, a black pixel block representing logic 0 and a white pixel block representing logic 1, each block representing one bit. The 8 bits can provide 256 IDs to the system; if all four rows carry the ID number, the ID can be encoded in 32 bits and the number of available IDs is 2^32. If the two-dimensional code encodes only the LED ID and a database stores the LED coordinates, the coordinate information can be separated from the feature code: when the environment changes, only the coordinate entry for the corresponding ID in the database needs to be modified.
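The ID/coordinate separation suggested above can be sketched with a dictionary standing in for the database (names and values here are illustrative, not from the patent):

```python
# The code on the lamp carries only the LED ID; a database maps IDs to
# coordinates, so moving a lamp only requires updating the database entry.
# A dict stands in for the real database.

led_db = {73: (0.00, 0.00, 0.00)}  # ID 0b01001001 -> (X, Y, Z) in metres

def lookup(led_id):
    """Return the (X, Y, Z) coordinates for a decoded LED ID, or None."""
    return led_db.get(led_id)

led_db[73] = (1.50, 0.00, 0.00)  # lamp moved: update only the database
print(lookup(73))  # (1.5, 0.0, 0.0)
```

The printed feature code on the lamp never changes; only the database row keyed by the decoded ID is edited.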
Example 3
Embodiment 3 illustrates the construction of an indoor positioning system for one floor of a small shopping mall using the method and system.
As shown in fig. 5, without modifying the mall's LED lamps, corresponding feature codes are produced according to the current position of each LED lamp and attached to the LEDs.
The specific feature code is produced as follows. For a PNG-format image, the transparency is changed via the alpha channel: alpha = 0 makes the image fully transparent, while alpha = 255 leaves it identical to the original. In this embodiment the alpha channel of the feature-code image is set to alpha = 160, as shown in fig. 1(b). The code is printed and attached to the LED as in fig. 2. The LED has ID 01001001 and coordinates (0, 0, 0), i.e. the origin of the reference frame. The diameter of the features in the feature code is 0.025 m, and the side length of the two-dimensional code is 0.09 m.
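The alpha adjustment can be sketched as follows (pixels modelled as plain RGBA tuples; a real implementation would use an image library such as Pillow, which is not shown here):

```python
# A PNG stores an alpha value per pixel (0 = fully transparent, 255 = opaque);
# setting alpha = 160 lets most of the LED's light pass through the printed
# feature code while keeping it visible to the camera.

def set_alpha(pixels, alpha):
    """Replace the alpha component of every RGBA pixel."""
    return [(r, g, b, alpha) for (r, g, b, _) in pixels]

pixels = [(0, 0, 0, 255), (255, 255, 255, 255)]  # one black, one white pixel
print(set_alpha(pixels, 160))  # [(0, 0, 0, 160), (255, 255, 255, 160)]
```

Step 3 of the method (determining the optimal transparency) amounts to choosing this alpha value so that decoding remains reliable without noticeably dimming the illumination.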
This embodiment takes the mobile phone Nova5Pro as an example, with the phone as the positioning terminal. The distance between the phone and the LED is 1-2.5 m; the rear camera of the phone faces upward and acquires the image information. Calibrating the rear camera yields the camera intrinsic matrix
(matrix values shown as an image in the original document)
and the distortion coefficients
[0.098452,-0.172198,0.048389,0.001645,-0.000265]
The intrinsic matrix and the distortion coefficients are used to correct the distortion of the captured image and to solve the camera pose. The positioning algorithm uses computer vision techniques to extract the features of the feature code and matches them against the standard template to obtain the three-dimensional coordinates of the LED, the three-dimensional coordinates of the features in the feature code, and the two-dimensional coordinates of the features in the image.
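The listed distortion coefficients follow OpenCV's (k1, k2, p1, p2, k3) ordering; below is a sketch of the Brown-Conrady model they parameterize (the sample point is illustrative):

```python
# Forward distortion of a normalized image point (x, y) under the
# radial + tangential (Brown-Conrady) model, using the coefficients
# reported in this embodiment.

k1, k2, p1, p2, k3 = 0.098452, -0.172198, 0.048389, 0.001645, -0.000265

def distort(x, y):
    """Map an undistorted normalized point to its distorted position."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

print(distort(0.1, 0.1))  # slightly displaced from (0.1, 0.1)
```

Undistorting a captured image inverts this mapping numerically, which is what the correction step before feature extraction performs.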
The method comprises the following specific steps:
step 1, acquiring image information at a position (-0.4, -0.4, -2.0) by using a rear camera, wherein the unit is m;
step 2.a), respectively preprocessing the standard template and the image obtained in step 1 to obtain a preprocessed standard template and a preprocessed image;
the preprocessing comprises denoising, sharpening and contrast enhancement;
step 3, calculating two-dimensional coordinate information of the rear camera, which specifically comprises the following sub-steps:
step 3. A), respectively extracting features of the standard template and the image after the pretreatment in the step 2 by using an ORB corner point detector to obtain standard template features and image features;
wherein the parameters of the corner detector are set as follows:
nFeatures=1000
scaleFactor=1.12
nLevels=8
edgeThreshold=36
patchSize=33
fastThreshold=31
step 3.b), matching the standard template features and image features extracted in step 3.a); through the matching relationship, the 2D coordinates of the 4 features are obtained. The first feature on the right is denoted (1) and the remaining features (2), (3) and (4) in counterclockwise order; the four coordinates are (2516.0, 2172.0), (2670.0, 2013.0), (2502.0, 1862.0) and (2315.0, 2018.0), recorded as the uncorrected two-dimensional coordinates;
step 3.b), calculating, using features (1) and (2), the deflection angle of the four feature coordinates obtained above, giving 134.08 degrees, and rotating the image accordingly to correct it; the two-dimensional coordinates of the four rotated features are calculated and recorded as the corrected two-dimensional coordinates (2616, 1902), (2394, 1902), (2403, 2128) and (2620, 2128);
step 3.b), intercepting the two-dimensional code using the corrected two-dimensional coordinates of the four features and the geometric relationship between the features in the feature code and the two-dimensional code, obtaining a 141×137 two-dimensional code image;
step 3.c), binarizing the two-dimensional code image obtained in the step 3.b) to obtain a binarized image;
step 3. D) performing pooling decoding on the binarized image obtained in the step 3.c) to obtain a binary sequence:
(binary sequence shown as an image in the original document)
decoding the binary sequence to obtain LED three-dimensional coordinate information (0, 0);
wherein the pooling mode uses uniform pooling, and the threshold value of the pooling is set to 128;
step 3.e), using the LED three-dimensional coordinate information obtained in step 3.d) and the geometric relationship between the features in the feature code and the two-dimensional code, obtaining the feature three-dimensional coordinates (-0.0725, 0.0725, 0), (0.0725, 0.0725, 0), (0.0725, -0.0725, 0) and (-0.0725, -0.0725, 0), in units of m;
step 3.h), using the uncorrected two-dimensional coordinates of (1), (2), (3) and (4) obtained in step 3.b), the three-dimensional coordinates of (1), (2), (3) and (4) obtained in step 3.e), and the camera intrinsic matrix and distortion coefficients, invoking a solving function based on nonlinear optimization to obtain the rotation matrix and translation matrix;
wherein the rotation matrix and the translation matrix are given as images in the original document;
The solving function is an iterative method based on Levenberg-Marquardt optimization;
step 3.i), inversely transforming the rotation and translation matrices of the pose solved in step 3.h) to obtain the coordinate information of the rear camera, (-0.462, -0.481, -1.98);
wherein the inverse transformation formula is P = -R^(-1)·t, where P is the position information of the positioning terminal, i.e. the coordinate information of the rear camera, and R, t are respectively the rotation and translation of the positioning terminal relative to the world coordinate system.
The invention has been described in detail through the above examples, but its implementation is not limited to them. The description is intended only to help readers understand the method of the invention and its core idea; a person of ordinary skill in the art may, in accordance with the idea of the invention, vary the specific implementation and application scope, and this specification should not be construed as limiting the invention. Obvious modifications that do not depart from the spirit of the method fall within the scope of the claims.

Claims (9)

1. An indoor imaging positioning method based on spatial modulation is characterized in that: the supported indoor imaging positioning system comprises a feature code, an LED lamp, a positioning terminal and a positioning module; the feature codes are attached to the LED lamp and comprise K features and two-dimensional codes, and the feature codes are obtained by modulating the position information of the LEDs; the LED lamp provides a stable background for the artificial feature in the feature code, so that the feature is not easily affected by the change of the illumination condition; the two-dimensional code comprises ID number, X coordinate, Y coordinate and Z coordinate information of the LED lamp, wherein XYZ is three-dimensional information of a GPS or a self-defined reference system; the feature code is connected with the LED lamp, and the positioning terminal is connected with the positioning module; a rear camera in the positioning terminal acquires LED lamp information; the indoor imaging positioning method comprises the following steps:
step 1, setting the working distance between an LED lamp and a rear camera;
step 2, determining the optimal exposure time and ISO value of the rear camera;
step 3, determining the optimal transparency of the feature code;
step 4, calibrating the rear camera, and determining an internal reference matrix and a distortion coefficient of the rear camera;
step 5, initializing a system, which specifically comprises initializing parameters and loading parameters;
the loading parameters comprise loading the standard template, and reading the intrinsic matrix and the distortion coefficient determined in step 4; initializing parameters, including: initializing the feature diameter in the feature code, the length and width of the two-dimensional code, the ORB (Oriented FAST and Rotated BRIEF) detection and extraction parameters, the uniform pooling threshold and the binarization threshold;
wherein the standard template is a feature code;
step 6, acquiring image information by a rear camera;
and 7, processing the image information obtained in the step 6, and calculating the 2D coordinate information of the rear camera, wherein the method specifically comprises the following sub-steps:
step 7.a), respectively preprocessing the standard template loaded in step 5 and the image obtained in step 6 to obtain a preprocessed standard template and a preprocessed image;
step 7.a), the preprocessing includes denoising and sharpening;
step 7. B), respectively extracting the characteristics of the standard template and the image preprocessed in the step 7.a) by using an ORB corner point detector to obtain standard template characteristics and image characteristics;
step 7. C), matching the standard template features and the image features extracted in the step 7. B), and obtaining image two-dimensional coordinates of K features through a matching relationship;
step 7. D) correcting the image preprocessed in the step 7.a) and intercepting the two-dimensional code;
step 7.e) binarizing the two-dimensional code intercepted in the step 7. D) to obtain a binarized image;
step 7.f) uniformly pooling the binarized image obtained in step 7.e) to obtain three-dimensional coordinate information of the LED lamp, specifically: uniformly pooling the binarized image from left to right and from top to bottom based on the two-dimensional code to obtain binary sequence information, and performing binary conversion on the binary sequence information to obtain three-dimensional coordinate information of the LED lamp;
step 7.g) obtaining three-dimensional coordinate information corresponding to K features of the standard template by using the three-dimensional coordinate information obtained in step 7.f) and through the geometric relationship between the center of the LED lamp and the features;
step 7.h), iterating based on nonlinear optimization, specifically: solving with the two-dimensional coordinates of the K features, the corresponding three-dimensional coordinates, the camera intrinsic matrix and the distortion coefficients, to obtain the pose of the rear camera;
step 7.i), applying an inverse transformation to the pose solved in step 7.h) to obtain the coordinate information of the rear camera, i.e. the position of the positioning terminal.
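As a concrete illustration of the decoding in steps 7.e)-7.f), the following is a minimal numpy sketch, assuming the binarized two-dimensional code is divided into a grid of equal pooling windows scanned left to right, top to bottom; the function names, grid layout and threshold are illustrative, not part of the claimed method.

```python
import numpy as np

def decode_qr_grid(binary_img, rows, cols, threshold=0.5):
    """Decode a binarized two-dimensional code by uniform pooling.

    The image is divided into a rows x cols grid of equal pooling
    windows, scanned left to right, top to bottom (as in step 7.f).
    A window whose mean pixel value exceeds `threshold` yields bit 1
    (white block); otherwise bit 0 (black block).
    """
    h, w = binary_img.shape
    bits = []
    for r in range(rows):
        for c in range(cols):
            win = binary_img[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            bits.append(1 if win.mean() > threshold else 0)
    return bits

def bits_to_id(bits):
    """Binary conversion: turn the bit sequence into an integer ID."""
    return int("".join(map(str, bits)), 2)
```

In the actual system, the resulting integer would then index the three-dimensional coordinate of the LED lamp.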
2. The spatial modulation-based indoor imaging positioning method as set forth in claim 1, wherein: each feature of the feature code in the supported indoor imaging positioning system is a sector with a gradient, and the differences among features are reflected by different angles; each feature is used to characterize the true position of the corresponding feature point.
3. The spatial modulation-based indoor imaging positioning method as set forth in claim 2, wherein: the feature codes in the indoor imaging positioning system are decoded by uniform pooling, and the decoded information is obtained by comparing the mean of all pixels within each pooling window against a preset threshold.
4. A spatial modulation based indoor imaging positioning method according to claim 3, wherein: each row of the two-dimensional code in the supported indoor imaging positioning system is encoded using T black and white stripes, where a black pixel block represents logic '0' and a white pixel block represents logic '1'; T bits can provide the system with 2^T IDs; if S rows form the ID number, the ID can encode S×T bits, and the number of available IDs is 2^(S×T).
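The ID capacity stated in claim 4 can be checked with a few lines of arithmetic; the values of T and S below are purely illustrative.

```python
# Each row carries T black/white stripes, i.e. T bits; S rows form the ID.
T = 8  # illustrative stripes per row
S = 4  # illustrative number of ID rows

ids_per_row = 2 ** T      # one row alone distinguishes 2^T IDs
total_ids = 2 ** (S * T)  # S rows encode S*T bits, hence 2^(S*T) IDs

print(ids_per_row)  # 256
print(total_ids)    # 4294967296
```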
5. The spatial modulation-based indoor imaging positioning method according to claim 4, wherein: the supported indoor imaging positioning system comprises a positioning terminal and a positioning module; the positioning terminal comprises a rear camera; the rear camera acquires image information of the LED lamp; the positioning module relies on the standard template, which is used to determine the feature order and direction; the positioning module performs the computation on the image acquired by the positioning terminal to obtain the current position of the positioning terminal.
6. The spatial modulation-based indoor imaging positioning method according to claim 5, wherein: in step 1, the working distance ranges from 1 to 3 meters.
7. The spatial modulation-based indoor imaging positioning method as set forth in claim 6, wherein: in step 7.c), K is the number of standard template features, and K ≥ 4.
8. The spatial modulation-based indoor imaging positioning method according to claim 7, wherein step 7.d) specifically comprises:
step 7.d1), determining the slope of the line connecting two image features, using the feature order defined in the standard template;
step 7.d2), comparing it with the slope of the corresponding feature line in the standard template, and rotating the image to correct it;
step 7.d3), determining the extent of the two-dimensional code using the geometric proportion between the features and the two-dimensional code, and cropping out that region.
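A minimal sketch of the rotation correction in steps 7.d1)-7.d2): the angle of the line through two matched image features is compared with the angle of the corresponding line in the standard template. It uses arctan2 rather than a raw slope so that vertical lines stay well defined; the function name and two-point interface are assumptions for illustration only.

```python
import numpy as np

def correction_angle(img_p1, img_p2, tpl_p1, tpl_p2):
    """Angle (radians) by which the image must be rotated so that the
    line through the two image features aligns with the line through
    the corresponding two standard-template features."""
    a_img = np.arctan2(img_p2[1] - img_p1[1], img_p2[0] - img_p1[0])
    a_tpl = np.arctan2(tpl_p2[1] - tpl_p1[1], tpl_p2[0] - tpl_p1[0])
    return a_tpl - a_img
```

The returned angle would then drive an ordinary image rotation before the two-dimensional code region is cropped out.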
9. The spatial modulation-based indoor imaging positioning method according to claim 8, wherein: in step 7.h), the pose comprises the rotation matrix and the translation vector of the positioning terminal relative to the world coordinate system XYZ, denoted R and t respectively;
in step 7.i), the inverse transformation is: P = -R^(-1) t, where P is the position information of the positioning terminal, namely the 2D coordinate information of the rear camera.
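The inverse transformation of claim 9 can be sketched in a few lines of numpy: a world point X maps to R·X + t in camera coordinates, so the camera centre satisfies R·P + t = 0, giving P = -R^(-1) t. For a proper rotation matrix, R^(-1) = Rᵀ, so the transpose may be used in place of a full matrix inverse. The function name is illustrative.

```python
import numpy as np

def camera_position(R, t):
    """Position of the positioning terminal (rear camera) from the pose
    solved in step 7.h): since R @ P + t = 0 at the camera centre,
    P = -R^(-1) t."""
    return -np.linalg.inv(R) @ t
```

A quick synthetic check: build a pose from a known camera position, then confirm the formula recovers it.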
CN202110972401.3A 2021-08-24 2021-08-24 Indoor imaging positioning method and system based on spatial modulation Active CN113674362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110972401.3A CN113674362B (en) 2021-08-24 2021-08-24 Indoor imaging positioning method and system based on spatial modulation

Publications (2)

Publication Number Publication Date
CN113674362A CN113674362A (en) 2021-11-19
CN113674362B true CN113674362B (en) 2023-06-27

Family

ID=78545425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110972401.3A Active CN113674362B (en) 2021-08-24 2021-08-24 Indoor imaging positioning method and system based on spatial modulation

Country Status (1)

Country Link
CN (1) CN113674362B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106855407A (en) * 2016-12-26 2017-06-16 中国科学技术大学 Indoor positioning navigation system comprising a visible-light attenuation sheet
CN107194448A (en) * 2017-04-28 2017-09-22 南京邮电大学 Transmission and localization method based on a visible-light hidden QR code
CN110261823A (en) * 2019-05-24 2019-09-20 南京航空航天大学 Visible light indoor communications localization method and system based on single led lamp
CN111928852A (en) * 2020-07-23 2020-11-13 武汉理工大学 Indoor robot positioning method and system based on LED position coding
CN112033408A (en) * 2020-08-27 2020-12-04 河海大学 Paper-pasted object space positioning system and positioning method
CN113276106A (en) * 2021-04-06 2021-08-20 广东工业大学 Climbing robot space positioning method and space positioning system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Indoor location technology based on LED visible light and QR code; QIAN Wu et al.; 《Research Article》; pp. 4606-4612 *
Dual-frequency OR-operation modulation method for extending LED IDs in visible light positioning systems; QU Ruotong et al.; 《光学技术》 (Optical Technique); pp. 677-683 *

Also Published As

Publication number Publication date
CN113674362A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN109920007B (en) Three-dimensional imaging device and method based on multispectral photometric stereo and laser scanning
CN111145238A (en) Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
CN110807809B (en) Light-weight monocular vision positioning method based on point-line characteristics and depth filter
CN106683139B (en) Fisheye camera calibration system based on genetic algorithm and image distortion correction method thereof
CN107610164B (en) High-resolution four-number image registration method based on multi-feature mixing
CN113129384B (en) Binocular vision system flexible calibration method based on one-dimensional coding target
CN112929626B (en) Three-dimensional information extraction method based on smartphone image
CN111768452A (en) Non-contact automatic mapping method based on deep learning
CN110400278A (en) Fully automatic correction method, device and equipment for image color and geometric distortion
CN112016478A (en) Complex scene identification method and system based on multispectral image fusion
CN115376024A (en) Semantic segmentation method for power accessory of power transmission line
CN112508812A (en) Image color cast correction method, model training method, device and equipment
CN113486975A (en) Ground object classification method, device, equipment and storage medium for remote sensing image
CN110009670A (en) Heterologous image registration method based on FAST feature extraction and PIIFD feature description
Feng et al. A pattern and calibration method for single-pattern structured light system
CN113963067B (en) Calibration method for calibrating large-view-field visual sensor by using small target
CN113674362B (en) Indoor imaging positioning method and system based on spatial modulation
CN108491747B (en) Method for beautifying QR (quick response) code after image fusion
CN113959439A (en) Indoor high-precision visible light positioning method and system under sparse light source
CN112243518A (en) Method and device for acquiring depth map and computer storage medium
CN114066954A (en) Feature extraction and registration method for multi-modal images
CN115578539B (en) Indoor space high-precision visual position positioning method, terminal and storage medium
CN115880683B (en) Urban waterlogging ponding intelligent water level detection method based on deep learning
CN115620150B (en) Multi-mode image ground building identification method and device based on twin transformers
CN111325218A (en) Hog feature detection and matching method based on light field image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant