CN106291517A - Indoor cloud robot angle positioning method based on position and visual information optimization - Google Patents


Info

Publication number
CN106291517A
Authority
CN
China
Prior art keywords
image
cloud
indoor
positioning
rfid reader
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610658178.4A
Other languages
Chinese (zh)
Inventor
李阳
纪其进
朱艳琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN201610658178.4A priority Critical patent/CN106291517A/en
Publication of CN106291517A publication Critical patent/CN106291517A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/02Systems for determining distance or velocity not using reflection or reradiation using radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/02Systems for determining distance or velocity not using reflection or reradiation using radio waves
    • G01S11/04Systems for determining distance or velocity not using reflection or reradiation using radio waves using angle measurements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Automation & Control Theory (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an indoor cloud robot angle positioning method based on position and visual information optimization. The method adopts a cloud computing framework: data acquisition is performed by the mobile robot end (the client), while data computation is performed in the cloud, whose computing speed and storage capacity far exceed those of the robot. By combining the two positioning methods under this cloud computing model, the time required for positioning is shortened and more accurate position coordinates and angle information are obtained.

Description

Indoor cloud robot angle positioning method based on position and visual information optimization
Technical Field
The invention relates to the technical field of indoor mobile robot positioning, in particular to an indoor cloud robot angle positioning method based on position and visual information optimization.
Background
With the development of society and the emergence of new technologies, people increasingly look to modern high technology to improve their quality of life and personal freedom. Research on indoor mobile robots, such as intelligent-wheelchair service robots, has therefore become a focus. The cloud robot is an emerging robot concept: by drawing on cloud computing, cloud storage and other Internet technologies, it gives a traditional robot strong computing, storage and resource-sharing capabilities. Because the robot can fully exploit the cloud's data processing and storage mechanisms, the extra hardware required for local computation is reduced, and the robot's running speed and practicality are greatly improved; cloud robots have thus become one of the main directions in mobile robot research.
Like an ordinary robot, an indoor cloud robot provides functions such as password identification, autonomous positioning, real-time navigation, path planning and dynamic obstacle avoidance. For autonomous movement, however, the robot must accurately know its position in the working environment; this is the autonomous positioning problem. In practical autonomous navigation, the robot must likewise know its specific position in the motion environment before it can complete other tasks. Autonomous positioning is therefore the most fundamental problem a mobile robot must solve, and it has accordingly attracted active research from many universities and enterprises.
In recent years, many positioning theories and positioning systems for indoor mobile robots have appeared. A great deal of research at home and abroad has been built on non-visual and visual sensors; the main results include monocular and binocular vision positioning systems, positioning systems based on wireless signal-strength attenuation models, and ultrasonic positioning systems. Some of these technologies have been developed into relatively systematic positioning-service solutions or complete commercial products, but many are still under research and test. The ranging methods used for indoor positioning mainly include time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA) and received signal strength (RSSI). All of these can effectively compute distances in a positioning system; TOA, TDOA and AOA offer high measurement accuracy, but the complexity of indoor environments strongly degrades their positioning accuracy. In addition, ultra-wideband (UWB) methods reach centimetre-level positioning accuracy, but the required equipment is expensive and cannot yet be widely used in ordinary civilian applications.
In addition, most existing positioning methods can only determine the coordinate position of a mobile robot; they rarely obtain its angle information and therefore cannot support further functions such as path planning and dynamic obstacle avoidance. At present, most indoor mobile robots rely on one of the following two methods. (1) Some mobile robots obtain accurate direction information with an electronic compass, but an electronic compass used indoors has clear drawbacks. On the one hand, the angle it reports is an absolute angle referenced to the earth, whereas the angle needed indoors is only a relative angle with respect to some specific object. Moreover, the compass works by computing the angle between the near-ground magnetic field and its internal magnetometer, and when the robot moves, its drive motors change the local magnetic field strength and so degrade the compass's precision. On the other hand, what matters most to an indoor mobile robot is its position coordinates, yet the compass yields only angle information and offers no help in correcting the position estimate, so the approach is far from ideal. (2) Some indoor mobile robots adopt an autonomous positioning-and-orientation method that borrows the idea of two-dimensional bar codes: landmark pictures are placed indoors, and analysing a landmark picture yields coordinate information and a deflection angle.
However, this method requires partially modifying the indoor environment (for example, pasting many landmark pictures on the ceiling and similar surfaces according to certain rules), which affects the room's appearance. It also places high demands on the resolution of the robot's camera; otherwise the information carried by the landmark pictures cannot be effectively extracted.
Owing to cost, positioning accuracy, reliability and usability, indoor positioning technology has not yet been widely applied in people's daily life, and a great number of problems in indoor positioning systems remain to be solved.
Disclosure of Invention
The invention takes into account that most indoor mobile robots operate in a relatively small indoor space, as well as the positioning time, error range and cost of the various positioning systems. It proposes a visual positioning method based on image search combined with an RFID-based indoor wireless positioning technology, producing a composite positioning system.
In order to achieve the technical purpose and achieve the technical effect, the invention is realized by the following technical scheme:
an indoor cloud robot angle positioning method based on position and visual information optimization involves a cloud end, a mobile robot to be positioned, and an RFID reader and a camera mounted on the mobile robot, and comprises the following steps:
step 1) receiving preliminary positioning information: through the RFID reader, the mobile robot receives the received signal strength (RSSI) transmitted by each active tag, together with its identification (ID) value, as the preliminary positioning information;
step 2) acquiring preliminary position information: the mobile robot sends the received preliminary positioning information to the cloud, and the cloud calculates a preliminary position coordinate of the mobile robot from the acquired RSSI values;
step 3) determining an image search range: in the image database, selecting as the image search area the region centred on the preliminary RFID position coordinate of step 2) with the prior probability error R as its radius;
step 4) image matching: the mobile robot acquires images of the surrounding scene through the camera and sends them to the cloud; the images acquired in real time are denoised by image processing, then matched against the images in the search area of step 3) using the structural similarity (SSIM) algorithm to find the best matching image; if matching succeeds, proceed to the next step, otherwise return to step 1);
and step 5) obtaining coordinate position and angle information: the position coordinate and angle information are read from the matched image's record in the image database.
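Step 3) amounts to a simple radius query over the image database. The sketch below illustrates it; the record fields (path, x, y, angle) and the concrete error radius R are illustrative assumptions, not values fixed by the method:

```python
# Sketch of step 3): restrict the image search to database records whose
# capture point lies within radius R of the preliminary RFID position fix.
# Record layout is a hypothetical illustration.

def candidates_in_radius(image_db, x0, y0, radius):
    """Return records captured within `radius` metres of (x0, y0)."""
    r2 = radius * radius
    return [rec for rec in image_db
            if (rec["x"] - x0) ** 2 + (rec["y"] - y0) ** 2 <= r2]

image_db = [
    {"path": "/img/0001.jpg", "x": 2.5, "y": 3.0, "angle": 0},
    {"path": "/img/0002.jpg", "x": 3.0, "y": 3.5, "angle": 30},
    {"path": "/img/0099.jpg", "x": 9.5, "y": 5.5, "angle": 60},
]

# Preliminary RFID fix at (2.8, 3.2); assume a prior error R = 1 m.
hits = candidates_in_radius(image_db, 2.8, 3.2, 1.0)
```

Only the candidates returned here need to be compared by SSIM in step 4), which is what keeps the matching time small.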
Further, in steps 1) and 2), the RSSI value read by the RFID reader is converted into the absolute distance between the reader and the corresponding tag according to the signal propagation model equation, and the position coordinate of the mobile robot is then obtained by trilateration.
Further, the signal propagation model is the equation describing how the RSSI received by the RFID reader from a tag attenuates as the propagation distance increases in an indoor environment. A log-normal distribution propagation loss model is adopted:

$$RSSI(d) = RSSI(d_0) - 10N\log_{10}\left(\frac{d}{d_0}\right) + X_\sigma$$

where $RSSI(d)$ is the tag RSSI value received by the RFID reader; $RSSI(d_0)$ is the tag RSSI value received by the reader at the reference point $d_0$; $N$ is the environment-dependent path loss exponent; $X_\sigma$ is a Gaussian random variable with mean 0, i.e. the attenuation of the signal through obstacles; $d_0$ is the near-ground reference distance; and $d$ is the distance between the reader and the tag.

From this log-normal distribution propagation model, the functional relationship between the reader-tag distance $d$ and the tag signal strength $RSSI(d)$ received by the RFID reader is obtained:

$$d = d_0 \cdot 10^{\frac{RSSI(d_0) - RSSI(d)}{10N}}$$

where the constant $RSSI(d_0)$ and the exponent $N$ determine the relationship between the received RSSI value and the signal transmission distance; both are closely tied to the operating environment and can be treated as constants once that environment is fixed.
Furthermore, from the $n$ RSSI values received at the same tag position, a Gaussian model selects the high-probability RSSI values, and their geometric mean is then taken; this removes low-probability noisy data and increases positioning accuracy. The Gaussian distribution function is:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

where $\mu = \frac{1}{n}\sum_{i=1}^{n} RSSI_i$ and $\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(RSSI_i-\mu)^2$.

After this Gaussian filtering, the range of effective RSSI values is $[\mu-\sigma,\ \mu+\sigma]$; all RSSI values within this range are taken out, and their geometric mean gives the final RSSI value.
Further, in step 1), a plurality of tags is provided, placed respectively at the corners of the indoor space.
Further, in step 3), the image database is built by dividing the indoor space to be covered into a number of parts: a point is taken at every fixed distance, all surrounding scenes at that point are photographed at a fixed angular increment, the position coordinates and angle of each shot are recorded, and the corresponding database is stored in the cloud, forming the image database.
Further, in step 2), the mobile robot and the cloud communicate using a C/S architecture: the mobile robot acts as the client and collects data, while the cloud acts as the server and performs the data-related computation.
Further, in step 4), the structural similarity (SSIM) algorithm is as follows:

$$l(x,y) = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1},\qquad c(x,y) = \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2},\qquad s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}$$

where $l(x,y)$ is the luminance function, $c(x,y)$ the contrast function and $s(x,y)$ the structure function; $x$ and $y$ are the pixel matrices of the two image blocks; $\mu_x$, $\mu_y$, $\sigma_x^2$, $\sigma_y^2$ and $\sigma_{xy}$ are the corresponding means, variances and covariance; and $C_1$, $C_2$, $C_3$ are small constants that prevent the denominators from being 0.

The image similarity formula obtained from the above three formulas is:

$$SSIM(x,y) = [l(x,y)]^a \cdot [c(x,y)]^b \cdot [s(x,y)]^c$$

where $a$, $b$ and $c$ adjust the relative importance of the three components and may be customized; the larger the result computed by the image similarity formula, the higher the matching degree of the two images.
The invention has the beneficial effects that:
(1) short positioning time
When the mobile robot moves continuously indoors, real-time position information is needed, so the positioning speed must keep pace with the robot's moving speed. By using the preliminary RFID positioning and taking the error of the RFID-based method into account, the invention bounds the region containing the wheelchair's current position and thereby narrows the image search range. The total positioning time is the sum of the RFID positioning time and the image matching time; RFID positioning is fast, and matching within a small search region does not consume much time, so the overall time is short and fully meets the indoor positioning requirements of a mobile robot.
(2) The error is small
The invention realizes positioning through image search. First, images of each position and angle in the indoor space are shot, numbered, and stored together with the coordinates and angle of the shooting position to build the image database. Then, during positioning, the mobile robot shoots an image in its current heading through the vision sensor, and finally the current image is feature-matched against the images in the database. The returned result is the recorded coordinate and angle of the best matching image, so the final error is bounded by the spacing of the image-capture grid rather than by the RFID estimate.
(3) The cost is low
Common indoor positioning techniques compare as follows: ZigBee positioning is inexpensive, but its positioning error is too large to meet the requirements of an indoor mobile robot; UWB positioning is highly accurate but too expensive to popularize; WLAN positioning costs the least, but its stability and precision need improvement; and RFID positioning sits at a medium level in both price and precision. The invention combines RFID positioning with visual positioning and only needs an ordinary, inexpensive camera on top of the RFID hardware, so its overall cost is low and its positioning cost-effectiveness is the highest.
(4) Positioning module independence
The whole positioning system is independent, does not influence other functions of the mobile robot, and is favorable for further development of the functions of the mobile robot.
Drawings
Fig. 1 is a flowchart of an indoor cloud robot positioning operation process based on position and vision joint optimization according to an embodiment of the present invention;
FIG. 2 is a schematic plan view of a system hardware deployment in a simulation experiment performed according to the method provided by the embodiment of the present invention;
fig. 3 is a schematic diagram of distribution of image acquisition points in a simulation experiment performed according to the method provided by the embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As shown in fig. 1, an indoor cloud robot angle positioning method based on position and visual information optimization involves a cloud end, a mobile robot to be positioned, and an RFID reader and a camera mounted on the mobile robot, and is characterized in that the method comprises the following steps:
step 1) receiving preliminary positioning information: through the RFID reader, the mobile robot receives the received signal strength (RSSI) transmitted by each active tag, together with its identification (ID) value, as the preliminary positioning information;
step 2) acquiring preliminary position information: the mobile robot sends the received preliminary positioning information to the cloud, and the cloud calculates a preliminary position coordinate of the mobile robot from the acquired RSSI values;
step 3) determining an image search range: in the image database, selecting as the image search area the region centred on the preliminary RFID position coordinate of step 2) with the prior probability error R as its radius;
step 4) image matching: the mobile robot acquires images of the surrounding scene through the camera and sends them to the cloud; the images acquired in real time are denoised by image processing, then matched against the images in the search area of step 3) using the structural similarity (SSIM) algorithm to find the best matching image; if matching succeeds, proceed to the next step, otherwise return to step 1);
and step 5) obtaining coordinate position and angle information: the position coordinate and angle information are read from the matched image's record in the image database.
In steps 1) and 2), the RSSI value read by the RFID reader is converted into the absolute distance from the reader to the corresponding tag according to the signal propagation model equation, and the position coordinate of the mobile robot is then obtained by trilateration.
The signal propagation model is the equation describing how the RSSI received by the RFID reader from a tag attenuates as the propagation distance increases in an indoor environment. A log-normal distribution propagation loss model is adopted:

$$RSSI(d) = RSSI(d_0) - 10N\log_{10}\left(\frac{d}{d_0}\right) + X_\sigma$$

where $RSSI(d)$ is the tag RSSI value received by the RFID reader; $RSSI(d_0)$ is the tag RSSI value received by the reader at the reference point $d_0$; $N$ is the environment-dependent path loss exponent; $X_\sigma$ is a Gaussian random variable with mean 0, i.e. the attenuation of the signal through obstacles; $d_0$ is the near-ground reference distance; and $d$ is the distance between the reader and the tag.

From this log-normal distribution propagation model, the functional relationship between the reader-tag distance $d$ and the tag signal strength $RSSI(d)$ received by the RFID reader is obtained:

$$d = d_0 \cdot 10^{\frac{RSSI(d_0) - RSSI(d)}{10N}}$$

where the constant $RSSI(d_0)$ and the exponent $N$ determine the relationship between the received RSSI value and the signal transmission distance; both are closely tied to the operating environment and can be treated as constants once that environment is fixed.
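Neglecting the noise term $X_\sigma$, the model above can be inverted directly in code. A minimal sketch follows; the reference values ($RSSI(d_0) = -40$ dBm at $d_0 = 1$ m, $N = 2.5$) are illustrative assumptions, not parameters fixed by the patent:

```python
def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.5):
    """Invert the log-normal path-loss model (noise term ignored):
    RSSI(d) = RSSI(d0) - 10*n*log10(d/d0)
    =>  d = d0 * 10 ** ((RSSI(d0) - RSSI(d)) / (10 * n))
    Inputs are in dBm; the result is in the units of d0 (metres here).
    """
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))
```

For example, with these assumed parameters a reading of -65 dBm corresponds to a 10 m reader-tag distance, since $(-40 - (-65)) / 25 = 1$.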
During signal transmission, multipath effects (caused by reflection and superposition of the signal waves), non-line-of-sight propagation, signals from other electronic equipment and interference from human bodies give the measured RSSI values a degree of randomness and instability. To reduce the final positioning error, the received signal strengths must therefore be filtered to obtain a more accurate value before it is substituted into the calculation.
Among the $n$ RSSI values received at the same tag position, values that are random and unstable are necessarily low-probability events, while most signal-strength values follow, at least approximately, a normal distribution. High-probability RSSI values are therefore selected with a Gaussian model, and their geometric mean is taken, which removes low-probability noisy data and improves positioning accuracy. The Gaussian distribution function is:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

where $\mu = \frac{1}{n}\sum_{i=1}^{n} RSSI_i$ and $\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(RSSI_i-\mu)^2$.

The selection probability of a high-probability event should generally be greater than 0.6, i.e. $0.6 \le P \le 1$; from the standard normal distribution table, the interval $[\mu-\sigma,\ \mu+\sigma]$ has an occurrence probability of 68.27%.

After this Gaussian filtering, the range of effective RSSI values is $[\mu-\sigma,\ \mu+\sigma]$; all RSSI values within this range are taken out, and their geometric mean is computed to obtain the final RSSI value.
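A minimal sketch of this filtering step. RSSI magnitudes are treated as positive numbers here so the geometric mean is well defined (an assumption for illustration, since raw dBm readings are negative); the sample values are invented:

```python
import math

def filter_rssi(samples):
    """Keep samples inside [mu - sigma, mu + sigma] (the ~68.27% band of the
    fitted Gaussian), then return the geometric mean of the kept samples
    as the final RSSI estimate."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((s - mu) ** 2 for s in samples) / n)
    kept = [s for s in samples if mu - sigma <= s <= mu + sigma]
    # Geometric mean via the mean of logarithms (samples assumed positive).
    return math.exp(sum(math.log(s) for s in kept) / len(kept))

# Seven readings at one tag position (magnitudes, dBm sign dropped);
# the outlier 70 falls outside the one-sigma band and is discarded.
estimate = filter_rssi([50, 51, 52, 49, 50, 51, 70])
```

The outlier pulls the plain mean up to about 53.3, whereas the filtered geometric mean stays near the 49-52 cluster.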
In this embodiment, an RFID reader with an omnidirectional antenna has a working range of 50 m to 100 m and can completely cover the indoor environment. The method uses one such reader and four actual reference tags: the mobile robot to be positioned carries the reader, and the four reference tags are placed at the four corners of the indoor space. Assuming the positioning space is an approximately rectangular indoor environment of 10 m x 6 m, the deployment plan of the test system is shown in fig. 2.
The RFID reader reads the RSSI value of each reference tag, which is converted into the reader-tag distance through the signal propagation model. In actual positioning, an obstacle may lie between the reader and a tag, so that the measured RSSI value increases and the accuracy of the computed distance suffers. To reduce this error, the three tags with the smaller RSSI values are selected from the four reference tags as positioning auxiliary tags, and the position coordinate of the RFID reader, i.e. the coordinate of the mobile robot, is computed by trilateration from the reader-tag distances and the known tag coordinates.
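The trilateration step can be sketched by subtracting pairs of circle equations, which yields a 2x2 linear system in the unknown position. The tag coordinates and distances below are an illustrative example consistent with the 10 m x 6 m room, not measured data:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) from three tag positions p_i and reader-tag
    distances d_i by linearizing the circle equations
    (x - x_i)^2 + (y - y_i)^2 = d_i^2."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the three tags are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Tags at three corners of the room; true robot position (3, 4).
x, y = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 6), 13 ** 0.5)
```

With noisy distances the same system can be solved in a least-squares sense, but the exact 2x2 solve above is enough to show the geometry.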
In this embodiment, an average coordinate error of about 1 m remains after the preliminary positioning of step 2). To better cover all possible coordinates during the image-search positioning, the indoor space is divided into small blocks at a 50 cm spacing as shown in fig. 3, the intersections of the dotted lines being the image-capture positions. In addition, to recover the rotation angle of the mobile robot, the camera must shoot images at fixed angular intervals. On the one hand, the angular interval cannot be too large if accurate angle information is to be obtained; on the other hand, the ordinary camera used in the tests (model "universal flying M200") has a shooting wide angle of about 40 degrees. Weighing these two points, the method captures images at a 30-degree angular interval. The full experimental set contains 1980 images (fixed objects such as chairs are not considered, and not all 1980 capture points are shown in fig. 3).
In step 3), the image database divides the indoor space to be covered into a number of parts: in this embodiment a point is taken every 50 cm, all surrounding scenes at that point are photographed at a 30-degree angular increment, the position coordinates and angle of each shot are recorded, and the corresponding database is stored in the cloud, forming the image database.
After the large set of images has been captured, the images are stored in the cloud and a corresponding image database is established (MySQL in this embodiment). Because the number of images is large, storing the images themselves in the database would make it enormous, make reading image data cumbersome, slow down computation and hurt the real-time behaviour of positioning. All images are therefore kept in a single folder and recorded in the database as absolute paths. On one hand this reduces the size of the database; on the other hand, the running program can read an image as conveniently and quickly as reading an ordinary file. The data stored in the database comprises the image number, the stored image's path (for reading the image), the coordinates of the capture point (for limiting the search range and returning the final positioning result), and the deflection angle.
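The embodiment uses MySQL; an equivalent table layout can be sketched with SQLite from the Python standard library. The column names and sample rows are assumptions for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE landmark_image (
        img_no INTEGER PRIMARY KEY,  -- image number
        path   TEXT NOT NULL,        -- absolute path of the stored image file
        x      REAL NOT NULL,        -- capture-point x coordinate (m)
        y      REAL NOT NULL,        -- capture-point y coordinate (m)
        angle  REAL NOT NULL         -- camera deflection angle (degrees)
    )
""")
conn.executemany(
    "INSERT INTO landmark_image VALUES (?, ?, ?, ?, ?)",
    [(1, "/images/0001.jpg", 0.5, 0.5, 0.0),
     (2, "/images/0002.jpg", 0.5, 0.5, 30.0)],
)
# Look up all shots taken at one grid point; the image files themselves
# stay on disk and only their paths travel through the database.
rows = conn.execute(
    "SELECT path, angle FROM landmark_image WHERE x = 0.5 AND y = 0.5"
).fetchall()
```

The radius restriction of step 3) then becomes a WHERE clause on the x and y columns instead of a scan over image blobs.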
In step 2), the mobile robot and the cloud communicate using a C/S architecture: the mobile robot acts as the client and collects data, the cloud acts as the server and performs the data-related computation, and socket programming based on the TCP/IP protocol is carried out on both the cloud and the mobile robot.
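The client/server exchange can be sketched with standard-library TCP sockets on localhost. The single-shot server and the plain-text message format are assumptions for illustration; the patent does not specify a wire protocol:

```python
import socket
import threading

def cloud_server(srv):
    """Cloud side: accept one client, read its RSSI report, reply with a
    (hypothetical) preliminary position and angle estimate."""
    conn, _ = srv.accept()
    request = conn.recv(1024)           # e.g. b"RSSI 1:-52 2:-60 3:-71"
    conn.sendall(b"POS 3.0 4.0 30.0")   # x (m), y (m), angle (deg)
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))              # ephemeral port for the demo
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=cloud_server, args=(srv,))
t.start()

# Robot side (client): send the tag readings, receive the position estimate.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"RSSI 1:-52 2:-60 3:-71")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
```

In a real deployment the server would loop over connections and the image payloads of step 4) would need length-prefixed framing rather than a single recv.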
In step 4), the similarity between the real-time captured image and the images in the database is judged with the structural similarity (SSIM) algorithm. SSIM is an image quality assessment method: based on an HVS model, it simulates the organs of the human visual system that are relevant to quality perception and combines their outputs to extract structural information from an image, so the similarity of the structural information of two images can serve as the similarity of the images themselves. First, the structural information of an image should not be affected by illumination, so the brightness must be removed when computing it, that is, the image mean is subtracted. Second, the structural information should not be affected by the image contrast, so the image variance is normalized. Finally, the correlation coefficients of the two processed images (their means, variances and covariance) are computed, and from these three quantities the structural information of the images is obtained. The specific calculation is as follows:
l(x, y) = (2·μx·μy + C1) / (μx² + μy² + C1)
c(x, y) = (2·σx·σy + C2) / (σx² + σy² + C2)
s(x, y) = (σxy + C3) / (σx·σy + C3)

wherein l(x, y) represents the luminance function, c(x, y) represents the contrast function, and s(x, y) represents the structure function; x and y represent the pixel matrices corresponding to the two image blocks; μx, μy, σx², σy² and σxy represent the means, variances and covariance respectively; and C1, C2, C3 are small constants that prevent the denominators from being equal to 0.
The image similarity formula obtained from the above three formulas is:

SSIM(x, y) = [l(x, y)]^a · [c(x, y)]^b · [s(x, y)]^c

wherein a, b and c adjust the relative importance of the three components and are all set to 1 for convenience of calculation. The larger the result of the image similarity formula, the higher the matching degree of the two images.
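A minimal sketch of the SSIM computation on two equally sized gray-level blocks, with a = b = c = 1 so the product l·c·s collapses into the familiar two-factor form. C1 and C2 use the common defaults for 8-bit images ((0.01·255)² and (0.03·255)²); the block values are arbitrary:

```python
C1, C2 = 6.5025, 58.5225  # stabilizing constants for 8-bit dynamic range

def ssim(x, y):
    """SSIM between two equally sized gray-level blocks given as flat lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                       # means (luminance)
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)          # variances (contrast)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)  # covariance
    # With a = b = c = 1 (and C3 = C2/2), l*c*s reduces to this two-factor form.
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

block = [52, 55, 61, 59, 79, 61, 76, 41, 70]
identical = ssim(block, block)                   # identical blocks score 1.0
brighter = ssim(block, [v + 30 for v in block])  # luminance shift lowers the score
```

A higher score means a better match, which is exactly how the best matching database image is selected in step 4).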
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. An indoor cloud robot angle positioning method based on position and visual information optimization comprises a cloud end, a mobile robot to be positioned, an RFID reader-writer and a camera, wherein the RFID reader-writer and the camera are arranged on the mobile robot, and the method is characterized by comprising the following steps:
step 1) receiving preliminary positioning information: the mobile robot receives, through the RFID reader-writer, the received signal strength (RSSI) transmitted by active tags together with their corresponding identification (ID) values as the preliminary positioning information;
step 2) acquiring preliminary position information, sending the received preliminary positioning information to a cloud end by the mobile robot, and calculating a preliminary position coordinate of the mobile robot by the cloud end according to the acquired RSSI value;
step 3) determining an image search range: selecting, in the image database, the interval that takes the preliminary position coordinate obtained by the RFID positioning in step 2) as its center and the prior probability error R as its radius as the image search area;
step 4) image matching: the mobile robot obtains images of the surrounding scene through the camera and sends them to the cloud; the image information obtained in real time is denoised with an image processing method, and the images shot in real time are matched against the images in the search area of step 3) using the structural similarity (SSIM) algorithm to find the best matching image; if matching succeeds, proceed to the next step; if matching fails, return to step 1);
and step 5) obtaining coordinate position and angle information: obtaining the position coordinates and angle information from the matched image in the image database.
2. The indoor cloud robot angle positioning method based on position and visual information optimization according to claim 1, wherein in the step 1) and the step 2), the signal strength RSSI value read by the RFID reader-writer is converted into the absolute distance from the RFID reader-writer to the corresponding tag according to the equation of a signal propagation model, and the position coordinates of the mobile robot are then obtained by trilateration.
3. The indoor cloud robot angle positioning method based on position and visual information optimization according to claim 2, wherein the signal propagation model is an equation describing how the RSSI transmitted by the tag and received by the RFID reader-writer attenuates as the propagation distance increases in an indoor environment; the signal propagation model adopts a log-normal distribution propagation loss model, whose equation is:

RSSI(d) = RSSI(d0) − 10·N·lg(d/d0) + Xσ

In the formula, RSSI(d) is the RSSI value of the tag received by the RFID reader-writer at distance d; RSSI(d0) is the signal strength RSSI value received by the RFID reader-writer at the reference point d0; N is the environment-dependent path loss exponent; Xσ is a Gaussian-distributed random variable with mean 0, i.e. the attenuation of the signal through obstacles; d0 is the near-ground reference distance; and d is the distance between the reader-writer and the tag;

Using the log-normal distribution propagation model, the functional relationship between the distance d from the RFID reader-writer to the tag and the tag signal strength RSSI received by the RFID reader-writer is obtained as:

d = d0 · 10^((A − RSSI) / (10·N))

In the formula, the constant A is the signal strength received at the reference distance d0; the values of A and N determine the relationship between the received signal strength RSSI value and the signal transmission distance; A and N are closely related to the use environment and can be treated as constants once the use environment is determined.
4. The indoor cloud robot angle positioning method based on position and visual information optimization according to claim 2 or 3, wherein the n RSSI values received at the same tag position are filtered with a Gaussian model to select the RSSI values that occur with high probability, and the geometric mean of these values is then taken, which suppresses low-probability noise data and increases the positioning accuracy; the Gaussian distribution function is:

f(x) = (1 / (σ·√(2π))) · exp(−(x − μ)² / (2σ²))

In the formula, μ is the mean of the n RSSI values and σ² is their variance;

After the Gaussian filtering, the range of effective signal strength RSSI values is obtained; all the RSSI values within this range are taken out, and their geometric mean is calculated to obtain the final RSSI value.
5. The indoor cloud robot angle positioning method based on position and visual information optimization according to claim 1, wherein in step 1), a plurality of tags are provided, and the tags are respectively placed at each corner of the indoor space.
6. The indoor cloud robot angle positioning method based on position and visual information optimization according to claim 1, wherein in the step 3), the image database is built by dividing the indoor space to be positioned into several parts, taking a sample point at fixed intervals, photographing the entire surrounding scene at each point at a fixed angular step, recording the position coordinates and angle information at the time each image is taken, and establishing a corresponding database stored in the cloud to form the image database.
7. The indoor cloud robot angle positioning method based on position and visual information optimization according to claim 1, wherein in the step 2), the mobile robot and the cloud communicate using a C/S (client/server) architecture, wherein the mobile robot serves as the client and collects data, and the cloud serves as the server and performs the data-related computation.
8. The indoor cloud robot angle positioning method based on position and visual information optimization of claim 1, wherein in the step 4), the structural similarity SSIM algorithm is as follows:

l(x, y) = (2·μx·μy + C1) / (μx² + μy² + C1)
c(x, y) = (2·σx·σy + C2) / (σx² + σy² + C2)
s(x, y) = (σxy + C3) / (σx·σy + C3)

wherein l(x, y) represents the luminance function, c(x, y) represents the contrast function, and s(x, y) represents the structure function; x and y represent the pixel matrices corresponding to the two image blocks; μx, μy, σx², σy² and σxy represent the means, variances and covariance respectively; and C1, C2, C3 are small constants that prevent the denominators from being equal to 0;

The image similarity formula obtained from the above three formulas is:

SSIM(x, y) = [l(x, y)]^a · [c(x, y)]^b · [s(x, y)]^c

wherein a, b and c adjust the relative importance of the three components and take customized values; the larger the result calculated by the image similarity formula, the higher the matching degree of the two images.
CN201610658178.4A 2016-08-12 2016-08-12 Indoor cloud robot angle positioning method based on position and visual information optimization Pending CN106291517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610658178.4A CN106291517A (en) 2016-08-12 2016-08-12 Indoor cloud robot angle positioning method based on position and visual information optimization

Publications (1)

Publication Number Publication Date
CN106291517A true CN106291517A (en) 2017-01-04

Family

ID=57668621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610658178.4A Pending CN106291517A (en) 2016-08-12 2016-08-12 Indoor cloud robot angle positioning method based on position and visual information optimization

Country Status (1)

Country Link
CN (1) CN106291517A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539629A (en) * 2009-04-17 2009-09-23 南京师范大学 Remote sensing image change detection method based on multi-feature evidence integration and structure similarity
CN101820675A (en) * 2009-10-22 2010-09-01 深圳市同洲电子股份有限公司 Method, terminal and system for positioning mobile terminal
CN102928813A (en) * 2012-10-19 2013-02-13 南京大学 RSSI (Received Signal Strength Indicator) weighted centroid algorithm-based passive RFID (Radio Frequency Identification Device) label locating method
CN104936283A (en) * 2014-03-21 2015-09-23 中国电信股份有限公司 Indoor positioning method, server and system
CN105792353A (en) * 2016-03-14 2016-07-20 中国人民解放军国防科学技术大学 Image matching type indoor positioning method with assistance of crowd sensing WiFi signal fingerprint

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109302738A (en) * 2017-07-25 2019-02-01 杭州海康威视数字技术股份有限公司 A kind of methods, devices and systems adjusting wireless signal transmission power
CN107449427A (en) * 2017-07-27 2017-12-08 京东方科技集团股份有限公司 A kind of method and apparatus for generating navigation map
CN107560618A (en) * 2017-08-25 2018-01-09 河北工业大学 Robot indoor orientation method based on RFID
CN107560618B (en) * 2017-08-25 2019-10-29 河北工业大学 Robot indoor orientation method based on RFID
CN108318050A (en) * 2017-12-14 2018-07-24 富华科精密工业(深圳)有限公司 Central controller and the system and method for utilizing the central controller mobile navigation
CN111527378B (en) * 2017-12-28 2024-03-19 四川金瑞麒智能科学技术有限公司 Positioning method for realizing intelligent wheelchair through photo
CN111527378A (en) * 2017-12-28 2020-08-11 四川金瑞麒智能科学技术有限公司 Method for realizing positioning of intelligent wheelchair through photos
WO2019127257A1 (en) * 2017-12-28 2019-07-04 四川金瑞麒智能科学技术有限公司 Positioning method for intelligent wheelchair by means of photo implementation
CN110134117B (en) * 2018-02-08 2022-07-29 杭州萤石软件有限公司 Mobile robot repositioning method, mobile robot and electronic equipment
CN110134117A (en) * 2018-02-08 2019-08-16 杭州萤石软件有限公司 Mobile robot repositioning method, mobile robot and electronic equipment
WO2019196403A1 (en) * 2018-04-09 2019-10-17 京东方科技集团股份有限公司 Positioning method, positioning server and positioning system
CN108692720B (en) * 2018-04-09 2021-01-22 京东方科技集团股份有限公司 Positioning method, positioning server and positioning system
US11933614B2 (en) 2018-04-09 2024-03-19 Boe Technology Group Co., Ltd. Positioning method, positioning server and positioning system
CN108692720A (en) * 2018-04-09 2018-10-23 京东方科技集团股份有限公司 Localization method, location-server and positioning system
CN109612455A (en) * 2018-12-04 2019-04-12 天津职业技术师范大学 A kind of indoor orientation method and system
CN109497893A (en) * 2018-12-28 2019-03-22 湖南格兰博智能科技有限责任公司 A kind of sweeping robot and its method for judging self-position
CN109581287A (en) * 2019-01-22 2019-04-05 西南石油大学 Pressure buries personnel positioning method after a kind of shake based on Wi-Fi
CN109581287B (en) * 2019-01-22 2024-02-09 西南石油大学 Wi-Fi-based post-earthquake pressure burying personnel positioning method
CN110824525A (en) * 2019-11-15 2020-02-21 中冶华天工程技术有限公司 Self-positioning method of robot
CN111148022A (en) * 2019-12-31 2020-05-12 深圳市优必选科技股份有限公司 Mobile equipment and positioning method and device thereof
CN111148022B (en) * 2019-12-31 2021-06-04 深圳市优必选科技股份有限公司 Mobile equipment and positioning method and device thereof
CN110988795A (en) * 2020-03-03 2020-04-10 杭州蓝芯科技有限公司 Mark-free navigation AGV global initial positioning method integrating WIFI positioning
WO2022002149A1 (en) * 2020-06-30 2022-01-06 杭州海康机器人技术有限公司 Initial localization method, visual navigation device, and warehousing system
CN112051596A (en) * 2020-07-29 2020-12-08 武汉威图传视科技有限公司 Indoor positioning method and device based on node coding
CN112363111A (en) * 2020-11-09 2021-02-12 贵州电网有限责任公司 Equipment positioning method and system based on active tag and radio frequency reader
CN113534117A (en) * 2021-06-11 2021-10-22 广州杰赛科技股份有限公司 Indoor positioning method
CN113534117B (en) * 2021-06-11 2024-06-04 广州杰赛科技股份有限公司 Indoor positioning method
CN113490171B (en) * 2021-08-11 2022-05-13 重庆大学 Indoor positioning method based on visual label
CN113490171A (en) * 2021-08-11 2021-10-08 重庆大学 Indoor positioning method based on visual label
CN114245307A (en) * 2021-12-21 2022-03-25 北京云迹科技股份有限公司 Positioning method and device for robot, electronic equipment and storage medium
CN114526724A (en) * 2022-02-18 2022-05-24 山东新一代信息产业技术研究院有限公司 Positioning method and equipment for inspection robot
CN114526724B (en) * 2022-02-18 2023-11-24 山东新一代信息产业技术研究院有限公司 Positioning method and equipment for inspection robot
CN114543816A (en) * 2022-04-25 2022-05-27 深圳市赛特标识牌设计制作有限公司 Guiding method, device and system based on Internet of things
CN114543816B (en) * 2022-04-25 2022-07-12 深圳市赛特标识牌设计制作有限公司 Guiding method, device and system based on Internet of things

Similar Documents

Publication Publication Date Title
CN106291517A (en) Indoor cloud robot angle positioning method based on position and visual information optimization
CN207117844U (en) More VR/AR equipment collaborations systems
CN110163064B (en) Method and device for identifying road marker and storage medium
CN107782322B (en) Indoor positioning method and system and indoor map establishing device thereof
EP3779360B1 (en) Indoor positioning method, indoor positioning system, indoor positioning device, and computer readable medium
CN105279750B (en) It is a kind of that guide system is shown based on the equipment of IR-UWB and image moment
CN103134489B (en) The method of target localization is carried out based on mobile terminal
US20210274358A1 (en) Method, apparatus and computer program for performing three dimensional radio model construction
Park et al. When IoT met augmented reality: Visualizing the source of the wireless signal in AR view
CN109724603A (en) A kind of Indoor Robot air navigation aid based on environmental characteristic detection
CN109540144A (en) A kind of indoor orientation method and device
CN105865438A (en) Autonomous precise positioning system based on machine vision for indoor mobile robots
CN110136202A (en) A kind of multi-targets recognition and localization method based on SSD and dual camera
CN111028358A (en) Augmented reality display method and device for indoor environment and terminal equipment
CN105865419A (en) Autonomous precise positioning system and method based on ground characteristic for mobile robot
Feng et al. Visual Map Construction Using RGB‐D Sensors for Image‐Based Localization in Indoor Environments
Deng et al. Long-range binocular vision target geolocation using handheld electronic devices in outdoor environment
CN106716053B (en) The dimensional posture and position identification device of moving body
McIlroy et al. Kinectrack: 3d pose estimation using a projected dense dot pattern
CN111563934B (en) Monocular vision odometer scale determination method and device
CN111402324A (en) Target measuring method, electronic equipment and computer storage medium
CN108932478A (en) Object positioning method, device and shopping cart based on image
CN115578539B (en) Indoor space high-precision visual position positioning method, terminal and storage medium
CN106776813A (en) Large-scale indoor venue based on SSIFT algorithms is quickly positioned and air navigation aid
CN109612455A (en) A kind of indoor orientation method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170104