CN113988228B - Indoor monitoring method and system based on RFID and vision fusion


Info

Publication number
CN113988228B
CN113988228B
Authority
CN
China
Prior art keywords
target
camera
tag
pedestrian
rfid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111075680.XA
Other languages
Chinese (zh)
Other versions
CN113988228A (en)
Inventor
李敏
任俊星
李凌涵
杨阳
白入文
姜淼
王思叶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS
Priority to CN202111075680.XA
Publication of CN113988228A
Application granted
Publication of CN113988228B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, arrangements or provisions for transferring data to distant stations, the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04J: MULTIPLEX COMMUNICATION
    • H04J3/00: Time-division multiplex systems
    • H04J3/02: Details
    • H04J3/06: Synchronising arrangements
    • H04J3/0635: Clock or time synchronisation in a network
    • H04J3/0638: Clock or time synchronisation among nodes; Internode synchronisation

Abstract

The invention discloses an indoor monitoring method based on RFID and vision fusion, belonging to the fields of computer vision and Internet of Things security. An RFID reader, tags, and a commercial camera are used to match RFID objects with visual objects through an association algorithm, and RFID and visual position information is fused with a DS evidence fusion algorithm, so that the identification capability of RFID and the high precision of visual positioning are exploited to realize online detection and tracking of targets and to improve the accuracy and robustness of target positioning.

Description

Indoor monitoring method and system based on RFID and vision fusion
Technical Field
The invention belongs to the fields of computer vision and Internet of Things security, and particularly relates to an indoor monitoring method and system based on RFID and vision fusion.
Background
In recent years, the ability to detect and track personnel and equipment in indoor environments has greatly expanded the development of location-based services such as indoor navigation, transportation, and security monitoring. Indoor positioning is susceptible to occlusion by objects and to the irregular movement of personnel, making these scenarios more complex than outdoor environments. Existing localization techniques therefore focus mainly on the fusion of multiple detection sensors. Visual monitoring is the primary means of monitoring personnel location, behavior, and activity. Surveillance cameras are widely deployed in public places, particularly in security-sensitive areas, and their number is enormous; surveillance video has become the largest source of big data that needs to be processed. However, it is often difficult to identify objects in video data, and lightweight objects are hard to track. For example: when monitoring a theft with 5 people present at the same time, it is difficult to single out the specific thief; specific identities are difficult to determine during multi-pedestrian tracking; and small articles such as documents are difficult to track. On the other hand, Radio Frequency Identification (RFID) plays an important role in monitoring: a tag carries the electronic identity (ID) of its holder and can be used for locating and tracking the device holder in a monitoring system, and RFID tags can also be attached to objects, the elderly, and patients for tracking. RFID offers small volume, low cost, non-line-of-sight propagation, and automatic identification, but it is not as accurate as camera-based tracking technology. Owing to the specificity of the indoor environment, wireless signals are severely attenuated by walls and similar obstructions and are affected by reflections from different indoor objects, so positioning accuracy degrades. In addition, the movement of people indoors is irregular and the indoor background is complex, so positioning accuracy is usually on the order of meters. RFID is susceptible to environmental multipath effects, resulting in low positioning accuracy, while visual object tracking is susceptible to occlusion and background clutter, resulting in target loss. The positioning data derived from the individual sensors is therefore not only ambiguous but also only partially reliable. Existing methods use only simple weighting formulas or only the positioning data of the vision system, and cannot effectively fuse the data from the two.
Disclosure of Invention
The invention aims to provide an indoor monitoring method and system based on RFID and vision fusion, which use an RFID reader, tags, and a commercial camera to match RFID objects with visual objects through an association algorithm, fuse RFID and visual position information through a DS evidence fusion algorithm, realize online detection and tracking of targets by exploiting the identification capability of RFID and the high precision of visual positioning, and improve the accuracy and robustness of target positioning.
The technical scheme adopted for solving the technical problems is as follows:
an indoor monitoring method based on RFID and vision fusion comprises the following steps:
calibrating the positions of a camera and an RFID reader which are arranged indoors, synchronizing the time of the camera and the RFID reader, and establishing a coordinate conversion relation between the two-dimensional image coordinates of the camera and the three-dimensional coordinates of the indoor space;
detecting a target tag ID (identity) by an RFID reader, wherein the target tag is an RFID tag carried by a pedestrian target, coarsely positioning the area range of the target tag, and transmitting the acquired target tag ID and position information to a camera of the area range;
the camera shoots the region range, converts the position information in the three-dimensional coordinate system into the image coordinate system of the camera through the coordinate conversion relation, determines the rough position of the pedestrian in the image coordinate system, frames the rough position as a region of interest, detects the pedestrian in the current region of interest by using an image detection algorithm, and segments the pedestrian from the image; then the pedestrians are specifically located and tracked by using a correlation filtering algorithm, and the specific position and time information of the pedestrians is output;
the RFID reader trains an offline fingerprint tag model by utilizing a fingerprint learning method in advance, then carries out specific positioning and tracking on the target tag based on the offline fingerprint tag model, and outputs specific position and time information of the target tag;
and selecting a pedestrian target corresponding to the target tag from the pedestrians of the image through target association according to the specific position and time information of the pedestrians output by the camera and the specific position and time information of the target tag output by the RFID reader, and carrying out track fusion on the target tag and the pedestrian target to obtain real-time coordinate information of the pedestrian target.
Further, the Zhang Zhengyou calibration method is used to calibrate the positions of the camera and the RFID reader arranged indoors.
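The coordinate conversion relation between the indoor three-dimensional space and the camera's two-dimensional image coordinates can be summarized with a standard pinhole projection. The sketch below is a minimal illustration only, assuming intrinsics K and extrinsics R, t obtained from a Zhang Zhengyou calibration; all numeric values are placeholders, not parameters from the patent.

```python
import numpy as np

# Minimal world-to-image conversion sketch; K, R, t are placeholder values
# standing in for the results of a Zhang Zhengyou calibration.
K = np.array([[800.0, 0.0, 320.0],   # fx, skew, cx
              [0.0, 800.0, 240.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 2.5])        # camera translation in meters

def world_to_image(p_world: np.ndarray) -> np.ndarray:
    """Project a 3D indoor-space point onto the camera image plane."""
    p_cam = R @ p_world + t          # transform into the camera frame
    uvw = K @ p_cam                  # apply the pinhole intrinsics
    return uvw[:2] / uvw[2]          # perspective division -> pixel (u, v)

# Example: a coarse RFID position in front of the camera maps to a pixel,
# around which the region of interest is framed.
print(world_to_image(np.array([0.5, 0.2, 4.0])))
```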
Further, the RFID reader utilizes the LANDMARC algorithm to coarsely locate the pedestrian target.
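As one hedged sketch of how this coarse positioning could look, the snippet below implements the core LANDMARC idea: compare the target tag's RSSI vector with those of fixed reference tags, pick the k nearest neighbors in signal space, and take a weighted centroid of their known positions. The reference-tag layout, reader count, and k are illustrative assumptions, not values from the patent.

```python
import numpy as np

# LANDMARC-style coarse positioning sketch with 4 reference tags and 3 readers.
ref_pos = np.array([[0, 0], [0, 2], [2, 0], [2, 2]], dtype=float)  # reference tag coords (m)
ref_rssi = np.array([[-50, -60, -62],   # RSSI of each reference tag at 3 readers (dBm)
                     [-58, -52, -61],
                     [-61, -63, -51],
                     [-55, -54, -53]], dtype=float)

def landmarc(target_rssi: np.ndarray, k: int = 3) -> np.ndarray:
    """Estimate a coarse target-tag position from its RSSI vector."""
    e = np.linalg.norm(ref_rssi - target_rssi, axis=1)   # signal-space distances
    nearest = np.argsort(e)[:k]                          # k nearest reference tags
    w = 1.0 / (e[nearest] ** 2 + 1e-9)                   # LANDMARC weights 1/E^2
    w /= w.sum()
    return w @ ref_pos[nearest]                          # weighted centroid

print(landmarc(np.array([-54.0, -55.0, -54.0])))
```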
Further, the correlation filtering algorithm adopts an ECO algorithm.
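ECO itself is considerably more elaborate (factorized convolution operators, a compact sample model, and so on); the minimal MOSSE-style filter below is only a sketch of the correlation-filtering principle it builds on: learn a filter whose correlation response peaks at the target center, then locate that peak in the next frame. All names and values here are illustrative.

```python
import numpy as np

def gaussian_peak(shape, sigma=2.0):
    """Desired correlation response: a Gaussian peaked at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))

class MosseTracker:
    def __init__(self, patch, lam=1e-3):
        G = np.fft.fft2(gaussian_peak(patch.shape))
        F = np.fft.fft2(patch)
        self.A = G * np.conj(F)        # filter numerator (closed-form MOSSE solution)
        self.B = F * np.conj(F) + lam  # regularised denominator

    def locate(self, patch):
        """Correlate the learned filter with a new patch and return the peak."""
        resp = np.real(np.fft.ifft2((self.A / self.B) * np.fft.fft2(patch)))
        return np.unravel_index(np.argmax(resp), resp.shape)

rng = np.random.default_rng(0)
patch = rng.random((64, 64))
tracker = MosseTracker(patch)
print(tracker.locate(np.roll(patch, (3, 5), axis=(0, 1))))  # peak shifts with the target
```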
Further, the step of training the off-line fingerprint tag model by using the fingerprint learning method comprises the following steps:
establishing a three-dimensional coordinate system of an indoor space in advance, dividing a plurality of square grids, placing a tag in each square grid, and collecting a plurality of pieces of data to obtain fingerprint data in each square grid, wherein the fingerprint data comprises a tag ID and a signal intensity data set detected by a plurality of RFID readers;
acquiring coordinate positions corresponding to fingerprint data in all square grids, and then carrying out normalization processing on the fingerprint data;
dividing a positioning area into K macro areas by using a K-means clustering algorithm to obtain class labels of macro areas to which each square grid belongs;
and feeding the class label information into a BP neural network, training and calculating the loss against the actually collected data, and optimizing the network parameters to obtain the offline fingerprint tag model.
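A compact sketch of this offline training stage, under the assumption that fingerprints have already been collected per grid, might look as follows; the synthetic RSSI data, grid size, K, and network shape are all illustrative, with scikit-learn's KMeans standing in for the K-means step and MLPRegressor for the BP neural network.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Synthetic fingerprints: rssi[i] is the signal-strength vector of the tag in
# grid i as seen by 3 readers, pos[i] is the grid centre (all values assumed).
rng = np.random.default_rng(0)
pos = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
readers = np.array([[0, 0], [9, 0], [0, 9]], dtype=float)
rssi = -40.0 - 2.0 * np.linalg.norm(pos[:, None, :] - readers[None], axis=2)
rssi += rng.normal(0, 0.5, rssi.shape)

rssi_n = (rssi - rssi.mean(0)) / rssi.std(0)                 # normalisation step

K = 4                                                        # number of macro areas
macro = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(rssi_n)

# Feed the macro-area class label in alongside the fingerprint, then train a
# BP (multilayer perceptron) network against the true coordinates.
X = np.hstack([rssi_n, np.eye(K)[macro]])                    # RSSI + one-hot class label
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X, pos)
print("training MSE:", np.mean((model.predict(X) - pos) ** 2))
```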
Further, the RFID reader specifically locates and tracks the target tag by using a Kalman filtering algorithm based on an offline fingerprint tag model.
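A minimal constant-velocity Kalman filter is one plausible shape of this tracking step: the offline fingerprint model supplies noisy (x, y) estimates, and the filter smooths them into a track. The time step and noise levels below are assumptions, not values from the patent.

```python
import numpy as np

class TagKalman:
    def __init__(self, dt=0.1):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1.0]])   # constant-velocity model
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])   # position-only measurements
        self.Q = 0.01 * np.eye(4)                           # process noise (assumed)
        self.R = 0.25 * np.eye(2)                           # measurement noise, ~0.5 m std
        self.x = np.zeros(4)                                # state [px, py, vx, vy]
        self.P = np.eye(4)

    def step(self, z):
        """One predict/update cycle for a fingerprint position measurement z."""
        self.x = self.F @ self.x                            # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R             # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)         # correct with the measurement
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                   # smoothed tag position

kf = TagKalman()
for z in ([1.0, 1.0], [1.1, 1.2], [1.3, 1.3]):              # noisy fingerprint outputs
    print(kf.step(np.array(z)))
```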
Further, the target association method comprises the following steps:
calculating a distance matrix between the position coordinates of the target tag and the visual coordinates of each pedestrian, and converting the distance matrix into a probability function by introducing a zero-mean Gaussian kernel with covariance to obtain a distance probability matrix representing the matching probability of the target tag and the pedestrian target;
calculating a speed matrix between the position coordinates of the target label and the visual coordinates of each pedestrian, and converting the speed matrix into a probability function by introducing a zero-mean Gaussian kernel with covariance to obtain a speed probability matrix representing the matching probability of the target label and the pedestrian target;
establishing an allocation matrix between the target tag and the pedestrian according to the distance probability matrix and the speed probability matrix; and calculating the maximum matching probability by using a global optimization algorithm and a t distribution algorithm, and carrying out target association.
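Both probability matrices in the steps above can be built the same way, as the sketch below shows: compute a difference matrix (Euclidean distance or speed difference) between each tag and each pedestrian, then push it through a zero-mean Gaussian kernel. The positions, speeds, and kernel widths are illustrative assumptions.

```python
import numpy as np

def gaussian_prob(diff: np.ndarray, sigma: float) -> np.ndarray:
    """Zero-mean Gaussian kernel turning a difference matrix into probabilities."""
    return np.exp(-0.5 * (diff / sigma) ** 2)

tag_pos = np.array([[1.0, 2.0], [4.0, 1.0]])                 # RFID positions of m tags
ped_pos = np.array([[1.2, 2.1], [3.8, 0.9], [6.0, 6.0]])     # visual positions of n pedestrians
tag_vel = np.array([0.8, 1.2])                               # tag speeds (m/s)
ped_vel = np.array([0.9, 1.1, 0.2])                          # pedestrian speeds (m/s)

d = np.linalg.norm(tag_pos[:, None, :] - ped_pos[None, :, :], axis=2)  # m x n distances
v = np.abs(tag_vel[:, None] - ped_vel[None, :])                        # m x n speed gaps

P_dist = gaussian_prob(d, sigma=1.0)      # distance probability matrix
P_vel = gaussian_prob(v, sigma=0.5)       # speed probability matrix
P = P_dist * P_vel                        # joint matching probability per (tag, pedestrian)
print(P.round(3))
```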
Further, track fusion is performed based on DS evidence fusion theory, comprising the following steps:
taking the position and time information output by the camera and the RFID reader as evidence, and calculating a basic probability distribution function BPA;
according to the Dempster combination rule, carrying out spatial domain judgment fusion on evidences output by different cameras and RFID readers, or carrying out time domain judgment fusion on evidences of each camera or RFID reader at different moments, so as to obtain BPA under the combined evidence;
based on the evidence theory of prior probability error distribution, the position information is fused according to the BPA under the combined evidence, the weight of each camera and the weight of the RFID reader are adjusted, and the track with the maximum reliability is selected as the final fusion track.
Further, the Dempster combining rule is:
(1) Dynamically setting a basic probability value according to the probability error values of different cameras or RFID readers at different positions, the basic probability distribution satisfying $m(A)=\sum_{i} r_i\,m_i(A)$ with $\sum_{i} r_i=1$, where $m_i(A)$ represents the basic probability distribution of a camera or RFID reader at identification frame $i$ and $r_i$ represents the basic probability value it obtains; after the value of m(A) is obtained, the basic probability distributions of the camera and the RFID reader are fused and calculated using the Dempster combination rule;
(2) Under the condition that the conflict paradox occurs, if the number of points of the paradox is below a preset experimental threshold value, weighting the visual position of the pedestrian as 1; if the number of paradox points exceeds the experimental threshold, the target tag position detected by the RFID reader is given a weight of 1.
Further, firstly, optimizing the fused track information; secondly, feeding back the optimized track information to the camera and the RFID reader; and dynamically weighting the position information output by the camera and the RFID reader according to error distribution information of a previous experiment by utilizing a DS evidence fusion theory, and carrying out online correction on parameters of the RFID and the camera to ensure the positioning accuracy and the positioning reliability at the next moment.
An indoor monitoring system based on RFID and vision fusion, comprising:
the camera is arranged indoors and used for shooting visual image information; the visual data processing module is used for processing the image information shot by the camera: converting the position information in the three-dimensional coordinate system into the image coordinate system of the camera through the coordinate conversion relation, determining the rough position of the pedestrian in the image coordinate system, selecting the rough position as a region of interest, detecting the pedestrian in the current region of interest by using an image detection algorithm, and segmenting the pedestrian from the image; then, a correlation filtering algorithm is utilized to specifically locate and track pedestrians;
the RFID reader is arranged indoors and has a set corresponding relation with the camera, and is used for detecting the ID and the position information of the target tag and performing rough positioning, training an offline fingerprint tag model by a fingerprint learning method in advance, and performing specific positioning and tracking on the target tag by the offline fingerprint tag model;
the target association module is used for selecting a pedestrian target corresponding to the target tag from the image according to the specific position and time information of the pedestrian output by the camera and the specific position and time information of the target tag output by the RFID reader, and matching the pedestrian target to realize target association;
and the data fusion center is used for carrying out track fusion on the target label and the pedestrian target to obtain real-time coordinate information of the pedestrian target.
After camera calibration and multi-sensor time synchronization, RFID tracks and locates the tag object, and visual information is used to detect and track the pedestrian target carrying the tag. Finally, target association and fusion are performed on the obtained position information to obtain the final real coordinates, tracks, and object IDs. Using a machine learning method, the RFID module feeds the pre-processed received signal strength (Received Signal Strength Indicator, RSSI) into the trained neural network and then refines the estimated position with Kalman filtering, yielding a smooth track. The position and time information of the tag is obtained from the offline fingerprint tag model, while the vision module, after online tracking with a correlation filtering method, outputs the spatial position and time information of the pedestrian using the camera calibration information. Objects are associated through the RFID and visual position distance matrix and speed distance matrix, and the ID information is matched to the visual information. The beneficial effects of the invention are as follows: the identification capability of RFID and the high precision of visual positioning enable online detection and tracking of targets, so that important objects can be monitored, bringing the following advantages and improving system performance: 1. visual target detection does not enable face detection, protecting personal privacy; 2. RFID guidance reduces the time consumption and complexity of video detection; 3. the RFID and vision modules can work independently, so the system can be used flexibly in different scenarios; 4. the DS evidence fusion algorithm improves positioning accuracy and robustness.
Drawings
Fig. 1 is a flowchart of an indoor monitoring method according to an embodiment of the present invention.
Fig. 2 is a flowchart of another indoor monitoring method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an association module according to an embodiment of the present invention.
FIG. 4 is a fusion flow chart of one form of embodiment of the present invention.
FIG. 5 is a fusion flow chart of another form of embodiment of the present invention.
Detailed Description
In order to make the above features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
The invention utilizes the identification of RFID and the high precision of visual positioning to realize the on-line detection and tracking of the target, and further fuses the position information of the two, thereby improving the positioning precision. Fig. 1 is a workflow diagram of the method of the present invention, which is mainly divided into the following four stages:
the first stage is an RFID guiding stage, the RFID information is utilized for positioning to obtain the area range of the target, the vision is guided to detect the target, and the target detection searching range is greatly reduced.
The second stage is a target detection stage, and the moving target is detected by using an image detection algorithm after camera calibration and time synchronization.
The third stage is the tracking and positioning stage. The RFID reader performs positioning with a fingerprint algorithm, specifically: establish a coordinate system for the indoor space, divide it into square grids, place a target tag in each grid, and collect multiple pieces of data to obtain the fingerprint data in each grid; acquire the coordinate positions corresponding to the fingerprint data in all grids; then normalize the fingerprint data and divide the positioning area into K macro areas with the K-means clustering algorithm, obtaining the class label of the macro area to which each grid belongs. The class label information is fed into the BP neural network for training and loss computation against the actually collected data, the network parameters are optimized, and the resulting offline fingerprint tag model is used for positioning. The camera tracks with a correlation filtering algorithm to obtain the coordinate information of the corresponding pedestrian target, and combines it with time information to derive quantities such as speed.
The fourth stage is the target association and track fusion stage: the position distance matrix and the speed distance matrix of the target are obtained through the two sensors, and association is carried out using a global optimization and t distribution method, so that the tag ID is matched to the visual pedestrian.
The technical route of one example implementation of the present invention is described in detail in connection with fig. 2.
1) Set up the experimental environment and deploy the RFID reader and the camera. Acquire the data required for the experiment, obtain the spatial position coordinates of the RFID reader and the camera, use the Zhang Zhengyou calibration method to map the camera's two-dimensional pixel coordinates to the spatial coordinates, and synchronize the time of the RFID reader and the camera.
2) Coarsely locate the tag ID with the LANDMARC algorithm to determine the relevant camera and the position of the region of interest, then perform visual target detection within the region of interest to obtain the pedestrian target. RFID tracking and visual target tracking proceed synchronously: the RFID reader locates the tag ID with the fingerprint algorithm and tracks the tag with Kalman filtering, while visual target tracking uses a correlation filtering algorithm (e.g., the ECO algorithm) to track pedestrian targets.
The corresponding speed information is obtained from the position change information combined with the time information.
3) Carry out target association on the obtained feature information under different conditions using algorithms such as global optimization and the t distribution.
As shown in fig. 3, three abnormal situations are considered in this embodiment. For case (a), the IDs correspond directly to the marked pedestrian targets and are associated directly; for cases (b) and (c), the camera cannot identify the pedestrian target and its corresponding ID, so the specific ID of the specific target is obtained by associating features such as coordinates and speeds from RFID and visual tracking, using global optimization and the t distribution.
According to the invention, the RFID tag target is matched to the visual object by a joint probability data association method: the distance matrix between the RFID tag target's position point and each visual coordinate is calculated and converted into a probability function, yielding the matching probability. The matching probability indicates the likelihood that the corresponding object is the tag's target; a higher matching probability indicates that the tag is more likely to lie on that visual track. The distance probability matrix and the speed probability matrix of the RFID reader and the camera are calculated as follows:
Assume there are m tags T and n pedestrians V in the scene, of which m carry tags and n-m do not; the m tagged pedestrians $C_i$ must be matched to the m tags $T_j$. With x and y the coordinate values and $\sigma_x$, $\sigma_y$ the standard deviations of x and y respectively, the Euclidean distance between $C_i$ and $T_j$ is $d_{ij}=\sqrt{(x_{C_i}-x_{T_j})^2+(y_{C_i}-y_{T_j})^2}$. Since the position measurement of an RFID tag contains random errors that follow a typical Gaussian distribution, this distance is converted into a probability function using a zero-mean Gaussian kernel with a specific covariance: $p_{ij}=\exp\left(-\frac{1}{2}\left(\frac{(x_{C_i}-x_{T_j})^2}{\sigma_x^2}+\frac{(y_{C_i}-y_{T_j})^2}{\sigma_y^2}\right)\right)$.
on the other hand, the speed between pedestrians is often human-based, and the speed can also be a correlation factor, which is defined as follows:
wherein the method comprises the steps ofRepresenting the displacement of the tag Ti.
The velocity is converted to a probability measure by a gaussian kernel with zero mean and covariance matrix, analogically to the spatial distance.
The above is the matching process for a single location point; in a practical scene it must be extended over a sliding time window. For example, writing the tag coordinates at time t as $x(t)=(x_1(t),x_2(t),\dots,x_n(t))^T$ and $y(t)=(y_1(t),y_2(t),\dots,y_m(t))^T$, the distance within the sliding time window is $D_{ij}=\int d_{ij}(t)\,dt$. Applying the Gaussian kernel to $D_{ij}$ gives the matching probability within the sliding-window time.
For each observed tag and visual object, an allocation matrix $A=[p_{ij}]$ can thus be established, where each row gives the probabilities of assigning tag $T_j$ to the different visual targets $C_i$; the visual target assigned to $T_j$ is determined by finding the maximum probability value. Association is performed using global optimization and the t distribution, matching the tag ID to the visual pedestrian. The distance matrix is taken as the example here, with distance converted to probability by the Gaussian kernel; the speed matrix can likewise be used to build an allocation matrix, converting speed to a probability measurement through a Gaussian kernel with zero mean and a covariance matrix, analogously to the spatial distance.
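The global optimization step can be sketched with the Hungarian algorithm as one standard global assignment solver (the t-distribution weighting is omitted here); maximizing the total matching probability is equivalent to minimizing the summed negative log-probabilities. The allocation matrix values below are illustrative only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative allocation matrix: P[j, i] is the probability of assigning
# tag T_j to visual target C_i (2 tags, 3 visual pedestrians).
P = np.array([[0.81, 0.05, 0.01],
              [0.07, 0.74, 0.02]])

# Maximising total matching probability == minimising negative log-probability.
rows, cols = linear_sum_assignment(-np.log(P + 1e-12))
for j, i in zip(rows, cols):
    print(f"tag T_{j} -> visual target C_{i} (p = {P[j, i]:.2f})")
```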
Fig. 4 is a flow chart for the fusion of RFID and visual location information.
The basic idea of the data fusion is as follows: first, calculate the basic probability distribution function (BPA) of the evidence under all cameras and RFID readers; then calculate the BPA under all combined evidence according to the combination rule; finally, select the hypothesis with the maximum credibility as the fusion result according to the decision rule. Basic probability distribution function (Basic Probability Assignment Function, BPA): to describe the degree of support for the hypotheses, a basic probability distribution, also called a mass function, is introduced on $2^\Theta$. The mass function $m: 2^\Theta \to [0,1]$ satisfies
$m(\varnothing)=0, \qquad \sum_{A\subseteq\Theta} m(A)=1,$
where m(A) is the basic evidence support for proposition A.
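A minimal sketch of Dempster's combination rule over a two-hypothesis frame of discernment follows; BPAs are represented as dictionaries from frozenset hypotheses to masses, and the camera and RFID evidence values are illustrative numbers only.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two BPAs with Dempster's rule, renormalising away conflict mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

A, B = frozenset({"track1"}), frozenset({"track2"})
theta = A | B                                        # the full frame of discernment
m_cam = {A: 0.7, B: 0.1, theta: 0.2}                 # camera evidence (illustrative)
m_rfid = {A: 0.6, B: 0.2, theta: 0.2}                # RFID evidence (illustrative)
print(dempster_combine(m_cam, m_rfid))
```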
Step 1: coarsely locate the target according to the RFID signal strength, calibrate the camera in the corresponding region, synchronize time, and determine the region of interest.
Step 2: perform background segmentation and target detection on the region of interest to identify the pedestrian.
Step 3: for the target identified in step 2, perform visual tracking through the camera and tracking through RFID.
Step 4: feed the features of the two modalities at time T from step 3 back to the RFID reader, the camera, and the fusion center; using the fed-back positioning information together with DS evidence fusion, dynamically weight the RFID and visual positioning information according to the error distribution of previous experiments, and correct the parameters of the RFID and the camera online to ensure positioning accuracy and reliability at time T+1.
Fig. 5 is another flow chart for the fusion of RFID and visual location information.
The Dempster combination rule from evidence theory is used to fuse the target recognition evidence provided by different sensors (namely the camera and the RFID reader), known as spatial-domain judgment fusion; the recognition evidence obtained by each sensor at different moments can also be fused, known as time-domain judgment fusion. The prior error distribution information is used as the probability distribution factor, and judgments are made according to the decision rule. The Dempster combination rule is likewise used to fuse the position evidence provided by the RFID and the camera. In a system fusing RFID and a camera, the vision system may fail to track due to occlusion, and the location point of the RFID system may fall outside the location range of the vision system; both are conflict-paradox problems that often arise in evidence theory. In a fused RFID and vision system, the basic probability values of different sensors differ at different positions. To reflect the actual position-fusion scene more truly, the invention fuses the position information of the system with an evidence theory based on prior probability error distribution, continuously adjusting the weight of each sensor according to the actual situation during fusion. The Dempster combination rule is analyzed in detail below:
(1) According to the differences in positioning and tracking precision and robustness of each sensor, the basic probability value is dynamically set according to the probability error values of different sensors at different positions, and the basic probability distribution satisfies $m(A)=\sum_{i=1}^{n} r_i\,m_i(A)$ with $\sum_{i=1}^{n} r_i=1$, where $m_i(A)$ represents the basic probability distribution of the sensor at position $i$, $n$ is the number of positions, and $r_i$ represents the basic probability value it obtains. After the value of m(A) is obtained, the basic probability distributions of the RFID and the camera are fused and calculated using the Dempster synthesis rule.
(2) In the event of a conflict paradox, if the number of paradox points is below the preset experimental threshold, the visual position is given a weight of 1; if the number of paradox points exceeds the threshold, visual target tracking is deemed to have failed, and the target tag position detected by the RFID reader is given a weight of 1.
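The weighting and fallback just described can be sketched as follows; the weights r_i and the paradox threshold are illustrative assumptions, not values fixed by the patent.

```python
def weighted_bpa(bpas: list[dict], weights: list[float]) -> dict:
    """m(A) = sum_i r_i * m_i(A), with the weights r_i summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    out: dict = {}
    for m_i, r in zip(bpas, weights):
        for hyp, mass in m_i.items():
            out[hyp] = out.get(hyp, 0.0) + r * mass
    return out

def choose_position(vision_pos, rfid_pos, paradox_points: int, threshold: int = 5):
    """Conflict-paradox fallback: trust vision below the threshold, RFID above it."""
    return vision_pos if paradox_points <= threshold else rfid_pos

# Example: camera evidence weighted 0.6, RFID evidence weighted 0.4.
m = weighted_bpa([{"A": 0.7, "Theta": 0.3}, {"A": 0.5, "Theta": 0.5}], [0.6, 0.4])
print(m)  # {'A': 0.62, 'Theta': 0.38}
```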
Compared with the traditional DS evidence theory, the evidence fusion method based on prior probability error distribution has the following advantages when handling multi-sensor position fusion:
(1) The method considers the positioning accuracy of different sensors at different positions, and is more in line with the actual situation.
(2) When multiple evidences are in serious conflict, the fusion algorithm based on the prior probability error distribution can obtain better results.
The above is a specific implementation flow of the embodiment of the present invention.
The invention uses the identification capability of RFID and the high precision of visual positioning for online detection and tracking of the target. After time and space synchronization, the RFID outputs position and time information using the fingerprint positioning method, and the visual camera performs target detection within the region of interest determined by the RFID, performs visual target tracking with a correlation filtering algorithm, and outputs position and time information. Tag-vision association is performed by global optimization and the t distribution over the position coordinate matrix and the speed coordinate matrix. The RFID module and the vision module can also be used independently depending on the environment, making the equipment more flexible to use.
Experimental verification:
The positioning result of the fusion algorithm is compared with classical single-sensor algorithms. The classical BP algorithm using the single-sensor RFID achieves decimeter-level positioning accuracy with an error of 0.53 m. The visual ECO algorithm using the single-sensor camera (ECO was the top-ranked correlation filtering target tracking algorithm in the 2017 VOT competition and can reach a tracking speed of 66 fps on a CPU) achieves centimeter-level accuracy with a mean square error of 0.12 m in the absence of occlusion. The fusion result shows a mean square error of 3.39 cm on the x-axis and 7.09 cm on the y-axis, a total mean square error of 9.80 cm, and a standard deviation of 7.21 cm. The accuracy obtained is slightly higher than that obtained using computer vision alone, demonstrating the effectiveness of the RFID-vision fusion.
Although the present invention has been described with reference to the above embodiments, it should be understood that the invention is not limited thereto, and that modifications and equivalents may be made thereto by those skilled in the art, which modifications and equivalents are intended to be included within the scope of the present invention as defined by the appended claims.

Claims (11)

1. An indoor monitoring method based on RFID and vision fusion is characterized by comprising the following steps:
calibrating the positions of a camera and an RFID reader which are arranged indoors, synchronizing the time of the camera and the RFID reader, and establishing a coordinate conversion relation between the two-dimensional image coordinates of the camera and the three-dimensional coordinates of the indoor space;
detecting a target tag ID (identity) by an RFID reader, wherein the target tag is an RFID tag carried by a pedestrian target, coarsely positioning the area range of the target tag, and transmitting the acquired target tag ID and position information to a camera of the area range;
the camera shoots the region range, converts the position information in the three-dimensional coordinate system into the image coordinate system of the camera through the coordinate conversion relation, determines the rough position of the pedestrian in the image coordinate system, frames the rough position as a region of interest, detects the pedestrian in the current region of interest by using an image detection algorithm, and segments the pedestrian from the image; then the pedestrians are specifically located and tracked by using a correlation filtering algorithm, and the specific position and time information of the pedestrians is output;
the RFID reader trains an offline fingerprint tag model by utilizing a fingerprint learning method in advance, then carries out specific positioning and tracking on the target tag based on the offline fingerprint tag model, and outputs specific position and time information of the target tag;
and selecting a pedestrian target corresponding to the target tag from the pedestrians of the image through target association according to the specific position and time information of the pedestrians output by the camera and the specific position and time information of the target tag output by the RFID reader, and carrying out track fusion on the target tag and the pedestrian target to obtain real-time coordinate information of the pedestrian target.
2. The method of claim 1, wherein the camera and the RFID reader disposed indoors are position-calibrated using the Zhang Zhengyou calibration method.
3. The method of claim 1, wherein the RFID reader utilizes a LANDMARC algorithm to coarsely locate the pedestrian target.
4. The method of claim 1, wherein the correlation filtering algorithm employs an ECO algorithm.
5. The method of claim 1, wherein training an offline fingerprint tag model using a fingerprint learning method comprises:
establishing a three-dimensional coordinate system of an indoor space in advance, dividing a plurality of square grids, placing a tag in each square grid, and collecting a plurality of pieces of data to obtain fingerprint data in each square grid, wherein the fingerprint data comprises a tag ID and a signal intensity data set detected by a plurality of RFID readers;
acquiring coordinate positions corresponding to fingerprint data in all square grids, and then carrying out normalization processing on the fingerprint data;
dividing a positioning area into K macro areas by using a K-means clustering algorithm to obtain class labels of macro areas to which each square grid belongs;
and feeding the class label information into a BP neural network, training and calculating the loss against the actually collected data, and optimizing the network parameters to obtain the offline fingerprint tag model.
6. The method of claim 1, wherein the RFID reader uses a kalman filter algorithm to specifically locate and track the target tag based on an offline fingerprint tag model.
7. The method of claim 1, wherein the target association method comprises the steps of:
calculating a distance matrix between the position coordinates of the target tag and the visual coordinates of each candidate pedestrian, and converting the distance matrix into a probability function by introducing a zero-mean Gaussian kernel with covariance to obtain a distance probability matrix representing the matching probability of the target tag and the pedestrian target;
calculating a speed matrix between the position coordinates of the target label and the visual coordinates of each candidate pedestrian, and converting the speed matrix into a probability function by introducing a zero-mean Gaussian kernel with covariance to obtain a speed probability matrix representing the matching probability of the target label and the pedestrian target;
establishing an allocation matrix between the target tag and the pedestrian according to the distance probability matrix and the speed probability matrix; and calculating the maximum matching probability by using a global optimization algorithm and a t distribution algorithm, and carrying out target association.
8. The method of claim 1, wherein trajectory fusion based on DS evidence fusion theory comprises the steps of:
taking the position and time information output by the camera and the RFID reader as evidence, and calculating a basic probability distribution function BPA;
according to the Dempster combination rule, carrying out spatial domain judgment fusion on evidences output by different cameras and RFID readers, or carrying out time domain judgment fusion on evidences of each camera or RFID reader at different moments, so as to obtain BPA under the combined evidence;
based on the evidence theory of prior probability error distribution, the position information is fused according to the BPA under the combined evidence, the weight of each camera and the weight of the RFID reader are adjusted, and the track with the maximum reliability is selected as the final fusion track.
9. The method of claim 8, wherein the Dempster combining rule is:
(1) Dynamically setting the basic probability distribution according to the probability error values of different cameras or RFID readers at different positions, the distribution satisfying $m(A)=\sum_{i=1}^{n} r_i\,m_i(A)$ with $\sum_{i=1}^{n} r_i=1$, where $m_i(A)$ represents the basic probability distribution of a camera or RFID reader at position $i$, $n$ is the number of positions, and $r_i$ represents the basic probability value it obtains; then the basic probability distributions of the camera and the RFID reader are fused and calculated using the Dempster combination rule;
(2) Under the condition that the conflict paradox occurs, if the number of points of the paradox is below a preset experimental threshold value, weighting the visual position of the pedestrian as 1; if the number of paradox points exceeds the experimental threshold, the target tag position detected by the RFID reader is given a weight of 1.
10. The method of claim 9, wherein the fused track information is optimized first; secondly, feeding back the optimized track information to the camera and the RFID reader; and dynamically weighting the position information output by the camera and the RFID reader according to error distribution information of a previous experiment by utilizing a DS evidence fusion theory, and carrying out online correction on parameters of the RFID and the camera to ensure the positioning accuracy and the positioning reliability at the next moment.
11. An indoor monitoring system based on RFID and vision fusion, comprising:
the camera is arranged indoors and used for shooting visual image information;
the visual data processing module is used for processing image information shot by the camera, converting the position information in the three-dimensional coordinate system into the image coordinate system of the camera through the coordinate conversion relation, determining the rough position of the pedestrian in the image coordinate system, selecting the rough position as a region of interest, detecting the pedestrian in the current region of interest by using an image detection algorithm, and segmenting the pedestrian from the image; then, a correlation filtering algorithm is utilized to specifically locate and track pedestrians;
the RFID reader is arranged indoors and has a set corresponding relation with the camera, and is used for detecting the ID and the position information of the target tag and performing rough positioning, training an offline fingerprint tag model by a fingerprint learning method in advance, and performing specific positioning and tracking on the target tag by the offline fingerprint tag model;
the target association module is used for selecting a pedestrian target corresponding to the target tag from the image according to the specific position and time information of the pedestrian output by the camera and the specific position and time information of the target tag output by the RFID reader, and matching the pedestrian target to realize target association;
and the data fusion center is used for carrying out track fusion on the target label and the pedestrian target to obtain real-time coordinate information of the pedestrian target.
CN202111075680.XA 2021-09-14 2021-09-14 Indoor monitoring method and system based on RFID and vision fusion Active CN113988228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111075680.XA CN113988228B (en) 2021-09-14 2021-09-14 Indoor monitoring method and system based on RFID and vision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111075680.XA CN113988228B (en) 2021-09-14 2021-09-14 Indoor monitoring method and system based on RFID and vision fusion

Publications (2)

Publication Number Publication Date
CN113988228A CN113988228A (en) 2022-01-28
CN113988228B (en) 2024-04-09

Family

ID=79735819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111075680.XA Active CN113988228B (en) 2021-09-14 2021-09-14 Indoor monitoring method and system based on RFID and vision fusion

Country Status (1)

Country Link
CN (1) CN113988228B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578756B (en) * 2022-11-08 2023-04-14 杭州昊恒科技有限公司 Personnel fine management method and system based on precise positioning and video linkage
CN117793628B (en) * 2024-02-26 2024-05-07 微澜能源(江苏)有限公司 Hydropower station visitor positioning method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202121708U (en) * 2011-04-21 2012-01-18 南京百合商务有限公司 Campus intelligent security monitoring and comprehensive service system
CN102510476A (en) * 2011-10-28 2012-06-20 河海大学 Platform system of video monitoring integration information of network of things
CN102710928A (en) * 2011-10-09 2012-10-03 苏州元澄智能科技有限公司 Subway closed circuit television monitoring method fusing RFID (radio frequency identification)
CN105973228A (en) * 2016-06-28 2016-09-28 江苏环亚医用科技集团股份有限公司 Single camera and RSSI (received signal strength indication) based indoor target positioning system and method
CN109840504A (en) * 2019-02-01 2019-06-04 腾讯科技(深圳)有限公司 Article picks and places Activity recognition method, apparatus, storage medium and equipment
CN110726990A (en) * 2019-09-23 2020-01-24 江苏大学 Multi-sensor fusion method based on DS-GNN algorithm
CN112731371A (en) * 2020-12-18 2021-04-30 重庆邮电大学 Laser radar and vision fused integrated target tracking system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI357582B (en) * 2008-04-18 2012-02-01 Univ Nat Taiwan Image tracking system and method thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202121708U (en) * 2011-04-21 2012-01-18 南京百合商务有限公司 Campus intelligent security monitoring and comprehensive service system
CN102710928A (en) * 2011-10-09 2012-10-03 苏州元澄智能科技有限公司 Subway closed circuit television monitoring method fusing RFID (radio frequency identification)
CN102510476A (en) * 2011-10-28 2012-06-20 河海大学 Platform system of video monitoring integration information of network of things
CN105973228A (en) * 2016-06-28 2016-09-28 江苏环亚医用科技集团股份有限公司 Single camera and RSSI (received signal strength indication) based indoor target positioning system and method
CN109840504A (en) * 2019-02-01 2019-06-04 腾讯科技(深圳)有限公司 Article picks and places Activity recognition method, apparatus, storage medium and equipment
CN110726990A (en) * 2019-09-23 2020-01-24 江苏大学 Multi-sensor fusion method based on DS-GNN algorithm
CN112731371A (en) * 2020-12-18 2021-04-30 重庆邮电大学 Laser radar and vision fused integrated target tracking system and method

Also Published As

Publication number Publication date
CN113988228A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
US20200160061A1 (en) Automatic ship tracking method and system based on deep learning network and mean shift
CN113988228B (en) Indoor monitoring method and system based on RFID and vision fusion
CN111753797B (en) Vehicle speed measuring method based on video analysis
CN111914635A (en) Human body temperature measurement method, device and system and electronic equipment
CN106570490B (en) A kind of pedestrian's method for real time tracking based on quick clustering
CN109099929B (en) Intelligent vehicle positioning device and method based on scene fingerprints
CN112325883B (en) Indoor positioning method for mobile robot with WiFi and visual multi-source integration
JP4874607B2 (en) Object positioning device
CN112991391A (en) Vehicle detection and tracking method based on radar signal and vision fusion
CN110084830B (en) Video moving object detection and tracking method
CN113705376B (en) Personnel positioning method and system based on RFID and camera
KR101460313B1 (en) Apparatus and method for robot localization using visual feature and geometric constraints
JP2018061114A (en) Monitoring device and monitoring method
CN102200578A (en) Data correlation equipment and data correlation method
JP2010157093A (en) Motion estimation device and program
CN116403139A (en) Visual tracking and positioning method based on target detection
Wei et al. Learning spatio-temporal information for multi-object tracking
CN110287957B (en) Low-slow small target positioning method and positioning device
CN115144828B (en) Automatic online calibration method for intelligent automobile multi-sensor space-time fusion
CN114612521A (en) Multi-target multi-camera tracking method, system, equipment and storage medium
CN116358547A (en) Method for acquiring AGV position based on optical flow estimation
TWI730795B (en) Multi-target human body temperature tracking method and system
Zhu et al. Fusion of wireless signal and computer vision for identification and tracking
CN115471526A (en) Automatic driving target detection and tracking method based on multi-source heterogeneous information fusion
CN112446355B (en) Pedestrian recognition method and people stream statistics system in public place

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant