CN114758364A - Industrial Internet of things scene fusion positioning method and system based on deep learning - Google Patents

Industrial Internet of things scene fusion positioning method and system based on deep learning

Info

Publication number
CN114758364A
CN114758364A (application CN202210120544.6A; granted as CN114758364B)
Authority
CN
China
Prior art keywords
fingerprint
prediction
data
layer
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210120544.6A
Other languages
Chinese (zh)
Other versions
CN114758364B (en)
Inventor
王天择
孙奕髦
董兵
赖伟
黄文炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202210120544.6A priority Critical patent/CN114758364B/en
Publication of CN114758364A publication Critical patent/CN114758364A/en
Application granted granted Critical
Publication of CN114758364B publication Critical patent/CN114758364B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y10/00Economic sectors
    • G16Y10/25Manufacturing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/60Positioning; Navigation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention discloses an industrial Internet of things scene fusion positioning method and system based on deep learning. When the position information of a target to be detected needs to be obtained, the position fingerprint identification network extracts features from the fingerprint information received by the target to be detected through a convolutional neural network, a portrait extraction layer then extracts a feature portrait from those features, a prediction layer predicts the fingerprint features of the fingerprint information based on the feature portrait, and a full connection layer matches the fingerprint features against the standard fingerprint information in a preset position fingerprint library; the position information corresponding to the successfully matched standard fingerprint information is output as the position information of the target to be detected. The method and system realize online positioning, eliminate the dependency on matching between position labels, and improve positioning robustness and precision. The position fingerprint identification network has good transferability, can be applied to different environments, and improves the environmental adaptability of positioning.

Description

Industrial Internet of things scene fusion positioning method and system based on deep learning
Technical Field
The invention relates to the technical field of computers, in particular to an industrial Internet of things scene fusion positioning method and system based on deep learning.
Background
The vigorous development of industrial manufacturing continuously drives technical change in China's manufacturing field, and the emergence of Internet of things and artificial intelligence technologies has made concepts such as intelligent production and intelligent factories possible. In this intelligent transformation, traditional industrial production faces many difficulties, such as indoor positioning in industrial scenes. With indoor positioning technology, personnel can be tracked and dispatched in real time and every link of automated production can be monitored, which is of positive significance for improving production efficiency.
Indoor positioning plays an important role in industrial scenarios. By the indoor positioning technology with higher precision, a manager can realize real-time monitoring and dynamic scheduling on production personnel, thereby eliminating hidden dangers and improving the working efficiency; meanwhile, a factory can acquire real-time positions of materials and vehicles, and automatic allocation, turnover, cargo transportation and the like of the materials are realized; in addition, a high-precision positioning technology is also a precondition for putting industrial robots and the like into automatic production.
Existing outdoor location services are mainly implemented by Global Positioning System (GPS) technology. The global positioning system can provide high-precision positioning services for outdoor users, but it has the following limitation: GPS signals penetrate buildings poorly and have demanding reception requirements, and there must be no obstruction between the outdoor antenna and the satellite to achieve a good positioning result. When positioning indoors, the satellite signal attenuates rapidly once it reaches indoors because of blockage by the building, so indoor coverage requirements cannot be met and indoor positioning with GPS signals is almost impossible.
In industrial production, the indoor environment is far more complex than the outdoor environment. Radio waves are easily blocked by obstacles and undergo reflection, refraction or scattering, forming non-line-of-sight propagation (NLOS; non-line-of-sight communication refers to indirect point-to-point communication between a receiver and a transmitter, the most direct description being that the line of sight between the two communicating points is blocked, the two points cannot see each other, and more than 50% of the Fresnel zone is obstructed), which severely affects positioning accuracy. In addition, the layout and topology of the indoor production environment are susceptible to human factors, causing various changes in signal propagation and thereby degrading the performance of positioning technologies based on the feature-matching principle.
Most currently popular positioning systems are independent positioning systems, that is, most of them are designed for one specific application environment, which leads to the following problems:
(1) Defective target detection: most positioning systems rely on matching between tags, so they have difficulty detecting targets carrying different tags that coexist with the target tag in the same environment.
(2) Low positioning robustness and precision: independent systems are limited in the information they can acquire, which makes high accuracy difficult to achieve.
(3) Poor environmental adaptability: an independent positioning system that performs well in one practical scenario may perform poorly in another.
Disclosure of Invention
The invention aims to provide an industrial Internet of things scene fusion positioning method and system based on deep learning, and aims to solve the problems in the prior art.
In a first aspect, an embodiment of the present invention provides a deep learning-based industrial internet of things scene fusion positioning method, where the method includes:
acquiring fingerprint information received by a target to be detected, wherein the fingerprint information is used for representing the position characteristics of the target to be detected;
inputting the fingerprint information into a position fingerprint identification network trained in advance, and outputting the position information of the target to be detected by the position fingerprint identification network;
the position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and a countermeasure network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the countermeasure network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the fingerprint features of the target to be detected, and the countermeasure network is used for eliminating the influence of environmental characteristics on the fingerprint features output by the prediction layer; the input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be measured.
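As a visual aid only, the following is a minimal PyTorch sketch of one way the layers named above could be wired together. All layer sizes, activation choices and module names (CNN backbone, portrait layer, adversarial branch) are assumptions of this sketch, not the concrete structure disclosed in Fig. 2.

```python
# Minimal sketch of the described layer arrangement; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LocationFingerprintNet(nn.Module):
    def __init__(self, fp_len=60, feat_dim=128, portrait_dim=64, num_positions=32):
        super().__init__()
        # Convolutional neural network: extracts features from the raw fingerprint sequence.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
            nn.Linear(32 * 4, feat_dim),
        )
        # Portrait extraction layer: a fully connected layer with softplus activation.
        self.portrait = nn.Sequential(nn.Linear(feat_dim, portrait_dim), nn.Softplus())
        # Prediction layer: maps the feature portrait into a latent space and produces
        # a prediction vector over candidate positions via softmax.
        self.latent = nn.Linear(portrait_dim, portrait_dim)
        self.predict = nn.Linear(portrait_dim, num_positions)
        # Countermeasure (adversarial) branch: consumes the CNN features and the prediction
        # vector and is trained to remove environment-specific effects (see loss L3 below).
        self.adversary = nn.Linear(feat_dim + num_positions, num_positions)

    def forward(self, fp):
        # fp: (batch, fp_len) fingerprint sequence, e.g. 60 sampled RSS/CSI values.
        z = self.cnn(fp.unsqueeze(1))               # CNN features
        p = self.portrait(z)                        # feature portrait
        h = self.latent(p)                          # latent feature H = W*P + b
        y = torch.softmax(self.predict(h), dim=-1)  # prediction vector
        adv = torch.softmax(self.adversary(torch.cat([z, y], dim=-1)), dim=-1)
        return z, p, y, adv
```

At inference time only the prediction vector would be used; the adversarial branch participates only in training, as described in the training method below.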
Optionally, the method for training the location fingerprint identification network includes:
determining a plurality of fingerprint data of a plurality of predetermined positions, setting 20% of the plurality of fingerprint data of each predetermined position in a verification set, and setting 80% of the plurality of fingerprint data of each predetermined position in a training set; the fingerprint data in the verification set is marked with a preset position represented by the fingerprint data in advance;
inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network;
extracting a first feature portrait of the training features through a portrait extraction layer, and extracting a second feature portrait of the verification features through the portrait extraction layer;
mapping the first feature portrait to a potential space through a prediction layer to obtain a first potential feature, and mapping the second feature portrait to the potential space to obtain a second potential feature; obtaining a first prediction vector y1 based on the first potential feature; obtaining a second prediction vector based on the second potential feature; obtaining a first loss function for the unlabeled data based on the first prediction vector, and obtaining a second loss function for the labeled data based on the second prediction vector;
performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector through a countermeasure network to obtain fusion data; mapping the fusion data to a second space to obtain mapping features; obtaining a third loss function based on the mapping features;
obtaining a model loss function based on the first loss function, the second loss function and the third loss function;
and when the model loss function converges or the number of model training iterations is greater than a set value, determining that the training of the position fingerprint identification network is finished, and using the first prediction vector output by the prediction layer to determine the position information of the target to be measured.
Optionally, performing a fusion operation on the training features, the first prediction vector, the verification features, and the second prediction vector to obtain fusion data includes:
obtaining a first fusion vector based on the training features, the first prediction vector and a second prediction vector;
obtaining a second fusion vector based on the verification feature, the first prediction vector and a second prediction vector;
and carrying out weighted summation on the first fusion vector and the second fusion vector to obtain fusion data.
In a second aspect, an embodiment of the present invention provides an industrial internet of things scene fusion positioning system based on deep learning, where the system includes:
The acquisition module is used for acquiring fingerprint information received by a target to be detected, and the fingerprint information is used for representing the position characteristics of the target to be detected;
the positioning module is used for inputting the fingerprint information into a position fingerprint identification network trained in advance, and the position fingerprint identification network outputs the position information of the target to be detected;
the position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and a countermeasure network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the countermeasure network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the fingerprint features of the target to be detected, and the countermeasure network is used for eliminating the influence of environmental characteristics on the fingerprint features output by the prediction layer; the input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be measured.
Compared with the prior art, the embodiment of the invention achieves the following beneficial effects:
The embodiment of the invention provides an industrial Internet of things scene fusion positioning method and system based on deep learning, wherein the method comprises: acquiring fingerprint information received by a target to be detected, the fingerprint information being used to represent the position characteristics of the target to be detected; and inputting the fingerprint information into a position fingerprint identification network trained in advance, the position fingerprint identification network outputting the position information of the target to be detected. The position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and a countermeasure network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the countermeasure network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the fingerprint features of the target to be detected, and the countermeasure network is used for eliminating the influence of environmental characteristics on the fingerprint features output by the prediction layer; the input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be measured.
When the position information of the target to be detected needs to be obtained, a signal (for example, WiFi) is first sent to the signal receiving terminal at the position to be detected through a predetermined AP, the signal information (fingerprint information) received by the signal receiving terminal is then obtained, the fingerprint information is input into the position fingerprint identification network trained in advance, and the position fingerprint identification network identifies the position information of the target to be detected based on the fingerprint information. The position fingerprint identification network extracts features from the fingerprint information received by the target to be detected through a convolutional neural network and then extracts a feature portrait of those features through the portrait extraction layer; after the feature portrait is obtained, the prediction layer predicts the fingerprint features of the fingerprint information based on the feature portrait, the full connection layer matches the fingerprint features against the standard fingerprint information in a preset position fingerprint database, and the position information corresponding to the successfully matched standard fingerprint information is output as the position information of the target to be detected. Each piece of standard fingerprint information uniquely characterizes one piece of position information. Online positioning is thereby realized and positioning accuracy is improved. In addition, the position fingerprint identification network includes a countermeasure network which eliminates the influence of environmental characteristics on the fingerprint features output by the prediction layer, thereby improving the accuracy of the fingerprint features output by the prediction layer and further improving positioning accuracy.
Drawings
Fig. 1 is a flowchart of an industrial internet of things scene fusion positioning method based on deep learning according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of the location fingerprint identification network.
Fig. 3 is a schematic diagram of location fingerprints formed from the signal strengths of one type of signal.
Fig. 4 is a schematic block structure diagram of an electronic device according to an embodiment of the present invention.
The mark in the figure is: a bus 500; a receiver 501; a processor 502; a transmitter 503; a memory 504; a bus interface 505.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Example 1
Existing outdoor location services are mainly implemented by Global Positioning System (GPS) technology. The global positioning system can provide high-precision positioning services for outdoor users, but it has the following limitation: GPS signals penetrate buildings poorly and have demanding reception requirements, and there must be no obstruction between the outdoor antenna and the satellite to achieve a good positioning result. When positioning indoors, the satellite signal attenuates rapidly once it reaches indoors because of blockage by the building, so indoor coverage requirements cannot be met and indoor positioning with GPS signals is almost impossible.
In industrial production, the indoor environment is far more complex than the outdoor environment. Radio waves are easily blocked by obstacles and undergo reflection, refraction or scattering, forming non-line-of-sight propagation (NLOS; non-line-of-sight communication refers to indirect point-to-point communication between a receiver and a transmitter, the most direct description being that the line of sight between the two communicating points is blocked, the two points cannot see each other, and more than 50% of the Fresnel zone is obstructed), which severely affects positioning accuracy. In addition, the layout and topology of the indoor production environment are susceptible to human factors, causing various changes in signal propagation and thereby degrading the performance of positioning technologies based on the feature-matching principle.
In consideration of the huge development potential and the wide development prospect of indoor positioning and the difficult points of the existing indoor positioning technology analyzed in the above, the application provides a positioning method based on the combination of multi-sensor data fusion and the deep learning technology so as to reduce the indoor positioning difficulty and improve the indoor positioning precision.
Before the solution proposed by the present invention is explained, the concept of a "location fingerprint" needs to be introduced. As the "identification card of the human body", human fingerprints are widely used in the field of identification. Uniqueness, local correlation and recognizability are the characteristics of fingerprints, and the fingerprint concept has therefore been introduced into indoor positioning. A "location fingerprint" associates a location in the physical environment with a certain characteristic of that environment; the environment has one or more characteristics, and a one-to-one correspondence is achieved through those specific characteristics, i.e., one location corresponds to one unique fingerprint. The fingerprint may have one or more dimensions depending on the characteristics of the environment; for example, when the device to be located is receiving or transmitting information, the fingerprint may be one or several characteristics of that information or signal (most commonly the signal strength). There are three common modes of location fingerprint positioning. If the device to be positioned transmits signals and pre-installed fixed receiving devices sense the signals or information of the device to realize positioning, the mode is remote positioning or network positioning. If the device to be positioned receives signals or information from fixed transmitting devices and then estimates its own position from the detected characteristics, the mode is self-positioning. If the device to be positioned transmits all detected characteristics to a server and the server uses the obtained characteristics to estimate the position of the device, this is hybrid positioning.
The application addresses the problems that the traditional method of indoor positioning with a single signal has high information-acquisition difficulty and low positioning accuracy. The positioning problem of an actual industrial scene can be effectively solved by fusing multiple signals: various sensors are used to collect common indoor signals such as Wireless Fidelity (WiFi) and Visible Light Communication (VLC) signals, so more complete data can be collected with little difficulty. Compared with a single-signal method, when one signal loses its value for some reason, other signals can make up the missing information and more accurate positioning can still be realized; when all signals are complete, the multi-dimensionality of the data yields an even more accurate positioning result. For this reason, the fingerprint information and fingerprint data mentioned in the present application are the above-mentioned "location fingerprints", which may be a single signal, such as a WiFi signal, a visible light signal or an Ultra Wide Band (UWB) signal, or may be formed by combining several of these signals. It should be noted that VLC, UWB and WiFi positioning are in essence base-station positioning: positioning relies on a fingerprint that gradually weakens outward from the WiFi AP, the UWB base station or the light source at its centre. The fingerprint information and fingerprint data of the present application may therefore be Channel State Information (CSI), Received Signal Strength Indication (RSSI), or data formed by combining information such as CSI and RSSI. That is, the location fingerprint can be of various types, and any feature that is unique to a location can serve as a fingerprint. The strength of a wireless or visible light signal decreases with increasing propagation distance in space: the farther the receiving end is from the transmitting end, the lower the received signal strength, so a unique fingerprint can be formed from the signal strengths received by the terminal equipment. Considering cost and practicality, the wireless signal strength (Received Signal Strength, RSS) is commonly used as the fingerprint in indoor positioning. In an indoor environment where the wireless signals have been deployed, the wireless signals are distinguishable at different positions and are relatively stable in their spatial and temporal distribution; the RSS fingerprint only needs to be acquired at predetermined reference points, and the position of the Access Point (AP) does not need to be known. Aiming at the positioning and positioning-accuracy problems of indoor scenes such as large workshops, plants and warehouses, and as an optional implementation, the fused received signal strengths of VLC, UWB and WiFi are adopted as the position fingerprint.
With reference to fig. 1, the method for scene fusion and positioning of the industrial internet of things based on deep learning includes:
s101: and acquiring fingerprint information received by the target to be detected.
In the embodiment of the present invention, the fingerprint information is used to represent the location characteristics of the target to be detected, and optionally, the fingerprint information may be CSI, RSSI, RSS, or other information, or may be information formed by combining CSI, RSSI, and RSS. In the embodiment of the present invention, the receiver of the target to be detected receives the fingerprint signal transmitted by the transmitter in the scene and performs fusion to obtain the fingerprint information, which may specifically be:
Data samples within a set time period are obtained, where the set time period can be 1 to 15 minutes; the data samples within the set time period include data sampled every second within that period, i.e., the data samples include data from multiple samplings. The sampled data may include one or more of CSI, RSSI, RSS and similar information. As an example, a piece of fingerprint information is represented as [a1, a2, a3, … , a60], where a1, a2, a3, … , a60 respectively represent the CSI information acquired at the 1st, 2nd, 3rd, … , 60th second.
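As an illustration only (the 60-second window, the one-second sampling interval and the read_signal helper are assumptions taken from the example above), one fingerprint sample could be assembled as follows:

```python
# Illustrative sketch: assemble one fingerprint sample [a1, ..., a60] from per-second
# measurements over a set time period (assumed here to be 60 seconds).
import time
from typing import Callable, List

def collect_fingerprint(read_signal: Callable[[], float], seconds: int = 60) -> List[float]:
    """read_signal() is assumed to return one CSI/RSSI/RSS measurement per call."""
    sample = []
    for _ in range(seconds):
        sample.append(read_signal())  # a1, a2, ..., a60
        time.sleep(1.0)               # sample approximately once per second
    return sample
```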
S102: and inputting the fingerprint information into a position fingerprint identification network trained in advance, and outputting the position information of the target to be detected by the position fingerprint identification network.
The position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and a countermeasure network, wherein the input of the convolutional neural network is the fingerprint information and the input of the portrait extraction layer is the output of the convolutional neural network. The input of the prediction layer is the output of the portrait extraction layer, and the inputs of the countermeasure network are the output of the convolutional neural network and the output of the prediction layer. The prediction layer outputs the fingerprint features of the target to be detected, and the countermeasure network is used to eliminate the influence of environmental characteristics on the fingerprint features output by the prediction layer. The input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be measured. The structure is shown in Fig. 2.
By adopting this scheme, when the position information of the target to be detected needs to be obtained, a signal (for example, WiFi) is first sent to the signal receiving end at the position to be detected through the preset AP, the signal information (fingerprint information) received by the signal receiving end is then obtained, the fingerprint information is input into the position fingerprint identification network trained in advance, and the position fingerprint identification network identifies the position information of the target to be detected based on the fingerprint information. The position fingerprint identification network extracts features from the fingerprint information received by the target to be detected through a convolutional neural network and then extracts a feature portrait of those features through the portrait extraction layer; after the feature portrait is obtained, the prediction layer predicts the fingerprint features of the fingerprint information based on the feature portrait, the full connection layer matches the fingerprint features against the standard fingerprint information in a preset position fingerprint database, and the position information corresponding to the successfully matched standard fingerprint information is output as the position information of the target to be detected. Each piece of standard fingerprint information uniquely characterizes one piece of position information. Online positioning is thereby realized and positioning accuracy is improved. In addition, the position fingerprint identification network includes a countermeasure network which eliminates the influence of environmental characteristics on the fingerprint features output by the prediction layer, thereby improving the accuracy of the fingerprint features output by the prediction layer and further improving positioning accuracy.
The training method of the position fingerprint identification network comprises the following steps:
A1, determining a plurality of fingerprint data of a plurality of predetermined positions, setting 20% of the fingerprint data of each predetermined position in a verification set, and setting 80% of the fingerprint data of each predetermined position in a training set. The fingerprint data in the verification set is pre-labeled with the predetermined position it characterizes.
In the embodiment of the present invention, a predetermined position is a specific area within the positioning scenario; for example, if the positioning scenario is a factory including 10 workshops, each workshop can be set as a predetermined position. Optionally, a signal receiver is arranged at the predetermined position, and the plurality of fingerprint data of each predetermined position are obtained in the same manner as the fingerprint information of the target to be detected described above. For example, if the fingerprint information of the target to be detected is [a1, a2, a3, … , a60], the fingerprint data of one predetermined position may be [b1, b2, b3, … , b60], [b61, b62, b63, … , b120], [b121, b122, b123, … , b180], i.e., the fingerprint data may comprise several pieces of historical fingerprint information. If the fingerprint data contains 10 pieces of historical fingerprint information, 8 pieces are placed in the training set and 2 pieces in the verification set, and the historical fingerprint information placed in the verification set is labeled to determine the position information it represents.
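A minimal sketch of the 80%/20% split described in step A1 is shown below; the dictionary layout of position_data and the choice that only the verification samples keep their position labels are assumptions drawn from the surrounding text.

```python
# Illustrative sketch of step A1: for each predetermined position, put 80% of its
# fingerprint data in the training set and 20% in the labeled verification set.
import random

def split_fingerprints(position_data, seed=0):
    """position_data: {position_id: [fingerprint, ...]} (assumed format)."""
    rng = random.Random(seed)
    train, verify = [], []
    for pos, samples in position_data.items():
        samples = list(samples)
        rng.shuffle(samples)
        cut = int(0.8 * len(samples))
        train.extend(samples[:cut])                       # 80%: unlabeled training data
        verify.extend((fp, pos) for fp in samples[cut:])  # 20%: verification data with position label
    return train, verify
```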
And A2, inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network.
A3, extracting a first feature image of the training feature by an image extraction layer, and extracting a second feature image of the verification feature by the image extraction layer. Specifically, the portrait extraction layer is a full connection layer and is fully connected with the convolutional neural network, the portrait extraction layer maps the training features and the verification features into a portrait space to respectively obtain a first feature portrait and a second feature portrait, specifically, the training features are mapped through a softplus function to obtain the first feature portrait, and the verification features are mapped through the softplus function to obtain the second feature portrait.
A4, mapping the first feature image into the potential space through the prediction layer to obtain the first potential feature, and mapping the second feature image into the potential space to obtain the second potential feature. Obtaining a first prediction vector based on the first potential feature; a second prediction vector is derived based on the second latent features.
Specifically, the first feature portrait is mapped into the potential space to obtain the first potential feature, specifically H1 = W * P1 + b, where H1 represents the first potential feature and P1 represents the first feature portrait. The second feature portrait is mapped into the potential space to obtain the second potential feature, specifically H2 = W * P2 + b, where P2 represents the second feature portrait and H2 represents the second potential feature. W and b are mapping parameters, and the values may be W = 1, b = 0.5.
Obtaining a first prediction vector based on the first potential feature through a softmax activation function; and obtaining a second prediction vector based on the second potential feature through the softmax activation function. Specifically, the first potential feature is input into a softmax activation function, and the softmax activation function outputs the first prediction vector. The second potential feature is input into a softmax activation function, which outputs a second prediction vector.
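The following sketch illustrates step A4 as described above, using the example values W = 1 and b = 0.5 from the text; the array shapes are assumptions of this sketch.

```python
# Sketch of step A4: map a feature portrait into the latent space (H = W*P + b) and
# obtain a prediction vector through a softmax activation.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def predict_from_portrait(P, W=1.0, b=0.5):
    H = W * np.asarray(P) + b   # potential (latent) feature
    y = softmax(H)              # prediction vector
    return H, y
```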
A5, obtaining a first loss function of the unlabeled data based on the first prediction vector, and obtaining a second loss function of the labeled data based on the second prediction vector.
In the embodiment of the present invention, steps A4 and A5 are performed by the prediction layer.
Specifically, the first loss function is given by formula (1), where L1 is the first loss function, X1 represents the number of fingerprint data in the training set, and y1,i represents the i-th first prediction vector.
The second loss function is given by formula (2), where L2 is the second loss function, the pre-extracted actual feature vector of the i-th fingerprint data in the verification set is used, and X2 represents the number of fingerprint data in the verification set.
And A6, carrying out fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector to obtain fusion data. Specifically, the fusion operation specifically comprises: first, a first fusion vector is obtained based on the training features, the first prediction vector and the second prediction vector. And secondly, obtaining a second fusion vector based on the verification feature, the first prediction vector and the second prediction vector. And then carrying out weighted summation on the first fusion vector and the second fusion vector to obtain fusion data.
A first fusion vector is obtained based on the training features, the first prediction vector and the second prediction vector, specifically by the following formula: R1 = y1 + a*y2 - p*z1, where R1 represents the first fusion vector, y1 represents the first prediction vector, y2 represents the second prediction vector, z1 represents the training features, and a, p are weight parameters with a + p = 1.
A second fusion vector is obtained based on the verification features, the first prediction vector and the second prediction vector, specifically by the following formula: R2 = y2 + a*y1 - p*z2, where R2 represents the second fusion vector and z2 represents the verification features.
The fusion data F are obtained in the following manner: F = R2 + a*R2 - p*R1.
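A minimal sketch of this fusion operation, following the formulas above literally (a + p = 1; vector shapes assumed compatible), could look like this:

```python
# Sketch of the fusion operation A6:
# R1 = y1 + a*y2 - p*z1, R2 = y2 + a*y1 - p*z2, F = R2 + a*R2 - p*R1, with a + p = 1.
import numpy as np

def fuse(y1, y2, z1, z2, a=0.5):
    p = 1.0 - a                      # weight parameters satisfy a + p = 1
    R1 = y1 + a * y2 - p * z1        # first fusion vector (training side)
    R2 = y2 + a * y1 - p * z2        # second fusion vector (verification side)
    F = R2 + a * R2 - p * R1         # fusion data, as written in the text
    return F
```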
For each piece of fingerprint data, the first fusion vector, the second fusion vector and the fusion data are obtained in the same manner as described above, and details are not repeated here.
By adopting the scheme, the data characteristics of the verification data set and the training data are comprehensively considered in the fusion data, and the accuracy of the characterization of the fusion characteristics on the position is improved.
A7, mapping the fusion data to a second space to obtain mapping features.
Specifically, the fusion data are mapped to the second space through a softmax function according to formula (3), in which the mapping feature of the i-th fingerprint data is obtained from the fusion data and the mapping parameters of the second space, which take preset values.
A8, obtaining a third loss function based on the mapping features according to formula (4), where L3 denotes the third loss function and the one-hot vector of the i-th fingerprint data is used.
In the embodiment of the present invention, steps A6 to A8 are performed by the countermeasure network.
A9, obtaining a model loss function based on the first loss function, the second loss function and the third loss function.
The model loss function is specifically calculated as L = L2 + a*L1 - p*L3 (5), where L represents the loss function of the model.
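As a sketch only, the combined loss of formula (5) could be computed as follows; the default weight a and the relation p = 1 - a are assumptions carried over from the fusion step above.

```python
# Sketch of the model loss (5): L = L2 + a*L1 - p*L3, combining the labeled-data loss L2,
# the unlabeled-data loss L1 and the adversarial loss L3 (the minus sign reflects the
# adversarial role of the countermeasure network).
def model_loss(L1, L2, L3, a=0.5):
    p = 1.0 - a
    return L2 + a * L1 - p * L3
```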
When the model loss function converges or the number of model training iterations is greater than a set value, the training of the position fingerprint identification network is determined to be finished, and the first prediction vector output by the prediction layer is used to determine the position information of the target to be measured.
By adopting this scheme, the constraint capacity of the loss function on the model is improved, the correlation between labeled and unlabeled data is taken into account, and the countermeasure network eliminates the influence of environmental characteristics on the fingerprint features output by the prediction layer (embodied in particular by formula (5)), which improves the accuracy of the position information predicted by the model. The trained model is used to detect the position of the target to be detected, eliminating the matching dependency between labels: the fingerprint data of the position to be detected is acquired directly and input into the model, and the position information can be detected directly, realizing label-independent positioning and improving positioning robustness and precision. Because the position fingerprint identification network has this structure, it has good transferability, can be applied to different environments, and improves the environmental adaptability of positioning.
In the embodiment of the invention, the method for acquiring the fingerprint information received by the target to be detected comprises the following steps:
the fingerprint data received by the target to be detected is collected, and specifically, the CSI information or RSS information received by the target to be detected at the position of the target to be detected can be collected.
The RSS of a signal, i.e., the received power, depends on the location of the receiver. RSS weakens as the distance increases; it is generally a negative value, and a signal of -60 to -70 dBm is considered good. RSS is relatively simple to acquire because it is necessary for most wireless communication devices to operate properly: many communication systems need RSS information to sense link quality for functions such as handoff and adapting the transmission rate, and RSS is not affected by the signal bandwidth. Using RSS information as important reference data for indoor positioning can therefore improve positioning accuracy.
In free space, without any obstacle, the signal is emitted from the emission source spherically in all directions, with no difference between directions, so that the power of the signal is inversely proportional to the square of the distance: P ∝ 1/d².
RSS is essentially a power, but attenuation is usually expressed in dB, so it is easy to understand that RSS is related to distance: RSS attenuation is proportional to the logarithm of the distance. Assuming a reference distance d0 and the RSS at that distance, RSS(d0), are known, then RSS(d) = RSS(d0) - 10 × n × log(d/d0).
In the embodiment of the application, for a preset fixed signal emission source, the average RSS at different distances from the source is proportional to the logarithm of the distance, and in the simplest case the RSS can be expressed as:
RSS(d) = Pt - K - 10*q*ln(d)
where q is called the path loss exponent, Pt is the transmit power, and K is a constant that depends on the environment and frequency and may take values from 0.1 to 20. RSS can be used to calculate the distance between the device to be located and the AP or light source, and the obtained distances can be used to locate the mobile device by trilateration; this method, however, may cause large errors due to the influence of the actual environment (called shadow fading). When the device to be positioned receives signals from several transmission sources, the RSS values from these sources can form a vector that serves as the fingerprint associated with the location. That is, through data-layer fusion, different sensors simultaneously measure the fingerprint data of the same position; the measured data from the several sensors are first fused, and the combined data features are obtained and extracted to construct the position fingerprint database.
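The following sketch illustrates the simple log-distance model RSS(d) = Pt - K - 10*q*ln(d) and the formation of a fingerprint vector from several transmission sources; the numerical values of Pt, K and q are illustrative assumptions, not calibrated ones.

```python
# Sketch of the log-distance RSS model and of an N-dimensional fingerprint vector.
import math

def rss_at(d, Pt=20.0, K=2.0, q=3.0):
    """Predicted received signal strength at distance d (metres); parameters are illustrative."""
    return Pt - K - 10.0 * q * math.log(d)

def fingerprint(distances):
    """Form an N-dimensional fingerprint from the distances to N transmission sources."""
    return [rss_at(d) for d in distances]

# Example: a point 3 m from a WiFi AP, 5 m from a UWB base station and 2 m from an LED.
print(fingerprint([3.0, 5.0, 2.0]))
```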
In the embodiment of the invention, the correspondence between positions and fingerprints is established in an off-line stage. In the off-line stage, in order to collect fingerprints at various positions and construct a database, a relatively thorough survey is conducted in the specified area, and the collected data are used as the training set. The position coordinates obtained by indoor positioning refer to coordinates in a local coordinate system of the current environment, not latitude and longitude. At each data acquisition point, the RSS from each AP and light source is sampled over a period of time (5 to 15 minutes, approximately once per second) and averaged; the device may be oriented differently at the time of acquisition.
For this project we deploy several APs, UWB base stations and LED light sources (the APs of LiFi are usually integrated on the ceiling and the UWB positioning base stations are distributed at the geometrical edges of the positioning area), which can be used for communication as well as for positioning.
In the data acquisition stage, detailed measurements are carried out at each data acquisition point with a device whose hardware system comprises a photodiode, a UWB (ultra wide band) tag, a WiFi module and the like. The device first obtains the relevant characteristics of the data acquisition point through the corresponding modules, namely the VLC signal strength, UWB signal strength and WiFi signal strength, and then sends these characteristics to an upper computer through the WiFi module. At each data acquisition point, the RSS from each AP and light source is sampled over a period of time (5 to 15 minutes, approximately once per second) and averaged; the device may be oriented differently at the time of acquisition.
To clarify the technical scheme of the embodiment of the invention, suppose the geographic area to be measured is covered by a rectangular grid, as shown in Fig. 3, with 4 rows and 8 columns (32 grid points in total) and 1 AP and 1 LED light source in this scenario. The fingerprint at each grid point is then a two-dimensional vector z = [x1, x2], where x1 is the average RSS from the AP and x2 is the average RSS from the LED. That is, the fingerprint data z at each predetermined position is represented as z = [x1, x2].
These two-dimensional fingerprints are acquired in the area indicated by each grid point, and the coordinates of the grid points together with the corresponding fingerprints form a database; this stage is called the tagging phase (calibration phase), and this fingerprint database is also called a radio map. The right part of Fig. 3 shows these fingerprints in a two-dimensional vector space (signal space). In a more general scenario with N APs, UWB base stations and LED light sources, the fingerprint is an N-dimensional vector.
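A minimal sketch of this offline calibration (tagging) phase for the Fig. 3 example is given below; the collect_avg_rss helper, the cell size and the dictionary layout of the radio map are assumptions of this sketch.

```python
# Sketch of the offline (calibration) phase: the coordinates of each grid point and the
# average RSS collected there form the fingerprint database ("radio map").
def build_radio_map(collect_avg_rss, rows=4, cols=8, cell=2.0):
    """collect_avg_rss(x, y) is assumed to return [x1, x2]: average RSS from the AP and the LED."""
    radio_map = {}
    for r in range(rows):
        for c in range(cols):
            x, y = c * cell, r * cell                   # local coordinates of the grid point
            radio_map[(x, y)] = collect_avg_rss(x, y)   # fingerprint z = [x1, x2]
    return radio_map
```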
Hybrid network positioning of this kind is less costly than positioning systems that require additional equipment.
Most studies assume that the location fingerprints are obtained by collecting data at virtual grid points; for example, a 100 × 100 m area is divided into a 50 × 50 grid (i.e., 2 m × 2 m per cell), and a set of fingerprints is collected at the centre of each cell, each set recording the RSS received at that grid point from each WiFi AP, UWB base station and visible light source over 5 to 15 minutes; in some cases measurements are also made with different measuring devices or different device orientations. Such collection work is extremely tedious, because this project targets the positioning and positioning-accuracy problems of indoor scenes such as large workshops, plants and warehouses, and the database must also be updated periodically to adapt to environmental changes. In the above example, with a single device and a fixed orientation, the whole collection process requires 2500 × 5 = 12500 minutes, close to 9 days, which not only consumes manpower and material resources but is also inefficient. Efficiency is improved by preprocessing in a way that reduces the number of fingerprint acquisitions.
In a complex indoor environment, especially in indoor scenes such as large workshops, plants and warehouses, the RSS of WiFi, UWB and VLC signals is susceptible to multipath effects and ambient temperature; signals are reflected and refracted indoors, and equipment problems may also occur, so the acquired RSS values fluctuate and packets may even be lost. During data collection, if the collected RSS values fluctuate greatly, and especially if strong RSS values are lost, large errors will result even with a good subsequent positioning algorithm. This project therefore adopts an integrated filtering method combining gradient filtering and Kalman filtering. The main idea is to first use gradient filtering to perform preliminary filtering on the acquired data and fill in missing data, avoiding the loss of RSS values during the data acquisition process, and then to use Kalman filtering to perform a second filtering pass, further smoothing the RSS data and reducing the influence of noise on the RSS data.
That is, after acquiring the fingerprint data received by the target to be detected, the method for obtaining the fingerprint information received by the target to be detected further includes: fingerprint data is preprocessed.
The specific preprocessing may be as follows:
At any collection point in the indoor space to be collected, the intelligent mobile device can collect a group of RSS data. Because a single RSS acquisition fluctuates greatly, a time interval, set here to 1 second, is used for collecting the RSS data during the actual collection process. The data collected during this interval are averaged and denoted RSS(i), and RSS(i+1) denotes the next of the successively acquired data, where i is a positive integer greater than or equal to 1. The data collected each time are processed by the integrated filtering method; the detailed processing steps are as follows:
Step 1: first, the RSS data are checked. If RSS(i+1) = 0, data loss is determined and the value is replaced with the data acquired last time; if loss occurs n times in succession (the value of n may be 3 to 5), replacement with the last data is stopped, which indicates that the reference point is far from the coverage of the AP. That is, for the m-th data acquisition, if RSS(m) = 0, set RSS(m) = RSS(m-1), where m is a positive integer greater than or equal to 2. If RSS(m) = 0 appears n times in succession, data processing is performed in the manner of step 2.
Step 2: predict the data and perform gradient filtering. Specifically: the prediction data RSS(predict) for RSS(i+1) is obtained from the two preceding values according to formula (6):
RSS(predict) = 2*RSS(i) - RSS(i-1) (6)
That is, a prediction difference is made for the missing data. Gradient filtering is then performed on the predicted data RSS(predict) as follows:
RSS(predict)1 = RSS(predict) + sign(RSS(predict)) + c (7)
where c is equal to the variance between RSS(predict) and RSS(i+1).
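A sketch of this integrated preprocessing (step 1 plus the prediction and gradient filtering of step 2, with the subsequent Kalman smoothing pass omitted) is given below; the treatment of c follows the text literally, and the threshold n = 3 is one of the values the text allows.

```python
# Sketch of the preprocessing: replace an isolated missing RSS value with the previous one,
# predict persistent gaps with RSS(predict) = 2*RSS(i) - RSS(i-1), then apply the gradient
# filter RSS(predict)1 = RSS(predict) + sign(RSS(predict)) + c.
import statistics

def preprocess(rss, n=3):
    out = list(rss)
    misses = 0
    for m in range(1, len(out)):
        if out[m] != 0:
            misses = 0
            continue
        misses += 1
        if misses < n or m < 2:
            out[m] = out[m - 1]                      # step 1: reuse the last acquired value
        else:
            pred = 2 * out[m - 1] - out[m - 2]       # step 2: prediction difference, formula (6)
            c = statistics.pvariance([pred, rss[m]]) # variance between prediction and RSS(i+1), per the text
            sign = 1 if pred > 0 else -1 if pred < 0 else 0
            out[m] = pred + sign + c                 # gradient filter, formula (7)
    return out
```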
By adopting the above scheme, the accuracy of indoor positioning can be improved.
In summary, the industrial internet of things scene fusion positioning method based on deep learning provided by the embodiment of the invention includes:
Firstly, the fingerprint data received by the target to be detected are collected, the fingerprint data being information capable of characterizing a certain position. The fingerprint data are preprocessed to remove environmental influences and data residual errors and to correct the data; the corrected fingerprint data are then input into the trained position fingerprint identification network. The features of the corrected fingerprint data are extracted by the convolutional neural network in the position fingerprint identification network, a feature portrait of those features is extracted by the portrait extraction layer, the prediction layer then predicts the fingerprint features of the fingerprint data based on the feature portrait, the full connection layer matches the fingerprint features against the standard fingerprints in a preset fingerprint library, and the position information corresponding to the standard fingerprint successfully matched with the fingerprint features is output as the position information of the target to be detected, thereby realizing positioning of the target to be measured. The target to be detected can be a target area, or a person, animal, robot or object carrying a device capable of receiving fingerprint information.
The fingerprint database stores standard fingerprints and position information with one-to-one relationship in advance, and one standard fingerprint corresponds to one position information. I.e. a standard fingerprint uniquely characterizes a location.
To match the fingerprint features with the standard fingerprints in the preset fingerprint database, the Euclidean distance or the similarity between the fingerprint features and each standard fingerprint can be calculated; the position information corresponding to the standard fingerprint with the minimum Euclidean distance, or with the maximum similarity, is then taken as the position information of the target to be detected. The similarity can be the cosine of the angle between the fingerprint feature and the standard fingerprint.
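A minimal sketch of this matching step by minimum Euclidean distance is shown below; the dictionary layout of the fingerprint database is an assumption of this sketch.

```python
# Sketch of the online matching step: compare the predicted fingerprint feature with each
# standard fingerprint and return the position of the closest one (minimum Euclidean
# distance; cosine similarity would be the alternative mentioned above).
import math

def match_position(feature, fingerprint_db):
    """fingerprint_db: dict mapping position info -> standard fingerprint (assumed format)."""
    def euclidean(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(fingerprint_db, key=lambda pos: euclidean(feature, fingerprint_db[pos]))
```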
Example 2
The embodiment of the invention provides an industrial Internet of things scene fusion positioning system based on deep learning, which comprises an acquisition module and a positioning module, wherein:
the acquisition module is used for acquiring fingerprint information received by the target to be detected, and the fingerprint information is used for representing the position characteristics of the target to be detected.
And the positioning module is used for inputting the fingerprint information into a position fingerprint identification network trained in advance, and the position fingerprint identification network outputs the position information of the target to be detected.
In the embodiment of the invention, the position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network. The input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network. The input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer. The prediction layer outputs the position information of the target to be detected, and the adversarial network is used to eliminate the influence of environmental characteristics on the fingerprint features output by the prediction layer. The input of the full connection layer is the output of the prediction layer, and the output of the prediction layer is the position information of the object to be measured. As shown in fig. 2.
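The composition described above may be sketched as follows; this is a hedged illustration rather than the patented implementation, and all layer widths, kernel sizes, the 60-dimensional RSS input and the number of candidate positions are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationFingerprintNet(nn.Module):
    # Minimal sketch of the described composition; all sizes are assumptions.
    def __init__(self, n_rss=60, n_positions=10):
        super().__init__()
        # Convolutional neural network over the 1-D RSS fingerprint vector
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.portrait = nn.Linear(32, 64)          # portrait extraction layer (softplus mapping)
        self.predict = nn.Linear(64, n_positions)  # prediction layer: potential-space mapping + softmax
        self.matcher = nn.Linear(n_positions, n_positions)  # full connection layer for matching
        # Adversarial branch: consumes CNN features and the prediction output to
        # suppress environment-specific components (structure assumed).
        self.adversary = nn.Sequential(nn.Linear(32 + n_positions, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, rss):
        feats = self.cnn(rss.unsqueeze(1))                 # (batch, 32)
        portrait = F.softplus(self.portrait(feats))        # feature portrait
        pred = F.softmax(self.predict(portrait), dim=-1)   # prediction vector
        adv = self.adversary(torch.cat([feats, pred], dim=-1))
        logits = self.matcher(pred)                        # matching against the fingerprint library
        return pred, logits, adv

# Hypothetical usage with a batch of 60-dimensional RSS fingerprints:
net = LocationFingerprintNet()
pred, logits, adv = net(torch.randn(4, 60))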
In the embodiment of the invention, the training method of the position fingerprint identification network comprises the following steps:
A1, determining a plurality of fingerprint data for a plurality of predetermined positions, placing 20% of the fingerprint data of each predetermined position in a verification set and 80% in a training set. The fingerprint data in the verification set is labeled in advance with the predetermined position it characterizes.
In the embodiment of the present invention, a predetermined position is a specific area within the positioning scenario; for example, if the positioning scenario is a factory containing 10 workshops, each workshop can be set as a predetermined position. Optionally, a signal receiver is arranged at each predetermined position, and the plurality of fingerprint data of each predetermined position is obtained in the same manner as the fingerprint information of the target to be detected described above. For example, if the fingerprint information of the target to be detected is [a1, a2, a3, ..., a60], the fingerprint data of one of the predetermined positions may be [b1, b2, b3, ..., b60], [b61, b62, b63, ..., b120], [b121, b122, b123, ..., b180]. That is, the fingerprint data may comprise a plurality of pieces of historical fingerprint information. If a piece of fingerprint data has 10 pieces of historical fingerprint information, 8 of them are placed in the training set and 2 in the verification set, and the historical fingerprint information placed in the verification set is labeled to determine the position information it represents.
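A minimal sketch of the 80%/20% per-position split is shown below; the data layout (a mapping from position label to its historical fingerprint records) and the function name split_fingerprint_data are assumptions for illustration:

import random

def split_fingerprint_data(data_by_position, val_ratio=0.2, seed=0):
    # Split each predetermined position's records 80/20 into training and verification sets.
    # data_by_position: assumed dict of position label -> list of historical fingerprint vectors.
    rng = random.Random(seed)
    train, val = [], []
    for position, records in data_by_position.items():
        records = list(records)
        rng.shuffle(records)
        n_val = max(1, round(len(records) * val_ratio))
        # verification records keep their position label (they are pre-labeled)
        val += [(r, position) for r in records[:n_val]]
        train += [(r, None) for r in records[n_val:]]     # training records are the unlabeled data
    return train, val

# Hypothetical usage: 10 historical records per workshop -> 8 train / 2 val each.
data = {f"workshop-{k}": [[0.0] * 60 for _ in range(10)] for k in range(1, 11)}
train_set, val_set = split_fingerprint_data(data)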
And A2, inputting the training set and the verification set into a convolutional neural network, and respectively extracting the training features in the training set and the verification features in the verification set through the convolutional neural network.
A3, extracting a first feature portrait of the training features through the portrait extraction layer, and extracting a second feature portrait of the verification features through the portrait extraction layer. Specifically, the portrait extraction layer is a full connection layer fully connected with the convolutional neural network; it maps the training features and the verification features into a portrait space to obtain the first feature portrait and the second feature portrait respectively. Specifically, the training features are mapped through a softplus function to obtain the first feature portrait, and the verification features are mapped through the softplus function to obtain the second feature portrait.
A4, mapping the first feature portrait into the potential space through the prediction layer to obtain a first potential feature, and mapping the second feature portrait into the potential space to obtain a second potential feature; obtaining a first prediction vector based on the first potential feature and a second prediction vector based on the second potential feature.
Specifically, the first feature portrait is mapped into the potential space to obtain the first potential feature: H1 = W × P1 + b, where H1 denotes the first potential feature and P1 denotes the first feature portrait. The second feature portrait is mapped into the potential space to obtain the second potential feature: H2 = W × P2 + b, where P2 denotes the second feature portrait and H2 denotes the second potential feature. W and b are mapping parameters, and the values may be W = 1, b = 0.5.
The first prediction vector is obtained from the first potential feature through a softmax activation function, and the second prediction vector is obtained from the second potential feature through the softmax activation function. Specifically, the first potential feature is input into the softmax activation function, which outputs the first prediction vector; the second potential feature is input into the softmax activation function, which outputs the second prediction vector.
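The mapping into the potential space and the softmax activation may be sketched as follows, using the values W = 1 and b = 0.5 given above; the 10-dimensional feature portrait in the usage line is an assumption:

import numpy as np

def prediction_vector(feature_portrait, W=1.0, b=0.5):
    # Map a feature portrait into the potential space (H = W*P + b) and turn it
    # into a prediction vector with a softmax activation.
    H = W * np.asarray(feature_portrait, dtype=float) + b   # potential feature
    e = np.exp(H - H.max())                                 # numerically stable softmax
    return e / e.sum()

# Hypothetical usage with a 10-dimensional feature portrait (dimension is an assumption):
print(prediction_vector(np.random.randn(10)))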
A5, obtaining a first loss function of the unlabeled data based on the first prediction vector, and obtaining a second loss function of the labeled data based on the second prediction vector.
In the embodiment of the present invention, steps A4 and A5 are performed by the prediction layer.
And A6, performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector to obtain fusion data. The fusion operation specifically comprises: first, obtaining a first fusion vector based on the training features, the first prediction vector and the second prediction vector; second, obtaining a second fusion vector based on the verification features, the first prediction vector and the second prediction vector; and then performing a weighted summation of the first fusion vector and the second fusion vector to obtain the fusion data.
The first fusion vector is obtained from the training features, the first prediction vector and the second prediction vector by the following formula: R1 = y1 + a × y2 - p × z1, where R1 denotes the first fusion vector, y1 denotes the first prediction vector, y2 denotes the second prediction vector, z1 denotes the training features, and a, p are weight parameters with a + p = 1.
The second fusion vector is obtained from the verification features, the first prediction vector and the second prediction vector by the following formula: R2 = y2 + a × y1 - p × z2, where R2 denotes the second fusion vector and z2 denotes the verification features.
The fusion data F is expressed as: F = R2 + a × R2 - p × R1.
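A sketch of the fusion operation, implementing the formulas exactly as written above, is given below; the weight values a = 0.6, p = 0.4 (satisfying a + p = 1) and the equal dimensionality of the prediction vectors and features are assumptions for illustration:

import numpy as np

def fuse(y1, y2, z1, z2, a=0.6, p=0.4):
    # Fusion operation as written above: R1 = y1 + a*y2 - p*z1,
    # R2 = y2 + a*y1 - p*z2, F = R2 + a*R2 - p*R1, with a + p = 1.
    # All inputs are assumed to share the same dimensionality.
    y1, y2, z1, z2 = (np.asarray(v, dtype=float) for v in (y1, y2, z1, z2))
    R1 = y1 + a * y2 - p * z1    # first fusion vector
    R2 = y2 + a * y1 - p * z2    # second fusion vector
    F = R2 + a * R2 - p * R1     # fusion data
    return F

# Hypothetical usage with 10-dimensional prediction vectors and features:
F = fuse(np.random.rand(10), np.random.rand(10), np.random.rand(10), np.random.rand(10))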
For each piece of fingerprint data, the first fusion vector, the second fusion vector and the fusion data are obtained in the manner described above, and the details are not repeated here.
By adopting this scheme, the fusion data comprehensively takes into account the data characteristics of both the verification set and the training set, which improves how accurately the fused features characterize the position.
And A7, mapping the fusion data into a second space to obtain mapping characteristics.
A8, obtaining a third loss function based on the mapping characteristics, specifically in the manner shown in formula (4).
In the embodiment of the invention, steps A6 to A8 are performed by the adversarial network.
And A9, obtaining a model loss function based on the first loss function, the second loss function and the third loss function.
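Since the individual loss terms are defined by formulas (1) to (5) elsewhere in the description and are not reproduced here, the combination in step A9 is only sketched below; the equal weights are an assumption, and the patent's own combination follows its formulas:

def model_loss(loss_unlabeled, loss_labeled, loss_adversarial, w1=1.0, w2=1.0, w3=1.0):
    # Combine the first (unlabeled data), second (labeled data) and third (adversarial
    # mapping) loss terms into the model loss; the weights w1, w2, w3 are assumptions.
    return w1 * loss_unlabeled + w2 * loss_labeled + w3 * loss_adversarial

# Hypothetical usage inside a training step (the loss values are placeholders):
loss = model_loss(0.8, 0.5, 0.2)   # training stops once this converges or the iteration limit is reached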
When the model loss function converges or the number of model training iterations is greater than a set value, the training of the position fingerprint identification network is determined to be finished, and the first prediction vector output by the prediction layer is taken as the position information of the target to be measured.
By adopting the above scheme, the constraining ability of the loss function on the model is improved, the correlation between labeled and unlabeled data is taken into account, the adversarial network eliminates the influence of environmental characteristics on the fingerprint features output by the prediction layer (embodied in particular by formula (5)), and the accuracy of the position information predicted by the model is improved. When the trained model is used to detect the position of the target to be detected, the dependency on label matching is removed: the fingerprint data of the position to be detected is acquired directly and input into the model, which directly outputs the position information of that position, realizing label-independent positioning and improving positioning robustness and precision. Owing to the position fingerprint identification network with this structure, the network has good transferability, can adapt to different environments, and the environmental adaptability of positioning is improved. In addition, the industrial internet of things scene fusion positioning system based on deep learning further comprises a preprocessing module, which is used for preprocessing the fingerprint data.
The specific preprocessing may be as follows:
In the indoor space to be collected, the intelligent mobile device collects a group of RSS data at each collection point. Since a single RSS acquisition fluctuates strongly, a time interval is set for collecting RSS data in the actual collection process; here it is set to 1 second. The data collected during this interval are averaged, the average being denoted RSS(i), with RSS(i+1) denoting the data acquired in the subsequent interval, where i is a positive integer greater than or equal to 1. The data collected each time are processed by an integrated filtering method; the detailed processing steps are as follows:
Step 1: the RSS data are first checked. If RSS(i+1) = 0, data loss is determined and the value is replaced with the data acquired the previous time. If the loss occurs n times in succession (n may take a value of 3 to 5), the replacement with the previous data is stopped, which indicates that the reference point is outside the coverage of the AP. That is, for the m-th acquisition, if RSS(m) = 0 then RSS(m) = RSS(m-1) is set, with m a positive integer greater than or equal to 2. If RSS(m) = 0 appears n times in succession, the data are processed in the manner of step 2.
Step 2: prediction and gradient filtering are applied to the data, in the manner described above for formula (7).
By adopting the above scheme, the accuracy of indoor positioning can be improved.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present invention further provides an electronic device, as shown in fig. 4, which includes a memory 504, a processor 502, and a computer program stored on the memory 504 and executable on the processor 502, where the processor 502, when executing the program, implements the steps of any one of the deep learning based industrial internet of things scene fusion positioning methods described above.
Where in fig. 4 a bus architecture (represented by bus 500) is shown, bus 500 may include any number of interconnected buses and bridges, and bus 500 links together various circuits including one or more processors, represented by processor 502, and memory, represented by memory 504. The bus 500 may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface 505 provides an interface between the bus 500 and the receiver 501 and transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing the bus 500 and general processing, and the memory 504 may be used for storing data used by the processor 502 in performing operations.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etc. does not indicate any ordering. These words may be interpreted as names.

Claims (6)

1. An industrial Internet of things scene fusion positioning method based on deep learning is characterized by comprising the following steps:
acquiring fingerprint information received by a target to be detected, wherein the fingerprint information is used for representing the position characteristics of the target to be detected;
Inputting the fingerprint information into a position fingerprint identification network trained in advance, and outputting the position information of the target to be detected by the position fingerprint identification network;
the position fingerprint identification network consists of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the position information of the target to be detected, and the adversarial network is used for eliminating the influence of environmental characteristics on the fingerprint characteristics output by the prediction layer; the input of the full connection layer is the output of the prediction layer, and the output of the prediction layer is the position information of the object to be measured.
2. The deep learning-based industrial internet of things scene fusion positioning method as claimed in claim 1, wherein the training method of the location fingerprint identification network comprises the following steps:
determining a plurality of fingerprint data of a plurality of predetermined positions, setting 20% of the plurality of fingerprint data of each predetermined position in a verification set, and setting 80% of the plurality of fingerprint data of each predetermined position in a training set; the fingerprint data in the verification set is marked with a preset position represented by the fingerprint data in advance;
Inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network;
extracting a first characteristic portrait of the training characteristics through a portrait extraction layer, and extracting a second characteristic portrait of the verification characteristics through a portrait extraction layer;
mapping the first feature image into a potential space through a prediction layer to obtain a first potential feature, and mapping the second feature image into the potential space to obtain a second potential feature; obtaining a first prediction vector based on the first potential feature; obtaining a second prediction vector based on the second potential feature; obtaining a first loss function of unmarked data based on the first prediction vector, and obtaining a second loss function of marked data based on the second prediction vector;
performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector through the adversarial network to obtain fusion data; mapping the fusion data into a second space to obtain mapping characteristics; obtaining a third loss function based on the mapping characteristics;
obtaining a model loss function based on the first loss function, the second loss function and the third loss function;
and when the model loss function converges or the number of model training iterations is greater than a set value, determining that the training of the position fingerprint identification network is finished, and taking the first prediction vector output by the prediction layer as the position information of the target to be measured.
3. The deep learning-based industrial internet of things scene fusion positioning method as claimed in claim 2, wherein the fusion operation of the training features, the first prediction vector, the verification features and the second prediction vector to obtain fusion data comprises:
obtaining a first fusion vector based on the training features, the first prediction vector and a second prediction vector;
obtaining a second fusion vector based on the verification feature, the first prediction vector and a second prediction vector;
and carrying out weighted summation on the first fusion vector and the second fusion vector to obtain fusion data.
4. An industrial internet of things scene fusion positioning system based on deep learning, which is characterized by comprising:
the acquisition module is used for acquiring fingerprint information received by a target to be detected, and the fingerprint information is used for representing the position characteristics of the target to be detected;
the positioning module is used for inputting the fingerprint information into a position fingerprint identification network trained in advance, and the position fingerprint identification network outputs the position information of the target to be detected;
The position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the position information of the target to be detected, and the adversarial network is used for eliminating the influence of environmental characteristics on the fingerprint characteristics output by the prediction layer; the input of the full connection layer is the output of the prediction layer, and the output of the prediction layer is the position information of the object to be measured.
5. The deep learning based industrial internet of things scene fusion positioning system as claimed in claim 4, wherein the training method of the location fingerprint identification network comprises:
determining a plurality of fingerprint data of a plurality of predetermined positions, setting 20% of the plurality of fingerprint data of each predetermined position in a verification set, and setting 80% of the plurality of fingerprint data of each predetermined position in a training set; the fingerprint data in the verification set is marked with a preset position represented by the fingerprint data in advance;
Inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network;
extracting a first characteristic portrait of the training characteristics through a portrait extraction layer, and extracting a second characteristic portrait of the verification characteristics through a portrait extraction layer;
mapping the first feature image into a potential space through a prediction layer to obtain a first potential feature, and mapping the second feature image into the potential space to obtain a second potential feature; obtaining a first prediction vector based on the first potential feature; obtaining a second prediction vector based on the second potential feature; obtaining a first loss function of unmarked data based on the first prediction vector, and obtaining a second loss function of marked data based on the second prediction vector;
performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector through the adversarial network to obtain fusion data; mapping the fusion data into a second space to obtain mapping characteristics; obtaining a third loss function based on the mapping characteristics;
obtaining a model loss function based on the first loss function, the second loss function and the third loss function;
and when the model loss function converges or the number of model training iterations is greater than a set value, determining that the training of the position fingerprint identification network is finished, and taking the first prediction vector output by the prediction layer as the position information of the target to be measured.
6. The deep learning-based industrial internet of things scene fusion positioning system according to claim 5, wherein the fusion operation of the training features, the first prediction vectors, the verification features and the second prediction vectors to obtain fusion data comprises:
obtaining a first fusion vector based on the training features, the first prediction vector and a second prediction vector;
obtaining a second fusion vector based on the verification feature, the first prediction vector and a second prediction vector;
and carrying out weighted summation on the first fusion vector and the second fusion vector to obtain fusion data.
CN202210120544.6A 2022-02-09 2022-02-09 Industrial Internet of things scene fusion positioning method and system based on deep learning Expired - Fee Related CN114758364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210120544.6A CN114758364B (en) 2022-02-09 2022-02-09 Industrial Internet of things scene fusion positioning method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210120544.6A CN114758364B (en) 2022-02-09 2022-02-09 Industrial Internet of things scene fusion positioning method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN114758364A true CN114758364A (en) 2022-07-15
CN114758364B CN114758364B (en) 2022-09-23

Family

ID=82325182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210120544.6A Expired - Fee Related CN114758364B (en) 2022-02-09 2022-02-09 Industrial Internet of things scene fusion positioning method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114758364B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832834A (en) * 2017-11-13 2018-03-23 合肥工业大学 A kind of construction method of the WIFI indoor positioning fingerprint bases based on generation confrontation network
CN111797863A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Model training method, data processing method, device, storage medium and equipment
CN110234085A (en) * 2019-05-23 2019-09-13 深圳大学 Based on the indoor location fingerprint to anti-migration network drawing generating method and system
US20200402223A1 (en) * 2019-06-24 2020-12-24 Insurance Services Office, Inc. Machine Learning Systems and Methods for Improved Localization of Image Forgery
CN110300370A (en) * 2019-07-02 2019-10-01 广州纳斯威尔信息技术有限公司 A kind of reconstruction wifi fingerprint map indoor orientation method
CN112085738A (en) * 2020-08-14 2020-12-15 南京邮电大学 Image segmentation method based on generation countermeasure network
CN112312541A (en) * 2020-10-09 2021-02-02 清华大学 Wireless positioning method and system
US20210112371A1 (en) * 2020-12-21 2021-04-15 Intel Corporation Proximity detection using wi-fi channel state information
CN113630720A (en) * 2021-08-24 2021-11-09 西北大学 Indoor positioning method based on WiFi signal strength and generation countermeasure network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KEVIN M. CHEN 等: "Semi-Supervised Learning with GANs for Device-Free Fingerprinting Indoor Localization", 《GLOBECOM 2020 - 2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE》 *
ZHANG Yanfang: "Acquisition and Processing Technology of Multi-Sensor Location Fingerprint Information", China Master's Theses Full-text Database (Information Science and Technology) *
GUO Xingang et al.: "WiFi Fingerprint Positioning Algorithm Based on k-means and Improved k-Nearest Neighbors", Journal of Changchun University of Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115713681A (en) * 2022-11-22 2023-02-24 中国农业科学院农业资源与农业区划研究所 Method and system for generating space-time continuous crop parameters by fusing internet of things and satellite data
CN115713681B (en) * 2022-11-22 2023-06-13 中国农业科学院农业资源与农业区划研究所 Method and system for generating space-time continuous crop parameters by integrating Internet of things and satellite data

Also Published As

Publication number Publication date
CN114758364B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
Li et al. Toward location-enabled IoT (LE-IoT): IoT positioning techniques, error sources, and error mitigation
EP2270535B1 (en) Indoor/outdoor decision apparatus and indoor/outdoor decision method
CN105699938B (en) A kind of accurate positioning method and device based on wireless signal
Diaz et al. Bluepass: An indoor bluetooth-based localization system for mobile applications
CN112533163B (en) Indoor positioning method based on NB-IoT (NB-IoT) improved fusion ultra-wideband and Bluetooth
CN101923118B (en) Building influence estimation apparatus and building influence estimation method
CN110926461B (en) Indoor positioning method and system based on ultra wide band and navigation method and system
Lee et al. Method for improving indoor positioning accuracy using extended Kalman filter
CN108413966A (en) Localization method based on a variety of sensing ranging technology indoor locating systems
Song et al. Implementation of android application for indoor positioning system with estimote BLE beacons
Mazan et al. A Study of Devising Neural Network Based Indoor Localization Using Beacons: First Results.
CN114758364B (en) Industrial Internet of things scene fusion positioning method and system based on deep learning
Alamleh et al. A weighting system for building RSS maps by crowdsourcing data from smartphones
CN109640253B (en) Mobile robot positioning method
Duong et al. Improving indoor positioning system using weighted linear least square and neural network
Moradbeikie et al. A cost-effective LoRaWAN-based IoT localization method using fixed reference nodes and dual-slope path-loss modeling
Gadhgadhi et al. Distance estimation using polynomial approximation and neural network based on rssi technique
CN108540926B (en) Wireless signal fingerprint construction method and device
CN115629376A (en) Target positioning method and device, electronic device and storage medium
Luo et al. Research on an adaptive algorithm for indoor bluetooth positioning
Pan et al. Application of a WiFi/Geomagnetic Combined Positioning Method in a Single Access Point Environment
Zhang et al. Integrated iBeacon/PDR Indoor Positioning System Using Extended Kalman Filter
Chakraborty et al. On estimating the location and the 3-d shape of an object in an indoor environment using visible light
US11808873B2 (en) Systems and methods for locating tagged objects in remote regions
Chen et al. A new indoor positioning technique based on neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220923

CF01 Termination of patent right due to non-payment of annual fee