CN114758364A - Industrial Internet of things scene fusion positioning method and system based on deep learning - Google Patents
Industrial Internet of things scene fusion positioning method and system based on deep learning
- Publication number
- CN114758364A (application number CN202210120544.6A)
- Authority
- CN
- China
- Prior art keywords
- fingerprint
- prediction
- data
- layer
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y10/00—Economic sectors
- G16Y10/25—Manufacturing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y40/00—IoT characterised by the purpose of the information processing
- G16Y40/60—Positioning; Navigation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses an industrial Internet of Things scene fusion positioning method and system based on deep learning. When the position information of a target to be detected needs to be obtained, a position fingerprint identification network extracts features from the fingerprint information received by the target through a convolutional neural network; a portrait extraction layer then extracts a feature portrait from those features, a prediction layer predicts the fingerprint features of the fingerprint information based on the feature portrait, and a full connection layer matches the fingerprint features against the standard fingerprint information in a preset position fingerprint library, outputting the position information corresponding to the successfully matched standard fingerprint information as the position information of the target to be detected. The method realizes online positioning, eliminates dependency on matching between position tags, and improves positioning robustness and precision. The position fingerprint identification network also has good transferability, can be applied to different environments, and improves the environmental adaptability of positioning.
Description
Technical Field
The invention relates to the technical field of computers, in particular to an industrial Internet of things scene fusion positioning method and system based on deep learning.
Background
The vigorous development of industrial manufacturing continuously drives technological change in China's manufacturing sector, and the emergence of Internet of Things and artificial-intelligence technology has made concepts such as intelligent production and smart factories possible. In this intelligent transformation, traditional industrial production faces many difficulties, among them indoor positioning in industrial scenes. With indoor positioning technology, personnel can be tracked and dispatched in real time and every link of automated production can be monitored, which has positive significance for improving production efficiency.
Indoor positioning plays an important role in industrial scenarios. With sufficiently precise indoor positioning, managers can monitor and dynamically schedule production personnel in real time, eliminating hidden dangers and improving working efficiency; a factory can also acquire the real-time positions of materials and vehicles, enabling automatic allocation, turnover and transport of materials and goods. In addition, high-precision positioning is a precondition for putting industrial robots and the like into automated production.
Existing outdoor location services are mainly implemented with Global Positioning System (GPS) technology. GPS can provide high-precision positioning for outdoor users, but it has the following limitation: GPS signals penetrate buildings poorly and place high demands on reception, so a good positioning result is achieved only when there is no obstruction between the outdoor antenna and the satellite. When positioning indoors, the satellite signal is rapidly attenuated by the building once it reaches the interior, so indoor coverage requirements cannot be met and indoor positioning with GPS signals is almost impossible.
In industrial production, the indoor environment is far more complex than the outdoor one: radio waves are easily blocked by obstacles and undergo reflection, refraction or scattering, forming non-line-of-sight (NLOS) propagation. (NLOS communication means indirect point-to-point communication between receiver and transmitter; most directly, the line of sight between the two communicating points is blocked so that they cannot "see" each other, with more than 50% of the Fresnel zone obstructed.) This seriously degrades positioning accuracy. In addition, the layout and topology of the indoor production environment are easily changed by human activity, causing varied changes in signal propagation and thereby reducing the performance of positioning technologies based on the feature-matching principle.
Most currently popular positioning systems are independent positioning systems, i.e. each is designed for one specific application environment, which leads to the following problems:
(1) Defective target detection: most positioning systems rely on matching between tags, so they have difficulty detecting objects that carry different tags but coexist with the target tag in the same environment.
(2) Low positioning robustness and precision: an independent system is limited in the information it can acquire, making high robustness and precision difficult to achieve.
(3) Poor environmental adaptability: an independent positioning system that performs well in one practical scenario may perform poorly in another.
Disclosure of Invention
The invention aims to provide an industrial Internet of things scene fusion positioning method and system based on deep learning, and aims to solve the problems in the prior art.
In a first aspect, an embodiment of the present invention provides a deep learning-based industrial internet of things scene fusion positioning method, where the method includes:
acquiring fingerprint information received by a target to be detected, wherein the fingerprint information is used for representing the position characteristics of the target to be detected;
inputting the fingerprint information into a position fingerprint identification network trained in advance, and outputting the position information of the target to be detected by the position fingerprint identification network;
the position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the fingerprint features of the target to be detected, and the adversarial network is used to eliminate the influence of environmental features on the fingerprint features output by the prediction layer; the input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be detected.
Optionally, the method for training the location fingerprint identification network includes:
determining a plurality of fingerprint data for a plurality of predetermined positions, placing 20% of the fingerprint data of each predetermined position in a verification set and the remaining 80% in a training set, the fingerprint data in the verification set being marked in advance with the predetermined position it represents;
inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network;
extracting a first characteristic portrait of the training characteristics through a portrait extraction layer, and extracting a second characteristic portrait of the verification characteristics through a portrait extraction layer;
mapping the first characteristic image to a potential space through a prediction layer to obtain a first potential characteristic, and mapping the second characteristic image to the potential space to obtain a second potential characteristic; obtaining a first prediction vector y1 based on the first potential feature; obtaining a second prediction vector based on the second potential feature; obtaining a first loss function of the unlabeled data based on the first prediction vector, and obtaining a second loss function of the labeled data based on the second prediction vector;
performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector through the adversarial network to obtain fusion data; mapping the fusion data to a second space to obtain mapping features; and obtaining a third loss function based on the mapping features;
obtaining a model loss function based on the first loss function, the second loss function and the third loss function;
and when the model loss function converges or the number of training iterations exceeds a set value, determining that the training of the position fingerprint identification network is finished, the first prediction vector output by the prediction layer for an input then being used to determine the position information of the target to be detected.
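The stopping rule above, train until the model loss converges or the iteration budget is exhausted, can be sketched in plain Python. The loss combination (a plain sum), the convergence test and all function names below are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch of the training control flow described above.

def model_loss(l1_unlabeled, l2_labeled, l3_adversarial):
    # The text combines the first, second and third loss functions into one
    # model loss; an unweighted sum is one common choice (an assumption here).
    return l1_unlabeled + l2_labeled + l3_adversarial

def train(step_fn, max_epochs=100, tol=1e-4):
    """Run training until the model loss converges or the epoch budget
    is exhausted, mirroring the stopping rule in the text.
    step_fn(epoch) stands in for one pass over the training and
    verification sets and returns the three losses."""
    prev = float("inf")
    for epoch in range(1, max_epochs + 1):
        l1, l2, l3 = step_fn(epoch)
        loss = model_loss(l1, l2, l3)
        if abs(prev - loss) < tol:   # model loss function has converged
            return epoch, loss
        prev = loss
    return max_epochs, loss          # training-count budget reached
```

With a step function whose losses stabilise immediately, training stops on the second epoch, the first at which the convergence test can fire.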
Optionally, performing a fusion operation on the training features, the first prediction vector, the verification features, and the second prediction vector to obtain fusion data includes:
obtaining a first fusion vector based on the training features, the first prediction vector and a second prediction vector;
obtaining a second fusion vector based on the verification feature, the first prediction vector and a second prediction vector;
and carrying out weighted summation on the first fusion vector and the second fusion vector to obtain fusion data.
In a second aspect, an embodiment of the present invention provides an industrial internet of things scene fusion positioning system based on deep learning, where the system includes:
The acquisition module is used for acquiring fingerprint information received by a target to be detected, and the fingerprint information is used for representing the position characteristics of the target to be detected;
the positioning module is used for inputting the fingerprint information into a position fingerprint identification network trained in advance, and the position fingerprint identification network outputs the position information of the target to be detected;
the position fingerprint identification network consists of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the fingerprint features of the target to be detected, and the adversarial network is used to eliminate the influence of environmental features on the fingerprint features output by the prediction layer; the input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be detected.
Compared with the prior art, the embodiment of the invention achieves the following beneficial effects:
The embodiment of the invention provides an industrial Internet of Things scene fusion positioning method and system based on deep learning. The method comprises: acquiring fingerprint information received by a target to be detected, the fingerprint information representing the position characteristics of the target; and inputting the fingerprint information into a pre-trained position fingerprint identification network, which outputs the position information of the target. The position fingerprint identification network consists of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network: the input of the convolutional neural network is the fingerprint information; the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer; the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer, the adversarial network being used to eliminate the influence of environmental features on the fingerprint features output by the prediction layer; and the input of the full connection layer is the output of the prediction layer, the output of the full connection layer being the position information of the target to be detected.
When the position information of a target to be detected needs to be obtained, a signal (for example WiFi) is first sent through predetermined APs to the signal receiving end of the target; the signal information (fingerprint information) received there is then obtained and input into the pre-trained position fingerprint identification network, which identifies the position information of the target based on the fingerprint information. The network extracts features from the received fingerprint information through its convolutional neural network and then extracts a feature portrait of those features through the portrait extraction layer; the prediction layer predicts the fingerprint features of the fingerprint information based on the feature portrait, and the full connection layer matches the fingerprint features against the standard fingerprint information in the preset position fingerprint library, outputting the position information corresponding to the successfully matched standard fingerprint information as the position information of the target to be detected. Each piece of standard fingerprint information uniquely characterizes one piece of position information; online positioning is thereby realized and positioning accuracy improved. In addition, the position fingerprint identification network contains an adversarial network used to eliminate the influence of environmental features on the fingerprint features output by the prediction layer, which improves the accuracy of those fingerprint features and hence of the positioning.
Drawings
Fig. 1 is a flowchart of an industrial internet of things scene fusion positioning method based on deep learning according to an embodiment of the present invention.
Fig. 2 shows a schematic structure diagram of a location fingerprinting network.
Fig. 3 is a schematic diagram of location fingerprints formed from the signal strengths of various kinds of signals.
Fig. 4 is a schematic block structure diagram of an electronic device according to an embodiment of the present invention.
The mark in the figure is: a bus 500; a receiver 501; a processor 502; a transmitter 503; a memory 504; a bus interface 505.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Example 1
In consideration of the huge development potential and broad prospects of indoor positioning, and of the difficulties of existing indoor positioning technology analyzed above, the present application provides a positioning method combining multi-sensor data fusion with deep learning technology, so as to reduce the difficulty of indoor positioning and improve its precision.
Before explaining the proposed solution, the concept of a "location fingerprint" needs to be introduced. As the "identity card of the human body", human fingerprints are widely used in identification; their uniqueness, local correlation and recognizability are what motivate carrying the fingerprint concept over to indoor positioning. A "location fingerprint" associates a position in the physical environment with one or more characteristics of that environment, such that the correspondence is one-to-one: each position corresponds to a unique fingerprint. The fingerprint may have one or more dimensions depending on the characteristics of the environment; for example, when the device to be positioned is receiving or transmitting information, the fingerprint may be one or more characteristics of that information or signal (most commonly the signal strength). There are three common modes of location-fingerprint positioning. If the device to be positioned is transmitting, pre-installed fixed receiving devices sense its signal or information to realize positioning; this is remote positioning or network positioning. If the device to be positioned receives the signals or information of fixed transmitting devices and estimates its own position from the detected characteristics, this is self-positioning. If the device to be positioned forwards all detected characteristics to a server, which then estimates the device's position from them, this is hybrid positioning.
The present application addresses the problems that traditional single-signal indoor positioning methods acquire information with difficulty and produce positioning results of low accuracy. Fusing multiple kinds of signals can effectively solve the positioning problem in an actual industrial scene: various sensors collect the signals commonly available indoors, such as Wireless Fidelity (WiFi) and Visible Light Communication (VLC), so that more complete data can be collected with little difficulty. Compared with a single-signal method, when one signal loses value for some reason, the other signals can make up the missing information and still allow accurate positioning; when all signals are intact, the multi-dimensionality of the data yields an even more accurate positioning result. For this reason, the fingerprint information and fingerprint data mentioned in this application are the "location fingerprints" described above. A fingerprint may be a single signal, such as a WiFi signal, a visible-light signal or an Ultra Wide Band (UWB) signal, or a combination of several of them. It should be noted that VLC, UWB and WiFi positioning are in essence base-station positioning: a WiFi AP, a UWB base station or a light source radiates, from its own centre, a fingerprint that weakens gradually outward. Accordingly, the fingerprint information and fingerprint data of this application may be Channel State Information (CSI), Received Signal Strength Indication (RSSI), or data formed by combining such information.
That is, location fingerprints can be of various types: any feature that is unique to a location can serve as one. The signal strength of a wireless or visible-light signal decreases as the propagation distance increases, so the farther the receiving end is from the transmitting end, the lower the received signal strength; a unique fingerprint can therefore be formed from the signal strengths received by the terminal equipment. Considering cost and practicality, Received Signal Strength (RSS) is commonly used as the fingerprint in indoor positioning. In an indoor environment where wireless signals have been deployed, the signals are distinguishable at different positions and are distributed relatively stably in space and time; the fingerprint only needs to be acquired at predetermined reference points, and the position of the Access Point (AP) does not need to be known. Aiming at the positioning and accuracy problems of indoor scenes such as large workshops, plants and warehouses, as an optional implementation the present application adopts CSI together with the fused received signal strengths of VLC, UWB and WiFi as the location fingerprint.
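The distance-dependence of received signal strength described above is conventionally captured by the textbook log-distance path-loss model. The sketch below uses that standard model, not anything specified in this application, to show why RSS values form position-distinctive fingerprints; all parameter values are assumptions.

```python
import math

# Illustrative log-distance path-loss model (standard textbook model):
# received signal strength falls off with the log of distance, which is
# what makes RSS usable as a location fingerprint.

def rss_dbm(distance_m, p0_dbm=-30.0, path_loss_exp=3.0, d0=1.0):
    """RSS at distance_m metres from an AP, relative to the power
    p0_dbm measured at reference distance d0 (all parameters assumed)."""
    return p0_dbm - 10 * path_loss_exp * math.log10(distance_m / d0)

# The farther the receiving end is from the AP, the weaker the signal,
# so each position yields a distinctive vector of RSS values across
# the deployed APs.
```

At the reference distance the model returns the reference power itself, and the value falls monotonically with distance, matching the behaviour described in the text.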
With reference to fig. 1, the method for scene fusion and positioning of the industrial internet of things based on deep learning includes:
s101: and acquiring fingerprint information received by the target to be detected.
In the embodiment of the present invention, the fingerprint information is used to represent the location characteristics of the target to be detected, and optionally, the fingerprint information may be CSI, RSSI, RSS, or other information, or may be information formed by combining CSI, RSSI, and RSS. In the embodiment of the present invention, the receiver of the target to be detected receives the fingerprint signal transmitted by the transmitter in the scene and performs fusion to obtain the fingerprint information, which may specifically be:
Data samples are obtained over a set time period, which may be 1 to 15 minutes; the samples comprise data sampled every second within that period, i.e. multiple samplings. The sampled data may include one or more of CSI, RSSI, RSS and similar information. As an example, a piece of fingerprint information is represented as [a1, a2, a3, ..., a60], where a1, a2, a3, ..., a60 respectively denote the CSI information acquired at the 1st, 2nd, 3rd, ..., 60th second.
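The assembly of one fingerprint from per-second samples over a set window, as in the [a1, a2, ..., a60] example above, can be sketched as follows; `sample_fn` is a hypothetical stand-in for the real receiver interface.

```python
# Sketch of fingerprint collection over a set time window (names assumed).

def collect_fingerprint(sample_fn, window_s=60):
    """Return one fingerprint: a list of window_s per-second samples,
    sample_fn(t) being the sample taken at second t."""
    return [sample_fn(t) for t in range(1, window_s + 1)]
```

With a 60-second window this yields exactly the 60-element fingerprint vector of the example.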
S102: and inputting the fingerprint information into a position fingerprint identification network trained in advance, and outputting the position information of the target to be detected by the position fingerprint identification network.
The position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network. The input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network. The input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer. The prediction layer outputs the fingerprint features of the target to be detected, and the adversarial network is used to eliminate the influence of environmental features on the fingerprint features output by the prediction layer. The input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be detected. As shown in fig. 2.
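The inference data flow described above (convolutional neural network, then portrait extraction layer, then prediction layer, then full connection layer consulting the fingerprint library) can be sketched with placeholder callables. This is an assumed illustration of the wiring only, not the patent's model; real layers would be trained neural networks.

```python
# Assumed wiring sketch of the position fingerprint identification network.

def locate(fingerprint, cnn, portrait, predict, match):
    feats = cnn(fingerprint)   # CNN extracts features of the fingerprint info
    pic = portrait(feats)      # portrait extraction layer: feature portrait
    pred = predict(pic)        # prediction layer: predicted fingerprint features
    return match(pred)         # full connection layer matches the features
                               # against the preset position fingerprint library
```

A toy run with trivial callables shows the flow end to end: the matched library entry is returned as the position of the target.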
By adopting this scheme, when the position information of a position to be detected needs to be obtained, a signal (such as WiFi) is sent through the preset AP to the signal receiving end at the position to be detected, the signal information (fingerprint information) received by the signal receiving end is obtained, and the fingerprint information is input into the position fingerprint identification network trained in advance, which identifies the position information based on the fingerprint information. The position fingerprint identification network extracts the features of the fingerprint information received by the target to be detected through a convolutional neural network; the portrait extraction layer then extracts a feature portrait of those features; the prediction layer predicts the fingerprint features of the fingerprint information based on the feature portrait; and the full connection layer matches the fingerprint features with the standard fingerprint information in the preset position fingerprint database and outputs the position information corresponding to the successfully matched standard fingerprint information as the position information of the target to be detected. Each piece of standard fingerprint information uniquely characterizes one piece of position information. Online positioning is thereby realized, and the positioning accuracy is improved. In addition, the position fingerprint identification network comprises an adversarial network used to eliminate the influence of environmental characteristics on the fingerprint features output by the prediction layer, which improves the accuracy of those fingerprint features and thus the positioning accuracy.
The training method of the position fingerprint identification network comprises the following steps:
a1, determining a plurality of pieces of fingerprint data for a plurality of predetermined positions, placing 20% of the fingerprint data of each predetermined position in a verification set, and placing 80% of the fingerprint data of each predetermined position in a training set. The fingerprint data in the verification set is pre-labeled with the predetermined position it characterizes.
In the embodiment of the present invention, the predetermined location is a specific area within the positioning scenario; for example, if the positioning scenario is a factory comprising 10 workshops, each workshop can be set as a predetermined location. Optionally, a signal receiver is arranged at each predetermined location, and a plurality of pieces of fingerprint data for each predetermined location are obtained in the same manner as the fingerprint information of the target to be detected described above. For example, if the fingerprint information of the target to be detected is [a1, a2, a3, ..., a60], the fingerprint data of one of the predetermined locations may be [b1, b2, b3, ..., b60], [b61, b62, b63, ..., b120], [b121, b122, b123, ..., b180]; i.e. the fingerprint data may comprise a plurality of pieces of historical fingerprint information. If the fingerprint data has 10 pieces of historical fingerprint information, 8 pieces are placed in the training set and 2 pieces in the verification set, and the historical fingerprint information placed in the verification set is labeled to indicate the position information it characterizes.
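The 80/20 split in step A1 can be sketched as below. The shuffle and its seed are illustrative assumptions; the patent only specifies the proportions.

```python
import random

def split_fingerprints(fingerprints, train_ratio=0.8, seed=42):
    """Split the fingerprint data of one predetermined position into an
    80% training set and a 20% verification set (step A1). The shuffle
    seed is an illustrative choice, not from the patent."""
    rng = random.Random(seed)
    shuffled = list(fingerprints)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

# 10 pieces of historical fingerprint information for one position
history = [f"fp_{i}" for i in range(10)]
train_set, verification_set = split_fingerprints(history)
```

With 10 pieces of historical fingerprint information this yields the 8/2 split described in the text.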
And A2, inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network.
A3, extracting a first feature portrait of the training features through the portrait extraction layer, and extracting a second feature portrait of the verification features through the portrait extraction layer. Specifically, the portrait extraction layer is a full connection layer fully connected with the convolutional neural network; it maps the training features and the verification features into a portrait space to obtain the first feature portrait and the second feature portrait respectively. Specifically, the training features are mapped through a softplus function to obtain the first feature portrait, and the verification features are mapped through the softplus function to obtain the second feature portrait.
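The softplus mapping of step A3 is a standard elementwise function; a minimal sketch (the feature values are made up for illustration):

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, np.asarray(x, dtype=float))

# Map CNN-extracted features into the portrait space (step A3).
training_features = np.array([-2.0, 0.0, 3.0])
first_feature_portrait = softplus(training_features)
```

The same function applied to the verification features yields the second feature portrait.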
A4, mapping the first feature portrait into the potential space through the prediction layer to obtain the first potential feature, and mapping the second feature portrait into the potential space to obtain the second potential feature. A first prediction vector is obtained based on the first potential feature, and a second prediction vector is obtained based on the second potential feature.
Specifically, the first feature portrait is mapped into the potential space to obtain the first potential feature: H1 = W × P1 + b, where H1 represents the first potential feature and P1 represents the first feature portrait. The second feature portrait is mapped into the potential space to obtain the second potential feature: H2 = W × P2 + b, where P2 represents the second feature portrait and H2 represents the second potential feature. W and b are mapping parameters, and the values may be W = 1, b = 0.5.
Obtaining a first prediction vector based on the first potential feature through a softmax activation function; and obtaining a second prediction vector based on the second potential feature through the softmax activation function. Specifically, the first potential feature is input into a softmax activation function, and the softmax activation function outputs the first prediction vector. The second potential feature is input into a softmax activation function, which outputs a second prediction vector.
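The affine map and softmax activation of step A4 can be sketched with the example parameter values W = 1, b = 0.5 given above; the input portrait values are made up for illustration:

```python
import numpy as np

def softmax(h):
    e = np.exp(h - np.max(h))  # shift for numerical stability
    return e / e.sum()

def predict(feature_portrait, W=1.0, b=0.5):
    """Step A4: latent feature H = W * P + b, followed by a softmax
    activation that yields the prediction vector."""
    latent = W * np.asarray(feature_portrait, dtype=float) + b
    return softmax(latent)

first_prediction = predict([0.2, 1.1, 0.7])
```

Applying `predict` to the second feature portrait gives the second prediction vector in the same way.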
A5, obtaining a first loss function of the unlabeled data based on the first prediction vector, and obtaining a second loss function of the labeled data based on the second prediction vector.
In the embodiment of the present invention, steps A4 and A5 are performed by the prediction layer.
Specifically, the first loss function is represented by:
where L1 is the first loss function, X1 represents the number of pieces of fingerprint data in the training set, and the summed term represents the i-th first prediction vector.
The second loss function is represented by:
where L2 is the second loss function, the actual feature vector of the i-th piece of fingerprint data in the verification set is pre-extracted, and X2 represents the number of pieces of fingerprint data in the verification set.
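The concrete formulas for L1 and L2 appear only as images in the original and are not reproduced here, so the following is purely an assumed sketch: mean prediction entropy as a stand-in for the unlabeled loss L1, and mean cross-entropy against the pre-extracted actual feature vectors as a stand-in for the labeled loss L2. Neither choice is confirmed by the patent.

```python
import numpy as np

def first_loss(pred_vectors):
    # Assumed stand-in for L1: mean entropy of the first prediction
    # vectors over the X1 unlabeled training samples.
    p = np.clip(np.asarray(pred_vectors, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

def second_loss(pred_vectors, actual_vectors):
    # Assumed stand-in for L2: mean cross-entropy between the second
    # prediction vectors and the pre-extracted actual feature vectors
    # over the X2 verification samples.
    p = np.clip(np.asarray(pred_vectors, dtype=float), 1e-12, 1.0)
    t = np.asarray(actual_vectors, dtype=float)
    return float(-(t * np.log(p)).sum(axis=1).mean())

preds = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
actuals = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
L1 = first_loss(preds)
L2 = second_loss(preds, actuals)
```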
And A6, performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector to obtain fusion data. The fusion operation is as follows: first, a first fusion vector is obtained based on the training features, the first prediction vector and the second prediction vector; second, a second fusion vector is obtained based on the verification features, the first prediction vector and the second prediction vector; then the first fusion vector and the second fusion vector are weighted and summed to obtain the fusion data.
The first fusion vector is obtained from the training features, the first prediction vector and the second prediction vector by the formula R1 = y1 + a × y2 - p × z1, where R1 represents the first fusion vector, y1 represents the first prediction vector, y2 represents the second prediction vector, z1 represents the training features, and a, p are weight parameters with a + p = 1.
The second fusion vector is obtained from the verification features, the first prediction vector and the second prediction vector by the formula R2 = y2 + a × y1 - p × z2, where R2 represents the second fusion vector and z2 represents the verification features.
The fusion data F is represented as F = R2 + a × R2 - p × R1.
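The fusion formulas above can be sketched directly. The vectors and the weights a = p = 0.5 are illustrative; the formula for F is implemented exactly as written in the text.

```python
import numpy as np

def fuse(y1, y2, z1, z2, a=0.5, p=0.5):
    """Fusion operation of step A6, following the formulas as written:
    R1 = y1 + a*y2 - p*z1, R2 = y2 + a*y1 - p*z2,
    F = R2 + a*R2 - p*R1, with weight parameters satisfying a + p = 1.
    The values a = p = 0.5 are an illustrative choice."""
    y1, y2, z1, z2 = (np.asarray(v, dtype=float) for v in (y1, y2, z1, z2))
    R1 = y1 + a * y2 - p * z1  # first fusion vector
    R2 = y2 + a * y1 - p * z2  # second fusion vector
    F = R2 + a * R2 - p * R1   # fusion data
    return R1, R2, F

R1, R2, F = fuse([0.6, 0.4], [0.3, 0.7], [0.5, 0.5], [0.2, 0.8])
```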
For each piece of fingerprint data, the first fusion vector, the second fusion vector and the fusion data are obtained in the manner described above, and the details are not repeated here.
By adopting the scheme, the data characteristics of the verification data set and the training data are comprehensively considered in the fusion data, and the accuracy of the characterization of the fusion characteristics on the position is improved.
And A7, mapping the fusion data into a second space to obtain the mapping features.
Specifically, the fusion data is mapped into the second space through a softmax function to obtain the mapping features, where the mapping feature of the i-th piece of fingerprint data is computed from the fusion data using the mapping parameters of the second space.
A8, obtaining a third loss function based on the mapping characteristics, specifically according to the mode shown in formula (4):
In the embodiment of the present invention, steps A6 to A8 are performed by the adversarial network.
A9, obtaining a model loss function based on the first loss function, the second loss function and the third loss function.
The model loss function is specifically calculated as L = L2 + a × L1 - p × L3 (5), where L represents the model loss function.
When the model loss function converges or the number of training iterations exceeds a set value, training of the position fingerprint identification network is determined to be finished, and the first prediction vector output by the prediction layer serves as the input from which the position information of the target to be detected is determined.
By adopting this scheme, the constraint capability of the loss function on the model is improved and the correlation between labeled and unlabeled data is taken into account; the adversarial network eliminates the influence of environmental characteristics on the fingerprint features output by the prediction layer (embodied in particular in formula (5)), improving the accuracy of the position information predicted by the model. When the trained model detects the position of the target to be detected, the matching dependency between labels is eliminated: the fingerprint data of the position to be detected is obtained directly and input into the model, which directly outputs the position information, realizing label-independent positioning and improving positioning robustness and precision. Because the position fingerprint identification network adopts this structure, it has good transferability, is applicable to different environments, and improves the environmental adaptability of positioning.
In the embodiment of the invention, the method for acquiring the fingerprint information received by the target to be detected comprises the following steps:
the fingerprint data received by the target to be detected is collected, and specifically, the CSI information or RSS information received by the target to be detected at the position of the target to be detected can be collected.
The RSS (received signal strength) or received power of a signal depends on the location of the receiver. RSS weakens as the distance increases and is generally a negative value; a signal of -60 to -70 dBm is considered good. RSS is relatively simple to acquire because it is necessary for most wireless communication devices to operate properly. Many communication systems need RSS information to sense link quality in order to perform functions such as handoff and transmission-rate adaptation, and RSS is not affected by the signal bandwidth. Using RSS information as important reference data for indoor positioning can therefore improve the positioning accuracy.
In free space, without any obstacle, the signal radiates spherically from the emission source in all directions with no difference between directions, so the signal power is inversely proportional to the square of the distance: P ∝ 1/d². Since attenuation is typically expressed in dB, the RSS attenuation is proportional to the logarithm of the distance. Assuming a reference distance d0 and the RSS at that distance, RSS(d0), are known, then RSS(d) = RSS(d0) - 10 × n × log(d/d0).
In the embodiment of the application, for a preset fixed signal emission source, the average RSS at different distances from the source is proportional to the logarithm of the distance; in the simplest case, the RSS can be expressed as:
RSS(d) = Pt - K - 10 × q × ln(d)
where q is called the path loss exponent, Pt is the transmit power, and K is a constant that depends on the environment and frequency and may take values from 0.1 to 20. RSS can be used to calculate the distance between the device to be located and the AP or the light source, and the obtained distances are used to perform trilateration of the mobile device for location determination; this method may cause large errors due to the influence of the actual environment (called shadow fading). When the device to be positioned receives signals from multiple transmission sources, the RSS values from these sources can form an RSS vector that serves as a fingerprint associated with the location. That is, through data-layer fusion technology, different sensors are used simultaneously to measure RSS fingerprint data at the same position; the measured data from the multiple sensors are first fused, and combined data features are extracted to construct the position fingerprint database.
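The path loss model above can be sketched and inverted to recover the distance used for trilateration. The parameter values Pt, K and q below are illustrative assumptions within the ranges the text allows.

```python
import math

def rss_at(d, Pt=20.0, K=10.0, q=3.0):
    """RSS(d) = Pt - K - 10*q*ln(d). Pt, K and q here are illustrative:
    the text says K depends on environment and frequency (0.1 to 20)
    and calls q the path loss exponent."""
    return Pt - K - 10.0 * q * math.log(d)

def distance_from_rss(rss, Pt=20.0, K=10.0, q=3.0):
    # Invert the model to estimate the device-to-source distance
    # used for trilateration.
    return math.exp((Pt - K - rss) / (10.0 * q))
```

In practice shadow fading perturbs the measured RSS, which is exactly why the text prefers fingerprint vectors over pure distance inversion.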
In the embodiment of the invention, the correspondence between positions and fingerprints is established in an off-line stage. In the off-line stage, in order to collect fingerprints at various positions and construct a database, a relatively complex survey is conducted in the specified area, and the collected data are used as the training set. The position coordinates obtained by indoor positioning refer to coordinates in a local coordinate system of the current environment, not latitude and longitude. At each data acquisition point, the RSS data from each AP and light source are averaged over a period of time (5 to 15 minutes, acquired approximately once per second); the device may be oriented differently at the time of acquisition.
For this project we deploy several APs, UWB base stations and LED light sources (the APs of LiFi are usually integrated on the ceiling and the UWB positioning base stations are distributed at the geometrical edges of the positioning area), which can be used for communication as well as for positioning.
In the data acquisition stage, the survey is carried out at each data acquisition point with a device whose hardware system comprises a photodiode, a UWB (ultra wide band) tag, a WiFi (wireless fidelity) module and the like. First, the device obtains the relevant characteristics of the data acquisition point through the corresponding modules, namely the VLC signal intensity, the UWB signal intensity and the WiFi signal intensity, and then sends these characteristics to an upper computer through the WiFi module. At each data acquisition point, the RSS data from each AP and light source are averaged over a period of time (5 to 15 minutes, acquired approximately once per second); the device may be oriented differently at the time of acquisition.
In order to clarify the technical scheme of the embodiment of the invention, suppose the geographic area to be measured is covered by a rectangular grid, as shown in fig. 3, which is a grid of 4 rows and 8 columns (32 grid points in total) with 1 AP and 1 LED light source in the scene. The fingerprint on each grid point is a two-dimensional vector z = [x1, x2], where x1 is the average RSS from the AP and x2 is the average RSS from the LED. That is, the fingerprint data z at each predetermined position is represented as z = [x1, x2].
These two-dimensional fingerprints are acquired in the area around each grid point, and the coordinates of the grid points together with the corresponding fingerprints form a database; this stage is called the tagging phase (calibration phase), and the database of fingerprints is also called a radio map. The right part of fig. 3 shows these fingerprints in a two-dimensional vector space (signal space); in a more general scenario with N APs, UWB base stations and LED light sources, the fingerprint is an N-dimensional vector.
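A radio map of this kind, and the basic signal-space lookup it supports, can be sketched as follows. The grid coordinates and fingerprint values are invented for illustration.

```python
import math

# Radio map as in fig. 3: grid point coordinates -> two-dimensional
# fingerprint z = [x1, x2] (average signal from the AP and the LED).
# All values here are invented for illustration.
radio_map = {
    (0, 0): [-45.0, -50.0],
    (0, 1): [-50.0, -48.0],
    (1, 0): [-55.0, -60.0],
}

def locate(z):
    """Return the grid point whose stored fingerprint is closest to the
    observed fingerprint z in signal space (Euclidean distance)."""
    return min(radio_map, key=lambda g: math.dist(radio_map[g], z))
```

With N signal sources the fingerprint simply becomes an N-element list and the same lookup applies.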
Hybrid network positioning is less costly than positioning systems that require additional equipment.
Most studies assume that the location fingerprints are obtained by collecting data at virtual grid points. For example, in a 100 m × 100 m area divided into 50 × 50 grids (i.e. 2 m × 2 m per grid), a set of fingerprints is collected in the middle of each grid; each set of fingerprints records, for that grid point, the information received from each WiFi AP, UWB base station and visible light source over 5 to 15 minutes, and in some cases different measuring devices or different device orientations are also used to make the measurements. Such collection work is extremely tedious, because this project targets the locatability and positioning accuracy of indoor scenes such as large workshops, plants and warehouses, and in addition the database needs to be updated periodically to adapt to environmental changes. In the above example, using a single device and a fixed orientation, the whole collection process requires 2500 × 5 = 12500 minutes, which is close to 9 days, consuming manpower and material resources with low efficiency. Efficiency is improved by preprocessing in a way that reduces the number of fingerprint acquisitions.
In a complex indoor environment, especially in indoor scenes such as large workshops, plants and warehouses, the WiFi, UWB and VLC signals are susceptible to multipath effects and ambient temperature; signals are reflected and refracted indoors, and equipment problems can even cause the acquired RSS values to fluctuate or packets to be lost. During data collection, if the collected RSS values fluctuate greatly, or especially if strong RSS values are lost, even a good subsequent positioning algorithm can produce large errors. Therefore, this project adopts an integrated filtering method combining gradient filtering and Kalman filtering. The main idea is to first use gradient filtering to preliminarily filter the acquired data and fill in missing data, avoiding the loss of RSS values during data collection, and then use Kalman filtering to filter the data a second time, further smoothing the RSS data and reducing the impact of noise on the RSS data.
That is, after acquiring the fingerprint data received by the target to be detected, the method for obtaining the fingerprint information received by the target to be detected further includes: fingerprint data is preprocessed.
The specific pretreatment mode may be:
in the indoor space to be surveyed, the intelligent mobile device can collect a group of RSS data at any collection point. Because a single RSS acquisition fluctuates greatly, in the actual collection process a time interval is set for collecting the RSS data, set here to 1 second. The data collected during this interval are averaged and represented by RSS(i); RSS(i+1) represents the next of the successively acquired data, i being a positive integer greater than or equal to 1. The data collected each time are processed by the integrated filtering method, with the detailed processing steps as follows:
Predict the data and perform gradient filtering. Specifically:
RSS(predict)=2*RSS(i)-RSS(i-1) (6)
That is, missing data are filled in by prediction.
Gradient filtering is then applied to the predicted data RSS(predict), in the following manner:
RSS(predict)1= RSS(predict)+sign(RSS(predict))+c (7)
where c is equal to the variance between RSS (predict) and RSS (i + 1).
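Formulas (6) and (7) can be sketched together. The text only says c is "the variance between" RSS(predict) and RSS(i+1); the population variance of those two values is used below as one interpretation, and the sample RSS values are invented for illustration.

```python
def sign(x):
    return (x > 0) - (x < 0)

def gradient_fill(rss_prev, rss_i, rss_next):
    """Formulas (6) and (7): predict a missing value as
    RSS(predict) = 2*RSS(i) - RSS(i-1), then filter it as
    RSS(predict) + sign(RSS(predict)) + c. Here c is read as the
    population variance of RSS(predict) and RSS(i+1) -- an
    interpretation, since the text does not define it further."""
    predict = 2.0 * rss_i - rss_prev                    # formula (6)
    mean = (predict + rss_next) / 2.0
    c = ((predict - mean) ** 2 + (rss_next - mean) ** 2) / 2.0
    return predict + sign(predict) + c                  # formula (7)

filled = gradient_fill(-62.0, -61.0, -60.0)
```

The Kalman smoothing pass described above would then run over the gap-filled series.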
By adopting the above scheme, the accuracy of indoor positioning can be improved.
In summary, the industrial internet of things scene fusion positioning method based on deep learning provided by the embodiment of the invention includes:
firstly, the fingerprint data received by the target to be detected is collected; the fingerprint data is information capable of characterizing a certain position. The fingerprint data is preprocessed to remove environmental influences and data residual errors and correct the fingerprint data. The corrected fingerprint data is then input into the trained position fingerprint identification network: the convolutional neural network in the network extracts the features of the corrected fingerprint data, the portrait extraction layer extracts a feature portrait of those features, the prediction layer predicts the fingerprint features of the fingerprint data based on the feature portrait, and the full connection layer matches the fingerprint features with the standard fingerprints in the preset fingerprint database, outputting the position information corresponding to the successfully matched standard fingerprint as the position information of the target to be detected. The positioning of the target to be detected is thereby realized. The target to be detected can be a target area, or a person, animal, robot or object carrying a device capable of receiving the fingerprint information.
The fingerprint database stores standard fingerprints and position information with one-to-one relationship in advance, and one standard fingerprint corresponds to one position information. I.e. a standard fingerprint uniquely characterizes a location.
To match the fingerprint features with the standard fingerprints in the preset fingerprint database, the Euclidean distance or the similarity between the fingerprint features and each standard fingerprint can be calculated; the position information corresponding to the standard fingerprint with the minimum Euclidean distance, or with the maximum similarity, is then taken as the position information of the target to be detected. The similarity can be the cosine of the angle between the fingerprint feature and the standard fingerprint.
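Both matching criteria can be sketched against a small database. The position names and fingerprint values are hypothetical; the database structure (one standard fingerprint per position) follows the text.

```python
import math

# Hypothetical preset fingerprint database: one standard fingerprint
# uniquely characterizes one position.
fingerprint_db = {
    "workshop_1": [0.9, 0.1, 0.2],
    "workshop_2": [0.1, 0.8, 0.3],
}

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def match_by_distance(feature):
    # Position whose standard fingerprint has the minimum
    # Euclidean distance to the predicted fingerprint feature.
    return min(fingerprint_db,
               key=lambda k: math.dist(fingerprint_db[k], feature))

def match_by_similarity(feature):
    # Position whose standard fingerprint has the maximum
    # cosine similarity to the predicted fingerprint feature.
    return max(fingerprint_db,
               key=lambda k: cosine_similarity(fingerprint_db[k], feature))
```

Either function returns the position information of the successfully matched standard fingerprint.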
Example 2
The embodiment of the invention provides an industrial Internet of things scene fusion positioning system based on deep learning, which comprises an acquisition module and a positioning module, wherein:
the acquisition module is used for acquiring fingerprint information received by the target to be detected, and the fingerprint information is used for representing the position characteristics of the target to be detected.
And the positioning module is used for inputting the fingerprint information into a position fingerprint identification network trained in advance, and the position fingerprint identification network outputs the position information of the target to be detected.
In the embodiment of the invention, the position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a full connection layer and an adversarial network. The input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network. The input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer. The prediction layer outputs the fingerprint features of the target to be detected, and the adversarial network is used to eliminate the influence of environmental characteristics on the fingerprint features output by the prediction layer. The input of the full connection layer is the output of the prediction layer, and the output of the full connection layer is the position information of the target to be detected. As shown in fig. 2.
In the embodiment of the invention, the training method of the position fingerprint identification network comprises the following steps:
a1, determining a plurality of fingerprint data of a plurality of preset positions, setting 20% of the plurality of fingerprint data of each preset position in a verification set, and setting 80% of the plurality of fingerprint data of each preset position in a training set. The fingerprint data in the verification set is pre-labeled with a predetermined location characterized by the fingerprint data.
In the embodiment of the present invention, the predetermined location is a specific area within the positioning scenario; for example, if the positioning scenario is a factory comprising 10 workshops, each workshop can be set as a predetermined location. Optionally, a signal receiver is arranged at each predetermined location, and a plurality of pieces of fingerprint data for each predetermined location are obtained in the same manner as the fingerprint information of the target to be detected described above. For example, if the fingerprint information of the target to be detected is [a1, a2, a3, ..., a60], the fingerprint data of one of the predetermined locations may be [b1, b2, b3, ..., b60], [b61, b62, b63, ..., b120], [b121, b122, b123, ..., b180]; i.e. the fingerprint data may comprise a plurality of pieces of historical fingerprint information. If the fingerprint data has 10 pieces of historical fingerprint information, 8 pieces are placed in the training set and 2 pieces in the verification set, and the historical fingerprint information placed in the verification set is labeled to indicate the position information it characterizes.
And A2, inputting the training set and the verification set into a convolutional neural network, and respectively extracting the training features in the training set and the verification features in the verification set through the convolutional neural network.
A3, extracting a first feature portrait of the training features through the portrait extraction layer, and extracting a second feature portrait of the verification features through the portrait extraction layer. Specifically, the portrait extraction layer is a full connection layer fully connected with the convolutional neural network; it maps the training features and the verification features into a portrait space to obtain the first feature portrait and the second feature portrait respectively. Specifically, the training features are mapped through a softplus function to obtain the first feature portrait, and the verification features are mapped through the softplus function to obtain the second feature portrait.
A4, mapping the first feature portrait into the potential space through the prediction layer to obtain the first potential feature, and mapping the second feature portrait into the potential space to obtain the second potential feature. A first prediction vector is obtained based on the first potential feature, and a second prediction vector is obtained based on the second potential feature.
Specifically, the first feature portrait is mapped into the potential space to obtain the first potential feature: H1 = W × P1 + b, where H1 represents the first potential feature and P1 represents the first feature portrait. The second feature portrait is mapped into the potential space to obtain the second potential feature: H2 = W × P2 + b, where P2 represents the second feature portrait and H2 represents the second potential feature. W and b are mapping parameters, and the values may be W = 1, b = 0.5.
Obtaining a first prediction vector based on the first potential feature through a softmax activation function; and obtaining a second prediction vector based on the second potential feature through the softmax activation function. Specifically, the first potential feature is input into a softmax activation function, and the softmax activation function outputs the first prediction vector. The second potential feature is input into a softmax activation function, which outputs a second prediction vector.
A5, obtaining a first loss function of the unlabeled data based on the first prediction vector, and obtaining a second loss function of the labeled data based on the second prediction vector.
In the embodiment of the present invention, steps A4 and A5 are performed by the prediction layer.
And A6, performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector to obtain fusion data. The fusion operation is as follows: first, a first fusion vector is obtained based on the training features, the first prediction vector and the second prediction vector; second, a second fusion vector is obtained based on the verification features, the first prediction vector and the second prediction vector; then the first fusion vector and the second fusion vector are weighted and summed to obtain the fusion data.
The first fusion vector is obtained from the training features, the first prediction vector and the second prediction vector by the formula R1 = y1 + a × y2 - p × z1, where R1 represents the first fusion vector, y1 represents the first prediction vector, y2 represents the second prediction vector, z1 represents the training features, and a, p are weight parameters with a + p = 1.
The second fusion vector is obtained from the verification features, the first prediction vector and the second prediction vector by the formula R2 = y2 + a × y1 - p × z2, where R2 represents the second fusion vector and z2 represents the verification features.
The fusion data F is represented as F = R2 + a × R2 - p × R1.
For each piece of fingerprint data, the first fusion vector, the second fusion vector and the fusion data are obtained in the manner described above, and the details are not repeated here.
By adopting the scheme, the data characteristics of the verification data set and the training data are comprehensively considered in the fusion data, and the accuracy of the characterization of the fusion characteristics on the position is improved.
And A7, respectively mapping the fusion data into the second space to obtain mapping characteristics.
And A8, obtaining a third loss function based on the mapping characteristics, specifically in the manner shown in formula (4):
In the embodiment of the invention, steps A6 to A8 are carried out by the adversarial network.
And A9, obtaining a model loss function based on the first loss function, the second loss function and the third loss function.
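One simple way to realize step A9 is a weighted sum of the three losses. The patent does not publish its combination formula, so the function below and its weights w1 to w3 are illustrative assumptions:

```python
# Hedged sketch of step A9: combine the three loss values into one model loss.
# A plain (optionally weighted) sum is a common choice; the weights w1-w3
# are illustrative assumptions, as the source does not give the formula.
def model_loss(l1, l2, l3, w1=1.0, w2=1.0, w3=1.0):
    # l1: loss on unmarked data, l2: loss on marked data,
    # l3: loss derived from the adversarial mapping characteristics
    return w1 * l1 + w2 * l2 + w3 * l3
```

Setting one weight to zero ablates the corresponding term, which is a common way to check how much each loss contributes.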
And when the model loss function converges or the number of model training iterations exceeds a set value, it is determined that the training of the position fingerprint identification network is finished, and the first prediction vector output by the prediction layer is taken as the position information of the target to be measured.
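The stopping rule just described (loss convergence, or a cap on the number of training iterations) can be sketched as follows; the tolerance `eps` and cap `max_iters` are illustrative assumptions, as the source does not specify values:

```python
# Sketch of the training-termination check: stop when the model loss has
# converged or the number of training iterations exceeds a set value.
# `eps` and `max_iters` are illustrative assumptions.
def should_stop(loss_history, eps=1e-4, max_iters=200):
    if len(loss_history) >= max_iters:
        return True  # training-count cap reached
    if len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < eps:
        return True  # loss change below tolerance: treat as converged
    return False
```

A training loop would append each epoch's model loss to `loss_history` and break as soon as `should_stop` returns `True`.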
By adopting this scheme, the constraint capacity of the loss function on the model can be improved, and the correlation between the marked data and the unmarked data is taken into account; the adversarial network eliminates the influence of environmental features on the fingerprint features output by the prediction layer (embodied in particular in formula (5)), improving the accuracy of the position information predicted by the model. When the trained model is used to detect the position of a target, the matching dependency on labels is eliminated: the fingerprint data of the position to be detected is acquired directly and input into the model, which directly outputs the position information. This realizes label-independent positioning and improves positioning robustness and precision. Because the position fingerprint identification network has the structure described above, it has good transferability, can be adapted to different environments, and improves the environmental adaptability of positioning. In addition, the deep-learning-based industrial internet of things scene fusion positioning system further comprises a preprocessing module, which is used for preprocessing the fingerprint data.
The specific preprocessing may be as follows:
In the indoor space to be collected, the smart mobile device can collect a group of data at any collection point. Because a single acquisition fluctuates greatly, a collection time interval is set in the actual collection process, here 1 second, and the data collected during that interval are averaged, with successive entries denoting consecutively acquired data and i being a positive integer greater than or equal to 1. Each group of collected data is processed by an integrated filtering method, with the detailed processing steps as follows:
And step 2, performing prediction and gradient filtering on the data. Specifically, the method comprises the following steps:
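The averaging and gradient-filtering steps above can be sketched as follows. The window contents and the jump threshold `max_step` are illustrative assumptions; the source describes averaging over 1-second windows and gradient filtering but does not give its exact filter parameters:

```python
# Sketch of the preprocessing described above: readings gathered within a
# 1-second window are averaged, then a simple gradient filter drops samples
# whose jump from the previous kept value exceeds a threshold.
# The threshold value is an illustrative assumption.
def preprocess(windows, max_step=10.0):
    """windows: list of lists of RSSI readings, one inner list per 1 s window."""
    averaged = [sum(w) / len(w) for w in windows if w]  # mean per window
    filtered = []
    for x in averaged:
        # gradient filtering: reject abrupt jumps relative to the last kept value
        if not filtered or abs(x - filtered[-1]) <= max_step:
            filtered.append(x)
    return filtered
```

For example, a window averaging to −90 dBm following two windows near −51 dBm would be rejected as an implausible jump under this threshold.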
By adopting the above scheme, the accuracy of indoor positioning can be improved.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present invention further provides an electronic device, as shown in fig. 4, which includes a memory 504, a processor 502, and a computer program stored on the memory 504 and executable on the processor 502, where the processor 502, when executing the program, implements the steps of any one of the deep learning based industrial internet of things scene fusion positioning methods described above.
Where in fig. 4 a bus architecture (represented by bus 500) is shown, bus 500 may include any number of interconnected buses and bridges, and bus 500 links together various circuits including one or more processors, represented by processor 502, and memory, represented by memory 504. The bus 500 may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface 505 provides an interface between the bus 500 and the receiver 501 and transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing the bus 500 and general processing, and the memory 504 may be used for storing data used by the processor 502 in performing operations.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
Claims (6)
1. An industrial Internet of things scene fusion positioning method based on deep learning is characterized by comprising the following steps:
acquiring fingerprint information received by a target to be detected, wherein the fingerprint information is used for representing the position characteristics of the target to be detected;
Inputting the fingerprint information into a position fingerprint identification network trained in advance, and outputting the position information of the target to be detected by the position fingerprint identification network;
the position fingerprint identification network consists of a convolutional neural network, a portrait extraction layer, a prediction layer, a fully connected layer and an adversarial network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the position information of the target to be detected, and the adversarial network is used for eliminating the influence of environmental features on the fingerprint features output by the prediction layer; the input of the fully connected layer is the output of the prediction layer, and the output of the fully connected layer is the position information of the target to be measured.
2. The deep learning-based industrial internet of things scene fusion positioning method as claimed in claim 1, wherein the training method of the location fingerprint identification network comprises the following steps:
determining a plurality of fingerprint data of a plurality of predetermined positions, setting 20% of the plurality of fingerprint data of each predetermined position in a verification set, and setting 80% of the plurality of fingerprint data of each predetermined position in a training set; the fingerprint data in the verification set is marked with a preset position represented by the fingerprint data in advance;
Inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network;
extracting a first characteristic portrait of the training characteristics through a portrait extraction layer, and extracting a second characteristic portrait of the verification characteristics through a portrait extraction layer;
mapping the first feature image into a potential space through a prediction layer to obtain a first potential feature, and mapping the second feature image into the potential space to obtain a second potential feature; obtaining a first prediction vector based on the first potential feature; obtaining a second prediction vector based on the second potential feature; obtaining a first loss function of unmarked data based on the first prediction vector, and obtaining a second loss function of marked data based on the second prediction vector;
performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector through an adversarial network to obtain fusion data; respectively mapping the fusion data into a second space to obtain mapping characteristics; obtaining a third loss function based on the mapping characteristics;
obtaining a model loss function based on the first loss function, the second loss function and the third loss function;
and when the model loss function converges or the number of model training iterations exceeds a set value, determining that the training of the position fingerprint identification network is finished, and taking a first prediction vector output by a prediction layer as the position information of the target to be measured.
3. The deep learning-based industrial internet of things scene fusion positioning method as claimed in claim 2, wherein the fusion operation of the training features, the first prediction vector, the verification features and the second prediction vector to obtain fusion data comprises:
obtaining a first fusion vector based on the training features, the first prediction vector and a second prediction vector;
obtaining a second fusion vector based on the verification feature, the first prediction vector and a second prediction vector;
and carrying out weighted summation on the first fusion vector and the second fusion vector to obtain fusion data.
4. An industrial internet of things scene fusion positioning system based on deep learning, which is characterized by comprising:
the acquisition module is used for acquiring fingerprint information received by a target to be detected, and the fingerprint information is used for representing the position characteristics of the target to be detected;
the positioning module is used for inputting the fingerprint information into a position fingerprint identification network trained in advance, and the position fingerprint identification network outputs the position information of the target to be detected;
The position fingerprint identification network is composed of a convolutional neural network, a portrait extraction layer, a prediction layer, a fully connected layer and an adversarial network, wherein the input of the convolutional neural network is the fingerprint information, and the input of the portrait extraction layer is the output of the convolutional neural network; the input of the prediction layer is the output of the portrait extraction layer, and the inputs of the adversarial network are the output of the convolutional neural network and the output of the prediction layer; the prediction layer outputs the position information of the target to be detected, and the adversarial network is used for eliminating the influence of environmental features on the fingerprint features output by the prediction layer; the input of the fully connected layer is the output of the prediction layer, and the output of the fully connected layer is the position information of the target to be measured.
5. The deep learning based industrial internet of things scene fusion positioning system as claimed in claim 4, wherein the training method of the location fingerprint identification network comprises:
determining a plurality of fingerprint data of a plurality of predetermined positions, setting 20% of the plurality of fingerprint data of each predetermined position in a verification set, and setting 80% of the plurality of fingerprint data of each predetermined position in a training set; the fingerprint data in the verification set is marked with a preset position represented by the fingerprint data in advance;
Inputting the training set and the verification set into a convolutional neural network, and respectively extracting training characteristics in the training set and verification characteristics in the verification set through the convolutional neural network;
extracting a first characteristic portrait of the training characteristics through a portrait extraction layer, and extracting a second characteristic portrait of the verification characteristics through a portrait extraction layer;
mapping the first feature image into a potential space through a prediction layer to obtain a first potential feature, and mapping the second feature image into the potential space to obtain a second potential feature; obtaining a first prediction vector based on the first potential feature; obtaining a second prediction vector based on the second potential feature; obtaining a first loss function of unmarked data based on the first prediction vector, and obtaining a second loss function of marked data based on the second prediction vector;
performing a fusion operation on the training features, the first prediction vector, the verification features and the second prediction vector through an adversarial network to obtain fusion data; respectively mapping the fusion data into a second space to obtain mapping characteristics; obtaining a third loss function based on the mapping characteristics;
obtaining a model loss function based on the first loss function, the second loss function and the third loss function;
and when the model loss function converges or the number of model training iterations exceeds a set value, determining that the training of the position fingerprint identification network is finished, and taking a first prediction vector output by a prediction layer as the position information of the target to be measured.
6. The deep learning-based industrial internet of things scene fusion positioning system according to claim 5, wherein the fusion operation of the training features, the first prediction vectors, the verification features and the second prediction vectors to obtain fusion data comprises:
obtaining a first fusion vector based on the training features, the first prediction vector and a second prediction vector;
obtaining a second fusion vector based on the verification feature, the first prediction vector and a second prediction vector;
and carrying out weighted summation on the first fusion vector and the second fusion vector to obtain fusion data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210120544.6A CN114758364B (en) | 2022-02-09 | 2022-02-09 | Industrial Internet of things scene fusion positioning method and system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210120544.6A CN114758364B (en) | 2022-02-09 | 2022-02-09 | Industrial Internet of things scene fusion positioning method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114758364A true CN114758364A (en) | 2022-07-15 |
CN114758364B CN114758364B (en) | 2022-09-23 |
Family
ID=82325182
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210120544.6A Expired - Fee Related CN114758364B (en) | 2022-02-09 | 2022-02-09 | Industrial Internet of things scene fusion positioning method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114758364B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115713681A (en) * | 2022-11-22 | 2023-02-24 | 中国农业科学院农业资源与农业区划研究所 | Method and system for generating space-time continuous crop parameters by fusing internet of things and satellite data |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107832834A (en) * | 2017-11-13 | 2018-03-23 | 合肥工业大学 | A kind of construction method of the WIFI indoor positioning fingerprint bases based on generation confrontation network |
CN110234085A (en) * | 2019-05-23 | 2019-09-13 | 深圳大学 | Based on the indoor location fingerprint to anti-migration network drawing generating method and system |
CN110300370A (en) * | 2019-07-02 | 2019-10-01 | 广州纳斯威尔信息技术有限公司 | A kind of reconstruction wifi fingerprint map indoor orientation method |
CN111797863A (en) * | 2019-04-09 | 2020-10-20 | Oppo广东移动通信有限公司 | Model training method, data processing method, device, storage medium and equipment |
CN112085738A (en) * | 2020-08-14 | 2020-12-15 | 南京邮电大学 | Image segmentation method based on generation countermeasure network |
US20200402223A1 (en) * | 2019-06-24 | 2020-12-24 | Insurance Services Office, Inc. | Machine Learning Systems and Methods for Improved Localization of Image Forgery |
CN112312541A (en) * | 2020-10-09 | 2021-02-02 | 清华大学 | Wireless positioning method and system |
US20210112371A1 (en) * | 2020-12-21 | 2021-04-15 | Intel Corporation | Proximity detection using wi-fi channel state information |
CN113630720A (en) * | 2021-08-24 | 2021-11-09 | 西北大学 | Indoor positioning method based on WiFi signal strength and generation countermeasure network |
Non-Patent Citations (3)
Title |
---|
KEVIN M. CHEN 等: "Semi-Supervised Learning with GANs for Device-Free Fingerprinting Indoor Localization", 《GLOBECOM 2020 - 2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE》 * |
章燕芳: "多传感器位置指纹信息的采集与处理技术", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 * |
郭昕刚 等: "基于k-means及改进k近邻的WiFi指纹定位算法", 《长春工业大学学报》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115713681A (en) * | 2022-11-22 | 2023-02-24 | 中国农业科学院农业资源与农业区划研究所 | Method and system for generating space-time continuous crop parameters by fusing internet of things and satellite data |
CN115713681B (en) * | 2022-11-22 | 2023-06-13 | 中国农业科学院农业资源与农业区划研究所 | Method and system for generating space-time continuous crop parameters by integrating Internet of things and satellite data |
Also Published As
Publication number | Publication date |
---|---|
CN114758364B (en) | 2022-09-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Li et al. | Toward location-enabled IoT (LE-IoT): IoT positioning techniques, error sources, and error mitigation | |
EP2270535B1 (en) | Indoor/outdoor decision apparatus and indoor/outdoor decision method | |
CN105699938B (en) | A kind of accurate positioning method and device based on wireless signal | |
Diaz et al. | Bluepass: An indoor bluetooth-based localization system for mobile applications | |
CN112533163B (en) | Indoor positioning method based on NB-IoT (NB-IoT) improved fusion ultra-wideband and Bluetooth | |
CN101923118B (en) | Building influence estimation apparatus and building influence estimation method | |
CN110926461B (en) | Indoor positioning method and system based on ultra wide band and navigation method and system | |
Lee et al. | Method for improving indoor positioning accuracy using extended Kalman filter | |
CN108413966A (en) | Localization method based on a variety of sensing ranging technology indoor locating systems | |
Song et al. | Implementation of android application for indoor positioning system with estimote BLE beacons | |
Mazan et al. | A Study of Devising Neural Network Based Indoor Localization Using Beacons: First Results. | |
CN114758364B (en) | Industrial Internet of things scene fusion positioning method and system based on deep learning | |
Alamleh et al. | A weighting system for building RSS maps by crowdsourcing data from smartphones | |
CN109640253B (en) | Mobile robot positioning method | |
Duong et al. | Improving indoor positioning system using weighted linear least square and neural network | |
Moradbeikie et al. | A cost-effective LoRaWAN-based IoT localization method using fixed reference nodes and dual-slope path-loss modeling | |
Gadhgadhi et al. | Distance estimation using polynomial approximation and neural network based on rssi technique | |
CN108540926B (en) | Wireless signal fingerprint construction method and device | |
CN115629376A (en) | Target positioning method and device, electronic device and storage medium | |
Luo et al. | Research on an adaptive algorithm for indoor bluetooth positioning | |
Pan et al. | Application of a WiFi/Geomagnetic Combined Positioning Method in a Single Access Point Environment | |
Zhang et al. | Integrated iBeacon/PDR Indoor Positioning System Using Extended Kalman Filter | |
Chakraborty et al. | On estimating the location and the 3-d shape of an object in an indoor environment using visible light | |
US11808873B2 (en) | Systems and methods for locating tagged objects in remote regions | |
Chen et al. | A new indoor positioning technique based on neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220923 |