CN116051810A - Intelligent clothing positioning method based on deep learning - Google Patents

Intelligent clothing positioning method based on deep learning

Info

Publication number
CN116051810A
Authority
CN
China
Prior art keywords
positioning
layer
convolution
value
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310330129.8A
Other languages
Chinese (zh)
Other versions
CN116051810B (en)
Inventor
黄国强
俞晨雨
田张源
陈余焜
张碧瑶
王文婷
王智力
王朵
余锋
姜明华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Textile University
Original Assignee
Wuhan Textile University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Textile University
Priority to CN202310330129.8A
Publication of CN116051810A
Application granted
Publication of CN116051810B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/30: Noise filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

The invention discloses a smart garment positioning method based on deep learning, comprising the following steps: collecting positioning signals through an ultra-wideband positioning chip attached to the smart garment, preprocessing the signals with a denoising filter algorithm, and constructing a data set; feeding the preprocessed positioning signals into a convolutional neural network to extract a positioning-information feature map; and classifying the last-layer positioning-information feature map with a deep learning network to obtain a high-precision positioning result. By using an ultra-wideband positioning chip fixed on the smart garment, the method achieves high-precision positioning while remaining portable and efficient, and can be applied in fields such as smart wearable devices and engineering safety.

Description

Intelligent clothing positioning method based on deep learning
Technical Field
The invention relates to the technical field of deep-learning-based positioning, and in particular to a smart garment positioning method based on deep learning.
Background
High-precision positioning has long been a research hot spot in the field of engineering safety; however, it often relies on bulky positioning sensors to meet the accuracy requirement. In the wearable-device field, smart garments have been one of the research hot spots of recent years, offering portability, intelligence, and high comfort. Processing the signals of positioning sensors on smart garments with high-precision positioning algorithms will likewise be a focus of future positioning research.
At present, many research institutions in China and abroad are studying positioning methods for wearable devices. Existing methods fall mainly into two categories. The first uses satellite-positioning chips such as GPS or BeiDou; it offers high accuracy, but the signal is easily blocked and interference resistance is poor. The second measures the motion state with sensors such as accelerometers and gyroscopes to update the position estimate; it resists interference well but has low accuracy and poor wearing comfort. In addition, in the field of medical safety, some patients with impaired mobility require highly accurate positioning in specific scenarios to prevent accidents. Most current approaches strap positioning sensors onto the patient, which offers poor comfort and poor interference resistance.
Disclosure of Invention
To address these problems, the invention uses a deep-learning-based algorithm to overcome the low precision of positioning-sensor signals and, targeting the medical-safety field, integrates the sensor into the smart garment to overcome poor wearing comfort, providing a comfortable smart garment positioning method for patients with impaired mobility.
The invention provides a smart garment positioning method based on deep learning, which collects positioning information in different areas with an ultra-wideband positioning chip, extracts local features from the positioning-signal sequence through a convolutional neural network to obtain a positioning-information feature map, and passes that feature map into a self-attention-based deep learning network, thereby addressing the poor interference resistance and low comfort of traditional positioning methods.
In order to achieve the above purpose, the invention adopts the following technical scheme. A smart garment positioning method based on deep learning comprises the following steps:
Step (1): collect positioning signals through an ultra-wideband positioning chip attached to the smart garment, preprocess the collected signals with a denoising filter algorithm, construct the preprocessed signals into a data set, and divide it into a training set and a test set;
Step (2): train the constructed smart garment positioning model on the training set, the model comprising a feature extraction network and a feature classification network;
Step (2.1): feed the preprocessed positioning signals into the feature extraction network to extract a positioning-information feature map;
Step (2.2): classify the last-layer positioning-information feature map with the feature classification network and output a high-precision positioning result;
Step (3): input the preprocessed positioning signals of the test set into the trained smart garment positioning model and output the positioning result.
Further, the positioning-signal acquisition in step (1) comprises the following steps:
determine an acquisition area and divide it proportionally into n small square areas, the square at the upper-left corner being area 1 and the square at the lower-right corner being area n; deploy ultra-wideband positioning base station A directly above the acquisition area, base station B at the lower-left corner, and base station C at the lower-right corner;
deploy the ultra-wideband positioning tag chip on the smart garment in area 1 and collect the distances between the tag and the three base stations, denoted $(D_A, D_B, D_C)$ (the original symbol images were not preserved; this notation is assumed throughout), taking "1" as the tag value of the collected triple;
repeat the above in square areas 1 through n, collecting the tag-to-base-station distances $(D_A, D_B, D_C)$ in each area.
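As a concrete illustration of this collection procedure, the sketch below builds the labelled distance data set. It is a sketch under stated assumptions: read_uwb_distances is a hypothetical stand-in for the UWB tag chip's ranging API (here simulated with random values), and the 25-area grid of the embodiment is assumed.

    import random
    from typing import List, Tuple

    def read_uwb_distances() -> Tuple[float, float, float]:
        # Hypothetical stand-in for the UWB tag driver: returns the current
        # distances (D_A, D_B, D_C) to base stations A, B and C in metres.
        return tuple(random.uniform(0.5, 30.0) for _ in range(3))

    def collect_area_samples(area_id: int, num_samples: int) -> List[dict]:
        # Collect num_samples labelled distance triples inside one grid area;
        # the area index itself is the tag value, as in the text.
        return [{"distances": read_uwb_distances(), "label": area_id}
                for _ in range(num_samples)]

    dataset = []
    for area in range(1, 26):                # embodiment: n = 25 areas
        dataset += collect_area_samples(area, num_samples=1000)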
Further, in step (1) the collected positioning signals are preprocessed with a denoising filter algorithm, and the preprocessed positioning signals are used to construct a data set.
The denoising filter algorithm is as follows: take the length-t sequence of collected distances $(D_A, D_B, D_C)$ as the sequence $x$; compute a denoising value for the t-th sequence value from the t-th and (t-1)-th entries, then multiply the t-th sequence value by its denoising value, reducing the influence of noise on the sequence. The algorithm therefore has the form

$$d_t = f\left(x_t, x_{t-1}\right), \qquad \hat{x}_t = d_t \cdot x_t$$

where the exact form of $f$ was an equation image not preserved in this text, $d_t$ is the denoising value of the t-th sequence value, $x_t$ is the t-th value of the input sequence $x$, and $\hat{x}_t$ is the t-th value of the filtered sequence $\hat{x}$, with t the set time-period length.
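A minimal sketch of this multiplicative filter follows. Since the exact formula for the denoising value did not survive as text, the weight d_t is assumed here to be a jump-suppressing function of consecutive samples; only the structure d_t = f(x_t, x_{t-1}) and x̂_t = d_t · x_t follows the text.

    import numpy as np

    def denoise_filter(x: np.ndarray) -> np.ndarray:
        # Multiplicative denoising of one distance sequence. The exact d_t
        # formula is an assumption: large jumps between consecutive samples
        # shrink the weight toward 0, small changes keep it near 1.
        x_hat = x.astype(float).copy()
        for t in range(1, len(x)):
            d_t = 1.0 / (1.0 + abs(x[t] - x[t - 1]) / max(abs(x[t - 1]), 1e-9))
            x_hat[t] = d_t * x[t]          # x_hat_t = d_t * x_t, as in the text
        return x_hat

    # Example: filter a noisy distance trace of length t = 8.
    print(denoise_filter(np.array([5.0, 5.1, 5.0, 9.8, 5.2, 5.1, 5.0, 5.1])))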
Further, the feature extraction network in step (2.1) comprises three convolution blocks: the first comprises 2 convolution layers, 1 pooling layer, and 1 ReLU activation layer; the second comprises 1 convolution layer, 1 pooling layer, and 1 ReLU activation layer; the third comprises 2 convolution layers. The denoised, filtered $(D_A, D_B, D_C)$ sequences are spliced into an information matrix of length t and height 3, which serves as the input of the first convolution block. The outputs of the second and third convolution blocks are the nonlinear superposition of the previous block's output and the current block's output.
Further, the first convolution block comprises a convolution layer with kernel size 1×1, 16 kernels, stride 1, and padding 1; a convolution layer with kernel size 3×3, 32 kernels, stride 1, and padding 1; a pooling layer with kernel size 1×1, 32 channels, stride 3, and padding 1; and a ReLU activation layer. The second convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2, and padding 1; a pooling layer with kernel size 3×3, 64 channels, stride 3, and padding 1; and a ReLU activation layer. The third convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2, and padding 1, and a pooling layer with kernel size 1×1, 1 channel, stride 1, and padding 1, which outputs the positioning-information feature map.
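The PyTorch sketch below assembles the three convolution blocks with the layer hyper-parameters listed above. It is a sketch under stated assumptions: the 3×t information matrix enters as a one-channel image, the final "pooling layer with 1 kernel" is read as a 1×1 convolution down to one channel, and the inter-block nonlinear superposition is omitted because the stated strides change the spatial size.

    import torch
    import torch.nn as nn

    class FeatureExtractor(nn.Module):
        # Sketch of the three-block feature extraction network; hyper-parameters
        # follow the text, structural choices noted above are assumptions.
        def __init__(self):
            super().__init__()
            self.block1 = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=1, stride=1, padding=1),
                nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),
                # PyTorch forbids padding=1 on a 1x1 pool, so padding is dropped here.
                nn.MaxPool2d(kernel_size=1, stride=3),
                nn.ReLU(),
            )
            self.block2 = nn.Sequential(
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
                nn.MaxPool2d(kernel_size=3, stride=3, padding=1),
                nn.ReLU(),
            )
            self.block3 = nn.Sequential(
                nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
                # The text's final "pooling layer" carries a kernel count, so it
                # is read as a 1x1 convolution to a single output channel.
                nn.Conv2d(64, 1, kernel_size=1, stride=1, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.block3(self.block2(self.block1(x)))

    # A 3 x t information matrix (t = 64 here) enters as a 1-channel image.
    feat_map = FeatureExtractor()(torch.randn(1, 1, 3, 64))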
Further, the feature classification network in step (2.2) comprises two codec blocks. The first codec block comprises an encoding layer, a self-attention extraction layer, and a decoding layer; the second codec block comprises an encoding layer, a self-attention extraction layer, a fully connected layer, and a Softmax layer.
The encoding layer flattens the positioning-information feature map from the first dimension to obtain a sequence Z and assigns each element an independent positional encoding computed from its position; the self-attention extraction layer computes global correlations over the encoded sequence; the decoding layer splices the self-attention values extracted by the self-attention extraction layer into a self-attention matrix by position; the Softmax layer outputs a probability for each candidate position at the current moment, and the position with the maximum probability is taken as the positioning result for the current moment.
Further, the coding sequence and the self-attention values are computed as follows (the original equation images were not preserved; the standard forms consistent with the symbol definitions below are assumed):

$$PE_{(i,2k)} = \sin\!\left(\frac{i}{10000^{2k/d}}\right), \qquad PE_{(i,2k+1)} = \cos\!\left(\frac{i}{10000^{2k/d}}\right)$$
$$e_i = z_i + PE_i$$
$$K = E\,W_K, \qquad V = E\,W_V$$
$$Y = \mathrm{softmax}\!\left(\frac{E K^{\top}}{\sqrt{d_k}}\right) V$$

where $PE_i$ is the position encoding of the $i$-th element of the input sequence $Z$ (with $d$ the encoding dimension), $z_i$ is the $i$-th element of the input sequence $Z$, $e_i$ is the $i$-th element of the coding sequence $E$ input into the self-attention extraction layer, $K$ and $V$ are respectively the attention key scores and attention value scores of the current self-attention extraction layer's coding sequence, $W_K$ and $W_V$ are parameters learned by back-propagation in the network, $d_k$ is a modifiable parameter controlling the learning rate of the network, and $Y$ is the output positioning result, i.e., the tag prediction value.
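The sketch below implements one such codec block under the assumptions stated above (sinusoidal position encoding and scaled dot-product self-attention with learned W_K and W_V); d_model and n_areas are illustrative parameters, not values taken from the patent.

    import math
    import torch
    import torch.nn as nn

    def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
        # Standard sinusoidal PE (assumed form; the patent's image was lost).
        pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
        i = torch.arange(0, d_model, 2, dtype=torch.float32)
        angles = pos / torch.pow(10000.0, i / d_model)
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(angles)
        pe[:, 1::2] = torch.cos(angles)
        return pe

    class CodecBlock(nn.Module):
        # One encode -> self-attend -> classify block of the feature
        # classification network; d_model and n_areas are illustrative.
        def __init__(self, d_model: int = 64, n_areas: int = 25):
            super().__init__()
            self.w_k = nn.Linear(d_model, d_model, bias=False)   # learned W_K
            self.w_v = nn.Linear(d_model, d_model, bias=False)   # learned W_V
            self.fc = nn.Linear(d_model, n_areas)
            self.d_k = d_model

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            # z: (batch, seq_len, d_model), the flattened feature-map sequence Z.
            e = z + positional_encoding(z.size(1), z.size(2))    # e_i = z_i + PE_i
            k, v = self.w_k(e), self.w_v(e)                      # K = E W_K, V = E W_V
            attn = torch.softmax(e @ k.transpose(1, 2) / math.sqrt(self.d_k), dim=-1)
            out = (attn @ v).mean(dim=1)                         # pool the attended sequence
            return torch.softmax(self.fc(out), dim=-1)           # per-area probabilities

    probs = CodecBlock()(torch.randn(1, 10, 64))
    area = probs.argmax(dim=-1) + 1      # areas are numbered from 1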
Further, the loss function used to train the smart garment positioning model is (reconstructed here in the standard cross-entropy form consistent with the tag definitions; the original equation image was not preserved):

$$L = -\frac{1}{N}\sum_{j=1}^{N} y_j \log \hat{y}_j$$

where $N$ is the number of input samples, $y_j$ is the true tag value of the current $j$-th sample, and $\hat{y}_j$ is the predicted tag value of the current $j$-th sample.
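In PyTorch the same cross-entropy objective (as reconstructed above) can be sketched in a couple of lines; the shapes here are illustrative.

    import torch
    import torch.nn.functional as F

    logits = torch.randn(8, 25)              # N = 8 samples, 25 candidate areas
    targets = torch.randint(0, 25, (8,))     # true tag values (area indices)
    # F.cross_entropy fuses log-softmax with the negative log-likelihood,
    # i.e. the mean of -log(y_hat_j) over the batch.
    loss = F.cross_entropy(logits, targets)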
Compared with the prior art, the invention has the following advantages and beneficial effects:
The ultra-wideband positioning chip is mounted on the smart garment; a convolutional network (i.e., the feature extraction network) extracts the positioning-information features, and a self-attention-based deep learning network (i.e., the feature classification network) classifies the positioning information. This overcomes the poor interference resistance of the traditional triangulation algorithm based on ultra-wideband positioning chips while retaining the portability and high comfort of a wearable device, providing a comfortable smart garment method for patients with impaired mobility in the medical-safety field; it can be applied in hospitals, nursing homes, electric-power inspection, and similar settings.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of a network structure of a feature extraction network and a feature classification network according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of map division according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and examples, in order to make the objects, technical solutions, and advantages of the present invention more apparent. It should be understood that the specific embodiments described here are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments described below may be combined with each other as long as they do not conflict.
Fig. 1 is a schematic flow chart of a smart garment positioning method based on deep learning according to an embodiment, which includes the following steps:
Step (1): positioning-signal acquisition is performed through an ultra-wideband positioning chip attached to the smart garment; the collected positioning signals are preprocessed with a denoising filter algorithm, and the preprocessed signals are used to construct a data set.
The positioning-signal acquisition comprises the following steps:
A room or a campus is chosen as the acquisition area and divided proportionally into n small squares, the square at the upper-left corner of the map being area 1 and the square at the lower-right corner being area n. Ultra-wideband positioning base station A is deployed directly above the map, base station B at the lower-left corner, and base station C at the lower-right corner.
The ultra-wideband positioning tag chip on the smart garment is deployed in area 1, and the distances $(D_A, D_B, D_C)$ between the tag and the three base stations are collected, with "1" taken as the tag value of the collected triple.
The above is repeated in areas 1 through n, collecting the tag-to-base-station distances $(D_A, D_B, D_C)$ in each area.
The collected positioning signals are preprocessed with the denoising filter algorithm, and the preprocessed signals are used to construct a data set.
The denoising filter algorithm is as follows: the length-t sequence of collected distances $(D_A, D_B, D_C)$ is taken as the sequence $x$; a denoising value is computed for the t-th sequence value from the t-th and (t-1)-th entries, and the t-th sequence value is multiplied by its denoising value, reducing the influence of noise on the sequence. The algorithm has the form

$$d_t = f\left(x_t, x_{t-1}\right), \qquad \hat{x}_t = d_t \cdot x_t$$

where the exact form of $f$ was an equation image not preserved in this text, $d_t$ is the denoising value of the t-th sequence value, $x_t$ is the t-th value of the input sequence $x$, and $\hat{x}_t$ is the t-th value of the filtered sequence $\hat{x}$.
Step (2): the preprocessed positioning signals are fed into a convolutional neural network (i.e., the feature extraction network) to extract a positioning-information feature map.
The convolutional neural network comprises three convolution blocks. The denoised, filtered $(D_A, D_B, D_C)$ sequences are spliced into an information matrix of length t and height 3, which serves as the input of the first convolution block. The first convolution block comprises a convolution layer with kernel size 1×1, 16 kernels, stride 1, and padding 1; a convolution layer with kernel size 3×3, 32 kernels, stride 1, and padding 1; a pooling layer with kernel size 1×1, 32 channels, stride 3, and padding 1; and a ReLU activation layer. The second convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2, and padding 1; a pooling layer with kernel size 3×3, 64 channels, stride 3, and padding 1; and a ReLU activation layer. The third convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2, and padding 1, and a pooling layer with kernel size 1×1, 1 channel, stride 1, and padding 1, which outputs the positioning-information feature map. The outputs of the second and third convolution blocks are the nonlinear superposition of the previous block's output and the current block's output.
Step (3): the last-layer positioning-information feature map is classified with a deep learning network (i.e., the feature classification network), and a high-precision positioning result is output.
The deep learning network for feature classification comprises two codec blocks. The first codec block comprises an encoding layer, a self-attention extraction layer, and a decoding layer; the second codec block comprises an encoding layer, a self-attention extraction layer, a fully connected layer, and a Softmax layer. The encoding layer flattens the positioning-information feature map from the first dimension to obtain a sequence Z and assigns each element an independent positional encoding computed from its position; the self-attention extraction layer computes global correlations over the encoded sequence; the decoding layer splices the extracted self-attention values into a self-attention matrix by position; the Softmax layer outputs a probability for each candidate position at the current moment, and the position with the maximum probability is taken as the positioning result. The sequence encoding and self-attention values are computed as follows (equation images not preserved; the standard forms consistent with the symbol definitions are assumed):
$$PE_{(i,2k)} = \sin\!\left(\frac{i}{10000^{2k/d}}\right), \qquad PE_{(i,2k+1)} = \cos\!\left(\frac{i}{10000^{2k/d}}\right)$$
$$e_i = z_i + PE_i$$
$$K = E\,W_K, \qquad V = E\,W_V$$
$$Y = \mathrm{softmax}\!\left(\frac{E K^{\top}}{\sqrt{d_k}}\right) V$$

where $PE_i$ is the position encoding of the $i$-th element of the input sequence $Z$ (with $d$ the encoding dimension), $z_i$ is the $i$-th element of the input sequence $Z$, $e_i$ is the $i$-th element of the coding sequence $E$ input into the self-attention extraction layer, $K$ and $V$ are respectively the attention key scores and attention value scores of the current self-attention extraction layer's coding sequence, $W_K$ and $W_V$ are parameters learned by back-propagation in the network, $d_k$ is a modifiable parameter controlling the learning rate of the network, and $Y$ is the output positioning result, i.e., the tag prediction value.
The loss function used by the convolutional neural network in step (2) and the deep learning network in step (3) is (the same reconstructed cross-entropy form as above):

$$L = -\frac{1}{N}\sum_{j=1}^{N} y_j \log \hat{y}_j$$

where $N$ is the number of input samples (i.e., input positioning signals), $y_j$ is the true value of the current $j$-th sample, and $\hat{y}_j$ is the predicted value of the current $j$-th sample.
In this embodiment, the positioning area is divided into 25 areas; the specific division and the deployment positions of the base stations on the map are shown in Fig. 3. Before the experiment, positioning signals were collected in each of the 25 areas through the ultra-wideband positioning chip attached to the smart garment, with 1000 distance triples to base stations A, B, and C collected per area, 25000 samples in total. The collected signals were preprocessed with the denoising filter algorithm and constructed into a data set, which was divided into a training set and a test set in an 8:2 ratio; the training set was used to train the constructed smart garment positioning model. The trained model reaches an accuracy of 92.50% on the test set. In the experiment, the ultra-wideband positioning tag chip integrated into the smart garment was deployed in areas 1, 5, and 18; the distances between the tag chip and base stations A, B, and C were measured in each area, input as sequences to the trained model, and the network outputs were recorded. The specific experimental results are shown in the following table:
[Results table not preserved in this text: measured distances in areas 1, 5, and 18 and the corresponding predicted areas output by the network.]
The experimental results show that the predicted area output by the network matches the area in which the smart garment was actually located, demonstrating high reliability.
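A sketch of the embodiment's evaluation procedure, under the same assumptions as the earlier sketches (placeholder data set; predict_area is a hypothetical wrapper around the trained model):

    import random

    # Placeholder data set; in practice this is built by the collection sketch
    # shown earlier (25 areas x 1000 labelled distance triples).
    dataset = [{"distances": (1.0, 2.0, 3.0), "label": a}
               for a in range(1, 26) for _ in range(40)]

    def predict_area(distances):
        # Hypothetical wrapper: run the trained positioning model on one
        # preprocessed distance window and return the predicted area index.
        return 1

    random.shuffle(dataset)
    split = int(0.8 * len(dataset))                  # 8:2 train/test split
    train_set, test_set = dataset[:split], dataset[split:]

    correct = sum(predict_area(s["distances"]) == s["label"] for s in test_set)
    accuracy = correct / len(test_set)               # the embodiment reports 92.50%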
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein. The solutions in the embodiments of the present application may be implemented in various computer languages, for example the object-oriented programming language Java and the interpreted scripting language JavaScript.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (8)

1. A smart garment positioning method based on deep learning, characterized by comprising the following steps:
Step (1): collect positioning signals through an ultra-wideband positioning chip attached to the smart garment, preprocess the collected signals with a denoising filter algorithm, construct the preprocessed signals into a data set, and divide it into a training set and a test set;
Step (2): train the constructed smart garment positioning model on the training set, the model comprising a feature extraction network and a feature classification network;
Step (2.1): feed the preprocessed positioning signals into the feature extraction network to extract a positioning-information feature map;
Step (2.2): classify the last-layer positioning-information feature map with the feature classification network and output a high-precision positioning result;
Step (3): input the preprocessed positioning signals of the test set into the trained smart garment positioning model and output the positioning result.
2. The smart garment positioning method based on deep learning as claimed in claim 1, wherein the positioning-signal acquisition in step (1) comprises the following steps:
determine an acquisition area and divide it proportionally into n small square areas, the square at the upper-left corner being area 1 and the square at the lower-right corner being area n; deploy ultra-wideband positioning base station A directly above the acquisition area, base station B at the lower-left corner, and base station C at the lower-right corner;
deploy the ultra-wideband positioning tag chip on the smart garment in area 1 and collect the distances $(D_A, D_B, D_C)$ between the tag and the three base stations, taking "1" as the tag value of the collected triple;
repeat the above in square areas 1 through n, collecting the tag-to-base-station distances $(D_A, D_B, D_C)$ in each area.
3. The smart garment positioning method based on deep learning as claimed in claim 2, wherein in step (1) the collected positioning signals are preprocessed with a denoising filter algorithm and the preprocessed signals are used to construct a data set;
the denoising filter algorithm is as follows: take the length-t sequence of collected distances $(D_A, D_B, D_C)$ as the sequence $x$; compute a denoising value for the t-th sequence value from the t-th and (t-1)-th entries, then multiply the t-th sequence value by its denoising value, reducing the influence of noise on the sequence:

$$d_t = f\left(x_t, x_{t-1}\right), \qquad \hat{x}_t = d_t \cdot x_t$$

(the exact form of $f$ was an equation image not preserved in this text), where $d_t$ is the denoising value of the t-th sequence value, $x_t$ is the t-th value of the input sequence $x$, and $\hat{x}_t$ is the t-th value of the filtered sequence, with t the set time-period length.
4. The smart garment positioning method based on deep learning as claimed in claim 1, wherein the feature extraction network in step (2.1) comprises three convolution blocks: the first comprises 2 convolution layers, 1 pooling layer, and 1 ReLU activation layer; the second comprises 1 convolution layer, 1 pooling layer, and 1 ReLU activation layer; the third comprises 2 convolution layers; the denoised, filtered $(D_A, D_B, D_C)$ sequences are spliced into an information matrix of length t and height 3, which serves as the input of the first convolution block; the outputs of the second and third convolution blocks are the nonlinear superposition of the previous block's output and the current block's output.
5. The smart garment positioning method based on deep learning as claimed in claim 1, wherein the first convolution block comprises a convolution layer with kernel size 1×1, 16 kernels, stride 1, and padding 1; a convolution layer with kernel size 3×3, 32 kernels, stride 1, and padding 1; a pooling layer with kernel size 1×1, 32 channels, stride 3, and padding 1; and a ReLU activation layer; the second convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2, and padding 1; a pooling layer with kernel size 3×3, 64 channels, stride 3, and padding 1; and a ReLU activation layer; the third convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2, and padding 1, and a pooling layer with kernel size 1×1, 1 channel, stride 1, and padding 1, which outputs the positioning-information feature map.
6. The smart garment positioning method based on deep learning as claimed in claim 1, wherein the feature classification network in step (2.2) comprises two codec blocks: the first codec block comprises an encoding layer, a self-attention extraction layer, and a decoding layer; the second codec block comprises an encoding layer, a self-attention extraction layer, a fully connected layer, and a Softmax layer; the encoding layer flattens the positioning-information feature map from the first dimension to obtain a sequence Z and assigns each element an independent positional encoding computed from its position; the self-attention extraction layer computes global correlations over the encoded sequence; the decoding layer splices the extracted self-attention values into a self-attention matrix by position; the Softmax layer outputs a probability for each candidate position at the current moment, and the position with the maximum probability is taken as the positioning result for the current moment.
7. The smart garment positioning method based on deep learning as claimed in claim 6, wherein the coding sequence and the self-attention values are computed as follows (the original equation images were not preserved; the standard forms consistent with the symbol definitions below are assumed):

$$PE_{(i,2k)} = \sin\!\left(\frac{i}{10000^{2k/d}}\right), \qquad PE_{(i,2k+1)} = \cos\!\left(\frac{i}{10000^{2k/d}}\right)$$
$$e_i = z_i + PE_i$$
$$K = E\,W_K, \qquad V = E\,W_V$$
$$Y = \mathrm{softmax}\!\left(\frac{E K^{\top}}{\sqrt{d_k}}\right) V$$

where $PE_i$ is the position encoding of the $i$-th element of the input sequence $Z$ (with $d$ the encoding dimension), $z_i$ is the $i$-th element of the input sequence $Z$, $e_i$ is the $i$-th element of the coding sequence $E$ input into the self-attention extraction layer, $K$ and $V$ are respectively the attention key scores and attention value scores of the current self-attention extraction layer's coding sequence, $W_K$ and $W_V$ are parameters learned by back-propagation in the network, $d_k$ is a modifiable parameter controlling the learning rate of the network, and $Y$ is the output positioning result, i.e., the tag prediction value.
8. The smart garment positioning method based on deep learning as claimed in claim 1, wherein the loss function used to train the smart garment positioning model is (reconstructed in the standard cross-entropy form; the original equation image was not preserved):

$$L = -\frac{1}{N}\sum_{j=1}^{N} y_j \log \hat{y}_j$$

where $N$ is the number of input samples, $y_j$ is the true tag value of the current $j$-th sample, and $\hat{y}_j$ is the predicted tag value of the current $j$-th sample.
CN202310330129.8A 2023-03-30 2023-03-30 Intelligent clothing positioning method based on deep learning Active CN116051810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310330129.8A CN116051810B (en) 2023-03-30 2023-03-30 Intelligent clothing positioning method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310330129.8A CN116051810B (en) 2023-03-30 2023-03-30 Intelligent clothing positioning method based on deep learning

Publications (2)

Publication Number Publication Date
CN116051810A (en) 2023-05-02
CN116051810B CN116051810B (en) 2023-06-13

Family

ID=86129911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310330129.8A Active CN116051810B (en) 2023-03-30 2023-03-30 Intelligent clothing positioning method based on deep learning

Country Status (1)

Country Link
CN (1) CN116051810B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108282743A (en) * 2018-03-05 2018-07-13 桂林理工大学 Indoor orientation method, apparatus and system
CN110933625A (en) * 2019-11-01 2020-03-27 武汉纺织大学 Ultra-wideband fingerprint positioning method based on deep learning
WO2020224123A1 (en) * 2019-06-24 2020-11-12 浙江大学 Deep learning-based seizure focus three-dimensional automatic positioning system
WO2020257812A2 (en) * 2020-09-16 2020-12-24 Google Llc Modeling dependencies with global self-attention neural networks
CN112257509A (en) * 2020-09-23 2021-01-22 浙江科技学院 Stereo image single-stream visual saliency detection method based on joint information coding
CN114364015A (en) * 2021-12-10 2022-04-15 上海应用技术大学 UWB positioning method based on deep learning
CN114678097A (en) * 2022-05-25 2022-06-28 武汉纺织大学 Artificial intelligence and digital twinning system and method for intelligent clothes

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108282743A (en) * 2018-03-05 2018-07-13 桂林理工大学 Indoor orientation method, apparatus and system
WO2020224123A1 (en) * 2019-06-24 2020-11-12 浙江大学 Deep learning-based seizure focus three-dimensional automatic positioning system
CN110933625A (en) * 2019-11-01 2020-03-27 武汉纺织大学 Ultra-wideband fingerprint positioning method based on deep learning
WO2020257812A2 (en) * 2020-09-16 2020-12-24 Google Llc Modeling dependencies with global self-attention neural networks
CN112257509A (en) * 2020-09-23 2021-01-22 浙江科技学院 Stereo image single-stream visual saliency detection method based on joint information coding
CN114364015A (en) * 2021-12-10 2022-04-15 上海应用技术大学 UWB positioning method based on deep learning
CN114678097A (en) * 2022-05-25 2022-06-28 武汉纺织大学 Artificial intelligence and digital twinning system and method for intelligent clothes

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Seyed Yahya Nikouei et al.: "Kerman: A Hybrid Lightweight Tracking Algorithm to Enable Smart Surveillance as an Edge Service", 2019 16th IEEE Consumer Communications & Networking Conference, pages 1-6 *
Liu Zhenyu; Li Jiajun; Wang Kun: "A fingerprint matching and positioning method based on a deep autoencoder", Journal of Guangdong University of Technology, no. 05, pages 19-25 *
Zhang Zhehan; Fang Wei; Du Lili; Qiao Yanli; Zhang Dongying; Ding Guoshen: "Semantic segmentation of remote sensing images based on an encoder-decoder convolutional neural network", Acta Optica Sinica, no. 03, pages 46-55 *
Jia Hongyu; Wang Yuhan; Cong Riqing; Lin Yan: "Research on a neural network text classification algorithm combining a self-attention mechanism", Computer Applications and Software, no. 02, pages 206-212 *
Han Xingshuo; Lin Wei: "Research and implementation of deep convolutional neural networks in image recognition algorithms", Microcomputer & Its Applications, no. 21, pages 58-60 *

Also Published As

Publication number Publication date
CN116051810B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
Gu et al. A survey on deep learning for human activity recognition
CN111209848B (en) Real-time falling detection method based on deep learning
CN106096662A (en) Human motion state identification based on acceleration transducer
CN110674875A (en) Pedestrian motion mode identification method based on deep hybrid model
CN113160276B (en) Target tracking method, target tracking device and computer readable storage medium
CN103218825A (en) Quick detection method of spatio-temporal interest points with invariable scale
CN112836657A (en) Pedestrian detection method and system based on lightweight YOLOv3
CN110503643A (en) A kind of object detection method and device based on the retrieval of multiple dimensioned rapid scene
CN110348492A (en) A kind of correlation filtering method for tracking target based on contextual information and multiple features fusion
CN104463240A (en) Method and device for controlling list interface
Mohan et al. Non-invasive technique for real-time myocardial infarction detection using faster R-CNN
CN114937293A (en) Agricultural service management method and system based on GIS
CN116051810B (en) Intelligent clothing positioning method based on deep learning
CN117036868B (en) Training method and device of human body perception model, medium and electronic equipment
CN115422962A (en) Gesture and gesture recognition method and device based on millimeter wave radar and deep learning algorithm
CN111105124B (en) Multi-landmark influence calculation method based on distance constraint
CN109350072B (en) Step frequency detection method based on artificial neural network
CN116758479A (en) Coding deep learning-based intelligent agent activity recognition method and system
CN114550297B (en) Pedestrian intention analysis method and system
CN111597881B (en) Human body complex behavior identification method based on data separation multi-scale feature combination
CN115563652A (en) Track embedding leakage prevention method and system
Qiu et al. Old man fall detection based on surveillance video object tracking
Chouhan et al. Human fall detection analysis with image recognition using convolutional neural network approach
Cui et al. Mobile Big Data Analytics for Human Behavior Recognition in Wireless Sensor Network Based on Transfer Learning
Junoh et al. Region Classification using Wi-Fi and Magnetic Field Strength.

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant