CN116051810B - Intelligent clothing positioning method based on deep learning - Google Patents
- Publication number
- CN116051810B (application CN202310330129.8A)
- Authority
- CN
- China
- Prior art keywords
- positioning
- layer
- convolution
- value
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Position Fixing By Use Of Radio Waves (AREA)
Abstract
The invention discloses a smart garment positioning method based on deep learning, comprising the following steps: collecting positioning signals through an ultra-wideband positioning chip attached to the smart garment, preprocessing the positioning signals with a denoising and filtering preprocessing algorithm, and constructing a data set; passing the preprocessed positioning signals into a convolutional neural network to extract positioning-information feature maps; and performing feature classification on the last-layer positioning-information feature map with a deep learning network to obtain a high-precision positioning result. The method achieves high-precision positioning with an ultra-wideband positioning chip fixed on the smart garment, offers portability and high efficiency, and can be applied in fields such as smart wearable devices and engineering safety.
Description
Technical Field
The invention relates to the technical field of deep-learning-based positioning, and in particular to a smart garment positioning method based on deep learning.
Background
High-precision positioning has long been a research hot spot in the field of engineering safety; however, it often relies on heavy positioning sensors to meet the precision requirement. In the field of wearable devices, smart garments have been one of the research hot spots of recent years, offering portability, intelligence, and a high level of comfort. Processing the signals of positioning sensors on smart garments with high-precision positioning algorithms will likewise be one of the hot spots of future positioning-algorithm research.
At present, many research institutions at home and abroad are studying positioning methods for wearable devices. Existing methods fall mainly into two types. The first type uses satellite-positioning chips such as GPS and BeiDou; it offers high precision, but its signal transmission is easily blocked and its anti-interference capability is poor. The second type uses sensors such as accelerometers and gyroscopes to estimate the motion state and update the position; it has strong anti-interference performance but low precision and poor wearing comfort. In addition, in the field of medical safety, some patients with mobility impairment require highly accurate positioning in specific scenarios to prevent accidents. Most current methods attach positioning sensors directly to the patient, which offers poor comfort and poor anti-interference capability.
Disclosure of Invention
To address these problems, the invention uses a deep-learning-based algorithm to overcome the low precision of positioning-sensor signals, integrates the sensor into a smart garment for the medical-safety field to overcome poor wearing comfort, and thereby provides a comfortable smart garment positioning method for patients with mobility impairment.
The invention provides a smart garment positioning method based on deep learning, which collects positioning information in different areas with an ultra-wideband positioning chip, extracts local features from the positioning-signal sequence through a convolutional neural network to obtain a positioning-information feature map, and passes the feature map into a self-attention-based deep learning network, thereby addressing the poor anti-interference capability and low comfort of traditional positioning methods.
In order to achieve the above purpose, the invention adopts the following technical scheme: a smart garment positioning method based on deep learning comprises the following steps:
step (1), collecting positioning signals through an ultra-wideband positioning chip attached to the smart garment, preprocessing the collected positioning signals with a denoising and filtering preprocessing algorithm, constructing a data set from the preprocessed positioning signals, and dividing it into a training set and a test set;
step (2), training the constructed smart garment positioning model with the training set, the smart garment positioning model comprising a feature extraction network and a feature classification network;
step (2.1), transmitting the preprocessed positioning signals into a feature extraction network to extract a positioning information feature map;
step (2.2), the last layer of positioning information feature map is subjected to feature classification by using a feature classification network, and a high-precision positioning result is output;
step (3), inputting the preprocessed positioning signals of the test set into the trained smart garment positioning model and outputting the positioning result.
Further, the positioning signal acquisition in the step (1) includes the following steps:
determining an acquisition area and dividing it into n equal small square areas, the small square at the upper left corner being area 1 and the square at the lower right corner being area n; deploying ultra-wideband positioning base station A directly above the acquisition area, base station B at the lower left corner, and base station C at the lower right corner;
the ultra-wideband positioning tag chip on the intelligent clothes is deployed in the area 1, and the distance between the ultra-wideband positioning tag and 3 ultra-wideband positioning base stations is collected"1" is taken as->Is a tag value of (2);
repeating the above step in the small square areas 1 to n, collecting the distances (d_A, d_B, d_C) between the ultra-wideband positioning tag and the 3 ultra-wideband positioning base stations in the different areas.
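Assuming the n small squares are numbered row-major as described (area 1 at the upper left, area n at the lower right of a square grid), the mapping between a region index and its grid cell can be sketched as follows; the function names and the square-grid assumption are illustrative, not part of the patent.

```python
def region_to_cell(region, grid_side):
    """Map a 1-based region index to (row, col) in a row-major grid.

    Region 1 is the upper-left cell and region grid_side**2 the
    lower-right cell, matching the numbering described above.
    """
    if not 1 <= region <= grid_side ** 2:
        raise ValueError("region index out of range")
    idx = region - 1
    return idx // grid_side, idx % grid_side


def cell_to_region(row, col, grid_side):
    """Inverse mapping: (row, col) back to the 1-based region index."""
    return row * grid_side + col + 1
```

With the 5×5 grid used in the embodiment, region 1 maps to cell (0, 0) and region 25 to cell (4, 4).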
Further, in the step (1), a denoising and filtering preprocessing algorithm is used for preprocessing the acquired positioning signals, and the preprocessed positioning signals are used for constructing a data set;
the denoising filter preprocessing algorithm comprises the following steps: of length tAs a sequence x, a denoising value of a sequence value of a t bit and a sequence value of a t-1 bit in the sequence is obtained through calculation, the denoising value of the sequence value of the t bit is multiplied by the sequence value of the t bit, and the influence of the noise value on the sequence is reduced, wherein the calculation formula of the denoising filter preprocessing algorithm is as follows:
wherein the method comprises the steps ofDenoising value for the t-th bit sequence value, < >>For inputting the sequence +.>The value of bit t in (a)>Post sequence +.>T, where t is the set time period length.
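The patent's exact denoising formula is not reproduced in this text; as a hedged illustration of the described idea — combining the current sample with the preceding value to suppress noise spikes — a simple recursive smoothing filter might look like the following, where the weight `alpha` is a hypothetical parameter, not a value from the patent.

```python
def denoise_filter(x, alpha=0.8):
    """Recursive smoothing sketch: each output mixes the current sample
    with the previous filtered value, damping isolated noise spikes."""
    out = []
    prev = x[0]
    for sample in x:
        prev = alpha * sample + (1 - alpha) * prev
        out.append(prev)
    return out
```

A constant sequence passes through unchanged, while an isolated spike is attenuated relative to its raw value.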
Further, the feature extraction network in step (2.1) comprises three convolution blocks: the first convolution block comprises 2 convolution layers, 1 pooling layer and 1 ReLU activation-function layer; the second convolution block comprises 1 convolution layer, 1 pooling layer and 1 ReLU activation-function layer; and the third convolution block comprises 2 convolution layers. The denoised and filtered distance signals are spliced into an information matrix of length t and height 3, which serves as the input of the first convolution block; the outputs of the second and third convolution blocks are the superposition of a nonlinear transformation of the previous block's output with the current block's output.
Further, the first convolution block comprises a convolution layer with kernel size 1×1, 16 kernels, stride 1 and padding 1; a convolution layer with kernel size 3×3, 32 kernels, stride 1 and padding 1; a pooling layer with kernel size 1×1, 32 channels, stride 3 and padding 1; and a ReLU activation-function layer. The second convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2 and padding 1; a pooling layer with kernel size 3×3, 64 channels, stride 3 and padding 1; and a ReLU activation-function layer. The third convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2 and padding 1, and a convolution layer with kernel size 1×1, 1 kernel, stride 1 and padding 1, which outputs the positioning-information feature map.
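The spatial size produced by each convolution or pooling layer above follows the standard formula out = floor((in + 2·padding − kernel) / stride) + 1. The sketch below traces one spatial dimension through the kernel/stride/padding settings just listed; the input length t = 32 is an assumed value for illustration, not a figure from the patent.

```python
import math

def conv_out(size, kernel, stride, padding):
    """Standard output-size formula for a convolution or pooling layer."""
    return math.floor((size + 2 * padding - kernel) / stride) + 1

# Trace one spatial dimension (the sequence length t) through the
# kernel/stride/padding settings listed above; t = 32 is an assumption.
t = 32
layers = [
    (1, 1, 1),  # block 1: 1x1 conv, stride 1, padding 1
    (3, 1, 1),  # block 1: 3x3 conv, stride 1, padding 1
    (1, 3, 1),  # block 1: 1x1 pooling, stride 3, padding 1
    (3, 2, 1),  # block 2: 3x3 conv, stride 2, padding 1
    (3, 3, 1),  # block 2: 3x3 pooling, stride 3, padding 1
    (3, 2, 1),  # block 3: 3x3 conv, stride 2, padding 1
    (1, 1, 1),  # block 3: 1x1 conv, stride 1, padding 1
]
size = t
for kernel, stride, padding in layers:
    size = conv_out(size, kernel, stride, padding)
```

With these settings an input of length 32 shrinks to a feature map of spatial length 3, the stride-2 and stride-3 layers doing most of the downsampling.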
Further, the feature classification network in step (2.2) comprises two codec blocks: the first codec block comprises an encoding layer, a self-attention extraction layer and a decoding layer; the second codec block comprises an encoding layer, a self-attention extraction layer, a decoding layer, a fully connected layer and a Softmax layer;
the decoding layer performs flattening operation on the positioning information feature map from the first dimension to obtain a sequence Z, and the independent coding sequence Z is endowed by calculation according to the position; the self-attention extraction layer calculates global correlation according to the coding sequence; the decoding layer splices the self-attention values extracted by the self-attention extraction layer into a self-attention matrix according to the positions; the Softmax layer outputs a probability value of the positioning position at the current moment, and takes the maximum value in all positions as a positioning position result at the current moment.
Further, the coding sequence and the self-attention value calculation formula are as follows:
wherein PE_i denotes the position encoding of the i-th element of the input sequence Z, z_i is the i-th element of the input sequence Z, e_i is the i-th element of the coding sequence E input to the self-attention extraction layer, K and V are respectively the attention key score and attention value score of the coding sequence of the current self-attention extraction layer, W_K and W_V are parameters learned through back propagation in the network, d is a modifiable parameter controlling the learning rate of the network, and Y denotes the output positioning result, i.e. the tag prediction value.
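As a rough sketch of the encoding-plus-self-attention computation described above: a sinusoidal position encoding (a common choice; the patent's exact PE_i formula is not reproduced in this text) is added to the flattened sequence Z, and key/value scores K and V are obtained from learned matrices W_K and W_V, with the attention logits scaled by a factor derived from the model width. All sizes, the random weights, and the use of E itself as the query are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16  # illustrative sizes, not from the patent

# Sinusoidal position encoding (one common choice for PE_i).
pos = np.arange(seq_len)[:, None]
dim = np.arange(d_model)[None, :]
angle = pos / np.power(10000.0, (2 * (dim // 2)) / d_model)
pe = np.where(dim % 2 == 0, np.sin(angle), np.cos(angle))

z = rng.normal(size=(seq_len, d_model))   # flattened feature-map sequence Z
e = z + pe                                # coding sequence E = Z + PE

# Scaled dot-product self-attention: K and V are projections of E through
# learned matrices W_K and W_V; the logits are scaled before the softmax.
w_k = rng.normal(size=(d_model, d_model))
w_v = rng.normal(size=(d_model, d_model))
k, v = e @ w_k, e @ w_v
scores = e @ k.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
attn = weights @ v                        # self-attention values
```

Each row of `weights` is a probability distribution over sequence positions, which is what lets the layer capture the global correlations mentioned above.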
Further, the loss function used to train the smart garment positioning model is:
where N is the number of input samples, y_j denotes the true tag value of the current j-th sample, and ŷ_j denotes the predicted tag value of the current j-th sample.
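The loss-function formula itself is not reproduced in this text; for a Softmax classifier over region labels, the standard choice consistent with the description (N samples, true label y_j, predicted label ŷ_j) is the mean cross-entropy, sketched here under that assumption.

```python
import math

def cross_entropy(probs, labels):
    """Mean cross-entropy over N samples: -(1/N) * sum_j log p_j[y_j],
    where p_j is the predicted distribution and y_j the true region index."""
    n = len(labels)
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / n
```

A perfectly confident correct prediction costs 0, while a uniform guess over 25 regions costs log 25 per sample.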
Compared with the prior art, the invention has the following advantages and beneficial effects:
the ultra-wideband positioning chip is arranged on the intelligent clothing, the convolutional learning network (namely the characteristic extraction network) is used for extracting positioning information characteristics, the self-attention-based deep learning network (namely the characteristic classification network) is used for classifying the positioning information, the problem of poor interference resistance of the traditional triangular positioning algorithm based on the ultra-wideband positioning chip is solved, meanwhile, the method has the advantages of portability of a wearable device and high comfort level, a comfortable intelligent clothing method is provided for patients with mobility impairment in the medical safety field, and the method can be applied to places such as hospitals, nursing homes and electric power inspection.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of a network structure of a feature extraction network and a feature classification network according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of map division according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments described below may be combined with each other as long as they do not conflict.
Fig. 1 is a schematic flow chart of a smart garment positioning method based on deep learning according to an embodiment, which includes the following steps:
step (1), positioning signal acquisition is carried out through an ultra-wideband positioning chip connected to the intelligent clothing, a denoising and filtering preprocessing algorithm is used for carrying out signal preprocessing on the acquired positioning signals, and the preprocessed positioning signals are used for constructing a data set;
wherein, the positioning signal acquisition includes the following steps:
and determining a room or a park as an acquisition area, dividing the acquisition area into n small squares in equal proportion, wherein the small square at the upper left corner of the map is an area 1, and the square at the lower right corner of the map is an area n. And deploying the ultra-wideband positioning base station A right above the map, deploying the ultra-wideband positioning base station B in the lower left corner and deploying the ultra-wideband positioning base station C in the lower right corner.
The ultra-wideband positioning tag chip on the smart garment is deployed in area 1, the distances (d_A, d_B, d_C) between the ultra-wideband positioning tag and the 3 ultra-wideband positioning base stations are collected, and "1" is taken as the tag value of (d_A, d_B, d_C).
The above step is repeated in areas 1 to n, collecting the distances (d_A, d_B, d_C) between the ultra-wideband positioning tag and the 3 ultra-wideband positioning base stations in the different areas.
Carrying out signal preprocessing on the collected positioning signals by using a denoising filtering preprocessing algorithm, and constructing a data set from the preprocessed positioning signals;
the denoising filter preprocessing algorithm comprises the following steps: of length tAs the sequence x, the de-noising value of the sequence value of the t bit and the de-noising value of the sequence value of the t-1 bit in the time sequence are obtained through calculation, the de-noising value of the sequence value of the t bit and the sequence value of the t bit are multiplied, and the influence of the noise value on the sequence is reduced. Wherein the method comprises the steps ofThe calculation formula of the denoising filter preprocessing algorithm is as follows:
wherein the method comprises the steps ofDenoising value for the t-th bit sequence value, < >>For inputting the sequence +.>The value of bit t in (a)>Post sequence +.>The value of the t-th element in (b).
Step (2), transmitting the preprocessed positioning signals into a convolutional neural network (namely a feature extraction network) to extract a positioning information feature map;
The convolutional neural network comprises three convolution blocks. The denoised and filtered distance signals are spliced into an information matrix of length t and height 3, which serves as the input of the first convolution block. The first convolution block comprises a convolution layer with kernel size 1×1, 16 kernels, stride 1 and padding 1; a convolution layer with kernel size 3×3, 32 kernels, stride 1 and padding 1; a pooling layer with kernel size 1×1, 32 channels, stride 3 and padding 1; and a ReLU activation-function layer. The second convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2 and padding 1; a pooling layer with kernel size 3×3, 64 channels, stride 3 and padding 1; and a ReLU activation-function layer. The third convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2 and padding 1, and a convolution layer with kernel size 1×1, 1 kernel, stride 1 and padding 1, which outputs the positioning-information feature map. The outputs of the second and third convolution blocks are the superposition of a nonlinear transformation of the previous block's output with the current block's output.
And (3) carrying out feature classification on the last layer of positioning information feature map by using a deep learning network (namely a feature classification network), and outputting a high-precision positioning result.
The deep learning network for feature classification (the feature classification network) comprises two codec blocks: the first codec block comprises an encoding layer, a self-attention extraction layer and a decoding layer; the second codec block comprises an encoding layer, a self-attention extraction layer, a decoding layer, a fully connected layer and a Softmax layer. The encoding layer flattens the positioning-information feature map from the first dimension to obtain a sequence Z and assigns each element of Z an independent position encoding computed from its position; the self-attention extraction layer computes global correlations from the coding sequence; the decoding layer splices the self-attention values extracted by the self-attention extraction layer into a self-attention matrix according to their positions; and the Softmax layer outputs a probability value for the positioning position at the current moment, taking the maximum over all positions as the positioning-position result. The sequence coding and self-attention value calculation formula is as follows:
wherein PE_i denotes the position encoding of the i-th element of the input sequence Z, z_i is the i-th element of the input sequence Z, e_i is the i-th element of the coding sequence E input to the self-attention extraction layer, K and V are respectively the attention key score and attention value score of the coding sequence of the current self-attention extraction layer, W_K and W_V are parameters learned through back propagation in the network, d is a modifiable parameter controlling the learning rate of the network, and Y denotes the output positioning result, i.e. the tag prediction value.
The loss function used to jointly train the convolutional neural network of step (2) and the deep learning network of step (3) is as follows:
where N is the number of input samples (i.e. input positioning signals), y_j denotes the true value of the current j-th sample, and ŷ_j denotes the predicted value of the current j-th sample.
In this embodiment, the positioning area is divided into 25 areas, and the specific division situation and the deployment position of the base station on the map are shown in fig. 3. Before the experiment starts, positioning signal acquisition is respectively carried out in 25 areas through ultra-wideband positioning chips connected to the intelligent clothes, 1000 distances from a base station A, B, C are respectively acquired in each area, and 25000 pieces of data are acquired in total; and carrying out signal preprocessing on the acquired positioning signals by using a denoising filtering preprocessing algorithm, constructing the preprocessed positioning signals into a data set, dividing the data in the data set into a training set and a testing set according to the proportion of 8:2, and training the constructed intelligent clothing positioning model by using the training set to obtain a trained intelligent clothing positioning model. And (3) calculating the accuracy rate of the trained intelligent clothing positioning model on the test set, wherein the accuracy rate reaches 92.50%. In the experiment, ultra-wideband positioning tag chips integrated on smart clothes are deployed in the area 1, the area 5 and the area 18, the distances between the tag chips of the smart clothes and the base station A, B, C in the areas are measured respectively, the distances are used as input sequences and are input into the trained smart clothes positioning model, and the output result of a network is recorded. The specific experimental results are shown in the following table:
the experimental result shows that the predicted experimental region result output by the network is consistent with the experimental region of the intelligent clothing, and the reliability is high.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The solutions in the embodiments of the present application may be implemented in various computer languages, for example, the object-oriented programming language Java and the scripting language JavaScript.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.
Claims (7)
1. The intelligent clothing positioning method based on deep learning is characterized by comprising the following steps of:
step (1), collecting positioning signals through an ultra-wideband positioning chip attached to the smart garment, preprocessing the collected positioning signals with a denoising and filtering preprocessing algorithm, constructing a data set from the preprocessed positioning signals, and dividing it into a training set and a test set;
step (2), training the constructed smart garment positioning model with the training set, the smart garment positioning model comprising a feature extraction network and a feature classification network;
step (2.1), transmitting the preprocessed positioning signals into a feature extraction network to extract a positioning information feature map;
step (2.2), the final layer of positioning information feature map is subjected to feature classification by using a feature classification network, and a high-precision positioning result is output;
the feature classification network in step (2.2) comprises two codec blocks; the first codec block includes two decoding layers, a self-attention extraction layer and a decoding layer; the second codec block includes two decoding layers, a self-attention extraction layer, a full-connection layer and a Softmax layer; the decoding layer performs flattening operation on the positioning information feature map from the first dimension to obtain a sequence Z, and the independent coding sequence Z is endowed by calculation according to the position; the self-attention extraction layer calculates global correlation according to the coding sequence; the decoding layer splices the self-attention values extracted by the self-attention extraction layer into a self-attention matrix according to the positions; the Softmax layer outputs a probability value of the positioning position at the current moment, and takes the maximum value in all positions as a positioning position result at the current moment;
step (3), inputting the preprocessed positioning signals of the test set into the trained smart garment positioning model and outputting the positioning result.
2. The smart garment positioning method based on deep learning as claimed in claim 1, wherein: the positioning signal acquisition in the step (1) comprises the following steps:
determining an acquisition area and dividing it into n equal small square areas, the small square at the upper left corner being area 1 and the square at the lower right corner being area n; deploying ultra-wideband positioning base station A directly above the acquisition area, base station B at the lower left corner, and base station C at the lower right corner;
the ultra-wideband positioning tag chip on the intelligent clothes is deployed in the area 1, and the distance between the ultra-wideband positioning tag and 3 ultra-wideband positioning base stations is collected"1" is taken as->Is a tag value of (2);
3. The smart garment positioning method based on deep learning as claimed in claim 2, wherein: in the step (1), a denoising and filtering preprocessing algorithm is used for preprocessing the acquired positioning signals, and the preprocessed positioning signals are used for constructing a data set;
the denoising filter preprocessing algorithm comprises the following steps: of length tAs a sequence x, obtaining a denoising value of a sequence value of a t bit and a sequence value of a t-1 bit in the sequence by calculation, multiplying the denoising value of the t bit sequence value and the t bit sequence value, and reducing the influence of the noise value on the sequence, wherein the calculation of a denoising filtering preprocessing algorithmThe formula is as follows:
4. The smart garment positioning method based on deep learning as claimed in claim 1, wherein: the feature extraction network in step (2.1) comprises three convolution blocks, the first convolution block comprising 2 convolution layers, 1 pooling layer and 1 ReLU activation-function layer, the second convolution block comprising 1 convolution layer, 1 pooling layer and 1 ReLU activation-function layer, and the third convolution block comprising 2 convolution layers; the denoised and filtered distance signals are spliced into an information matrix of length t and height 3, which serves as the input of the first convolution block; the outputs of the second and third convolution blocks are the superposition of a nonlinear transformation of the previous block's output with the current block's output.
5. The smart garment positioning method based on deep learning of claim 4, wherein: the first convolution block comprises a convolution layer with kernel size 1×1, 16 kernels, stride 1 and padding 1; a convolution layer with kernel size 3×3, 32 kernels, stride 1 and padding 1; a pooling layer with kernel size 1×1, 32 kernels, stride 3 and padding 1; and a ReLU activation function layer. The second convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2 and padding 1; a pooling layer with kernel size 3×3, 64 kernels, stride 3 and padding 1; and a ReLU activation function layer. The third convolution block comprises a convolution layer with kernel size 3×3, 64 kernels, stride 2 and padding 1, and a convolution layer with kernel size 1×1, 1 kernel, stride 1 and padding 1, which outputs the positioning information feature map.
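The spatial size of the feature map after each layer of claim 5 can be traced with the standard convolution output formula. The 32×32 starting size below is an assumption for illustration; the patent's input is a spliced matrix of length t and height 3.

```python
def conv_out(n, k, s, p):
    """Spatial size after a convolution or pooling layer:
    floor((n + 2*padding - kernel) / stride) + 1."""
    return (n + 2 * p - k) // s + 1

# Trace a hypothetical 32-wide input through the layers of claim 5.
# Note that 1x1 kernels with padding 1 and stride 1 grow the map by 2.
n = 32
n = conv_out(n, 1, 1, 1)  # block 1, conv 1x1, stride 1, padding 1
n = conv_out(n, 3, 1, 1)  # block 1, conv 3x3, stride 1, padding 1
n = conv_out(n, 1, 3, 1)  # block 1, pool 1x1, stride 3, padding 1
n = conv_out(n, 3, 2, 1)  # block 2, conv 3x3, stride 2, padding 1
n = conv_out(n, 3, 3, 1)  # block 2, pool 3x3, stride 3, padding 1
n = conv_out(n, 3, 2, 1)  # block 3, conv 3x3, stride 2, padding 1
n = conv_out(n, 1, 1, 1)  # block 3, conv 1x1, stride 1, padding 1
```

The strided pooling and convolution layers do the downsampling; the 1×1 layers mainly adjust channel counts.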
6. The smart garment positioning method based on deep learning as claimed in claim 1, wherein: the coding sequence and the self-attention value calculation formula are as follows:
wherein P_i denotes the position code of the i-th element of the input sequence X, x_i is the i-th element of the input sequence X, e_i is the i-th element of the coding sequence E input to the self-attention extraction layer, K and V are respectively the attention key score and attention value score of the coding sequence of the current self-attention extraction layer, W_K and W_V are parameters learned by back propagation in the network, d is a modifiable parameter controlling the learning rate of the network, and y denotes the output positioning result, i.e. the tag prediction value.
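Since the claim's formula itself appears only as an image, the sketch below is a minimal scalar-feature self-attention in the spirit of claim 6, not the patent's exact computation. The symbol names (E, Wk, Wv, d) and the softmax/scaling choices are assumptions.

```python
import math

def softmax(v):
    """Numerically stable softmax over a list of scores."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def self_attention(E, Wk, Wv, d):
    """E: position-encoded input sequence (e_i = x_i + P_i);
    Wk, Wv: learned key/value weights; d: scaling constant."""
    K = [e * Wk for e in E]  # attention key scores
    V = [e * Wv for e in E]  # attention value scores
    out = []
    for q in E:              # each element attends to every key
        w = softmax([q * k / math.sqrt(d) for k in K])
        out.append(sum(wi * vi for wi, vi in zip(w, V)))
    return out

y = self_attention([0.1, 0.4, 0.2], Wk=0.5, Wv=1.0, d=4)
```

Each output element is a softmax-weighted mixture of the value scores, so it always lies between the smallest and largest value score.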
7. The smart garment positioning method based on deep learning as claimed in claim 1, wherein: the loss function used to train the smart garment positioning model is:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310330129.8A CN116051810B (en) | 2023-03-30 | 2023-03-30 | Intelligent clothing positioning method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116051810A CN116051810A (en) | 2023-05-02 |
CN116051810B true CN116051810B (en) | 2023-06-13 |
Family
ID=86129911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310330129.8A Active CN116051810B (en) | 2023-03-30 | 2023-03-30 | Intelligent clothing positioning method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116051810B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020257812A2 (en) * | 2020-09-16 | 2020-12-24 | Google Llc | Modeling dependencies with global self-attention neural networks |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108282743A (en) * | 2018-03-05 | 2018-07-13 | 桂林理工大学 | Indoor orientation method, apparatus and system |
CN110390351B (en) * | 2019-06-24 | 2020-07-24 | 浙江大学 | Epileptic focus three-dimensional automatic positioning system based on deep learning |
CN110933625A (en) * | 2019-11-01 | 2020-03-27 | 武汉纺织大学 | Ultra-wideband fingerprint positioning method based on deep learning |
CN112257509A (en) * | 2020-09-23 | 2021-01-22 | 浙江科技学院 | Stereo image single-stream visual saliency detection method based on joint information coding |
CN114364015A (en) * | 2021-12-10 | 2022-04-15 | 上海应用技术大学 | UWB positioning method based on deep learning |
CN114678097B (en) * | 2022-05-25 | 2022-08-30 | 武汉纺织大学 | Artificial intelligence and digital twinning system and method for intelligent clothes |
2023-03-30: application CN202310330129.8A granted as CN116051810B (active)
Non-Patent Citations (2)
Title |
---|
Research and Implementation of Deep Convolutional Neural Networks in Image Recognition Algorithms; Han Xingshuo; Lin Wei; Microcomputer & Its Applications (21); 58-60 *
Research on a Neural Network Text Classification Algorithm Combined with the Self-Attention Mechanism; Jia Hongyu; Wang Yuhan; Cong Riqing; Lin Yan; Computer Applications and Software (02); 206-212 *
Also Published As
Publication number | Publication date |
---|---|
CN116051810A (en) | 2023-05-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||