CN114546117B - Tactical sign language recognition glove system based on deep learning and sensor technology and implementation method - Google Patents
- Publication number
- CN114546117B (application number CN202210157004.5A)
- Authority
- CN
- China
- Prior art keywords
- module
- sign language
- data
- tactical
- raspberry pi
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/2228—Indexing structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Electrically Operated Instructional Devices (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention belongs to the field of communication and specifically discloses a tactical sign language recognition glove system based on deep learning and sensor technology, together with an implementation method. A single-label multi-class neural-network gesture recognition model is built with Keras, and a sign language recognition system driven by glove-collected sensor data is constructed, so that sign language can be used to transmit information in real time and information exchange based on sign language recognition is realized. The system supports remote information transmission between users, establishes accurate information exchange even when obstacles block the line of sight, obtains each user's position in real time, and can automatically select an emergency-response plan so that users can communicate conveniently in an emergency. The whole system therefore provides a complete, accurate, and highly real-time communication function.
Description
Technical Field
The invention belongs to the field of communication and specifically discloses a tactical sign language recognition glove system based on deep learning and sensor technology, together with an implementation method.
Background
The 21st century is the information age. With the quiet shift from mechanized to information-based warfare, information-based warfare has become the new form of warfare of the 21st century. The collection, utilization, and processing of communication information are decisive factors in information warfare: tactical decision-making, order delivery, and communication between teammates all rely on information as a medium.
As for tactical sign language, action teams generally adopt tactical sign language in special combat environments to achieve silent information exchange that is both covert and accurate.
In a real battlefield environment, however, some signs may be misread between team members, which affects combat. Computer-vision methods have been used to interpret sign language, but they remain strongly limited under certain conditions: at night, in extreme weather, or across varied combat scenes, both conventional sign language gestures and computer-vision recognition are constrained by visibility. Recognizing sign language by computer vision can therefore reduce users' efficiency in actual combat.
In view of these problems, developing a lightweight wearable device suitable for combat in multiple settings is an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a tactical sign language recognition glove system based on deep learning and sensor technology, and an implementation method thereof, so as to solve the problems described in the background art.
To achieve the above purpose, the present invention provides the following technical solution: a tactical sign language recognition glove system based on deep learning and sensor technology, used to realize tactical sign language communication, remote command by the command post, and satellite positioning. It comprises a Raspberry Pi module, an Arduino development board, a V5 expansion board, a bending sensing module, a gyroscope sensing module, a pressure sensing module, a satellite positioning module, a receiver headset, a PC-side interface module, a switch module, and a power module.
The Raspberry Pi module receives and processes data and transmits instruction audio files or position information; it is the core of the whole device.
The Arduino development board performs A/D conversion on the data acquired by the sensing modules and transmits the resulting digital signals to the Raspberry Pi module.
The V5 expansion board is plugged onto the Arduino development board to expand its interfaces and is directly connected to the switch module, the bending sensing module, the gyroscope sensing module, the pressure sensing module, and the satellite positioning module.
The bending sensing module collects finger bending data and uploads it, in turn through the V5 expansion board, the Arduino development board, and the Raspberry Pi, to a MySQL database to form a data set.
The gyroscope sensing module collects palm deflection angle data and uploads it through the same path to the MySQL database to form a data set.
The pressure sensing module collects fingertip pressure data and uploads it through the same path to the MySQL database to form a data set.
The satellite positioning module acquires the user's position in real time and uploads it, in turn through the V5 expansion board, the Arduino development board, and the Raspberry Pi, to the PC-side interface.
The receiver headset connects to the Raspberry Pi module over Bluetooth and receives and plays the instruction audio files it sends.
The PC-side interface module lets the command post receive users' sign language instructions and displays each user's position on a map, facilitating combat deployment.
The switch module starts the device to collect one batch of data and automatically shuts the system down after collection, preventing the system from collecting invalid data over a long period.
The power module supplies power to the above modules.
The invention also provides an implementation method of the tactical sign language recognition glove system based on deep learning and sensor technology, comprising the following steps:
Step 1: turn on the power module to power the device, and calibrate the bending sensing module, the gyroscope sensing module, the pressure sensing module, and the satellite positioning module.
Step 2: construct, train, and generate a mathematical model for sign language recognition.
Step 3: deploy the generated model on the Raspberry Pi module.
Step 4: design the PC-side interface and its functions.
Step 5: connect the PC and the Raspberry Pi to the network, using a MySQL database as the relay for data transmission; the PC visually inspects the Raspberry Pi's project files through a VNC remote-control tool, executes the corresponding project files at fixed intervals to acquire the satellite positioning module's position information, and updates it on the interface map in real time.
Step 6: connect the Raspberry Pi end and the receiver headset over Bluetooth so that the Raspberry Pi can send instruction audio files to the headset.
Step 7: before making a sign, the wearer touches the pressure switch; the bending, gyroscope, and pressure sensing modules then automatically acquire 50 groups of data and transmit them through the V5 expansion board to the Arduino development board.
Step 8: the Arduino development board performs A/D conversion on the acquired data and transmits the output digital signals to the Raspberry Pi module.
Step 9: the Raspberry Pi module preprocesses the 50 received groups of digital signals.
Step 10: the preprocessed data are passed through the model deployed in advance on the Raspberry Pi, which outputs a sign language instruction number.
Step 11: the Raspberry Pi module looks up the audio file corresponding to the output instruction number and sends it to the receiver headset over Bluetooth.
Step 12: the Raspberry Pi module uploads the sign language instruction to the MySQL database, from which the PC side has export permission.
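The database relay of steps 5 and 12 can be sketched as follows. The patent uses MySQL; sqlite3 stands in here so the sketch is self-contained, and the table and column names are illustrative assumptions, not taken from the patent.

```python
import sqlite3

# Sketch of step 12: the Raspberry Pi uploads the recognized sign language
# instruction to a database that relays it to the PC side. sqlite3 is a
# stand-in for the MySQL relay described in the patent.

def upload_instruction(conn, instruction_no, instruction_name):
    # Raspberry Pi side: record one recognized instruction.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sign_instructions "
        "(id INTEGER PRIMARY KEY AUTOINCREMENT, no INTEGER, name TEXT)"
    )
    conn.execute(
        "INSERT INTO sign_instructions (no, name) VALUES (?, ?)",
        (instruction_no, instruction_name),
    )
    conn.commit()

def export_instructions(conn):
    # PC side: export every relayed instruction (number, name) in order.
    return conn.execute(
        "SELECT no, name FROM sign_instructions ORDER BY id"
    ).fetchall()

conn = sqlite3.connect(":memory:")
upload_instruction(conn, 7, "advance")  # hypothetical instruction number/name
rows = export_instructions(conn)
```

Because both ends only talk to the database, the Raspberry Pi and the PC never need to be on the same local area network, which is the design point the patent emphasizes.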
Further, step 2 comprises:
Step 2-1: collect a data set manually. Recruit volunteers of different ages, heights, weights, and sexes to wear the glove and perform the 48 tactical signs; label the samples for classification, export an Excel file, and shuffle it. Divide the resulting data set into a training set and a test set.
Step 2-2: preprocess the data set, converting each group's data list into a two-dimensional tensor of size 16 x 360.
Step 2-3: one-hot encode the labels, i.e. represent each label as an all-zero vector of length 48 in which only the index corresponding to the label is 1.
Step 2-4: build a single-label multi-class neural network model with the Keras framework.
Step 2-5: train the model with epochs set to 9 and batch size set to 512.
Step 2-6: during training, treat each batch as one step and run a test/validation pass every 200 steps.
Step 2-7: after training, the model outputs a probability distribution list of length 48; the index of the maximum probability is output as the sign language number.
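The label encoding of step 2-3 and the output decoding of step 2-7 can be sketched in plain Python (without Keras); the probability values below are hypothetical, for illustration only.

```python
NUM_CLASSES = 48  # the patent defines 48 tactical signs

def one_hot(label):
    """Step 2-3: an all-zero vector of length 48 with a 1 at the label's index."""
    vec = [0] * NUM_CLASSES
    vec[label] = 1
    return vec

def predict_sign(probabilities):
    """Step 2-7: the sign language number is the index of the maximum probability."""
    return max(range(len(probabilities)), key=probabilities.__getitem__)

# Illustrative model output (hypothetical values, not from the patent):
probs = [0.0] * NUM_CLASSES
probs[17] = 0.9
sign_no = predict_sign(probs)  # index of the largest probability
```

In the real system the probability list is produced by the Keras model's softmax output layer; the decoding step is the same argmax.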
Further, step 2-2 comprises:
Step 2-2-1: write a function that generates a 16 x 360 all-zero tensor.
Step 2-2-2: for the n-th item m in a group's data list (the function's input argument), set the element in row n and column m of the all-zero tensor to 1; process all 16 items of the data list in this way.
Step 2-2-3: call the function to process and store all data sets, which serve as the model's input.
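Steps 2-2-1 to 2-2-3 can be sketched as follows. The sketch assumes each item m is already an integer index in the range [0, 360); the patent does not spell out how the raw sensor readings are discretized.

```python
ROWS, COLS = 16, 360  # one row per sensor channel, one column per discretized value

def encode_sample(data_list):
    """Turn a 16-item data list into a 16 x 360 0/1 tensor (steps 2-2-1..2-2-2).
    Assumes each item m is an integer in [0, 360)."""
    if len(data_list) != ROWS:
        raise ValueError("expected a data list of 16 items")
    tensor = [[0] * COLS for _ in range(ROWS)]  # 16 x 360 all-zero tensor
    for n, m in enumerate(data_list):
        tensor[n][m] = 1  # item n with value m -> row n, column m set to 1
    return tensor

# Hypothetical sample: 16 discretized readings.
sample = [0, 5, 10, 359, 42, 7, 100, 200, 300, 1, 2, 3, 4, 6, 8, 9]
t = encode_sample(sample)
```

Step 2-2-3 then amounts to mapping `encode_sample` over every group in the data set and feeding the results to the model.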
As a preferred mode of the invention, the sensor modules acquire the user's finger bending, acceleration, angle, and fingertip pressure information through the bending sensing module, the gyroscope module, and the pressure sensing module.
As a preferred scheme of the invention, data preprocessing uses the Arduino development board and the V5 expansion board, which process the data and deliver it to the Raspberry Pi.
As a preferred scheme of the invention, the gesture recognition module uses a single-label multi-class neural network gesture recognition model built on Keras.
Compared with the prior art, the invention has the following beneficial effects:
(1) The glove is convenient to wear: worn on the hand, it interferes neither with sign language communication nor with direct combat.
(2) It overcomes the limitation of fighting in low-visibility scenes and meets practical needs.
(3) A satellite positioning module is added without affecting wearability, and combined with the interface it gives the invention more functions.
(4) A MySQL database serves as the relay for data transmission with the PC, removing the restriction that the Raspberry Pi and the PC must be on the same local area network; the command post can therefore direct combat from the rear, meeting practical needs.
(5) The Raspberry Pi module transmits audio to the receiver headset over Bluetooth, improving transmission efficiency and accuracy; since team members are relatively close together in practice, the Bluetooth range meets actual requirements.
Drawings
Fig. 1 is a schematic diagram of the overall structure of the invention.
Fig. 2 is a flow chart of the invention.
Fig. 3 is a physical view of the device.
Fig. 4 is a physical view of the device.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
This embodiment discloses a tactical sign language recognition glove system based on deep learning and sensor technology that realizes sign language recognition, wireless transmission of sign language instruction audio, and satellite positioning. Its hardware structure is shown in Fig. 1 and its physical views in Figs. 3 and 4. The system comprises:
the Raspberry Pi module, which receives and processes data, transmits instruction audio files or position information, and is the core of the whole device;
the Arduino development board, which performs A/D conversion on the data acquired by the sensing modules and transmits the digital signals to the Raspberry Pi module;
the V5 expansion board, plugged onto the Arduino development board to expand its interfaces and directly connected to the switch module, the bending sensing module, the gyroscope sensing module, the pressure sensing module, and the satellite positioning module;
the bending sensing module, which collects finger bending data and uploads it, in turn through the V5 expansion board, the Arduino development board, and the Raspberry Pi, to the MySQL database to form a data set;
the gyroscope sensing module, which collects palm deflection angle data and uploads it through the same path to the MySQL database to form a data set;
the pressure sensing module, which collects fingertip pressure data and uploads it through the same path to the MySQL database to form a data set;
the satellite positioning module, which acquires the user's position in real time and uploads it, in turn through the V5 expansion board, the Arduino development board, and the Raspberry Pi, to the PC-side interface;
the receiver headset, which connects to the Raspberry Pi module over Bluetooth and receives and plays the instruction audio files it sends;
the PC-side interface module, through which the command post receives users' sign language instructions and which displays each user's position on a map, facilitating combat deployment;
the switch module, which starts the device to collect one batch of data and automatically shuts the system down after collection, preventing the system from collecting invalid data over a long period;
the power module, which supplies power to the above modules.
This embodiment also provides an implementation method of the tactical sign language recognition glove system based on deep learning and sensor technology; its flow chart is shown in Fig. 2. The method comprises the following steps:
Step a: turn on the power module to power the device, and calibrate the bending sensing module, the gyroscope sensing module, the pressure sensing module, and the satellite positioning module.
Step b: construct, train, and generate a mathematical model for sign language recognition.
Step c: deploy the generated model on the Raspberry Pi module.
Step d: design the PC-side interface and its functions.
Step e: connect the PC and the Raspberry Pi to the network, with both sides using a MySQL database as the relay for data transmission. This prevents the congestion and packet loss caused by transmitting large amounts of data directly, removes the restriction that the PC and the Raspberry Pi must be on the same local area network to exchange data, and broadens the device's range of application. The PC visually inspects the Raspberry Pi's project files through a VNC remote-control tool, executes the corresponding project files at fixed intervals to acquire the satellite positioning module's position information, and updates it on the interface map in real time.
Step f: connect the Raspberry Pi and the receiver headset over Bluetooth so that the Raspberry Pi can send instruction audio files to the headset.
Step g: before making a sign, the wearer touches the pressure switch; the bending, gyroscope, and pressure sensing modules then automatically acquire 50 groups of data and transmit them through the V5 expansion board to the Arduino development board.
Step h: the Arduino development board performs A/D conversion on the acquired data and transmits the output digital signals to the Raspberry Pi module.
Step i: the Raspberry Pi module preprocesses the 50 received groups of digital signals.
Step j: the preprocessed data are passed through the model deployed in advance on the Raspberry Pi, which outputs a sign language instruction number.
Step k: the Raspberry Pi module looks up the audio file corresponding to the output instruction number and sends it to the receiver headset over Bluetooth.
Step l: the Raspberry Pi module uploads the sign language instruction to the MySQL database, from which the PC side has export permission.
Further, step b comprises:
Step b1: collect a data set manually. Recruit volunteers of different ages, heights, weights, and sexes to wear the glove and perform the 48 tactical signs; label the samples for classification, export an Excel file, and shuffle it. Divide the resulting data set into a training set and a test set.
Step b2: preprocess the data set, converting each group's data list into a two-dimensional tensor of size 16 x 360.
Step b3: one-hot encode the labels, i.e. represent each label as an all-zero vector of length 48 in which only the index corresponding to the label is 1.
Step b4: build a single-label multi-class neural network model with the Keras framework.
Step b5: train the model with epochs set to 9 and batch size set to 512.
Step b6: during training, treat each batch as one step and run a test/validation pass every 200 steps.
Step b7: after training, the model outputs a probability distribution list of length 48; the index of the maximum probability is output as the sign language number.
Further, step d comprises:
Step d1: lay out the interface as a whole with PyQt.
Step d2: write the slot function for the export function; the PC reads the sign language names from the MySQL database and displays them in a drop-down box.
Step d3: write the slot function for the positioning function, receiving users' position information at fixed intervals in a multithreaded manner and updating the display on a proportional map.
Step d4: write the slot function for the receiving function; the PC, together with the receiver headset, receives and plays the audio files sent by the Raspberry Pi module.
Step d5: write the slot function for the command function; the PC sends audio instructions to all receiver headsets through the Raspberry Pi module.
Step d6: write the slot function for the exit function, which exits the interface system completely.
Further, step i comprises:
Step i1: combine the 50 groups of digital signals into one 50 x 16 two-dimensional tensor.
Step i2: find the mode of each column of the tensor; if several values are tied, select any one of them.
Step i3: combine the 16 resulting modes, in order, into one list of length 16, which serves as the sign language information to be processed.
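Steps i1 to i3 can be sketched as follows; a minimal illustration with the standard library, where the input data are hypothetical.

```python
from collections import Counter

def preprocess(groups):
    """Steps i1-i3: reduce 50 groups of 16 readings to one length-16 list
    by taking the per-column mode. `groups` is a 50 x 16 list of lists.
    When several values are tied, Counter.most_common picks one of them,
    matching the patent's 'select any one' rule."""
    if any(len(g) != 16 for g in groups):
        raise ValueError("each group must contain 16 readings")
    result = []
    for col in zip(*groups):  # iterate over the 16 columns of the 50 x 16 tensor
        mode, _count = Counter(col).most_common(1)[0]
        result.append(mode)
    return result

# Hypothetical input: 50 near-identical groups with a little noise in column 0.
groups = [[1] + [2] * 15 for _ in range(49)] + [[9] + [2] * 15]
signal = preprocess(groups)
```

Taking the per-column mode suppresses occasional sensor glitches within the 50 repeated readings before the result is fed to the recognition model.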
It can be seen that with the system and method of this embodiment, sign language can be used to transmit information in real time and information exchange based on sign language recognition can be carried out; remote information transmission between users is realized, and accurate information exchange is established even when obstacles block the line of sight; each user's position can be obtained in real time; and an emergency-response plan can be selected automatically in an emergency, making emergency communication convenient. The whole system therefore provides a complete, accurate, and highly real-time communication function.
The above embodiments are preferred embodiments of the present invention, but the embodiments of the invention are not limited to them; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the invention is an equivalent replacement and is included within the protection scope of the invention.
Claims (6)
1. A tactical sign language recognition glove system based on deep learning and sensor technology, comprising:
A. the raspberry group module is used for receiving and processing data, transmitting instruction audio files or position information and is the core of the whole equipment;
B. the Arduino development board is used for performing A/D conversion on the data acquired by the sensing modules and transmitting the resulting digital signals to the Raspberry Pi module;
C. the V5 expansion board is mounted on the Arduino development board to expand its interfaces, and is directly connected to the switch module, the bending sensing module, the gyroscope sensing module, the pressure sensing module and the satellite positioning module;
D. the bending sensing module is used for collecting finger bending data and uploading it, sequentially through the V5 expansion board, the Arduino development board and the Raspberry Pi, to the MySQL database to form a data set;
E. the gyroscope sensing module is used for collecting palm deflection angle data and uploading it, sequentially through the V5 expansion board, the Arduino development board and the Raspberry Pi, to the MySQL database to form a data set;
F. the pressure sensing module is used for collecting fingertip pressure data and uploading it, sequentially through the V5 expansion board, the Arduino development board and the Raspberry Pi, to the MySQL database to form a data set;
G. the satellite positioning module is used for acquiring the user's position in real time and uploading it, sequentially through the V5 expansion board, the Arduino development board and the Raspberry Pi, to the PC-side interface;
H. the receiver headset end is used for establishing a Bluetooth connection with the Raspberry Pi module and for receiving and playing the instruction audio files sent by the Raspberry Pi module;
I. the PC-side interface module is used by the command post to receive the user's sign language instructions and to display the user's position information on a map, facilitating combat deployment;
J. the switch module is used for starting the device to collect one batch of data, and shuts the system down automatically once collection is complete, preventing the system from collecting invalid data over a long period;
K. the power supply module is used for supplying power to the above modules.
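As a minimal sketch of the data path in claim 1, the Raspberry Pi side might parse one frame of digitized sensor readings forwarded by the Arduino. The comma-separated ASCII framing, the 16-channel frame length, and the example values are illustrative assumptions; the claims only state that digitized sensor data is forwarded.

```python
# Hypothetical sketch: parsing one serial frame of sensor readings on the
# Raspberry Pi. The comma-separated framing and the 16-channel layout are
# assumptions for illustration, not stated in the claims.

def parse_frame(line: str, n_channels: int = 16) -> list[int]:
    """Parse one serial line of digitized sensor values into a list of ints."""
    values = [int(v) for v in line.strip().split(",")]
    if len(values) != n_channels:
        raise ValueError(f"expected {n_channels} values, got {len(values)}")
    return values

# Example frame: hypothetical flex, gyroscope and pressure readings.
frame = parse_frame("512,498,530,501,488,90,45,10,300,310,305,290,295,0,1,700")
```

A real deployment would read such lines from the Arduino's serial port and accumulate 50 frames per gesture, as described in the method claims.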
2. The tactical sign language recognition glove system based on deep learning and sensor technology according to claim 1, wherein: the switch module uses a pressure sensor that is activated by a continuous 3-second press, triggering one round of sign language recognition.
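The 3-second activation in claim 2 can be sketched as a hold-detection loop over pressure samples. The threshold value and sample rate below are assumptions for illustration; the claim specifies only the 3-second continuous press.

```python
# Sketch of the claim-2 switch: the sensor activates once pressure stays
# above a threshold for 3 continuous seconds. Threshold and sample rate
# are hypothetical; only the 3-second hold comes from the claim.

def is_activated(samples, threshold=400, sample_hz=10, hold_s=3.0):
    """Return True once pressure stays >= threshold for hold_s seconds."""
    needed = int(sample_hz * hold_s)  # consecutive samples required
    run = 0
    for s in samples:
        run = run + 1 if s >= threshold else 0
        if run >= needed:
            return True
    return False
```

Resetting the run counter on any sub-threshold sample enforces that the press is continuous, so brief touches do not trigger recognition.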
3. A method for implementing the tactical sign language recognition glove system based on deep learning and sensor technology of claim 1, the method comprising:
(1) Turn on the power supply module to power the device, and calibrate the bending sensing module, the gyroscope sensing module, the pressure sensing module and the satellite positioning module;
(2) Construct, train and generate a mathematical model for sign language recognition;
(3) Deploy the generated model on the Raspberry Pi module;
(4) Design the PC-side interface and its functions;
(5) Connect the PC end and the Raspberry Pi to the network, using the MySQL database as a relay station for data transfer; through the VNC remote control tool, the PC end visually inspects the engineering files inside the Raspberry Pi, executes the corresponding engineering files at regular intervals to acquire position information from the satellite positioning module, and updates the position on the interface map in real time;
(6) Establish a Bluetooth connection between the Raspberry Pi end and the receiver headset end, so that the Raspberry Pi end can send instruction audio files to the receiver headset end;
(7) Before making a sign language action, the wearer touches the pressure sensing switch; the bending sensing module, the gyroscope sensing module and the pressure sensing module then automatically acquire 50 groups of data and transmit them to the Arduino development board through the V5 expansion board;
(8) The Arduino development board performs A/D conversion on the acquired data and transmits the output digital signals to the Raspberry Pi module;
(9) The Raspberry Pi module preprocesses the 50 groups of received digital signals;
(10) The Raspberry Pi module feeds the processed data into the pre-deployed model, which outputs a sign language instruction number;
(11) Using the output sign language instruction number, the Raspberry Pi module looks up the audio file of the corresponding instruction and sends it to the receiver headset end via Bluetooth;
(12) The Raspberry Pi module uploads the sign language instruction to the MySQL database, from which the PC end has export permission.
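Step (11) above maps the model's output number to a pre-recorded audio file. A minimal sketch is given below; the directory location and the `cmd_<number>.wav` naming scheme are assumptions, since the claims only state that an audio file is looked up by instruction number and sent over Bluetooth.

```python
from pathlib import Path

# Hypothetical sketch of step (11): resolving a sign language instruction
# number to its audio file. AUDIO_DIR and the file-naming scheme are
# assumptions, not part of the claims.

AUDIO_DIR = Path("/home/pi/audio")  # assumed location on the Raspberry Pi

def audio_for_instruction(number: int, n_classes: int = 48) -> Path:
    """Return the audio file path for one of the 48 instruction numbers."""
    if not 0 <= number < n_classes:
        raise ValueError(f"instruction number out of range: {number}")
    return AUDIO_DIR / f"cmd_{number:02d}.wav"
```

The resolved path would then be handed to whatever Bluetooth audio transport the Raspberry Pi end uses.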
4. The method for implementing the tactical sign language recognition glove system based on deep learning and sensor technology according to claim 3, wherein step (2) comprises the following steps:
(2-1) Manually collect a data set: recruit volunteers of different ages, heights, weights and sexes to wear the gloves and perform the 48 tactical sign language gestures; label and classify the data, export them as Excel files, shuffle them, and divide the resulting data set into a training set and a test set;
(2-2) Preprocess the data set, converting each group of data lists into a two-dimensional tensor of size 16 x 360;
(2-3) Use one-hot encoding for the labels, i.e., each label is represented as an all-zero vector of length 48 in which only the index bit corresponding to the label is 1;
(2-4) Construct a single-label multi-class neural network model using the keras framework;
(2-5) Train the model with epochs set to 9 and batch size set to 512;
(2-6) During training, each batch counts as one step, and testing and validation are performed every 200 steps;
(2-7) Training yields a probability distribution list of length 48; the index of the maximum probability, i.e., the sign language number, is output as a single value.
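The label encoding of step (2-3) and the argmax readout of step (2-7) can be sketched in NumPy as follows. The keras network of step (2-4) is omitted here; only the encoding and decoding around it are shown.

```python
import numpy as np

# Sketch of steps (2-3) and (2-7): one-hot labels of length 48, and the
# sign language number read out as the index of the maximum probability.

N_CLASSES = 48  # number of tactical sign language gestures

def one_hot(label: int, n_classes: int = N_CLASSES) -> np.ndarray:
    """All-zero vector of length 48 with a 1 at the label's index."""
    vec = np.zeros(n_classes)
    vec[label] = 1.0
    return vec

def sign_number(probabilities: np.ndarray) -> int:
    """The predicted sign language number is the argmax of the output."""
    return int(np.argmax(probabilities))
```

Any softmax output from the single-label multi-class model can be passed directly to `sign_number` to recover the instruction number used in step (11) of claim 3.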
5. The method for implementing the tactical sign language recognition glove system based on deep learning and sensor technology according to claim 3, wherein step (4) comprises the following steps:
(4-1) Lay out the overall interface using PyQt;
(4-2) Write the slot function corresponding to the export function: the PC end reads the sign language names in the MySQL database and displays them in a drop-down box;
(4-3) Write the slot function corresponding to the positioning function: the user's position information is received at fixed intervals in a multithreaded manner, and updated and displayed on an equal-proportion map;
(4-4) Write the slot function corresponding to the receive function: the PC end and the receiver headset end can both receive and play the audio files sent by the Raspberry Pi module;
(4-5) Write the slot function corresponding to the command function: the PC end can send audio instructions to all receiver headset ends through the Raspberry Pi module;
(4-6) Write the slot function corresponding to the exit function, which completely exits the interface system.
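Claim 5 wires one slot function per interface feature. Setting PyQt itself aside, that one-slot-per-function layout can be sketched as a plain dispatch table; the handler bodies below are placeholders describing each slot's role, not the actual implementation.

```python
# Sketch of the claim-5 layout: one slot (handler) per interface function.
# In PyQt these would be connected to button signals; here a plain dict
# stands in for the signal/slot wiring, and the bodies are placeholders.

def on_export():   return "read sign language names from MySQL into a drop-down box"
def on_locate():   return "receive positions at fixed intervals and refresh the map"
def on_receive():  return "play audio files sent by the Raspberry Pi module"
def on_command():  return "send an audio instruction to all receiver headsets"
def on_exit():     return "completely exit the interface system"

SLOTS = {
    "export": on_export,
    "locate": on_locate,
    "receive": on_receive,
    "command": on_command,
    "exit": on_exit,
}
```

In the real interface, each entry would instead be a `connect` call binding a widget's signal to the corresponding slot.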
6. The method for implementing the tactical sign language recognition glove system based on deep learning and sensor technology according to claim 3, wherein step (9) comprises the following steps:
(9-1) Combine the 50 groups of digital signals into one 50 x 16 two-dimensional tensor;
(9-2) Find the mode of each column of the two-dimensional tensor; if several values are tied, select one of them arbitrarily;
(9-3) Combine the 16 modes thus obtained, in order, into one list of length 16 as the sign language information to be processed.
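The preprocessing of claim 6 reduces 50 frames of 16 readings to one length-16 list by taking the per-column mode. A minimal sketch, using `collections.Counter` (whose `most_common` returns one of the tied values, matching the claim's arbitrary tie-breaking):

```python
from collections import Counter

# Sketch of claim 6: per-column mode over a 50 x 16 tensor of frames.
# Ties are broken arbitrarily, as the claim allows.

def column_modes(frames: list[list[int]]) -> list[int]:
    """Collapse frames (rows) into one list of per-column modes."""
    n_cols = len(frames[0])
    modes = []
    for col in range(n_cols):
        counts = Counter(frame[col] for frame in frames)
        modes.append(counts.most_common(1)[0][0])  # mode of this column
    return modes
```

Taking the mode rather than the mean makes the merged reading robust to occasional sensor glitches within the 50-frame window.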
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210157004.5A CN114546117B (en) | 2022-02-21 | 2022-02-21 | Tactical sign language recognition glove system based on deep learning and sensor technology and implementation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114546117A CN114546117A (en) | 2022-05-27 |
CN114546117B true CN114546117B (en) | 2023-11-10 |
Family
ID=81675566
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109542220A (en) * | 2018-10-25 | 2019-03-29 | 广州大学 | A kind of sign language gloves, system and implementation method with calibration and learning functionality |
WO2020106364A2 (en) * | 2018-09-27 | 2020-05-28 | Hankookin, Inc. | Dynamical object oriented information system to sustain vitality of a target system |
CN111402997A (en) * | 2020-04-08 | 2020-07-10 | 兰州理工大学 | Man-machine interaction system and method |
WO2020244075A1 (en) * | 2019-06-05 | 2020-12-10 | 平安科技(深圳)有限公司 | Sign language recognition method and apparatus, and computer device and storage medium |
Non-Patent Citations (1)
Title |
---|
Small single-board computers accelerate the deployment of IoT applications; Yang Bo; Liu Mei; Zhang Yaning; Internet of Things Technologies (Issue 10); full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109902659B (en) | Method and apparatus for processing human body image | |
CN111291190B (en) | Training method of encoder, information detection method and related device | |
CN109325547A (en) | Non-motor vehicle image multi-tag classification method, system, equipment and storage medium | |
CN107832662A (en) | A kind of method and system for obtaining picture labeled data | |
CN101605158A (en) | Mobile phone dedicated for deaf-mutes | |
CN111562842B (en) | Virtual keyboard design method based on electromyographic signals | |
CN110633624B (en) | Machine vision human body abnormal behavior identification method based on multi-feature fusion | |
CN113378556A (en) | Method and device for extracting text keywords | |
CN113254684B (en) | Content aging determination method, related device, equipment and storage medium | |
CN111709398A (en) | Image recognition method, and training method and device of image recognition model | |
Angona et al. | Automated Bangla sign language translation system for alphabets by means of MobileNet | |
CN109634439B (en) | Intelligent text input method | |
CN111582342A (en) | Image identification method, device, equipment and readable storage medium | |
CN108960171B (en) | Method for converting gesture recognition into identity recognition based on feature transfer learning | |
CN112148997A (en) | Multi-modal confrontation model training method and device for disaster event detection | |
CN112149494A (en) | Multi-person posture recognition method and system | |
CN112464915A (en) | Push-up counting method based on human body bone point detection | |
CN111833439A (en) | Artificial intelligence-based ammunition throwing analysis and mobile simulation training method | |
CN114546117B (en) | Tactical sign language recognition glove system based on deep learning and sensor technology and implementation method | |
CN109740418B (en) | Yoga action identification method based on multiple acceleration sensors | |
CN111291804A (en) | Multi-sensor time series analysis model based on attention mechanism | |
CN116580211B (en) | Key point detection method, device, computer equipment and storage medium | |
CN115906861B (en) | Sentence emotion analysis method and device based on interaction aspect information fusion | |
CN112183430A (en) | Sign language identification method and device based on double neural network | |
CN111353470B (en) | Image processing method and device, readable medium and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |