CN109977881B - Character action feature extraction and identification optimization method based on radio frequency technology - Google Patents


Info

Publication number
CN109977881B
CN109977881B (application CN201910245310.2A)
Authority
CN
China
Prior art keywords
data
radio frequency
readers
action
phase difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910245310.2A
Other languages
Chinese (zh)
Other versions
CN109977881A (en)
Inventor
袁呈呈
陈志�
岳文静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN201910245310.2A priority Critical patent/CN109977881B/en
Publication of CN109977881A publication Critical patent/CN109977881A/en
Application granted granted Critical
Publication of CN109977881B publication Critical patent/CN109977881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K 1/00 - G06K 15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K 17/0022 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K 1/00 - G06K 15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K 17/0029 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K 1/00 - G06K 15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device, the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting and recognizing human motion features based on radio-frequency identification (RFID). A reader transmits a signal into the space, and a tag returns a response signal upon receiving it. Several readers are deployed in the space and tags are attached to the human body. When the user moves, the signals received by the readers change; when the motion ends, the data gathered by the readers are combined, the signal variation over the course of the motion is divided into several segments, and the type of the user's motion is recognized with a deep-learning model. The method addresses the high computational cost of video-based motion recognition.

Description

Figure action characteristic extraction and identification optimization method based on radio frequency technology
Technical Field
The invention relates to a method for extracting and recognizing human motion features, and belongs to the intersection of the Internet of Things and machine learning. Motion features are first acquired through radio-frequency identification, and the motion is then classified with deep learning.
Background
As a fundamental problem in computer vision, human motion recognition aims to let a machine semantically understand and analyze videos of human motion through algorithms. Potential applications span intelligent surveillance, video content analysis, behavior perception, human-computer interaction, and smart homes. Because of these broad application scenarios and their potential value, a large body of research in computer vision has formed around human motion recognition. By application, this research falls mainly into: (1) intelligent surveillance; (2) video content analysis; (3) behavior perception and intelligent control.
In motion recognition systems, behavior perception has traditionally been performed through input devices such as the keyboard and mouse, a process in which humans adapt to machines. If machines could instead understand human languages, both natural language and body language, the efficiency and experience of behavior perception would improve greatly. For a computer to understand body language, it must classify and recognize human actions. Moreover, if human activity is recognized through a camera or other sensor and the results are fed back into a virtual environment, machines can help people with tasks such as gesture and posture recognition, correct faulty movements in an activity or skill, and support motion-driven games and entertainment.
Radio-frequency identification technology has gradually permeated modern life. Its efficiency and convenience have led to wide adoption in industries such as retail consumer goods, medicine management, transportation, and defense, and its ability to uniquely identify objects has in particular stimulated research interest in the Internet of Things.
Most current motion recognition is still vision-based: a camera records video to obtain information. Video has become the principal carrier of information, its volume is growing explosively, and new content is produced constantly. Faced with this flood of video data, the main task of motion recognition is for a computer to process and analyze the raw images or image sequences captured by a camera and to learn and understand the actions and behavior of the people in them. Typically, on the basis of motion detection and feature extraction, a human motion pattern is derived and a mapping is established between video content and a description of the action type, so that the computer can 'see' or 'understand' the video. This involves three main steps: (1) detect motion information in the image frames and extract low-level features; (2) model the behavior pattern or action; (3) establish a correspondence between the low-level visual features and high-level semantic information such as a list of action behaviors.
Although vision-based motion recognition has drawn attention from many universities and companies at home and abroad, it is an undeniable fact that video information requires large amounts of storage, and, constrained by processors and algorithms, video processing speed has never improved substantially.
Disclosure of Invention
Purpose of the invention: to overcome the shortcomings of the prior art, the invention provides a method for extracting and recognizing human motion features based on radio-frequency technology, which effectively avoids the high computational cost of video-based motion analysis.
In the disclosed character action feature extraction and identification method, the radio-frequency identification system consists mainly of readers and tags. A reader transmits a signal into the space, and a tag returns a response signal upon receiving it. Several readers are deployed in the space and tags are attached to the human body. When the user moves, the signals received by the readers change; when the motion ends, the data gathered by the readers are combined, the signal variation over the course of the motion is divided into several segments, and the type of the user's motion is recognized with a deep-learning model. The method, which addresses the high computational cost of video-based motion recognition, comprises the following steps:
step 1), attach tags to the human body and let the user move; several readers in the space receive and store the change in phase difference between the transmitted and received signals;
step 2), when the user's motion ends, the system splices the phase-difference data to obtain the phase-difference change process;
step 3), divide the phase-difference change process into several equal sections;
step 4), summarize the data of all readers;
step 5), feed the processed data to a deep-learning network and determine the action type;
step 6), control other software to make a corresponding response according to the determined action type;
wherein step 1) comprises:
step 11), arrange m readers in the space;
step 12), attach the tag to the human body;
step 13), the system receives and stores the data of each reader.
Wherein step 2) comprises:
step 21), let the raw phase-difference data received by a reader have n items in total, recorded as input1[n];
step 22), record the spliced data as input2[n]; the splicing process is:
(splicing formula; reproduced in the source only as image GDA0003684236430000021)
obtaining processed data input2[n], which identifies the variation of the phase difference;
wherein step 3) comprises:
step 31), let the processed data have t items, recorded as input3[t]; for 1 ≤ j ≤ t, perform the following processing:
(segmentation formula; reproduced in the source only as image GDA0003684236430000031)
input3[t] consists of t points taken over the course of the motion, the value of each point corresponding to the magnitude of the tag's phase-difference change during the motion;
wherein step 4) comprises:
step 41), process the data obtained by the m readers according to the methods of steps 2) and 3), obtaining mt data in total, collectively recorded as data[mt] as the preprocessing result;
wherein step 6) comprises:
step 61), first record d action instructions and obtain their preprocessed data through steps 2), 3), and 4);
step 62), construct a simulated-data generator: for a piece of real data data[mt], generate perturbation data offset[mt], each term being a random number in (-r, r); take sim[mt] = data[mt] + offset[mt] as one piece of simulated data, and generate a large amount of simulated data in this way;
step 63), build a deep-learning network comprising one input layer of mt nodes, one hidden layer of mt nodes, and one output layer of d nodes; complete the training of the network with the simulated-data generator;
step 64), solidify the trained deep-learning network;
step 65), during recognition, use the deep-learning network obtained in steps 61) to 64) to process real data and determine its action type;
in step 11), m is empirically set to 4;
in step 31), t is empirically set to 20;
in step 61), d is empirically set to 3;
in step 62), r is empirically set to 0.5;
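The preprocessing pipeline of steps 2) to 4) can be sketched in Python. Since the patent's splicing and segmentation formulas appear in the source only as images, this sketch substitutes common stand-ins: `np.unwrap` for removing 2π phase jumps in the splicing step, and linear interpolation for dividing the change process into t equal sections. The function names and these formula choices are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def preprocess(phases, t=20):
    """Reduce one reader's raw phase-difference sequence input1[n] to t points.

    np.unwrap stands in for the patent's "splicing" (its formula is only an
    image in the source), and linear interpolation stands in for dividing the
    change process into t equal sections.
    """
    spliced = np.unwrap(np.asarray(phases, dtype=float))      # input2[n] (assumed form)
    idx = np.linspace(0, len(spliced) - 1, t)                 # t equally spaced sample points
    return np.interp(idx, np.arange(len(spliced)), spliced)   # input3[t]

def summarize(readers, t=20):
    """Step 4): concatenate the t-point curves of all m readers into data[m*t]."""
    return np.concatenate([preprocess(p, t) for p in readers])
```

With the empirical values m = 4 and t = 20, `summarize` returns the 80-element vector data[80] used in the embodiment.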
has the advantages that: compared with the existing video motion recognition technology, the technical scheme adopted by the invention has the following technical effects:
(1) the invention obtains the action characteristics through the radio frequency identification, and the information quantity is small, thereby the identification is rapid.
(2) The invention obtains the action characteristics through the radio frequency identification, and has less information quantity, thereby having low calculation force requirement.
Drawings
FIG. 1 is a flow chart of the character motion feature extraction and recognition optimization method;
FIG. 2 shows the data obtained after preprocessing;
Detailed Description
The invention is further illustrated by the accompanying drawings and the following detailed description. It is to be understood that these examples are included solely for purposes of illustration and not as a definition of the limits of the invention; various equivalent modifications will become apparent to those skilled in the art after reading this specification, and all such modifications falling within the scope of the appended claims are intended to be covered.
The invention discloses a character action feature extraction and identification method based on radio-frequency identification, comprising the following steps, as shown in FIGS. 1 and 2:
Step 1): arrange 4 readers in the space and attach the tag to the human body. As the user moves, the system receives and stores each reader's data.
Step 2): when the user's motion ends, the system splices the phase-difference data to obtain the phase-difference change process. Let the raw phase-difference data received by a reader have n items, recorded as input1[n].
Record the spliced data as input2[n]; the splicing process is:
(splicing formula; reproduced in the source only as image GDA0003684236430000041)
obtaining processed data input2[n], which identifies the variation of the phase difference.
and 3) equally dividing the phase difference change process into a plurality of sections. Recording the processed data as input3[20], and for j being more than or equal to 1 and less than or equal to 20, processing as follows:
Figure GDA0003684236430000042
input3[20]the method is characterized in that 20 points are taken in the process of movement, and the value of each point corresponds to the phase change of the label in the process of the movement;
and 4) summarizing data of all readers. And processing the data obtained by the 4 readers according to the methods from the step 2) to the step 3). A total of 80 data were obtained. The data are collectively recorded as data [80] as a result of preprocessing;
Step 5): feed the processed data to a deep-learning network and determine the action type.
Step 6): control other software to respond according to the determined action type. First, record 3 action instructions and obtain their preprocessed data through steps 2), 3), and 4). Then construct a simulated-data generator: for a piece of real data data[80], generate perturbation data offset[80], each term a random number in (-0.5, 0.5), and take sim[80] = data[80] + offset[80] as one piece of simulated data; generate a large amount of simulated data in this way. Then build a deep-learning network comprising one input layer of 80 nodes, one hidden layer of 80 nodes, and one output layer of 3 nodes, and complete its training with the simulated-data generator. Solidify the trained network. During recognition, use the trained deep-learning network to process real data and determine its action type.
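The simulated-data generator and the network layout of this embodiment (80 input nodes, one hidden layer of 80 nodes, 3 output nodes) can be sketched as follows. The patent specifies only the layer sizes, the perturbation sim[mt] = data[mt] + offset[mt], and r = 0.5; the ReLU activation, softmax output, and small random initialization are illustrative assumptions, and the training loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def simulate(real, n_copies, r=0.5):
    """Step 62): make n_copies noisy variants of one preprocessed sample.

    Each variant is sim[mt] = data[mt] + offset[mt], every offset term drawn
    uniformly from (-r, r).
    """
    real = np.asarray(real, dtype=float)
    offsets = rng.uniform(-r, r, size=(n_copies, real.size))
    return real + offsets

def build_network(mt=80, d=3):
    """Step 63): one hidden layer of mt nodes, one output layer of d nodes.

    Only the layer sizes come from the patent; the initialization scale
    is an illustrative choice.
    """
    w1 = rng.standard_normal((mt, mt)) * 0.1
    w2 = rng.standard_normal((mt, d)) * 0.1
    return w1, w2

def forward(x, w1, w2):
    """Forward pass; ReLU and softmax are assumed, not stated in the patent."""
    h = np.maximum(0.0, x @ w1)
    logits = h @ w2
    e = np.exp(logits - logits.max())
    return e / e.sum()  # probabilities over the d action types
```

The simulated samples train the network against the d = 3 recorded instruction labels; at recognition time the class with the largest output probability is taken as the action type.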
the invention is further described below with reference to the accompanying drawings:
in specific implementation, fig. 1 is a flow chart of a human action feature extraction and identification method based on a radio frequency identification technology.
First, 4 readers are placed at different locations in the space, and tags are attached to the person's hands. A reader transmits a signal into the space, and the tag returns a response signal upon receiving it. The reader compares the phase difference between its own transmitted signal and the received response signal.
As the user moves, the tag moves with the user and its distance to each reader changes continuously, so the phase difference also changes; the system stores each phase difference obtained.
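As background for why the stored phase difference tracks the tag's distance: a backscattered RFID signal travels to the tag and back, so the round trip contributes 4πd/λ of phase, which the reader observes modulo 2π. A minimal sketch, assuming a typical UHF carrier frequency (the patent does not specify one):

```python
import math

def expected_phase(distance_m, freq_hz=920.625e6, c=3.0e8):
    """Phase a reader would observe for a tag at distance_m.

    The round trip adds 4*pi*d/lambda of phase; readers report it modulo
    2*pi. The carrier frequency here is illustrative, not from the patent.
    """
    wavelength = c / freq_hz
    return (4 * math.pi * distance_m / wavelength) % (2 * math.pi)
```

A few centimeters of tag motion already shifts the observed phase appreciably, which is why the phase-difference sequence carries enough information to characterize a motion.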
When the user's motion ends, the phase differences stored by one reader are recorded as input1[n] and processed as follows:
(splicing formula; reproduced in the source only as image GDA0003684236430000051)
yielding the first-round processed data input2[n], which identifies the variation of the phase difference.
The following processing is then performed:
(segmentation formula; reproduced in the source only as image GDA0003684236430000052)
yielding the second-round processed data input3[20]. The data acquired by the 4 readers are processed and summarized one by one in the same way, giving the preprocessing result data[80].
FIG. 2 shows the data obtained after one pass of preprocessing.
Finally, data[80] is classified by the pre-trained, solidified deep network model to obtain the action type.
The above description covers only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principles of the invention, and such improvements and refinements are also to be regarded as falling within the scope of protection of the invention.

Claims (6)

1. A character action feature extraction and identification optimization method based on a radio frequency technology is characterized by comprising the following steps:
step 1), attach tags to the human body and arrange m readers in the space; as the user moves, the readers read the tag information and the system receives and stores each reader's data, obtaining the tag phase-difference data, namely the phase difference between the signal the system transmits and the signal it receives;
step 2), when the user's motion ends, the system splices the phase-difference data to obtain the phase-difference change process, as follows:
step 21), the raw phase-difference data received by a reader has n items in total, recorded as input1[n];
step 22), the spliced data is input2[n]; the splicing process is:
(splicing formula; reproduced in the source only as image FDA0003684236420000011)
obtaining processed data input2[n];
step 3), the data processed in step 2) has t items, recorded as input3[t]; for 1 ≤ j ≤ t, perform the following processing:
(segmentation formula; reproduced in the source only as image FDA0003684236420000012)
input3[t] consists of t points taken over the course of the motion, the value of each point corresponding to the magnitude of the tag's phase-difference change during the motion;
step 4), summarizing data of all readers;
and 5) delivering the data processed in the step 4) to a deep learning network, and judging the action type.
2. The character action feature extraction and identification optimization method based on radio frequency technology as claimed in claim 1, wherein summarizing the data of all readers in step 4) comprises: processing the data obtained by the m readers according to the methods of steps 2) and 3), obtaining mt data in total, collectively recorded as data[mt] as the preprocessing result.
3. The character action feature extraction and identification optimization method based on radio frequency technology as claimed in claim 2, wherein step 5) comprises:
step 51), record d action instructions and obtain their preprocessed data through steps 2), 3), and 4);
step 52), construct a simulated-data generator: for a piece of real data data[mt], generate perturbation data offset[mt], each term being a random number in (-r, r), where r determines the interval of the random numbers; take sim[mt] = data[mt] + offset[mt] as one piece of simulated data sim[mt], and generate simulated data in this way;
step 53), build a deep-learning network comprising one input layer of mt nodes, one hidden layer of mt nodes, and one output layer of d nodes, and complete its training with the simulated-data generator;
step 54), solidify the trained deep-learning network;
step 55), during recognition, use the deep-learning network obtained in steps 51) to 54) to process real data and determine its action type.
4. The character action feature extraction and identification optimization method based on radio frequency technology as claimed in claim 3, wherein the number m of readers is 4.
5. The character action feature extraction and identification optimization method based on radio frequency technology as claimed in claim 4, wherein the number t of points taken during the motion is 20.
6. The character action feature extraction and identification optimization method based on radio frequency technology as claimed in claim 5, wherein the random-number interval value r is 0.5.
CN201910245310.2A 2019-03-28 2019-03-28 Character action feature extraction and identification optimization method based on radio frequency technology Active CN109977881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910245310.2A CN109977881B (en) 2019-03-28 2019-03-28 Character action feature extraction and identification optimization method based on radio frequency technology


Publications (2)

Publication Number Publication Date
CN109977881A CN109977881A (en) 2019-07-05
CN109977881B true CN109977881B (en) 2022-07-22

Family

ID=67081396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910245310.2A Active CN109977881B (en) 2019-03-28 2019-03-28 Character action feature extraction and identification optimization method based on radio frequency technology

Country Status (1)

Country Link
CN (1) CN109977881B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506493A (en) * 2021-06-11 2021-10-15 同济大学 Chemistry experiment teaching system based on virtual-real fusion environment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006254305A (en) * 2005-03-14 2006-09-21 Mitsubishi Electric Corp Collation system for information carrier
CN104537401A (en) * 2014-12-19 2015-04-22 南京大学 Reality augmentation system and working method based on technologies of radio frequency identification and depth of field sensor
CN106125917A (en) * 2016-06-20 2016-11-16 南京大学 An RFID-based mid-air gesture interaction system and its working method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of an RFID smart card reader supporting multiple frequencies and formats; Lu Xinghua et al.; World Nonferrous Metals; 2015-09-15 (No. 09); full text *

Also Published As

Publication number Publication date
CN109977881A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
Mousavi et al. Deep reinforcement learning: an overview
CN110569795B (en) Image identification method and device and related equipment
CN109447140B (en) Image identification and cognition recommendation method based on neural network deep learning
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN108154075A (en) The population analysis method learnt via single
CN111914613B (en) Multi-target tracking and facial feature information recognition method
CN111626126A (en) Face emotion recognition method, device, medium and electronic equipment
CN113435335B (en) Microscopic expression recognition method and device, electronic equipment and storage medium
CN114550053A (en) Traffic accident responsibility determination method, device, computer equipment and storage medium
CN113254491A (en) Information recommendation method and device, computer equipment and storage medium
CN109086351B (en) Method for acquiring user tag and user tag system
CN116091667B (en) Character artistic image generation system based on AIGC technology
CN112668638A (en) Image aesthetic quality evaluation and semantic recognition combined classification method and system
CN109977881B (en) Character action feature extraction and identification optimization method based on radio frequency technology
CN110533688A (en) Follow-on method for tracking target, device and computer readable storage medium
Yang et al. Application of computer vision in electronic commerce
CN111611917A (en) Model training method, feature point detection device, feature point detection equipment and storage medium
CN116959424A (en) Speech recognition method, speech recognition system, computer device, and storage medium
Jiao et al. Plant leaf recognition based on conditional generative adversarial nets
CN113568983B (en) Scene graph generation method and device, computer readable medium and electronic equipment
Shukla et al. Deep Learning Model to Identify Hide Images using CNN Algorithm
Ekundayo et al. Facial expression recognition and ordinal intensity estimation: a multilabel learning approach
Dhar et al. A video based human detection and activity recognition–a deep learning approach
CN117711001B (en) Image processing method, device, equipment and medium
CN112149692B (en) Visual relationship identification method and device based on artificial intelligence and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant