CN111860408B - Memory group-based sampling method and system and electronic equipment - Google Patents

Memory group-based sampling method and system and electronic equipment

Info

Publication number
CN111860408B
CN111860408B CN202010744822.6A CN202010744822A
Authority
CN
China
Prior art keywords
data
time
data frames
memory
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010744822.6A
Other languages
Chinese (zh)
Other versions
CN111860408A (en)
Inventor
刘国良
李军伟
张庆徽
田国会
刘甜甜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202010744822.6A priority Critical patent/CN111860408B/en
Publication of CN111860408A publication Critical patent/CN111860408A/en
Application granted granted Critical
Publication of CN111860408B publication Critical patent/CN111860408B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a memory group-based sampling method, system and electronic device, relating to the field of action recognition. A sampling mechanism is designed in which the sampled data input to the working group comprise the most recently sampled data and the earlier sampled data temporarily stored in the memory group. As sampling proceeds, the earlier data in the memory group occupy a smaller and smaller proportion, so the most recent data carry more weight in prediction than older data and more recent data have a higher sampling density. Based on the memory group, data frames from longer ago and the data frames closest to the prediction time point are considered simultaneously, different weights are given to data at different time points, and human behavior recognition is realized in combination with a classifier, thereby improving recognition accuracy and speed.

Description

Memory group-based sampling method and system and electronic equipment
Technical Field
The disclosure relates to the field of motion recognition, and in particular relates to a memory group-based sampling method, a memory group-based sampling system and electronic equipment.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
The human body action recognition has important application value in the aspects of nursing of the elderly, physiotherapy and rehabilitation, cartoon game production, security monitoring, factory man-machine coordination and the like.
For example, in the nursing situation of the old, three-dimensional human body gestures of the user are captured in real time, abnormal behaviors of the user are detected, more comfortable and safe nursing service can be provided for the old, and dangerous situations are prevented.
The inventors have found that existing human behavior recognition algorithms are aimed at pre-segmented behavior clips, so that complete clip data are available during recognition; such offline classification methods require a video sequence containing several behavior classes to be segmented in advance into clips each containing only one class, which cannot meet the needs of human behavior recognition in real scenes. In online behavior recognition, the sliding-window method is limited by the window size: long-term context information can be lost during sampling, long delays occur, and long-term data cannot be balanced against recent data, so behavior recognition is inaccurate and the requirements are hard to meet.
Disclosure of Invention
In view of the defects in the prior art, the purpose of the present disclosure is to provide a memory group-based sampling method, system and electronic device. By designing a sampling mechanism based on a memory group, data frames from longer ago and the data frames closest to the prediction time point are considered at the same time, different weights are given to data at different time points, and human behavior recognition is realized in combination with a classifier, thereby improving recognition accuracy and speed.
The first object of the present disclosure is to provide a memory group-based sampling method, which adopts the following technical scheme:
the method comprises the following steps:
continuously receiving and caching a preset number of data frames, and updating the memory group;
the memory group updates the working group, and the data frame of the working group is input into the classifier for recognition to obtain a primary classification result;
a preset number of data frames are cached again; half of the preset number of data frames are sampled from the newly cached frames and input into the working group, and half of the preset number of data frames are sampled from the memory group and input into the working group; the working-group data are input into the classifier to obtain a second classification result, and the memory group is updated with the working-group data;
the step of caching data frames again is repeated to realize continuous sampling; recognition classification results are acquired in sequence, and real-time behavior is judged according to the recognition results.
Further, when the memory group is updated, the data frames in the memory group are replaced entirely.
Further, when the data frames are acquired, three-dimensional coordinates of the human skeleton action sequence are acquired as the data frames, continuous data frames before the current moment are acquired and cached, and after the cached data frames are output to the working group, the cache is emptied to wait for the next group of cached data frames.
Further, when the data frames are cached again, half of the data frames with the preset number are sampled from the cache and half of the data frames with the preset number are sampled from the memory group to update the working group, and the working group is used for updating the memory group to serve as a next sampling basis.
Further, the working group is input into the classifier, which makes a real-time prediction at each time step; the prediction result at the current moment and the previous prediction result are added and averaged to obtain the final updated real-time prediction result.
Further, when judging the real-time behavior, if the current recognition result differs from the previous recognition result, this indicates that a new behavior is being recognized; obtaining the same recognition result three times in succession triggers a behavior correct-recognition event.
Further, the classifier acquires the working-group data, extracts geometric features, joint point set distance features and motion features from the data to model the action sequence in space and time from multiple angles, and performs action recognition using a one-dimensional temporal convolution network with multiple stacked channels that fuses the features of the multiple channels.
A second object of the present disclosure is to provide a memory group-based sampling system, which adopts the following technical scheme:
the buffer module is used for collecting continuous data frames before the current moment, buffering the continuous data frames, sampling the continuous data frames and outputting the sampled continuous data frames to the working group;
the memory module is used for acquiring a data frame of the working group for replacement and updating;
the working module samples half of the data frames from the buffer module and half of the data frames from the memory module to be combined into a working group, and inputs the working group into the classifier;
and the classifier acquires the data frame of the working group, performs recognition and outputs a recognition classification result.
A third object of the present disclosure is to provide a medium, which adopts the following technical scheme: the medium has stored thereon a program which when executed by a processor implements the steps in a memory group based sampling method as described above.
A fourth object of the present disclosure is to provide an electronic device, which adopts the following technical scheme: comprising a memory, a processor and a program stored on the memory and executable on the processor, which processor implements the steps in a memory-based sampling method as described above when executing the program.
Compared with the prior art, the present disclosure has the advantages and positive effects that:
(1) By designing a sampling mechanism based on a memory group, data frames from longer ago and the segment of data frames closest to the prediction time point are considered at the same time, different weights are given to data at different time points, and human behavior recognition is realized in combination with a classifier, thereby improving recognition accuracy and speed;
(2) The sampled data input to the working group comprise the most recently sampled data and the earlier sampled data temporarily stored in the memory group. As sampling proceeds, the earlier data in the memory group occupy a smaller and smaller proportion, so the most recent data carry more weight in prediction than older data and more recent data are guaranteed a higher sampling density. Long-term context information is still taken into account, which ensures the real-time behavior recognition accuracy on the acquired real-time data and solves the problems of the traditional sliding-window-based method.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the exemplary embodiments of the disclosure and together with the description serve to explain the disclosure, and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic flow chart of a memory group-based sampling mechanism in embodiments 1 and 2 of the present disclosure;
fig. 2 is a network configuration diagram of the classifier in embodiments 1 and 2 of the present disclosure.
Detailed Description
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments in accordance with the present disclosure. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof;
for convenience of description, the words "upper", "lower", "left" and "right" in this disclosure, if used, merely denote an upper, lower, left, and right direction consistent with the accompanying drawings, and do not limit the structure, but merely facilitate description of the invention and simplify description, without indicating or implying that the apparatus or elements being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present disclosure.
As described in the background art, in the online behavior recognition process in the prior art, the method of sliding a window has a limitation on the window size, long-time context information can be lost during sampling, long time delay exists, long-time data and existing data cannot be balanced, so that the behavior recognition is inaccurate, and the requirement is difficult to meet; aiming at the problems, the disclosure provides a memory group-based sampling method, a memory group-based sampling system and electronic equipment.
Example 1
In an exemplary embodiment of the present disclosure, as shown in fig. 1, a memory group-based sampling method is presented.
When human behaviors are identified by a sliding window method, the window size is limited, long-time context information can be lost, and long time delay exists.
The method comprises the following steps:
continuously receiving and caching a preset number of data frames, and updating the memory group;
the memory group updates the working group, and the data frame of the working group is input into the classifier for recognition to obtain a primary classification result;
a preset number of data frames are cached again; half of the preset number of data frames are sampled from the newly cached frames and input into the working group, and half of the preset number of data frames are sampled from the memory group and input into the working group; the working-group data are input into the classifier to obtain a second classification result, and the memory group is updated with the working-group data;
the step of caching data frames again is repeated to realize continuous sampling; recognition classification results are acquired in sequence, and real-time behavior is judged according to the recognition results.
In this embodiment, a sampling mechanism is designed to balance data far from the current time against data close to the current time; an input data block is constructed as the input of a behavior classifier to obtain a real-time behavior classification result and realize online behavior recognition.
The input human behavior joint point coordinate input stream is sampled using the following sampling function:
X_T = F_S(Q_F, S_F, 0.5)

where Q_F is a queue storing the N consecutive frames of the data stream received before the current time T, T equals the number of frames received so far divided by N and rounded down, and N is the number of sampled frames required as model input.
For example, when the prediction is made for the first time, all N frames that the data stream has received so far are used as the model input:

X_1 = Q_F

When the prediction is made for the third time, the currently sampled frames consist of three parts: 25% of the frames are sampled from each of the queues sampled at the first and second predictions (held in the memory group), while 50% are sampled from the most recent N frames. It can therefore be seen that the most recent data carry more weight in the model prediction than older data, while long- and short-term context information are both taken into account.
To avoid storing all incoming data frames, a memory group S_F is used to store the previously sampled data frames:

S_F = F_S(Q_F, S_F, 0.5)

where the cache queue Q_F stores the N consecutive frames before the current moment, the memory group S_F stores the data frames sampled by the F_S function at all previous time steps, and 0.5 denotes a 50% sampling ratio.

The function F_S returns the updated memory group S_F using the algorithm in Table 1; only S_F and Q_F are kept in memory, which prevents memory from being occupied without bound. The specific steps of the algorithm are shown in Table 1:
table 1: online behavior recognition algorithm based on sampling mechanism
The incremental update of S_F and the sampling of Q_F ensure that more recent data frames have a higher sampling density and therefore carry more weight in the model input, which solves the problems of the sliding-window-based method.
The sampled data are then input into the classifier at each time step to make a real-time prediction, and the prediction result at the current moment is added to the previous prediction result and averaged to obtain the final updated real-time prediction result.
The specific flow of the whole sampling mechanism is shown in fig. 1: a working group and a memory group are maintained, both initialized as empty lists;
continuously receiving and caching joint data frames, updating a memory group when the joint data frames reach 16 frames, updating a working group by using the memory group, and inputting the working group data into a classifier to obtain a classification result;
when the data is cached to 16 frames again, 8 frames are sampled from the data, 8 frames are sampled from the memory group at the same time, the data and the memory group are combined to update the working group, and the result is obtained by inputting the classifier;
the data are cached to 16 frames again, 8 frames are sampled from them, 4 frames are sampled from each of the two 8-frame parts of the previous memory group, the working group is updated by merging them, and the result is obtained by inputting it into the classifier; and so on.
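For illustration, the flow just described (a 16-frame cache, 8 frames from the cache plus 8 from the memory group, and averaging of successive predictions) can be sketched as the following online loop, building on the F_S sketch above. The frame source and the classifier interface are placeholders assumed for the example.

```python
import numpy as np

def online_recognition(frame_stream, classifier, N=16):
    """Online behavior recognition with the memory-group sampling mechanism.
    frame_stream yields joint-coordinate frames; classifier(frames) returns
    a class-probability vector (placeholder interface)."""
    cache, memory_group = [], []            # cache queue and memory group
    prev_pred = None
    for frame in frame_stream:
        cache.append(frame)
        if len(cache) < N:
            continue
        # build the working group: newest frames plus frames from the memory group
        working_group = F_S(np.asarray(cache), np.asarray(memory_group))
        memory_group = working_group        # memory group is replaced entirely
        cache = []                          # empty the cache for the next block
        pred = classifier(working_group)    # real-time prediction at this step
        if prev_pred is not None:           # average with the previous prediction
            pred = (pred + prev_pred) / 2.0
        prev_pred = pred
        yield int(np.argmax(pred))          # current recognition result
```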
The classifier acquires the working-group data, extracts geometric features, joint point set distance features and motion features from the data to model the action sequence in space and time from multiple angles, and performs action recognition using a one-dimensional temporal convolution network with multiple stacked channels that fuses the features of the multiple channels;
specifically, as shown in fig. 2, feature data of the joint points are obtained, and joint point set distance features, geometric features and motion features are established;
modeling the time-space characteristics of the action sequence from multiple angles, and modeling the time sequence information of the action sequence by adopting a one-dimensional time convolution network;
human body actions are classified and identified through the time-space characteristics and the time sequence information.
Specifically, the joint-point-based human behavior feature representation comprises a joint point set distance feature representation, a geometric feature representation and a motion feature representation.
For the joint point set distance feature representation:
firstly, calculating the distance between every two of the joint points to obtain a symmetrical matrix. To reduce redundancy, only the lower triangular matrix that does not include diagonal lines is reserved as a joint set distance feature representation.
Assume a total of K frames of data and N joint points per action performer. In the k-th frame, the Cartesian coordinates of the i-th joint point are denoted g_i^k = (x_i^k, y_i^k, z_i^k), and all joint coordinates form the set G_k = {g_1^k, g_2^k, ..., g_N^k}. The feature F_k is calculated as

F_k(i, j) = ||g_i^k - g_j^k||, i > j

where ||g_i^k - g_j^k|| denotes the Euclidean distance between g_i^k and g_j^k. The elements of the lower triangular matrix are then expanded into a one-dimensional vector as the feature representation.
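A minimal sketch of the joint point set distance feature for a single frame, following the description above (pairwise Euclidean distances, lower triangle without the diagonal, flattened into a vector); the NumPy-based interface is an assumption.

```python
import numpy as np

def joint_distance_feature(G_k):
    """G_k: (N, 3) array of joint Cartesian coordinates for one frame.
    Returns the flattened lower-triangular pairwise-distance matrix."""
    diff = G_k[:, None, :] - G_k[None, :, :]      # (N, N, 3) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)          # (N, N) Euclidean distances
    i, j = np.tril_indices(len(G_k), k=-1)        # lower triangle, no diagonal
    return dist[i, j]                             # 1-D feature vector
```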
For geometric feature representation:
corresponding nodes, line segments and planes are selected according to the following rules:
closing node: each node is represented by its cartesian coordinates g (x, y, z);
line segment:is a Chinese character g 1 And g 2 The segment formed by connection meets one of the following constraints:
1.g 1 and g 2 Is directly adjacent in human body structure;
2.g 1 and g 2 One is an end point joint (head joint, left and right hand joint, left and right foot joint), and the other is a human chain structure with intervalsAn articulation point of one joint;
3.g 1 and g 2 Are all end point joints
Plane: p is represented by g 1 ,g 2 And g 3 A plane defined by a triangle formed as a vertex. Only five planes are considered, corresponding to the torso, arms and legs, respectively.
Six types of geometric features are then selected based on the selected joint points, segments and planes; each feature is explained in Table 2. Duplicate or invalid features caused by symmetry and similar reasons are removed.
Table 2: Geometric feature calculation methods and feature descriptions
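The six feature types themselves are defined in the table, which is not reproduced here. Purely as an illustration of the kind of geometric quantities involved, the sketch below computes two representative examples, a joint-to-segment distance and the angle between two segments; these particular choices are assumptions and not necessarily the patent's six features.

```python
import numpy as np

def joint_to_segment_distance(g, g1, g2):
    """Distance from joint g to the segment connecting g1 and g2."""
    g, g1, g2 = map(np.asarray, (g, g1, g2))
    seg = g2 - g1
    t = np.clip(np.dot(g - g1, seg) / np.dot(seg, seg), 0.0, 1.0)
    return np.linalg.norm(g - (g1 + t * seg))

def segment_angle(a1, a2, b1, b2):
    """Angle (radians) between segment a1->a2 and segment b1->b2."""
    u = np.asarray(a2) - np.asarray(a1)
    v = np.asarray(b2) - np.asarray(b1)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```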
For the motion feature representation:
specifically, the following formula can be used to calculate two different scale motion features:
and->Representing the slow and fast motion characteristics of the kth frame, respectively. G k+1 And G k+2 G is respectively k After one frame and G k Two frames of node data follow.
To change the motion characteristics of each frame into one-dimensional input, the motion characteristics of each frame are firstly changed into one-dimensional inputAnd->And the two-dimensional joint points are unfolded to form a one-dimensional vector, and the unfolded dimensions are all D=3. Simultaneously performing linear interpolation to obtain two dimensions of +.>And->Become->And->Finally, the fast and slow movement characteristics are obtained>And
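A minimal sketch of the two-scale motion features as reconstructed above (frame differences at offsets of one and two frames, flattened per joint and linearly interpolated to a common length); the difference formulation and the interpolation target length are assumptions.

```python
import numpy as np

def motion_features(G, target_len=None):
    """G: (K, N, 3) joint coordinates over K frames.
    Returns flattened slow (one-frame) and fast (two-frame) motion features,
    linearly interpolated along time to a common length."""
    slow = (G[1:] - G[:-1]).reshape(len(G) - 1, -1)    # (K-1, N*3)
    fast = (G[2:] - G[:-2]).reshape(len(G) - 2, -1)    # (K-2, N*3)
    target_len = target_len or len(G)
    def interp(feat):
        old = np.linspace(0.0, 1.0, num=len(feat))
        new = np.linspace(0.0, 1.0, num=target_len)
        return np.stack([np.interp(new, old, feat[:, d])
                         for d in range(feat.shape[1])], axis=1)
    return interp(slow), interp(fast)
```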
The geometric features, joint point set distance features and motion features are used to model the action sequence in space and time from multiple angles, and action recognition is performed with an improved one-dimensional temporal convolution network with multiple stacked channels that fuses the features of the multiple channels; the network structure is shown in fig. 2;
in fig. 2, "2×cnn (3, 2×filters,/2)" represents two one-dimensional convolutional neural network layers (the kernel size of the convolution is 3, the channel is 2) and one maximum pooling layer (the stride of which is 2). The meaning of the other CNN layers is similar. "spatldropout 1D" represents one-dimensional spatial dropping layer for suppressing overfitting. GAP represents a global average pooling layer. FC represents a fully connected layer. Softmax represents the Softmax layer used to obtain the classification probability. Concate represents a concatenation process, concatenating the outputs of the 4 lanes into one tensor.
Based on the above sampling mechanism, a real-time recognition result can be obtained. If the current recognition result differs from the previous recognition result, a new behavior is being recognized; obtaining the same recognition result three times in succession triggers a behavior correct-recognition event. If the correct-recognition event is triggered while the action is being performed and the subsequent recognition results remain unchanged, the sample is counted as a true positive for that action category; the ratio of the number of true positives over all action categories to the total number of samples is the real-time behavior recognition rate.
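The triggering rule just described (an event fires once the same result has been obtained three times in succession) can be sketched as follows; the generator interface is an illustrative assumption.

```python
def recognition_events(results, required=3):
    """Yield a class label whenever it has been recognized `required` times
    in a row, i.e. a behavior correct-recognition event."""
    last, streak = None, 0
    for r in results:
        streak = streak + 1 if r == last else 1   # a different result starts a new behavior
        last = r
        if streak == required:
            yield r                               # trigger the event once per run
```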
Example 2
In another exemplary embodiment of the present disclosure, as shown in fig. 1 and 2, a memory group-based sampling system is provided, which comprises:
The buffer module is used for collecting continuous data frames before the current moment, buffering the continuous data frames, sampling the continuous data frames and outputting the sampled continuous data frames to the working group;
the memory module is used for acquiring a data frame of the working group for replacement and updating;
the working module samples half of the data frames from the buffer module and half of the data frames from the memory module to be combined into a working group, and inputs the working group into the classifier;
and the classifier acquires the data frame of the working group, performs recognition and outputs a recognition classification result.
The specific configuration is described in detail in Embodiment 1 with reference to fig. 1 and 2, and is not repeated here.
By designing a sampling mechanism based on a memory group, data frames from longer ago and the segment of data frames closest to the prediction time point are considered at the same time, different weights are given to data at different time points, and human behavior recognition is realized in combination with a classifier, thereby improving recognition accuracy and speed.
Example 3
In another exemplary embodiment of the present disclosure, a medium is presented.
The medium has stored thereon a program which, when executed by a processor, implements the steps in the memory group based sampling method as described in embodiment 1.
Example 4
In yet another exemplary embodiment of the present disclosure, an electronic device is presented.
It comprises a memory, a processor and a program stored on the memory and executable on the processor; the processor implements the steps in the memory group-based sampling method described in Embodiment 1 when executing the program.
The foregoing description covers only the preferred embodiments of the present disclosure and is not intended to limit the disclosure; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (8)

1. A memory group based sampling method, comprising the steps of:
continuously receiving and caching a preset number of data frames, and updating the memory group;
the memory group updates the working group, and the data frame of the working group is input into the classifier for recognition to obtain a primary classification result;
a preset number of data frames are cached again; half of the preset number of data frames are sampled from the newly cached frames and input into the working group, and half of the preset number of data frames are sampled from the memory group and input into the working group; the working-group data are input into the classifier to obtain a second classification result, and the memory group is updated with the working-group data;
repeating the step of caching the data frames again, realizing continuous sampling, sequentially acquiring identification classification results, and judging real-time behaviors according to the identification results;
the working group inputs the classifier, makes real-time prediction for each time step, and adds and averages the prediction result at the current moment and the prediction result at the last time to obtain the final updated real-time prediction result;
when judging real-time behaviors, if the current recognition result is different from the last recognition result, the fact that new behaviors are recognized is indicated to be in progress, and the same recognition result is obtained three times continuously to trigger a correct recognition event of one behavior;
the method comprises the steps of sequentially obtaining identification classification results, specifically: acquiring characteristic data of the joint points, and establishing joint point set distance characteristics, geometric characteristics and motion characteristics; modeling the time-space characteristics of the action sequence from multiple angles, and modeling the time sequence information of the action sequence by adopting a one-dimensional time convolution network; human body actions are classified and identified through the time-space characteristics and the time sequence information.
2. The memory group based sampling method of claim 1, wherein the data frames in the memory group are replaced entirely when the memory group is updated.
3. The memory group-based sampling method as claimed in claim 1, wherein when the data frame is acquired, three-dimensional coordinates of a human skeleton motion sequence are acquired as the data frame, continuous data frames before the current moment are acquired for buffering, and after the buffered data frames are output to the working group, the buffer is emptied to wait for the next buffered data frame.
4. The memory group based sampling method of claim 1, wherein when the data frames are buffered again, half of the preset number of data frames are sampled from the buffer and half of the preset number of data frames are sampled from the memory group are combined, so as to update the working group, and the working group is used for updating the memory group as a next sampling basis.
5. The memory group based sampling method of claim 1, wherein the classifier obtains workgroup data, performs space-time modeling on the motion sequence by performing geometric features, joint point set distance features and multiple angles of motion features on the data, and performs motion recognition by using a one-dimensional time convolution network with multiple stacked channels and fusing the features of the multiple channels.
6. A memory group based sampling system, comprising:
the buffer module is used for collecting continuous data frames before the current moment, buffering the continuous data frames, sampling the continuous data frames and outputting the sampled continuous data frames to the working group;
the memory module is used for acquiring a data frame of the working group for replacement and updating;
the working module samples half of the data frames from the buffer module and half of the data frames from the memory module to be combined into a working group, and inputs the working group into the classifier;
the classifier acquires the data frame of the working group, identifies the data frame and outputs an identification classification result;
the working group inputs the classifier, makes real-time prediction for each time step, and adds and averages the prediction result at the current moment and the prediction result at the last time to obtain the final updated real-time prediction result;
when judging real-time behaviors, if the current recognition result is different from the last recognition result, the fact that new behaviors are recognized is indicated to be in progress, and the same recognition result is obtained three times continuously to trigger a correct recognition event of one behavior;
the method comprises the steps of obtaining a working group data frame and identifying, and specifically comprises the following steps: acquiring characteristic data of the joint points, and establishing joint point set distance characteristics, geometric characteristics and motion characteristics; modeling the time-space characteristics of the action sequence from multiple angles, and modeling the time sequence information of the action sequence by adopting a one-dimensional time convolution network; human body actions are classified and identified through the time-space characteristics and the time sequence information.
7. A medium having stored thereon a program which, when executed by a processor, performs the steps of the memory group based sampling method according to any of claims 1-5.
8. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the steps of the memory group based sampling method of any one of claims 1-5 when executing the program.
CN202010744822.6A 2020-07-29 2020-07-29 Memory group-based sampling method and system and electronic equipment Active CN111860408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010744822.6A CN111860408B (en) 2020-07-29 2020-07-29 Memory group-based sampling method and system and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010744822.6A CN111860408B (en) 2020-07-29 2020-07-29 Memory group-based sampling method and system and electronic equipment

Publications (2)

Publication Number Publication Date
CN111860408A CN111860408A (en) 2020-10-30
CN111860408B true CN111860408B (en) 2023-08-08

Family

ID=72945439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010744822.6A Active CN111860408B (en) 2020-07-29 2020-07-29 Memory group-based sampling method and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN111860408B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110113242A (en) * 2019-05-07 2019-08-09 南京磐能电力科技股份有限公司 Multi-node synchronization sampling and data transmission method in ring-type communication network
CN111091045A (en) * 2019-10-25 2020-05-01 重庆邮电大学 Sign language identification method based on space-time attention mechanism

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6934213B2 (en) * 2003-06-11 2005-08-23 Artisan Components, Inc. Method and apparatus for reducing write power consumption in random access memories

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110113242A (en) * 2019-05-07 2019-08-09 南京磐能电力科技股份有限公司 Multi-node synchronization sampling and data transmission method in ring-type communication network
CN111091045A (en) * 2019-10-25 2020-05-01 重庆邮电大学 Sign language identification method based on space-time attention mechanism

Also Published As

Publication number Publication date
CN111860408A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
Chen et al. Crowd-robot interaction: Crowd-aware robot navigation with attention-based deep reinforcement learning
CN112906604B (en) Behavior recognition method, device and system based on skeleton and RGB frame fusion
CN111539941B (en) Parkinson's disease leg flexibility task evaluation method and system, storage medium and terminal
JP7292657B2 (en) DATA PROCESSING METHOD, DATA PROCESSING DEVICE, COMPUTER PROGRAM AND ELECTRONIC DEVICE
WO2021051526A1 (en) Multi-view 3d human pose estimation method and related apparatus
WO2022237481A1 (en) Hand-raising recognition method and apparatus, electronic device, and storage medium
EP3968285A2 (en) Model training method and apparatus, keypoint positioning method and apparatus, device and medium
CN112396018B (en) Badminton player foul action recognition method combining multi-mode feature analysis and neural network
WO2023226186A1 (en) Neural network training method, human activity recognition method, and device and storage medium
CN112800990B (en) Real-time human body action recognition and counting method
CN113642379A (en) Human body posture prediction method and system based on attention mechanism fusion multi-flow graph
CN112434679A (en) Rehabilitation exercise evaluation method and device, equipment and storage medium
WO2021217937A1 (en) Posture recognition model training method and device, and posture recognition method and device
CN103679747B (en) A kind of key frame extraction method of motion capture data
CN108256461A (en) A kind of gesture identifying device for virtual reality device
WO2021036397A1 (en) Method and apparatus for generating target neural network model
Zhou et al. Learning multiscale correlations for human motion prediction
CN111860408B (en) Memory group-based sampling method and system and electronic equipment
CN113887501A (en) Behavior recognition method and device, storage medium and electronic equipment
CN111368770A (en) Gesture recognition method based on skeleton point detection and tracking
CN114882305A (en) Image key point detection method, computing device and computer-readable storage medium
CN115546491B (en) Fall alarm method, system, electronic equipment and storage medium
CN112181148A (en) Multimodal man-machine interaction method based on reinforcement learning
WO2023142886A1 (en) Expression transfer method, model training method, and device
CN115205737A (en) Real-time motion counting method and system based on Transformer model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant