CN116189309A - Method for processing human motion data and electronic equipment - Google Patents


Info

Publication number
CN116189309A
CN116189309A (application CN202310269065.5A)
Authority
CN
China
Prior art keywords
motion
gesture
coding
motion data
ontology
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310269065.5A
Other languages
Chinese (zh)
Other versions
CN116189309B (en)
Inventor
黄天羽
李祥臣
唐梦菲
唐明湘
李鹏
李立杰
丁刚毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Publication of CN116189309A
Application granted
Publication of CN116189309B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/70 — Arrangements using pattern recognition or machine learning
    • G06V10/764 — Arrangements using classification, e.g. of video objects
    • G06V20/00 — Scenes; scene-specific elements
    • G06V20/40 — Scenes; scene-specific elements in video content
    • G06V20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method for processing human motion data and an electronic device. The method comprises: constructing a layered motion model; defining an ontology corresponding to each layer based on the layered motion model; obtaining normalized motion data comprising a hierarchical category, a motion code and additional information; instantiating the ontology corresponding to the hierarchical category into a motion entity using the normalized motion data, where the ontology's coding attribute value is the motion code and its name and description attributes are assigned from the additional information; establishing association relationships among a plurality of motion entities, the association relationships comprising a directed composition relationship, an adjacency relationship and a possession relationship; and storing the motion entities and the association relationships. By obtaining normalized motion data, instantiating it, and associating the motion entities with one another, the method can reflect the potential relationships and characteristics between motions to the greatest extent.

Description

Method for processing human motion data and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an electronic device for processing human motion data.
Background
Research on human motion helps people make appropriate exercise plans, improve immunity and promote physical health, and has been a research hotspot in recent years. However, current human motion research remains in an irregular, non-standardized state, and compared with other study objects it faces the following challenges: (1) How to define and describe an arbitrary motion remains an open problem. Human motions are theoretically unlimited in kind, but motions that have been semantically defined and described account for only a small fraction of the total, so motion research is restricted to the defined motions and a large amount of undefined, undescribed motion information is lost. (2) Human motion data lacks structural information in both the temporal and spatial dimensions, so how motion data should be identified and retrieved in a database remains to be solved. (3) The relationships between human motions are complex: for example, basketball is composed of several sub-motions such as running and jumping, and running and jumping can be recombined into hurdling, long jump and the like. Recording the parent-child and adjacency relationships of motions would effectively advance research in this field, yet these motion relationships still lack modeling and definition.
Furthermore, motion data, unlike other data, lacks structural information in both temporal and spatial dimensions, and the relationship between motions is complex. There is a need to efficiently organize the motion data, mine and analyze the potential relationships of the motion data to reveal the laws of motion of the human body.
The above information disclosed in the background section is only for enhancement of understanding of the background of the application and therefore it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
The application provides a method for processing human motion data and an electronic device, in which normalized motion data is acquired and instantiated and the resulting motion entities are associated with one another, so that the potential relationships and characteristics between motions can be reflected to the greatest extent.
In one aspect, the present application provides a method of processing human motion data, comprising:
constructing a layered motion model in which an upper layer motion is made up of a sequence of a plurality of lower layer motions;
defining an ontology corresponding to each layer based on the layered motion model, wherein the ontology has coding attributes, name attributes and description attributes;
obtaining normalized motion data, wherein the normalized motion data comprises layering categories, motion codes and additional information;
instantiating an ontology corresponding to the hierarchical category into a motion entity by utilizing the normalized motion data, wherein the coding attribute value of the ontology is the motion code, and the name attribute and the description attribute of the ontology are assigned according to the additional information;
establishing association relationships among a plurality of motion entities, wherein the association relationships comprise a directed composition relationship, an adjacency relationship and a possession relationship;
and storing the plurality of motion entities and the association relationships.
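The entity-and-relation organization in the steps above can be sketched as follows. This is an illustrative data-model sketch, not the patent's literal schema: the class names (MotionEntity, MotionGraph) and the representation of the three relation types as tuple lists are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MotionEntity:
    layer: str             # one of "gesture", "action", "behavior", "habit"
    code: str              # coding attribute: the motion code
    name: str = ""         # name attribute, assigned from additional information
    description: str = ""  # description attribute, assigned from additional information

@dataclass
class MotionGraph:
    entities: dict = field(default_factory=dict)     # code -> MotionEntity
    composed_of: list = field(default_factory=list)  # directed composition: (parent, child, order)
    adjacent: list = field(default_factory=list)     # adjacency: (left, right)
    has: list = field(default_factory=list)          # possession: (owner, owned)

    def add(self, entity: MotionEntity):
        self.entities[entity.code] = entity

    def compose(self, parent: str, child: str, order: str):
        # order marks the child's place in the child-motion sequence,
        # e.g. "first", "last" or "intermediate"
        self.composed_of.append((parent, child, order))
```

A usage sketch: instantiate the dribble action and the basketball behavior from the worked example later in the description, then record that the dribble is the first child motion of basketball.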
According to some embodiments, the constructing a hierarchical motion model comprises:
dividing the continuous, non-deterministic periodic process of human motion into four ordered levels: gestures, actions, behaviors and habits;
defining gestures to be coded as the values of gesture bases in a fixed order;
defining actions to be coded as the combination of the timestamps of their key gestures and the corresponding gesture codes;
defining behaviors to be coded uniformly according to predefined semantics;
and defining habits to be coded compactly by accumulating natural numbers.
According to some embodiments, the constructing the layered motion model further comprises: a complete set of gesture codes, or a subset thereof, is constructed.
According to some embodiments, the ontologies comprise a gesture ontology, an action ontology, a behavior ontology and a habit ontology, corresponding respectively to the layers of the layered motion model.
According to some embodiments, the acquiring normalized motion data comprises:
classifying and modeling human body motions by using a layered motion model;
and obtaining normalized motion data according to the classification and modeling results.
According to some embodiments, the composition relationship has a first composition relationship attribute that indicates that a child motion that constitutes a parent motion is the first child motion, the last child motion, or an intermediate child motion in a sequence of child motions.
According to some embodiments, the adjacency has a first adjacency attribute representing a motion number or concatenation of motion numbers of at least one common parent motion of two adjoining child motions.
According to some embodiments, human motion is classified and modeled using a hierarchical motion model, including one or more of the following:
extracting a key frame sequence from the first format motion data as a key gesture sequence;
dividing the key gesture sequence into at least one action;
attributing a plurality of actions to a behavior;
attributing a plurality of behaviors to a habit.
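The first step of the classification pipeline above, extracting a key frame sequence as the key gesture sequence, can be sketched minimally. The criterion used here (keep only frames where the coded pose changes) is a simplifying assumption standing in for whatever keyframe extraction the first-format data supports; the function name is illustrative.

```python
def extract_key_gestures(pose_codes):
    """Collapse a per-frame pose-code sequence into (frame, code) key gestures,
    keeping only the frames where the coded pose changes."""
    keys = []
    for frame, code in enumerate(pose_codes):
        if not keys or keys[-1][1] != code:
            keys.append((frame, code))
    return keys
```

The resulting (frame, code) pairs are exactly the inputs the action-coding step needs: the frame index becomes the key gesture's timestamp within the action.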
According to some embodiments, obtaining normalized motion data from the classification and modeling results comprises:
converting each key gesture sequence into gesture codes;
determining the action code of each action from the gesture codes of the key gestures it comprises, and acquiring the action's additional information;
acquiring the behavior's additional information and coding the behavior uniformly according to predefined semantics;
and acquiring the habit's additional information and coding the habit compactly by accumulating natural numbers.
In another aspect, the present application further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods described above.
In another aspect, the present application also provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform any of the methods described above.
In another aspect, the present application also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform any of the methods described above.
Advantageous effects
The application provides a method for processing human motion data that acquires normalized motion data and instantiates it, associating the motion entities with one another while defining the entities' attributes and data structures. By organizing motion data into motion ontologies, entities, attributes and motion relationships, the features contained in motions and the relationships between motions can be effectively mined, so that the potential relationships and characteristics between motions are reflected to the greatest extent. This data organization, with the motion-entity network as its backbone, can support the implementation of various algorithms and meets the full range of demands that motion-related research places on the underlying motion data.
Drawings
FIG. 1 illustrates a hierarchical motion model constructed in accordance with an embodiment of the present application.
Fig. 2 shows an example of motion encoding according to an example embodiment.
Fig. 3 shows a flowchart of a method of acquiring human motion data according to an example embodiment.
Fig. 4 shows a flowchart of a method of processing human motion data according to an example embodiment.
Fig. 5 shows a schematic diagram of the associations between motion entities.
Fig. 6 shows an example of a relationship between motion entities.
Fig. 7 shows another example of a relationship between motion entities.
Description of the embodiments
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, devices, steps, etc. In other instances, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are merely exemplary and do not necessarily include all steps. For example, some steps may be decomposed while others may be wholly or partially combined, so the actual order of execution may change according to the actual situation.
In recent years, with advances in computer technology and falling hardware costs, motion capture systems have become increasingly widespread. Motion capture preserves the details of motion and faithfully records the trajectory of the human body; with its high precision and quality it has become the main means of acquiring human motion data, providing data support for fields such as medicine, physical training and intelligent media. Human actions can be segmented from the captured data, laying a foundation for a human motion data platform to organize the data. In addition, three-dimensional human motion data can also be generated by three-dimensional reconstruction from video streams. Researchers in various fields have successively published the human motion data they collected. Collecting, analyzing and standardizing these motion data from different fields helps advance human motion research and avoids redundant data acquisition.
However, unlike other data, motion data lacks structural information in both temporal and spatial dimensions, and the relationship between motion is complex. Therefore, the motion data needs to be processed so as to be effectively organized, so that implicit information in the motion data can be conveniently mined, and the storage efficiency and the search speed are improved.
According to embodiments of the application, a data organization and processing scheme is designed for motion data: the motion data exists in the form of entities, the motion entities can be associated with one another in specific tree and network structures, and the organization simultaneously defines the entities' attributes and data structures, so that the potential relationships and characteristics between motions are well reflected.
Example embodiments of the present application are specifically described below with reference to the accompanying drawings.
FIG. 1 illustrates a hierarchical motion model constructed in accordance with an embodiment of the present application.
Human motions can be distinguished from one another through motion coding, and the basis of the coding is to classify and model human motions so as to define and describe them.
Human motion is a continuous, non-deterministic periodic process with some degree of unpredictability. The layered motion model according to an embodiment of the application describes human motion qualitatively, dividing this continuous, non-deterministic periodic process into four ordered levels: gestures (P: Pose), actions (A: Action), behaviors (M: Motion) and habits (S: Style), where an upper-layer motion is composed of a sequence of lower-layer motions. A gesture (P: Pose) represents the relatively static state of all limbs of the human body in three-dimensional space at a given instant - for example, the instant of standing is a gesture - and mainly reflects the physical properties of the human pose. An action (A: Action) is a gesture segment with practical meaning: it describes a change in the spatial-position state of the limbs, i.e. a continuous sequence of gestures that starts from one gesture and returns to some gesture, such as a jump upward, a single walking step or a leap; it reflects the spatio-temporal characteristics of motion and is measured in seconds. A behavior (M: Motion) describes the transformation of the limbs' spatial-position state from one process to another, i.e. a combination of actions that starts from one action and ends after several actions; it represents the biological nature of motion, can last longer, and has a time span of minutes or hours, such as walking to work or running a marathon. A habit (S: Style) is the overall expression of behaviors over a longer period, often related to custom, tradition or experience; it can also partly represent human thinking and emotional characteristics, reflecting the social and psychological aspects of motion, with a time span of days, months or years.
According to the layered motion model, gesture-layer motion has a relatively definite and accurate description: a gesture P(t) is precisely described by the combination of positions and angles of the key human joints pu1 to pun, as shown in formula (1):

P(t) = {pu1(t), pu2(t), ..., pun(t)}    (1)

Action-layer motion is a single-period, periodically varying process composed of a sequence of gestures. An action A(t) can be described by a combination of gestures, representing a gesture sequence that starts from some gesture and ends after several gestures, as shown in formula (2):

A(t) = {P(t1), P(t2), ..., P(tm)}    (2)

Behavior-layer motion is a random process of aperiodic but determinate duration generated by several actions. A behavior M(t) can be described by a combination of actions, representing an action sequence that starts from some action and ends after several actions, i.e. a permutation of several actions, as shown in formula (3):

M(t) = {A1(t), A2(t), ..., Ak(t)}    (3)

Habit-layer motion is a random process of aperiodic, indeterminate duration generated by several behaviors. A habit S(t) can be described by a combination of behaviors, representing a set of behavior sequences with certain characteristics produced over a longer time, as shown in formula (4):

S(t) = {M1(t), M2(t), ..., Mj(t)}    (4)
The layered motion model thus describes human behavior from gestures on the millisecond timescale, to actions on the second timescale, to behaviors on the minute-to-hour timescale, to habits on an open-ended timescale. As the hierarchy rises, human behavior spans from a determinate finite system to an indeterminate one.
Three-dimensional human motion data is essentially a time-ordered sequence of the three-dimensional spatial positions of the key joints of the human body, and motion characteristics cannot be intuitively captured from the raw three-dimensional data alone. Retrieval of motion data has therefore traditionally depended on skeletal feature indexes requiring large numbers of set operations, making retrieval slow and inefficient.
Motion coding identifies and encodes motion and is the basis of data retrieval and organization. The application proposes a new coding system based on the layered motion model. Under this system, every type of motion has a unique identifier to refer to it, providing the basis and support for establishing the correspondence between motion data, motion codes and motion semantics. The codes can be used for storing and querying motions, supporting the construction of a human motion data platform and the standardization of motion types, and improving the reusability and interpretability of three-dimensional human motion data.
According to the technical concept of the application, gestures are coded as the values of gesture bases in a fixed order, actions are coded as the combination of the timestamps of their key gestures and the corresponding gesture codes, behaviors are coded uniformly according to predefined semantics, and habits are coded compactly by accumulating natural numbers.
According to an example embodiment, gestures are encoded primarily in an exhaustive manner, for which the concept of a pose unit (PU) is introduced: a limb formed by two adjacent joints in the human skeletal structure, in a given position and rotation state relative to the limb's root joint, constitutes one pose unit. Following the parent-child relationships of the standard human skeleton and the movement characteristics of posture, nine gesture bases with the greatest influence on human posture are selected for gesture coding: the lumbar vertebra pu1, the left upper arm pu2, the right upper arm pu3, the left thigh pu4, the right thigh pu5, the left lower arm pu6, the right lower arm pu7, the left calf pu8, and the right calf pu9. A 9-digit number serves as the numeric part of the gesture code; the order of the gesture bases in the code is fixed, and different position-rotation states of a gesture base correspond to different gesture-base values, as shown in formula (5):

P = pu1 pu2 pu3 pu4 pu5 pu6 pu7 pu8 pu9    (5)

The value ranges of the gesture bases differ, as shown in formulas (6) and (7); the specific value is determined by the position-rotation state of the gesture base. Taking the left upper arm pu2 as an example: its root joint is a spherical joint with seven position-rotation states during movement (neutral, forward flexion, backward extension, internal rotation, external rotation, adduction and abduction), corresponding to the seven values of pu2:

pu_i ∈ {0, 1, ..., 6} for gesture bases rooted at spherical joints    (6)
pu_i ∈ {0, 1, ..., v_i − 1} for the remaining gesture bases, where v_i is the number of position-rotation states of pu_i    (7)

The complete gesture code is obtained by prefixing the numeric code with the letter "P", which identifies gesture-layer motion codes.

Under the coding scheme defined above, human gestures can be classified according to the states of the gesture bases. In this classification the theoretical number of human gestures, i.e. the product of the value-range sizes of the nine gesture bases, is 9,003,750; storing them requires 9,003,750 × 4 = 36,015,000 bytes, i.e. at least 35 MB of space.
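The gesture-code layout above can be sketched in code. The per-base state counts used here are an assumption: the split of 6 states for the waist, 7 for each spherical joint (upper arms, thighs) and 5 for each remaining joint (lower arms, calves) is one assignment that reproduces the stated total of 9,003,750; the source gives only the total and the 7-state spherical-joint example.

```python
# Assumed states per gesture base, in the fixed coding order:
# lumbar, L/R upper arm, L/R thigh, L/R lower arm, L/R calf.
# Chosen so that the product equals the stated total of 9,003,750.
STATES = [6, 7, 7, 7, 7, 5, 5, 5, 5]

def pose_code(values):
    """Build the complete gesture code: the identifier 'P' followed by
    nine digits, one gesture-base value per base in the fixed order."""
    assert len(values) == 9, "one value per gesture base"
    for v, n in zip(values, STATES):
        assert 0 <= v < n, "gesture-base value out of range"
    return "P" + "".join(str(v) for v in values)

# Theoretical number of distinct gestures: product of the value-range sizes.
total = 1
for n in STATES:
    total *= n
```

With this layout, `total` reproduces the 9,003,750 figure from the text, and a code such as P210111110 (one of the key gestures in the later layup example) is a valid instance.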
According to an example embodiment, an action is coded and described by its sequence of key gestures and their corresponding time points. Formula (8) defines the key gesture sequence of an action A:

A = {(t1, P1), (t2, P2), ..., (tm, Pm)}    (8)

The code of the action consists of the timestamps of the key gestures and the corresponding gesture codes, as shown in formula (9):

Code(A) = "A" ‖ t1 P1 ‖ t2 P2 ‖ ... ‖ tm Pm    (9)

where tk denotes the time at which the k-th key gesture occurs in the action, in frames, and Pk denotes the numeric part of the k-th key gesture's code. The identifier "A" prefixed to the numeric code marks the code as an action-layer motion code.
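Formula (9) can be sketched directly. One caveat is an assumption: the worked layup example later in the description concatenates single-digit frame timestamps with 9-digit gesture codes, so this sketch does the same; multi-digit frame numbers would need a delimited layout, which the source does not specify.

```python
def action_code(key_gestures):
    """Combine (timestamp, gesture_code) pairs into an action code:
    the identifier 'A' followed by, for each key gesture in order,
    its frame timestamp and the 9-digit numeric part of its code."""
    parts = []
    for t, pcode in key_gestures:
        digits = pcode[1:] if pcode.startswith("P") else pcode
        parts.append(f"{t}{digits}")
    return "A" + "".join(parts)
```

Applied to the dribble action of the layup example (P1 at frame 0, P2 at frame 3), this reproduces the code A02101111103221111011 given in the description.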
Unlike gestures and actions, the classification of behaviors is relatively complete: comparatively many behaviors have already been defined semantically. A behavior can therefore be coded uniformly according to a hierarchy of motion properties, as shown in formula (10):

Code(M) = "M" ‖ MT ‖ MS ‖ ME ‖ MP ‖ N    (10)

where MT denotes the motion type, MS the motion scene, ME the motion power, MP the motion prop, and N the code suffix.

The motion type MT is divided into four classes: sports, daily motion, extreme motion and artistic motion. Sports refers to the various exercises that strengthen physical fitness; daily motion refers to motion commonly performed in everyday life; extreme motion refers to motion of greater difficulty and challenge; artistic motion refers to motion with ornamental value, such as dance, martial arts and playing musical instruments. The motion scene MS is mainly classified into land motion, water motion, ice-and-snow motion and others. The motion power ME is divided into four gears: 0-150 W is the first gear, 150-300 W the second, 300-450 W the third, and >450 W the fourth. The motion prop MP denotes an auxiliary object used during the motion, such as a basketball, bicycle or barbell; it is coded as a two-digit decimal number, accommodating up to 99 props, and is coded 00 when no prop is used. The prop coding table can be extended at any time as motion varieties grow. The code suffix N distinguishes the codes of two behaviors whose motion type, scene, power and prop codes are all identical: behaviors with identical prefixes are distinguished by natural numbers accumulated in sequence from 0. The numeric part of the final behavior code is the concatenation of the four hierarchical codes, prefixed with the motion-layer identifier "M".
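The behavior-code layout of formula (10) can be sketched as follows. The digit assignments for MT and MS (sports = 0, land motion = 0) are inferred from the worked basketball example, whose final code is M003291; the source states the field layout and the power gears but not the individual class digits, so treat those mappings as assumptions.

```python
def power_gear(watts):
    """Map motion power ME to one of four gears:
    0-150 W, 150-300 W, 300-450 W, >450 W, encoded 0..3."""
    if watts <= 150:
        return 0
    if watts <= 300:
        return 1
    if watts <= 450:
        return 2
    return 3

def behavior_code(mt, ms, watts, prop, suffix):
    """Concatenate the four hierarchical codes plus the suffix,
    prefixed with the motion-layer identifier 'M'.
    The prop code is a zero-padded two-digit decimal number."""
    return f"M{mt}{ms}{power_gear(watts)}{prop:02d}{suffix}"
```

With the inferred digits for basketball (type sports = 0, scene land = 0, power 700 W, prop code 29, suffix 1), this reproduces the code M003291 from the description's example.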
Habit is defined to be coded compactly by accumulating natural numbers. Habit mainly reflects the social and psychological aspects of motion, and coding habits S helps advance the analysis of human social habits and the establishment of unified standards. A habit S has a long time span, the behavior sequence forming a single habit contains many behaviors, its overall characteristics are coarse-grained, and habits occur frequently as research objects. The coding of habits therefore uses a simple accumulated-natural-number scheme for compactness, as shown in formula (11):

Code(S) = "S" ‖ n_S    (11)

where n_S denotes the number of habits S already present.
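Formula (11) amounts to a counter over registered habits. A minimal sketch, assuming each distinct habit is registered once and keeps its code thereafter (the class and method names are illustrative):

```python
import itertools

class HabitCoder:
    """Compact habit coding by accumulating natural numbers: each newly
    registered habit receives the next code 'S<n>', where n is the
    number of habits already present."""

    def __init__(self):
        self._counter = itertools.count()  # 0, 1, 2, ...
        self.codes = {}                    # habit name -> code

    def register(self, name):
        if name not in self.codes:
            self.codes[name] = f"S{next(self._counter)}"
        return self.codes[name]
```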
Motion coding may bring several advantages. First, motion coding can assign an Identification (ID) to each type of motion, and the motion data can be distinguished from each other. Second, motion coding can be used as an index to organize and store motion data, thereby enabling savings in storage consumption. In addition, the motion data can be searched through motion coding, so that the searching speed is greatly improved.
Gestures, actions, behaviors and habits classify human motions at different time scales and data characteristics, and parent-child and sibling relationships exist between motions in the layered motion model. A parent motion is composed of a sequence of child motions, and child motions in the same hierarchy that constitute the same parent motion are siblings of each other.
Fig. 2 shows an example of motion encoding according to an example embodiment.
Referring to FIG. 2, shown is a key gesture schematic of a basket action group, consisting essentially of five key gestures
Figure SMS_34
The motions with three periods being one are respectively that the two hands are used for alternately dribbling +.>
Figure SMS_35
Single step running->
Figure SMS_36
And jump up shooting action->
Figure SMS_37
. These movements together with other basketball movements constitute basketball sport +.>
Figure SMS_38
The motion coding mechanism described above can give each type of motion a unique code for identification. The coding principles of different motion levels are different, and combining multiple codes can result in the coding of new motion, which provides great flexibility and can meet a wide range of requirements. The coding scheme of P, A, M will be demonstrated using the set of basket actions in basketball sport.
The basket-up action group is shown in fig. 3, in which five key gesture frames are extracted, and 3 actions with one cycle are used. First for these five poses
Figure SMS_39
Coding, referring to P coding frame, human bodyThe nine gesture bases of the gesture are respectively valued to be combined to obtain a coding result corresponding to the gesture, as shown in table 1.
Table 1:
Figure SMS_40
encoding and description.
Figure SMS_41
After the gesture codes are extracted, the codes of the key gestures are combined to obtain the code of each action. The layup action group contains three single-period actions: the two-hand alternating dribble, the single-step run, and the jump shot, each encoded with the action coding scheme. The coding principle follows equation (9): an action is represented by the frame ordinal at which each of its key poses occurs together with the corresponding pose code. The dribble action consists of two key poses, the first occurring at frame 0 and the second at frame 3, so by equation (12) its action code is A02101111103221111011. Similarly, according to the results of equations (13) and (14), the single-step run is encoded as A022111101140112111106311111111 and the jump shot is encoded as A03111111112011000000.
These three actions, together with other actions, constitute the basketball behavior, whose code is obtained from equation (10). The type of basketball is sports; its sports scene is land sport; its power is 700 W; basketball uses a sports prop; and the behavior code carries a suffix. Combining these field values yields the final code of the basketball behavior, M003291.
Fig. 3 shows a flowchart of a method of acquiring human motion data according to an example embodiment.
According to the embodiment, the data extraction and the gesture and action coding are carried out on the motion data in the first format according to the modeling mode of the motion data, so that the collection of massive standard data can be realized.
Referring to fig. 3, at S301, first format motion data is acquired.
According to some embodiments, the first format motion data may be BVH format data. BVH is a human motion capture file format; files in this format take joints as the core elements and record the motion of a human skeleton over a number of consecutive frames.
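As a sketch of this format, the MOTION section of a BVH file can be read as follows (a minimal illustrative helper, not part of the patent; it ignores the joint hierarchy and returns only the frame count, frame time, and frame rows):

```python
# Minimal sketch of reading the MOTION section of a BVH file.  A BVH file
# stores a joint hierarchy followed by per-frame channel values; this
# illustrative helper returns only frame count, frame time, and frames.
def read_bvh_motion(text):
    lines = [ln.strip() for ln in text.splitlines()]
    i = lines.index("MOTION")
    n_frames = int(lines[i + 1].split(":")[1])
    frame_time = float(lines[i + 2].split(":")[1])
    frames = [[float(v) for v in row.split()]
              for row in lines[i + 3 : i + 3 + n_frames]]
    return n_frames, frame_time, frames
```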
According to some embodiments, motion data in a second format, or a video file, may first be converted into the first format motion data, so that only the first format needs to be processed, simplifying the processing flow and saving processing resources.
At S303, a key frame sequence is extracted from the first format motion data as a key pose sequence.
According to some embodiments, a curve reduction method may be employed to extract key frames. The method can recursively screen extreme points on the motion data high-dimensional space curve, and extract key gesture sequences in the motion. The number of frames of the key gesture sequence may be set to a fixed value or an indefinite value, and the present application does not limit the number of key gesture frames of the action. The number of pose keyframes may vary within a range depending on the complexity of the motion.
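One way to sketch such recursive extreme-point screening is Ramer-Douglas-Peucker-style curve simplification over the pose curve. The patent names a "curve reduction method" but not this exact algorithm, so this is an illustrative stand-in; each frame is treated as a point on the high-dimensional curve, with time assumed to be the first coordinate:

```python
import math

# Sketch of key-frame extraction by recursive extreme-point screening,
# here as Ramer-Douglas-Peucker-style curve simplification.  The exact
# "curve reduction" method is not specified in this document.
def extract_keyframes(frames, tol):
    def dist_to_chord(p, a, b):
        # perpendicular distance from frame p to the chord a -> b
        ab = [bj - aj for aj, bj in zip(a, b)]
        ap = [pj - aj for aj, pj in zip(a, p)]
        denom = sum(v * v for v in ab) or 1.0
        t = sum(u * v for u, v in zip(ap, ab)) / denom
        proj = [aj + t * v for aj, v in zip(a, ab)]
        return math.sqrt(sum((pj - qj) ** 2 for pj, qj in zip(p, proj)))

    def rdp(lo, hi):
        if hi <= lo + 1:
            return []
        idx, best = -1, -1.0
        for i in range(lo + 1, hi):
            d = dist_to_chord(frames[i], frames[lo], frames[hi])
            if d > best:
                idx, best = i, d
        if best <= tol:
            return []        # every interior frame is close to the chord
        return rdp(lo, idx) + [idx] + rdp(idx, hi)

    return [0] + rdp(0, len(frames) - 1) + [len(frames) - 1]
```

Because recursion stops only where no interior frame deviates from the chord by more than `tol`, the number of extracted key frames naturally varies with the complexity of the motion, matching the indefinite frame count described above.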
At S305, the key gesture sequence is partitioned into at least one action.
According to some embodiments, the action division may be performed in a manual or automatic manner, such that the key frame sequence is divided into at least one action.
At S307, the key gesture sequences are respectively converted into gesture codes.
As previously discussed, the gesture encoding may include encoding an identifier and sequentially fixed values of the gesture base.
According to some embodiments, a complete set of gesture codes, or a subset thereof, may be pre-constructed. After the position and rotation state of each gesture base in a key gesture is obtained, the complete set or subset of gesture codes is queried using those states to obtain the gesture code.
At S309, an action code for each action is determined from the gesture codes for the key gestures that the action includes.
According to some embodiments, after determining the frame ordinal of each key pose as it appears in the corresponding action, the action code is determined as a combination of the frame ordinal of each key pose and the corresponding pose code.
According to some embodiments, multiple actions may also be attributed to a behavior and the behavior uniformly encoded in accordance with predefined semantics, as discussed previously.
According to some embodiments, multiple behaviors may also be attributed to a habit, which is compactly encoded in terms of accumulated natural numbers, as discussed previously.
A similarity determination can be made for motions based on their codes. Since most motions are not yet defined or described, determining whether two motions are the same motion or similar motions is a problem to be solved. According to the embodiment, the gesture-layer, action-layer, and behavior-layer motion codes are based on motion characteristics, and the degree of similarity between two motions at the same level can be judged from their codes. Suppose there are two same-level motions X and Y, encoded as follows:

$X = (x_1, x_2, \dots, x_n)$ (15)

$Y = (y_1, y_2, \dots, y_n)$ (16)

If X and Y are both gesture-layer motions, the terms of equations (15) and (16) are defined as in equation (5). If both are action-layer motions, the terms are defined as in equation (9); if the numbers of key frames constituting the two actions differ, the action code with fewer frames is padded with 0s so that equations (15) and (16) have the same length. If both are behavior-layer motions, the terms are defined as in equation (10). The similarity of motion X to motion Y is the Euclidean distance between the two code vectors:

$s = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$ (17)

In equation (17), a smaller s indicates that the two motions are more similar, and s = 0 indicates that they are the same motion. Based on this coding scheme, motion data can be retrieved using motion codes. The code values of related motions are close to each other, so a user can perform associative search on this basis.
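The zero-padding plus Euclidean-distance definition of equation (17) can be sketched as follows, assuming the codes are already given as numeric vectors:

```python
import math

# Sketch of the similarity measure of equation (17): zero-pad the
# shorter code vector, then take the Euclidean distance.
def motion_similarity(x, y):
    n = max(len(x), len(y))
    x = list(x) + [0] * (n - len(x))
    y = list(y) + [0] * (n - len(y))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
```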
At S311, additional information of the motion is acquired, and the motion code and the additional information are used as normalized motion data of the motion.
The additional information may include action name information and other descriptive information to satisfy semantic requirements.
At S313, normalized motion data for the action and the associated first format motion data are stored.
According to an example embodiment, normalized motion data including motion coding and additional information and associated first format motion data may be stored in a database to provide support for motion retrieval and data interrogation. According to some embodiments, normalized motion data and associated first format motion data may be stored in the same or different databases.
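As a sketch, such storage might look like the following. The schema is illustrative; the patent only requires that motion codes, additional information, and the associated first-format data be stored:

```python
import sqlite3

# Sketch of storing normalized motion data alongside the source BVH.
# The schema and example values are illustrative, not from the patent.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE motion (
    code        TEXT PRIMARY KEY,  -- motion code (unique identifier)
    name        TEXT,              -- additional info: name
    description TEXT,              -- additional info: description
    bvh         BLOB               -- associated first-format (BVH) data
)""")
con.execute("INSERT INTO motion VALUES (?, ?, ?, ?)",
            ("M003291", "basketball", "land ball sport", b"HIERARCHY..."))
row = con.execute("SELECT name FROM motion WHERE code = ?",
                  ("M003291",)).fetchone()
```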
Thus, according to the example embodiment, the data extraction and the motion coding are performed on the first format file serving as the intermediate file according to the layered motion model, so that on one hand, collection of massive standard data can be achieved, on the other hand, storage consumption of subsequent processing can be reduced, the subsequent processing speed can be improved, and research and understanding of human body motion can be facilitated.
Fig. 4 illustrates a method of processing human motion data according to an example embodiment.
Unlike other data, motion data lacks structural information in both the temporal and spatial dimensions, and relationships between motions are complex. Effectively processing and organizing motion data can mine hidden information within it, reveal the laws of human body motion, improve storage efficiency and retrieval speed, and improve the scalability and robustness of a motion data platform.
Referring to fig. 4, in S401, a layered motion model is constructed in which an upper layer motion is composed of a sequence of a plurality of lower layer motions.
According to an embodiment, as previously discussed, constructing the hierarchical motion model may include dividing a continuous non-deterministic periodic process of human motion into four ordered levels of pose, action, behavior, and habit, defining the pose as a value of a sequentially fixed pose base encoding, defining the action as a combination encoding of a timestamp of a key pose and a corresponding pose encoding, defining the behavior as a unified encoding according to predefined semantics, and defining the habit as a compact encoding in an additive natural number.
At S403, an ontology corresponding to each layer is defined based on the hierarchical motion model, the ontology having coding properties, name properties, and description properties.
An ontology is a canonical description of the classifications and meanings of the motion data, reflecting relationships both within and between data. The ontology definition of human motion is based on the layered motion model and is strictly normalized, unambiguous, and generally recognized and accepted. The ontologies comprise a gesture base ontology together with the gesture, action, behavior, and habit ontologies corresponding to the layers of the layered motion model. The four ontologies of gesture P, action A, behavior M, and habit S can partition all motions in a generalized way, while the gesture base ontology PU represents the position and rotation of the human limbs during motion and helps express the essence of a motion.
An ontology has a code attribute, a name attribute, and a description attribute. The code attribute is the primary key of the ontology and uniquely identifies it. The name attribute is the ontology's name in the real world, identifying its specific meaning, and may be null. The description attribute supplements the ontology with additional information and may also be null. Some ontologies have level-specific attributes according to their own nature, such as cycle length and metabolic equivalent.
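This attribute scheme can be sketched as follows (class and field names are illustrative; the patent does not prescribe a data structure):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the ontology attribute scheme: the code attribute is the
# primary key and cannot be null; name and description may be null.
# Class and field names are illustrative, not from the patent.
@dataclass
class Ontology:
    code: str                          # primary key: unique motion code
    name: Optional[str] = None         # real-world name, may be null
    description: Optional[str] = None  # supplementary info, may be null

@dataclass
class BehaviorOntology(Ontology):
    # level-specific attributes mentioned in the text
    cycle_length: Optional[int] = None
    metabolic_equivalent: Optional[float] = None
```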
At S405, normalized motion data is acquired, the normalized motion data including hierarchical categories, motion coding, and additional information.
According to an example embodiment, human motion may be classified and modeled using the layered motion model, and normalized motion data may be derived from the results of the classification and modeling. For example, a sequence of key frames may be extracted from the first format motion data as a key gesture sequence. The key gesture sequence may be partitioned into at least one action, multiple actions may be attributed to a behavior, and multiple behaviors may be attributed to a habit. The key gesture sequences may then be converted into gesture codes respectively. The action code of each action is determined from the gesture codes of the key gestures it includes, and additional information of the action is acquired; additional information of the behavior is acquired and the behavior is uniformly encoded according to predefined semantics; and additional information of the habit is acquired and the habit is compactly encoded in the manner of accumulated natural numbers. The aforementioned additional information includes name information and description information.
According to some embodiments, when performing gesture encoding, the position rotation state of the gesture base in each key gesture is obtained, and then the complete set or a subset of the gesture encoding is queried through the position rotation state of each gesture base, so as to obtain the gesture encoding.
According to some embodiments, when performing motion encoding, the frame number of each key pose occurring in the corresponding motion is determined, and then the motion encoding may be determined as a combination of the frame number of each key pose and the corresponding pose encoding.
According to some embodiments, the normalized motion data may also be imported externally.
At S407, the ontology corresponding to the hierarchical category is instantiated as a kinematic entity using the normalized kinematic data. For example, the coding attribute of the body takes the value of the motion code, and the name attribute and the description attribute of the body are assigned according to the additional information.
The ontologies of human motion alone are not sufficient to fully represent all the information contained in human motion data, so ontology attributes supplement the motion with more information, as shown in fig. 4. Among the attributes of ontologies P, A, M, and S, every attribute except the code may be null. Neither of the two attributes of gesture base ontologies PU0-PU8 may be null: the name of a gesture base is the name of the bone it represents, and its value is the position and rotation information of that bone in three-dimensional space.
An ontology category together with specific values of its attributes constitutes an entity; entities are the objects stored and managed by the data platform. An entity is an instantiation of an ontology, and different entities of the same ontology have different attribute values. For example, "running" is an instance of the behavior ontology M: the relevant attributes of M are owned by the "running" entity and are given specific values when entered into the data platform. "Running" and its attribute values together constitute one entity under the behavior ontology M.
In S409, an association relationship between a plurality of motion entities is established, the association relationship including a directed composition relationship, an adjacency relationship, and a possession relationship (see fig. 5).
According to an embodiment, the composition relation has a first composition relation property, indicating that a child motion constituting a parent motion is a first child motion, a last child motion or an intermediate child motion in a sequence of child motions.
According to an embodiment, the adjacency has a first adjacency attribute representing a motion number or concatenation of motion numbers of at least one common parent motion of two adjoining child motions.
According to an embodiment, the possession relation represents a relation between the gesture motion entity and the gesture base entity.
At S411, the plurality of sports entities and the association relationship are stored.
According to an example embodiment, the kinematic entity and the association relationship may be stored in a relational database or graph database. For example, the motion data may be represented in the form of triples, where entities in the triples are nodes, the relationships of the entities are edges, and a knowledge base containing a large number of triples constitutes a huge knowledge graph.
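The triple form mentioned above can be sketched as follows; the entity identifiers and relation names are illustrative placeholders, not real motion codes:

```python
# Sketch of triple-form storage for motion entities and relations:
# (head, relation, tail).  The identifiers and relation names here
# are illustrative placeholders, not real motion codes.
triples = [
    ("M003000", "COMPOSED_OF", "A_start"),       # directed composition
    ("M003000", "COMPOSED_OF", "A_single_run"),
    ("A_start", "ADJACENT_TO", "A_single_run"),  # adjacency of siblings
]

def children_of(parent):
    # child motions of a parent, in insertion order
    return [t for h, r, t in triples if h == parent and r == "COMPOSED_OF"]
```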
According to the example embodiment, by acquiring normalized motion data and instantiating it, motion entities can be associated with each other in a specific tree/mesh structure, while the various attributes and data structures of the motion entities are defined, so that potential relationships and features between motions can be represented to the greatest extent.
According to an embodiment, the motion data is processed into data organization patterns of motion ontology, entity and attribute, motion relationship, and the like. The method is a special organization mode of motion data, and can well dig out the relation between the characteristics contained in the motion and the motion. The data processing organization mode taking the moving entity network as a main body can support the implementation of various algorithms, for example, deeper movement relations can be analyzed by using a graph theory algorithm, and a clustering algorithm can be used for extracting a movement data subset containing various different characteristics and the like so as to meet the omnibearing requirement of the research of the movement related field on the bottom movement data.
Fig. 6 shows one example of a relationship between moving entities, and fig. 7 shows another example of a relationship between moving entities.
Referring to fig. 6, the relationships and relationship attributes between an entity under the behavior ontology "100 meter hurdle" and entities under the action ontologies "start running", "single step running", and "hurdle" are shown. The 100 meter hurdle motion consists of the action sequence of starting, single-step running, and hurdling, so "100 meter hurdle" has a directed composition relationship with each of "start running", "single step running", and "hurdle". The attribute "Status" of the relationship between "100 meter hurdle" and "start running" takes the value "0", indicating that "start running" is the first action in the child-action sequence of the "100 meter hurdle". Similarly, the attribute "Status" of the relationship between "100 meter hurdle" and "single step running" takes the value "2", indicating that "single step running" is the last action in that sequence, and the "Status" of the remaining relationship takes the value "1". According to the order of the child-action sequence, "start running" is adjacent to "single step running" and "single step running" is adjacent to "hurdle". The parent motion of these actions is the "100 meter hurdle", whose motion number is "M003000", so the relationship attribute "parent motion identifier pmID" of these adjacencies takes the value "M003000".
If information about the "800 meter race" motion is added to the data platform on the basis of fig. 6, the entity relationships change to the state shown in fig. 7. The "800 meter race" is composed of the action sequence "start running" - "single step running", beginning with "start running" and ending with "single step running". Correspondingly, two composition relationships are added, whose relationship attribute "Status" takes the values "0" and "2" respectively. The adjacency "start running" adjacent "single step running" now also forms part of the "800 meter race", whose code is "M002001", so the relationship attribute "parent motion identifier pmID" of that adjacency is expanded to "M003000M002001". Similarly, the relationship attribute "parent motion identifier pmID" of "single step running" adjacent "single step running" is also expanded to "M003000M002001".
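The pmID expansion in this example can be sketched as follows, assuming pmID is a concatenation of fixed-width seven-character motion numbers such as "M003000" (an assumption read off the example values):

```python
# Sketch of expanding the adjacency attribute pmID when a second parent
# motion shares the same adjacent child pair, as in the "100 meter
# hurdle" plus "800 meter race" example.  The fixed seven-character
# width of motion numbers is assumed from the example values.
def expand_pmid(pmid, new_parent_code):
    parents = [pmid[i:i + 7] for i in range(0, len(pmid), 7)]
    if new_parent_code not in parents:
        pmid += new_parent_code   # append only if not already recorded
    return pmid
```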
The schemes described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the names of the units do not constitute a limitation of the units themselves.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily conceivable by those skilled in the art within the technical scope of the present application should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of processing human motion data, comprising:
constructing a layered motion model in which an upper layer motion is made up of a sequence of a plurality of lower layer motions;
defining an ontology corresponding to each layer based on the layered motion model, wherein the ontology has coding attributes, name attributes and description attributes;
obtaining normalized motion data, wherein the normalized motion data comprises layering categories, motion codes and additional information;
instantiating an ontology corresponding to the hierarchical category into a motion entity by utilizing the normalized motion data, wherein the coding attribute value of the ontology is the motion code, and the name attribute and the description attribute of the ontology are assigned according to the additional information;
establishing an association relationship among a plurality of moving entities, wherein the association relationship comprises a directed composition relationship, an adjacency relationship and a possession relationship;
and storing the plurality of moving entities and the association relation.
2. The method of claim 1, wherein the constructing a layered motion model comprises:
dividing the continuous uncertain periodic process of human body movement into four ordered levels of gestures, actions, behaviors and habits;
defining the gestures as the values of gesture bases fixed in sequence for coding;
defining actions as combined coding according to the time stamps of the key gestures and the corresponding gesture codes;
defining the behavior as uniformly coded according to predefined semantics;
habit is defined as compact coding in terms of accumulating natural numbers.
3. The method of claim 2, wherein the constructing a layered motion model further comprises: a complete set of gesture codes, or a subset thereof, is constructed.
4. The method of claim 2, wherein the ontology comprises a gesture base ontology and gesture ontologies, action ontologies, behavior ontologies, habit ontologies corresponding to each layer of the layered motion model.
5. The method of claim 2, wherein the acquiring normalized motion data comprises:
classifying and modeling human body motions by using a layered motion model;
and obtaining normalized motion data according to the classification and modeling results.
6. The method of claim 1, wherein the composition relationship has a first composition relationship attribute that indicates that a child motion that constitutes a parent motion is a first child motion, a last child motion, or an intermediate child motion in a sequence of child motions.
7. The method of claim 1, wherein the adjacency has a first adjacency attribute representing a motion number or concatenation of motion numbers of at least one common parent motion of two adjacent child motions.
8. The method of claim 5, wherein classifying and modeling human motion using a layered motion model includes one or more of:
extracting a key frame sequence from the first format motion data as a key gesture sequence;
dividing the key gesture sequence into at least one action;
attributing a plurality of actions to a behavior;
attributing a plurality of behaviors to a habit.
9. The method of claim 8, wherein the deriving normalized motion data from the classification and modeling results comprises:
converting the key gesture sequences into gesture codes respectively;
determining an action code of each action according to the gesture code of the key gesture included by the action and acquiring additional information of the action;
acquiring additional information of the behavior and uniformly coding the behavior according to predefined semantics;
acquiring additional information of the habit and compactly coding the habit according to the accumulated natural number mode.
10. An electronic device, the electronic device comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-9.
CN202310269065.5A 2022-11-16 2023-03-20 Method for processing human motion data and electronic equipment Active CN116189309B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211438502 2022-11-16
CN2022114385023 2022-11-16

Publications (2)

Publication Number Publication Date
CN116189309A true CN116189309A (en) 2023-05-30
CN116189309B CN116189309B (en) 2024-01-30

Family

ID=86448902

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202310269065.5A Active CN116189309B (en) 2022-11-16 2023-03-20 Method for processing human motion data and electronic equipment
CN202310268926.8A Active CN116469159B (en) 2022-11-16 2023-03-20 Method for acquiring human motion data and electronic equipment
CN202310269086.7A Active CN116189310B (en) 2022-11-16 2023-03-20 Method for providing human motion data set and electronic equipment

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202310268926.8A Active CN116469159B (en) 2022-11-16 2023-03-20 Method for acquiring human motion data and electronic equipment
CN202310269086.7A Active CN116189310B (en) 2022-11-16 2023-03-20 Method for providing human motion data set and electronic equipment

Country Status (1)

Country Link
CN (3) CN116189309B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116469159A (en) * 2022-11-16 2023-07-21 北京理工大学 Method for acquiring human motion data and electronic equipment

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101180883A (en) * 2005-04-13 2008-05-14 诺基亚公司 Method, device and system for effectively coding and decoding of video data
CN110502564A (en) * 2019-08-28 2019-11-26 北京理工大学 Motion characteristic data library generating method, search method and terminal based on posture base
CN111339313A (en) * 2020-02-18 2020-06-26 北京航空航天大学 Knowledge base construction method based on multi-mode fusion
CN113143257A (en) * 2021-02-09 2021-07-23 国体智慧体育技术创新中心(北京)有限公司 Generalized application system and method based on individual movement behavior hierarchical model
CN113987285A (en) * 2021-12-27 2022-01-28 北京理工大学 Hidden state-based motion characteristic database generation method and search method
CN114398499A (en) * 2022-01-25 2022-04-26 北京理工大学 Human motion knowledge graph construction method and system
FR3121831A1 (en) * 2021-04-14 2022-10-21 Zhor Tech MOTION SENSOR DATA ENCODING METHOD, RELATED DEVICE AND SYSTEM

Family Cites Families (19)

Publication number Priority date Publication date Assignee Title
US20140337373A1 (en) * 2013-05-07 2014-11-13 Magnet Systems, Inc. System for managing graph queries on relationships among entities using graph index
US10225567B2 (en) * 2013-10-08 2019-03-05 Sharp Kabushiki Kaisha Image decoder, image encoder, and encoded data converter
CN105320944B (en) * 2015-10-24 2019-09-27 西安电子科技大学 A kind of human body behavior prediction method based on human skeleton motion information
US11449061B2 (en) * 2016-02-29 2022-09-20 AI Incorporated Obstacle recognition method for autonomous robots
US20180043245A1 (en) * 2016-08-10 2018-02-15 Yuanfeng Zhu Simulation System for Balance Control in Interactive Motion
CN107169117B (en) * 2017-05-25 2020-11-10 西安工业大学 Hand-drawn human motion retrieval method based on automatic encoder and DTW
CA3076239A1 (en) * 2017-10-02 2019-04-11 Blackthorn Therapeutics, Inc. Methods and tools for detecting, diagnosing, predicting, prognosticating, or treating a neurobehavioral phenotype in a subject
CN108805080A (en) * 2018-06-12 2018-11-13 上海交通大学 Multi-level depth Recursive Networks group behavior recognition methods based on context
WO2021156647A1 (en) * 2020-02-06 2021-08-12 Mark Oleynik Robotic kitchen hub systems and methods for minimanipulation library
EP4222977A1 (en) * 2020-09-30 2023-08-09 Nokia Technologies Oy A method, an apparatus and a computer program product for video encoding/decoding
CN112906520A (en) * 2021-02-04 2021-06-04 中国科学院软件研究所 Gesture coding-based action recognition method and device
CN114912005A (en) * 2021-02-08 2022-08-16 京东方科技集团股份有限公司 Exercise recommendation method, device, equipment and medium
CN113583980B (en) * 2021-08-26 2023-12-12 中国农业大学 Porcine reproductive and respiratory syndrome mutant virus and construction method and application thereof
CN114676260A (en) * 2021-12-15 2022-06-28 清华大学 Human body bone motion rehabilitation model construction method based on knowledge graph
CN113989943B (en) * 2021-12-27 2022-03-11 北京理工大学 Distillation loss-based human body motion increment identification method and device
CN114819598A (en) * 2022-04-20 2022-07-29 首都医科大学附属北京天坛医院 Examination and evaluation method and device for lumbar puncture and storage medium
CN114943987A (en) * 2022-06-07 2022-08-26 首都体育学院 Motion behavior knowledge graph construction method adopting PAMS motion coding
CN115294228B (en) * 2022-07-29 2023-07-11 北京邮电大学 Multi-figure human body posture generation method and device based on modal guidance
CN116189309B (en) * 2022-11-16 2024-01-30 北京理工大学 Method for processing human motion data and electronic equipment

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN101180883A (en) * 2005-04-13 2008-05-14 诺基亚公司 Method, device and system for effectively coding and decoding of video data
CN110502564A (en) * 2019-08-28 2019-11-26 北京理工大学 Motion characteristic data library generating method, search method and terminal based on posture base
CN111339313A (en) * 2020-02-18 2020-06-26 北京航空航天大学 Knowledge base construction method based on multi-mode fusion
CN113143257A (en) * 2021-02-09 2021-07-23 国体智慧体育技术创新中心(北京)有限公司 Generalized application system and method based on individual movement behavior hierarchical model
FR3121831A1 (en) * 2021-04-14 2022-10-21 Zhor Tech MOTION SENSOR DATA ENCODING METHOD, RELATED DEVICE AND SYSTEM
CN113987285A (en) * 2021-12-27 2022-01-28 北京理工大学 Hidden state-based motion characteristic database generation method and search method
CN114398499A (en) * 2022-01-25 2022-04-26 北京理工大学 Human motion knowledge graph construction method and system

Non-Patent Citations (1)

Title
YUE Xiaoning: "Data Statistics and Analysis" (数据统计与分析), China Machine Press, pages 191-192 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116469159A (en) * 2022-11-16 2023-07-21 北京理工大学 Method for acquiring human motion data and electronic equipment
CN116469159B (en) * 2022-11-16 2023-11-14 北京理工大学 Method for acquiring human motion data and electronic equipment

Also Published As

Publication number Publication date
CN116189310B (en) 2024-01-23
CN116189309B (en) 2024-01-30
CN116189310A (en) 2023-05-30
CN116469159B (en) 2023-11-14
CN116469159A (en) 2023-07-21

Similar Documents

Publication Publication Date Title
Kapadia et al. Efficient motion retrieval in large motion databases
CN102521843B (en) Three-dimensional human body motion analysis and synthesis method based on manifold learning
CN116189309B (en) Method for processing human motion data and electronic equipment
CN108520166A (en) A kind of drug targets prediction technique based on multiple similitude network wandering
CN102122291A (en) Blog friend recommendation method based on tree log pattern analysis
CN102760151A (en) Implementation method of open source software acquisition and searching system
CN105046720B (en) Behavior segmentation method based on string representation of human motion capture data
CN102542066A (en) Video clustering method, sorting method, video search method and corresponding devices
Sedmidubsky et al. A key-pose similarity algorithm for motion data retrieval
CN110275744B (en) Method and system for making scalable user interface
CN110516112B (en) Human body action retrieval method and device based on hierarchical model
CN114398499A (en) Human motion knowledge graph construction method and system
Tang et al. PAMS-DP: Building a Unified Open PAMS Human Movement Data Platform
Naik et al. Spatio-temporal querying recurrent multimedia databases using a semantic sequence state graph
CN110364265A (en) Data value generation and implementation method based on a health data bank
Kurokawa et al. Representation and retrieval of video scene by using object actions and their spatio-temporal relationships
Riaz et al. Relational databases for motion data
Amagasa et al. Implementing time-interval class for managing temporal data
Thabtah et al. A study of predictive accuracy for four associative classifiers
Riaz et al. A relational database for human motion data
Zhang et al. 3D human motion retrieval based on human hierarchical index structure
Gao et al. Content-based human motion retrieval with automatic transition
Echavarria et al. Studying shape semantics of an architectural moulding collection: Classifying style based on shape analysis methods
Groth Attribute field K-means: clustering trajectories with attribute by fitting multiple fields
CN118245064A (en) Code retrieval method, system, equipment and medium based on user intention enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant