CN114399575A - Real-time correction method and system for motion capture data - Google Patents

Real-time correction method and system for motion capture data

Info

Publication number
CN114399575A
CN114399575A (application CN202111575792.1A)
Authority
CN
China
Prior art keywords
data, real-time, submodule, capture data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111575792.1A
Other languages
Chinese (zh)
Inventor
黄昌正
周言明
陈曦
王映西
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huantek Co ltd
Dongguan Yilian Interation Information Technology Co ltd
Original Assignee
Guangzhou Huantek Co ltd
Dongguan Yilian Interation Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huantek Co ltd and Dongguan Yilian Interation Information Technology Co ltd
Priority to CN202111575792.1A
Publication of CN114399575A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a real-time correction method and a real-time correction system for motion capture data. The method includes the following steps: constructing a human motion recognition model; receiving raw motion capture data in real time; detecting in real time whether a missing data portion exists in the raw motion capture data; if so, generating real-time supplementary data for the missing data portion using the human motion recognition model; and correcting the real-time raw motion capture data with the real-time supplementary data to obtain real-time corrected motion capture data.

Description

Real-time correction method and system for motion capture data
Technical Field
The invention relates to the technical field of motion capture, in particular to a real-time correction method and a real-time correction system for motion capture data.
Background
The core of a motion capture system is to process, by computer, the data generated by a moving object into three-dimensional spatial coordinates, which can then be rendered or otherwise used, for example to drive a real-time virtual animation model or to control a physical robot in real time. Real-time motion capture depends on timely data transmission; poor timeliness seriously degrades the real-time behaviour of the captured motion. During real-time capture, transmission may be disturbed by various environmental factors, so data loss can occur. When the system receives incomplete real-time animation data, the real-time application of that data may be affected, for example a character in a virtual animation freezes, or a physical robot under real-time control stops moving.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a real-time correction method for motion capture data and a corresponding real-time correction system for motion capture data that overcome, or at least partially solve, the above problems.
In order to solve the above problems, an embodiment of the present invention discloses a real-time correction method for motion capture data, including:
constructing a human motion recognition model;
receiving raw motion capture data in real time;
detecting in real time whether a missing data portion exists in the raw motion capture data;
if so, generating real-time supplementary data for the missing data portion using the human motion recognition model;
and correcting the real-time raw motion capture data with the real-time supplementary data to obtain real-time corrected motion capture data.
Optionally, the step of detecting in real time whether a missing data portion exists in the raw motion capture data includes:
parsing the raw motion capture data in real time to obtain the current data frame sequence number;
detecting whether the current data frame sequence number is consecutive with the previous data frame sequence number;
if so, determining that the real-time raw motion capture data has no missing data portion;
if not, determining the data segment corresponding to the missing frame sequence numbers as a missing data portion.
Optionally, the step of generating real-time supplementary data for the missing data portion using the human motion recognition model includes:
extracting, from the raw motion capture data, a first data segment immediately preceding the missing data portion;
identifying a first motion behavior corresponding to the first data segment using the human motion recognition model;
predicting a motion path of the human limbs based on the first motion behavior;
and generating real-time supplementary data from the predicted motion path of the human limbs.
Optionally, the step of constructing the human motion recognition model comprises:
acquiring training sample data;
constructing an initial neural network model;
and training the initial neural network model with the training sample data to obtain the human motion recognition model.
An embodiment of the invention further discloses a real-time correction system for motion capture data, including:
a human motion recognition model construction module, configured to construct a human motion recognition model;
a raw motion capture data receiving module, configured to receive raw motion capture data in real time;
a data loss detection module, configured to detect in real time whether a missing data portion exists in the raw motion capture data;
a real-time supplementary data generation module, configured to generate, if a missing data portion is detected, real-time supplementary data for the missing data portion using the human motion recognition model;
and a data correction module, configured to correct the real-time raw motion capture data with the real-time supplementary data to obtain real-time corrected motion capture data.
Optionally, the data loss detection module includes:
a data parsing submodule, configured to parse the raw motion capture data in real time to obtain the current data frame sequence number;
a frame number detection submodule, configured to detect whether the current data frame sequence number is consecutive with the previous data frame sequence number;
a data integrity determination submodule, configured to determine, if the sequence numbers are consecutive, that the real-time raw motion capture data has no missing data portion;
and a data loss determination submodule, configured to determine, if the sequence numbers are not consecutive, the data segment corresponding to the missing frame sequence numbers as a missing data portion.
Optionally, the real-time supplementary data generation module includes:
a first data segment extraction submodule, configured to extract, from the raw motion capture data, a first data segment immediately preceding the missing data portion;
a first motion behavior identification submodule, configured to identify a first motion behavior corresponding to the first data segment using the human motion recognition model;
a human limb motion path prediction submodule, configured to predict a motion path of the human limbs based on the first motion behavior;
and a real-time supplementary data generation submodule, configured to generate real-time supplementary data from the predicted motion path of the human limbs.
Optionally, the human motion recognition model construction module includes:
a training sample data acquisition submodule, configured to acquire training sample data;
an initial neural network model construction submodule, configured to construct an initial neural network model;
and a neural network training submodule, configured to train the initial neural network model with the training sample data to obtain the human motion recognition model.
Embodiments of the invention have the following advantages: a human motion recognition model is constructed; raw motion capture data are received in real time; whether a missing data portion exists in the raw motion capture data is detected in real time; if so, real-time supplementary data are generated for the missing portion using the human motion recognition model; and the real-time raw motion capture data are corrected with the supplementary data to obtain real-time corrected motion capture data. In this way, lost portions of the capture data are filled in on the fly, so that real-time applications such as a virtual character or a remotely controlled robot keep moving naturally despite packet loss.
Drawings
Fig. 1 is a flowchart illustrating a first embodiment of a method for real-time correction of motion capture data according to the present invention.
Fig. 2 is a structural block diagram of a first embodiment of a real-time correction system for motion capture data according to the present invention.
Detailed Description
In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments of the invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, a flowchart of the steps of a first embodiment of the real-time correction method for motion capture data according to the present invention is shown; the method may specifically include the following steps:
step 101, constructing a human motion recognition model;
in the embodiment of the invention, a human motion recognition model is firstly constructed, and the human motion recognition model can recognize various basic motion states of the human body, including motion models of various basic actions of the human body, wherein the basic actions include walking, jumping, squatting and the like. Specifically, the step of constructing the human motion recognition model includes:
acquiring training sample data;
constructing an initial neural network model;
and training the initial neural network model with the training sample data to obtain the human motion recognition model.
In this embodiment of the invention, training sample data of human walking, jumping, squatting and similar actions are first acquired; after an initial neural network model is constructed, it is trained with the training sample data to obtain the human motion recognition model.
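As an illustration of step 101, the sketch below builds and trains a minimal recurrent classifier. The patent does not specify a network architecture or training procedure, so the PyTorch LSTM, the window length, the channel count and the three-class action set are assumptions made only for the example.

```python
import torch
import torch.nn as nn

ACTIONS = ["walk", "jump", "squat"]      # basic actions named in the description
WINDOW, CHANNELS = 60, 51                # assumed: 60 frames of 17 sensors x 3 values

class MotionRecognizer(nn.Module):
    """Classifies a short window of capture frames into a basic action."""
    def __init__(self, channels=CHANNELS, hidden=64, classes=len(ACTIONS)):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):                # x: (batch, WINDOW, CHANNELS)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])          # logits over the action classes

def train(model, samples, labels, epochs=10, lr=1e-3):
    """Trains the initial neural network model on labelled sample windows."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(samples), labels)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # Synthetic stand-in for real training sample data of walking/jumping/squatting.
    x = torch.randn(32, WINDOW, CHANNELS)
    y = torch.randint(0, len(ACTIONS), (32,))
    recognizer = train(MotionRecognizer(), x, y)
```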
Step 102, receiving raw motion capture data in real time;
the original moving capture data can be collected in real time by a plurality of inertial sensors distributed at each part of a human body and then transmitted to a system in a wired or wireless mode, so that the original moving capture data can be received in real time.
Step 103, detecting in real time whether a missing data portion exists in the raw motion capture data;
In this embodiment of the present invention, data packets may be lost while the raw motion capture data are being received; therefore, after the raw motion capture data are received, it is necessary to detect in real time whether a missing data portion exists.
The step of detecting in real time whether a missing data portion exists in the raw motion capture data includes:
parsing the raw motion capture data in real time to obtain the current data frame sequence number;
detecting whether the current data frame sequence number is consecutive with the previous data frame sequence number;
if so, determining that the real-time raw motion capture data has no missing data portion;
if not, determining the data segment corresponding to the missing frame sequence numbers as a missing data portion.
For example, if the current data frame sequence number is 0551 and the previous one is 0550, the sequence numbers are consecutive and no data have been lost. If the current sequence number is 0551 and the previous one is 0548, the sequence numbers are not consecutive, and the raw motion capture data are missing the packets corresponding to frame sequence numbers 0549 and 0550.
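The continuity check can be captured in a few lines; the sketch below mirrors the 0548 to 0551 example above, and the helper name is only illustrative.

```python
def find_missing(prev_seq, curr_seq):
    """Returns the frame sequence numbers lost between two received frames."""
    if curr_seq == prev_seq + 1:
        return []                                 # consecutive: no data loss
    return list(range(prev_seq + 1, curr_seq))    # 548 -> 551 gives [549, 550]

assert find_missing(550, 551) == []
assert find_missing(548, 551) == [549, 550]
```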
Step 104, if so, generating real-time supplementary data for the missing data portion using the human motion recognition model;
When part of the raw motion capture data is lost, the real-time virtual animation model built from it becomes distorted, for example the limbs of a virtual character freeze. Real-time supplementary data therefore need to be generated to correct the raw motion capture data affected by the packet loss, so that the motion of the real-time human virtual model remains realistic.
Specifically, the step of generating real-time supplementary data for the missing data portion using the human motion recognition model includes:
extracting, from the raw motion capture data, a first data segment immediately preceding the missing data portion;
In this embodiment, the length of the first data segment is set by a technician according to the actual situation: the longer the segment, the more complete the action analysis, but the greater the processing load.
identifying a first motion behavior corresponding to the first data segment using the human motion recognition model;
The first motion behavior may be walking, jumping, squatting or the like. For example, if the human motion recognition model identifies the first motion behavior corresponding to the first data segment as walking, the captured subject was walking before the data loss.
predicting a motion path of the human limbs based on the first motion behavior;
Because data loss caused by a poor communication environment generally lasts only a short time, the captured subject can be assumed to continue the motion it was performing before the loss, so the motion path of the human limbs during the loss can be predicted from the first motion behavior.
For example, if the first motion behavior is walking, the captured subject can be assumed to continue walking while the raw motion capture data are lost, and the motion path during the loss can be predicted on that basis.
and generating real-time supplementary data from the predicted motion path of the human limbs.
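A minimal sketch of step 104 follows, under the assumption that the identified action simply continues along its current path. The recognizer argument stands in for the human motion recognition model, and the linear extrapolation is one simple realisation, not the method the patent prescribes.

```python
import numpy as np

def supplement(segment_before_gap, n_missing, recognizer):
    """segment_before_gap: (frames, channels) array taken just before the loss."""
    action = recognizer(segment_before_gap)          # first motion behavior, e.g. "walk"
    # Assume the identified action continues, and extend the limb motion path
    # linearly from the last observed per-channel velocity.
    velocity = segment_before_gap[-1] - segment_before_gap[-2]
    last = segment_before_gap[-1]
    frames = np.stack([last + velocity * (i + 1) for i in range(n_missing)])
    return action, frames

seg = np.array([[0.0, 1.0], [0.1, 1.0], [0.2, 1.0]])    # toy 2-channel segment
action, fill = supplement(seg, 2, lambda s: "walk")     # stand-in recognizer
# fill -> [[0.3, 1.0], [0.4, 1.0]]: two frames continuing the walking path
```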
Step 105, correcting the real-time raw motion capture data with the real-time supplementary data to obtain real-time corrected motion capture data.
In this embodiment of the invention, the real-time supplementary data are filled into the missing portion of the real-time raw motion capture data, thereby correcting the raw data and yielding the real-time corrected motion capture data.
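As a sketch of step 105, the supplementary frames can be keyed by the lost sequence numbers and merged back into the received stream; the dictionary-based buffer below is an illustrative choice rather than part of the patent.

```python
def correct_stream(received, supplementary):
    """received / supplementary: {frame_sequence_number: frame} mappings."""
    corrected = dict(received)
    for seq, frame in supplementary.items():
        corrected.setdefault(seq, frame)     # fill only the frames that were lost
    return [corrected[seq] for seq in sorted(corrected)]

received = {548: "f548", 551: "f551"}
filled = {549: "s549", 550: "s550"}
assert correct_stream(received, filled) == ["f548", "s549", "s550", "f551"]
```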
In this embodiment of the invention, a human motion recognition model is constructed; raw motion capture data are received in real time; whether a missing data portion exists in the raw motion capture data is detected in real time; if so, real-time supplementary data are generated for the missing portion using the human motion recognition model; and the real-time raw motion capture data are corrected with the supplementary data to obtain real-time corrected motion capture data.
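Tying the steps together, a streaming driver might look like the sketch below. Every helper it calls (the receive-side generator of step 102, find_missing, supplement, and the recognizer) is a hypothetical name carried over from the earlier examples, not an interface defined by the patent.

```python
import numpy as np

def run_pipeline(frames, recognizer, window=3):
    """frames: iterable of (sequence_number, channel_values); yields frames in order."""
    history, prev_seq = [], None
    for seq, values in frames:
        values = np.asarray(values, dtype=float)
        if prev_seq is not None:
            missing = find_missing(prev_seq, seq)                      # step 103
            if missing and len(history) >= 2:
                segment = np.stack(history[-window:])                  # segment before the gap
                _, fill = supplement(segment, len(missing), recognizer)  # step 104
                for filled_frame in fill:                              # step 105: splice in order
                    yield filled_frame
        history.append(values)
        prev_seq = seq
        yield values                                                   # pass through the received frame
```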
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to Fig. 2, a structural block diagram of a first embodiment of the real-time correction system for motion capture data according to the present invention is shown; the system may specifically include the following modules:
a human motion recognition model construction module 201, configured to construct a human motion recognition model;
a raw motion capture data receiving module 202, configured to receive raw motion capture data in real time;
a data loss detection module 203, configured to detect in real time whether a missing data portion exists in the raw motion capture data;
a real-time supplementary data generation module 204, configured to generate, if a missing data portion is detected, real-time supplementary data for the missing data portion using the human motion recognition model;
and a data correction module 205, configured to correct the real-time raw motion capture data with the real-time supplementary data to obtain real-time corrected motion capture data.
In an embodiment of the present invention, the data loss detection module includes:
a data parsing submodule, configured to parse the raw motion capture data in real time to obtain the current data frame sequence number;
a frame number detection submodule, configured to detect whether the current data frame sequence number is consecutive with the previous data frame sequence number;
a data integrity determination submodule, configured to determine, if the sequence numbers are consecutive, that the real-time raw motion capture data has no missing data portion;
and a data loss determination submodule, configured to determine, if the sequence numbers are not consecutive, the data segment corresponding to the missing frame sequence numbers as a missing data portion.
In an embodiment of the present invention, the real-time supplementary data generation module includes:
a first data segment extraction submodule, configured to extract, from the raw motion capture data, a first data segment immediately preceding the missing data portion;
a first motion behavior identification submodule, configured to identify a first motion behavior corresponding to the first data segment using the human motion recognition model;
a human limb motion path prediction submodule, configured to predict a motion path of the human limbs based on the first motion behavior;
and a real-time supplementary data generation submodule, configured to generate real-time supplementary data from the predicted motion path of the human limbs.
In an embodiment of the invention, the human motion recognition model construction module includes:
a training sample data acquisition submodule, configured to acquire training sample data;
an initial neural network model construction submodule, configured to construct an initial neural network model;
and a neural network training submodule, configured to train the initial neural network model with the training sample data to obtain the human motion recognition model.
Since the system embodiment is substantially similar to the method embodiment, its description is brief; for relevant details, refer to the corresponding parts of the method embodiment.
An embodiment of the present invention further provides an apparatus, including:
a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements each process of the above method embodiment and achieves the same technical effect; details are not repeated here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements each process of the above real-time correction method for motion capture data and achieves the same technical effect; details are not repeated here to avoid repetition.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be cross-referenced.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal that comprises the element.
The real-time correction method and the real-time correction system for motion capture data provided by the invention have been described in detail above. Specific examples have been used to explain the principle and implementation of the invention, and the description of these examples is only intended to help understand the method and its core idea. At the same time, a person skilled in the art may, following the idea of the invention, vary the specific embodiments and the scope of application; in summary, the content of this specification should not be construed as limiting the invention.

Claims (9)

1. A real-time correction method for motion capture data, characterized by comprising the following steps:
constructing a human motion recognition model;
receiving raw motion capture data in real time;
detecting in real time whether a missing data portion exists in the raw motion capture data;
if so, generating real-time supplementary data for the missing data portion using the human motion recognition model;
and correcting the real-time raw motion capture data with the real-time supplementary data to obtain real-time corrected motion capture data.
2. The method of claim 1, wherein the step of detecting in real time whether a missing data portion exists in the raw motion capture data comprises:
parsing the raw motion capture data in real time to obtain the current data frame sequence number;
detecting whether the current data frame sequence number is consecutive with the previous data frame sequence number;
if so, determining that the real-time raw motion capture data has no missing data portion;
if not, determining the data segment corresponding to the missing frame sequence numbers as a missing data portion.
3. The method of claim 1, wherein the step of generating real-time supplementary data for the missing data portion using the human motion recognition model comprises:
extracting, from the raw motion capture data, a first data segment immediately preceding the missing data portion;
identifying a first motion behavior corresponding to the first data segment using the human motion recognition model;
predicting a motion path of the human limbs based on the first motion behavior;
and generating real-time supplementary data from the predicted motion path of the human limbs.
4. The method of claim 1, wherein the step of constructing a human motion recognition model comprises:
acquiring training sample data;
constructing an initial neural network model;
and training the initial neural network model with the training sample data to obtain the human motion recognition model.
5. A real-time correction system for motion capture data, the system comprising:
a human motion recognition model construction module, configured to construct a human motion recognition model;
a raw motion capture data receiving module, configured to receive raw motion capture data in real time;
a data loss detection module, configured to detect in real time whether a missing data portion exists in the raw motion capture data;
a real-time supplementary data generation module, configured to generate, if a missing data portion is detected, real-time supplementary data for the missing data portion using the human motion recognition model;
and a data correction module, configured to correct the real-time raw motion capture data with the real-time supplementary data to obtain real-time corrected motion capture data.
6. The system of claim 5, wherein the data loss detection module comprises:
a data parsing submodule, configured to parse the raw motion capture data in real time to obtain the current data frame sequence number;
a frame number detection submodule, configured to detect whether the current data frame sequence number is consecutive with the previous data frame sequence number;
a data integrity determination submodule, configured to determine, if the sequence numbers are consecutive, that the real-time raw motion capture data has no missing data portion;
and a data loss determination submodule, configured to determine, if the sequence numbers are not consecutive, the data segment corresponding to the missing frame sequence numbers as a missing data portion.
7. The system of claim 5, wherein the real-time supplementary data generation module comprises:
a first data segment extraction submodule, configured to extract, from the raw motion capture data, a first data segment immediately preceding the missing data portion;
a first motion behavior identification submodule, configured to identify a first motion behavior corresponding to the first data segment using the human motion recognition model;
a human limb motion path prediction submodule, configured to predict a motion path of the human limbs based on the first motion behavior;
and a real-time supplementary data generation submodule, configured to generate real-time supplementary data from the predicted motion path of the human limbs.
8. The system of claim 5, wherein the human motion recognition model construction module comprises:
a training sample data acquisition submodule, configured to acquire training sample data;
an initial neural network model construction submodule, configured to construct an initial neural network model;
and a neural network training submodule, configured to train the initial neural network model with the training sample data to obtain the human motion recognition model.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the real-time correction method for motion capture data according to any one of claims 1 to 4.
CN202111575792.1A 2021-12-22 2021-12-22 Real-time correction method and system for motion capture data Pending CN114399575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111575792.1A CN114399575A (en) 2021-12-22 2021-12-22 Real-time correction method and system for motion capture data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111575792.1A CN114399575A (en) 2021-12-22 2021-12-22 Real-time correction method and system for motion capture data

Publications (1)

Publication Number Publication Date
CN114399575A (en) 2022-04-26

Family

ID=81227156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111575792.1A Pending CN114399575A (en) 2021-12-22 2021-12-22 Real-time correction method and system for motion capture data

Country Status (1)

Country Link
CN (1) CN114399575A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination