CN115474091A - Motion capture method and device based on decomposition metagraph

Motion capture method and device based on decomposition metagraph

Info

Publication number
CN115474091A
Authority
CN
China
Prior art keywords
image data
motion
recognition model
action
gell
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211087170.9A
Other languages
Chinese (zh)
Inventor
袁潮
邓迪旻
肖占中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202211087170.9A priority Critical patent/CN115474091A/en
Publication of CN115474091A publication Critical patent/CN115474091A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations

Abstract

The invention discloses a motion capture method and device based on a decomposition metagraph. The method comprises the following steps: acquiring original image data; performing Gell conversion on the original image data to obtain a Gell image data set; capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating target image data, wherein N is a preset image motion threshold parameter; and inputting the target image data into an action recognition model to obtain an image screening result. The invention solves the technical problems of prior-art image motion capture methods, which identify and analyze only the original motion data and motion-capture still pictures through fixed frames to obtain the characteristic parameters and motion data of the motion to be captured; because the original image data cannot be converted, the pre- and post-positioning frequencies of pixel segmentation increase, reducing the precision and efficiency of capturing moving image data.

Description

Motion capture method and device based on decomposition metagraph
Technical Field
The invention relates to the field of image processing, in particular to a motion capture method and device based on a decomposition metagraph.
Background
With the continuous development of intelligent science and technology, people increasingly use intelligent devices in daily life, work, and study. Intelligent technologies have improved people's quality of life and increased the efficiency of their study and work.
At present, for the image capture process of high-precision camera equipment, technicians in the field usually identify dynamic images directly: the dynamic image data is divided into fixed frames, screenshots are taken, and the content of each frame is identified separately, serving technical purposes such as security, monitoring, detection, and analysis. However, prior-art image motion capture methods identify and analyze only the original motion data and motion-capture still pictures through fixed frames to obtain the characteristic parameters and motion data of the motion to be captured; the original image data cannot be converted, the pre- and post-positioning frequencies of pixel segmentation increase, and the precision and efficiency of capturing moving image data decrease.
In view of the above problems, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a motion capture method and device based on a decomposition metagraph, to solve at least the technical problems of prior-art image motion capture methods: original motion data and motion-capture still pictures are identified and analyzed only through fixed frames to obtain the characteristic parameters and motion data of the motion to be captured; the original image data cannot be converted, the pre- and post-positioning frequencies of pixel segmentation increase, and the precision and efficiency of capturing moving image data decrease.
According to one aspect of the embodiments of the present invention, a motion capture method based on a decomposition metagraph is provided, including: acquiring original image data; performing Gell conversion on the original image data to obtain a Gell image data set; capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating target image data, wherein N is a preset image motion threshold parameter; and inputting the target image data into an action recognition model to obtain an image screening result.
Optionally, performing the Gell conversion on the original image data to obtain the Gell image data set includes: converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data.
Optionally, before capturing moving-frame image data whose motion threshold in the Gell image data set exceeds [N] and generating target image data, the method further comprises: acquiring the number of action data sets in the Gell image data set; and generating the parameter value N according to the weight factor and the number of action data sets.
Optionally, inputting the target image data into the action recognition model to obtain the image screening result includes: training the action recognition model according to historical data and defining the sensitivity of the action recognition model; extracting action input parameters from the target image data according to the action recognition model; generating a screening object from the action input parameters and the action recognition model; and performing a mapping operation on the screening object to obtain the image screening result, wherein the image screening result includes motion capture parameters.
According to another aspect of the embodiments of the present invention, a motion capture apparatus based on a decomposition metagraph is also provided, including: an acquisition module, used for acquiring original image data; a conversion module, used for performing Gell conversion on the original image data to obtain a Gell image data set; a capturing module, used for capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating target image data, wherein N is a preset image motion threshold parameter; and an input module, used for inputting the target image data into the action recognition model to obtain an image screening result.
Optionally, the conversion module includes: a conversion unit, used for converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data.
Optionally, the apparatus further comprises: the acquisition module, further used for acquiring the number of action data sets in the Gell image data set; and a generating module, used for generating the parameter value N according to the weight factor and the number of action data sets.
Optionally, the input module includes: the training unit is used for training the action recognition model according to historical data and defining the sensitivity of the action recognition model; the extraction unit is used for extracting action input parameters in the target image data according to the action recognition model; the generating unit is used for generating a screening object through the action input parameters and the action recognition model; a mapping unit, configured to perform a mapping operation on the screening object to obtain the image screening result, where the image screening result includes: motion capture parameters.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein, when running, the program controls the device in which the non-volatile storage medium is located to execute a motion capture method based on a decomposition metagraph.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device including a processor and a memory; the memory stores computer-readable instructions for execution by the processor, and the computer-readable instructions, when executed, perform a motion capture method based on a decomposition metagraph.
In the embodiments of the invention, the method acquires original image data; performs Gell conversion on the original image data to obtain a Gell image data set; captures moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generates target image data, wherein N is a preset image motion threshold parameter; and inputs the target image data into the action recognition model to obtain an image screening result. This solves the technical problems of prior-art image motion capture methods, which identify and analyze only the original motion data and motion-capture still pictures through fixed frames to obtain the characteristic parameters and motion data of the motion to be captured; because the original image data cannot be converted, the pre- and post-positioning frequencies of pixel segmentation increase, and the precision and efficiency of capturing moving image data decrease.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow diagram of a method of motion capture based on a decomposition metagraph according to an embodiment of the invention;
FIG. 2 is a block diagram of a motion capture device based on a decomposed metagraph according to an embodiment of the present invention;
FIG. 3 is a block diagram of a terminal device for performing a method according to the present invention, according to an embodiment of the present invention;
FIG. 4 is a memory unit for holding or carrying program code implementing a method according to the invention, according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of a method for motion capture based on a decomposition metagraph, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Example one
Fig. 1 is a flowchart of a motion capture method based on a decomposition metagraph according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
step S102, original image data is acquired.
Specifically, to solve the prior-art problems described above (fixed-frame identification and analysis of original motion data and motion-capture still pictures to obtain the characteristic parameters and motion data of the motion to be captured, inability to convert the original image data, increased pre- and post-positioning frequencies of pixel segmentation, and reduced precision and efficiency of capturing moving image data), the required image data is first collected by a high-precision image acquisition device. The image data may be a static image data set or dynamic video stream data.
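As an illustration only, the acquisition step S102 might be sketched as follows; this assumes an OpenCV-backed video source, and the function name, device index, and frame limit are hypothetical, not taken from the patent:

    # Minimal sketch of step S102, assuming an OpenCV video source.
    # The source identifier and frame limit are illustrative assumptions.
    import cv2
    import numpy as np

    def acquire_original_image_data(source=0, max_frames=100):
        """Collect raw frames (a static image set or a video stream) into an array."""
        cap = cv2.VideoCapture(source)  # device index or video file path
        frames = []
        while len(frames) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break  # stream ended or device unavailable
            frames.append(frame)
        cap.release()
        return np.array(frames)  # shape: (num_frames, height, width, channels)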
And step S104, performing Gell conversion on the original image data to obtain a Gell image data set.
Specifically, after the original image data is acquired and collected, the pixel coordinates and pixel lattice bits in the original image data must be converted by the Gell calculation, so that motion can be captured and recognized on image units of moderate pixel size. The Gell conversion refines the whole-image pixels and evens out the pixel stacking ratio, which improves subsequent use of the refined pixel image data and enhances the technical effects of both whole-image and local processing.
Optionally, performing the Gell conversion on the original image data to obtain the Gell image data set includes: converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data.
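Because the conversion formula survives only as an image, the following sketch shows just the shape of this step under stated assumptions: per-frame pixel parameters P1 and P2 are computed and combined over n terms into the set H. The combining series is a placeholder of the editor's own choosing, not the patent's formula:

    # Hypothetical sketch of the Gell conversion (step S104). The patent's
    # actual formula is an unreproduced image; the series below is a stand-in
    # that only illustrates the inputs (P1, P2, n) and the output set H.
    import numpy as np

    def gell_convert(raw_frames, n=8):
        """Map raw frames to a 'Gell image data set' via pixel parameters P1, P2."""
        assert n > 1, "the patent requires n to be a natural integer greater than 1"
        gell_planes, h_values = [], []
        for frame in raw_frames:
            gray = frame.mean(axis=-1)   # collapse color channels to a float plane
            p1 = float(gray.mean())      # P1: an image-associated pixel parameter
            p2 = float(gray.std())       # P2: a second pixel parameter
            # Placeholder combination over n terms (NOT the patent's formula).
            h = sum((p1 / (p2 + 1e-9)) ** k for k in range(1, n + 1))
            gell_planes.append(gray)
            h_values.append(h)
        return gell_planes, h_values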
And step S106, capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set, and generating target image data, wherein N is a preset image motion threshold parameter.
Specifically, after the Gell image data set is generated, the motion image data that meets the requirements must be extracted from it for subsequent recognition of the motion image. Therefore, the moving-frame image data whose motion threshold in the Gell image data set exceeds [N], where N is a preset image motion threshold parameter, is captured to generate the target image data.
Optionally, before capturing moving-frame image data whose motion threshold in the Gell image data set exceeds [N] and generating target image data, the method further comprises: acquiring the number of action data sets in the Gell image data set; and generating the parameter value N according to the weight factor and the number of action data sets.
Specifically, in order to count and analyze the moving-frame image data in the Gell image data set with the weight factor, the embodiment of the present invention obtains the number of data units in the Gell image data and uses this quantity parameter as the generation condition of the motion threshold parameter, producing the parameter value N.
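Under stated assumptions (the weight-factor product and the mean absolute inter-frame difference are illustrative choices, not specified by the patent), generating N and performing the capture of step S106 might look like this:

    # Sketch of deriving N from a weight factor and the number of action data
    # sets, then capturing frames whose motion exceeds N (step S106).
    # gell_planes is assumed to be the list of 2-D float planes returned by
    # the conversion sketch above.
    import numpy as np

    def generate_threshold_n(num_action_sets, weight_factor=0.5):
        """N is generated from the weight factor and the count of action data sets."""
        return weight_factor * num_action_sets

    def capture_motion_frames(gell_planes, n_threshold):
        """Keep frames whose inter-frame motion exceeds N; they form the target image data."""
        target_image_data = []
        for prev, curr in zip(gell_planes, gell_planes[1:]):
            motion = float(np.abs(curr - prev).mean())  # crude motion measure
            if motion > n_threshold:
                target_image_data.append(curr)
        return target_image_data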
And step S108, inputting the target image data into an action recognition model to obtain an image screening result.
Optionally, inputting the target image data into the action recognition model to obtain the image screening result includes: training the action recognition model according to historical data and defining the sensitivity of the action recognition model; extracting action input parameters from the target image data according to the action recognition model; generating a screening object from the action input parameters and the action recognition model; and performing a mapping operation on the screening object to obtain the image screening result, wherein the image screening result includes motion capture parameters.
Specifically, after the target image data is obtained, its content serves as an important input for motion capture recognition. To increase the efficiency of action recognition, the action recognition model may be trained on historical data and its sensitivity defined; the action input parameters are then extracted from the target image data according to the model; a screening object is generated from the action input parameters and the model; and a mapping operation is performed on the screening object to obtain the image screening result, which includes the motion capture parameters.
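A minimal end-to-end sketch of this stage follows; the class, its methods, and the statistics used are hypothetical stand-ins, since the patent does not specify the model:

    # Hypothetical sketch of step S108: train on historical data, define a
    # sensitivity, extract action input parameters, generate a screening
    # object, and map it to the image screening result. All names illustrative.
    class ActionRecognitionModel:
        def __init__(self, sensitivity=0.8):
            self.sensitivity = sensitivity  # defined alongside training
            self.history_mean = 0.0

        def train(self, historical_values):
            """'Training' is a placeholder: remember a historical baseline."""
            self.history_mean = sum(historical_values) / len(historical_values)

        def extract_parameters(self, target_image_data):
            """Action input parameters: per-frame mean intensity (illustrative)."""
            return [float(frame.mean()) for frame in target_image_data]

        def generate_screening_object(self, params):
            """Screening object: parameters that clear the sensitivity gate."""
            return [p for p in params if p * self.sensitivity > self.history_mean]

    def run_screening(model, historical_values, target_image_data):
        model.train(historical_values)
        params = model.extract_parameters(target_image_data)
        screening_object = model.generate_screening_object(params)
        # Mapping operation: screening object -> image screening result.
        return {"motion_capture_parameters": screening_object}

Chained together, the four sketches trace the flow of FIG. 1: raw frames, Gell image data set, thresholded target image data, image screening result.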
Through this embodiment, the technical problems described above are solved: prior-art image motion capture methods identify and analyze only the original motion data and motion-capture still pictures through fixed frames to obtain the characteristic parameters and motion data of the motion to be captured; the original image data cannot be converted, the pre- and post-positioning frequencies of pixel segmentation increase, and the precision and efficiency of capturing moving image data decrease.
Example two
Fig. 2 is a block diagram of a motion capture apparatus based on a decomposition metagraph according to an embodiment of the present invention. As shown in fig. 2, the apparatus includes:
an obtaining module 20, configured to obtain raw image data.
Specifically, to solve the prior-art problems described above (fixed-frame identification and analysis of original motion data and motion-capture still pictures, inability to convert the original image data, increased pre- and post-positioning frequencies of pixel segmentation, and reduced precision and efficiency of capturing moving image data), the required image data is first collected by a high-precision image acquisition device. The image data may be a static image data set or dynamic video stream data.
And the conversion module 22 is configured to perform Gell conversion on the original image data to obtain a Gell image data set.
Specifically, after the original image data is acquired and collected, the pixel coordinates and pixel lattice bits in the original image data must be converted by the Gell calculation, so that motion can be captured and recognized on image units of moderate pixel size. The Gell conversion refines the whole-image pixels and evens out the pixel stacking ratio, which improves subsequent use of the refined pixel image data and enhances the technical effects of both whole-image and local processing.
Optionally, the conversion module includes: a conversion unit, used for converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data.
And the capturing module 24 is used for capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating target image data, wherein N is a preset image motion threshold parameter.
Specifically, after the Gell image data set is generated, the motion image data that meets the requirements must be extracted from it for subsequent recognition of the motion image. Therefore, the moving-frame image data whose motion threshold in the Gell image data set exceeds [N], where N is a preset image motion threshold parameter, is captured to generate the target image data.
Optionally, the apparatus further comprises: the acquisition module, further used for acquiring the number of action data sets in the Gell image data set; and a generating module, used for generating the parameter value N according to the weight factor and the number of action data sets.
Specifically, in order to count and analyze the moving-frame image data in the Gell image data set with the weight factor, the number of data units in the Gell image data is obtained and used as the generation condition of the motion threshold parameter, producing the parameter value N.
And the input module 26 is configured to input the target image data into the action recognition model to obtain an image screening result.
Optionally, the input module includes: the training unit is used for training the action recognition model according to historical data and defining the sensitivity of the action recognition model; the extraction unit is used for extracting action input parameters in the target image data according to the action recognition model; the generating unit is used for generating a screening object through the action input parameters and the action recognition model; a mapping unit, configured to perform a mapping operation on the screening object to obtain the image screening result, where the image screening result includes: motion capture parameters.
Specifically, after the target image data is obtained, its content serves as an important input for motion capture recognition. To increase the efficiency of action recognition, the action recognition model may be trained on historical data and its sensitivity defined; the action input parameters are then extracted from the target image data according to the model; a screening object is generated from the action input parameters and the model; and a mapping operation is performed on the screening object to obtain the image screening result, which includes the motion capture parameters.
Through this embodiment, the technical problems described above are solved: prior-art image motion capture methods identify and analyze only the original motion data and motion-capture still pictures through fixed frames to obtain the characteristic parameters and motion data of the motion to be captured; the original image data cannot be converted, the pre- and post-positioning frequencies of pixel segmentation increase, and the precision and efficiency of capturing moving image data decrease.
According to another aspect of the embodiments of the present invention, a non-volatile storage medium is further provided; the non-volatile storage medium includes a stored program, and when running, the program controls the device in which the non-volatile storage medium is located to execute a motion capture method based on a decomposition metagraph.
Specifically, the method comprises the following steps: acquiring original image data; performing Gell conversion on the original image data to obtain a Gell image data set; capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating target image data, wherein N is a preset image motion threshold parameter; and inputting the target image data into an action recognition model to obtain an image screening result. Optionally, performing the Gell conversion on the original image data to obtain the Gell image data set includes: converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data. Optionally, before capturing moving-frame image data whose motion threshold in the Gell image data set exceeds [N] and generating target image data, the method further comprises: acquiring the number of action data sets in the Gell image data set; and generating the parameter value N according to the weight factor and the number of action data sets. Optionally, inputting the target image data into the action recognition model to obtain the image screening result includes: training the action recognition model according to historical data and defining the sensitivity of the action recognition model; extracting action input parameters from the target image data according to the action recognition model; generating a screening object from the action input parameters and the action recognition model; and performing a mapping operation on the screening object to obtain the image screening result, wherein the image screening result includes motion capture parameters.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus including a processor and a memory; the memory stores computer-readable instructions for execution by the processor, and the computer-readable instructions, when executed, perform a motion capture method based on a decomposition metagraph.
Specifically, the method comprises the following steps: acquiring original image data; performing Gell conversion on the original image data to obtain a Gell image data set; capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating target image data, wherein N is a preset image motion threshold parameter; and inputting the target image data into an action recognition model to obtain an image screening result. Optionally, performing the Gell conversion on the original image data to obtain the Gell image data set includes: converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data. Optionally, before capturing moving-frame image data whose motion threshold in the Gell image data set exceeds [N] and generating target image data, the method further comprises: acquiring the number of action data sets in the Gell image data set; and generating the parameter value N according to the weight factor and the number of action data sets. Optionally, inputting the target image data into the action recognition model to obtain the image screening result includes: training the action recognition model according to historical data and defining the sensitivity of the action recognition model; extracting action input parameters from the target image data according to the action recognition model; generating a screening object from the action input parameters and the action recognition model; and performing a mapping operation on the screening object to obtain the image screening result, wherein the image screening result includes motion capture parameters.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In addition, fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to realize communication connections between the elements. The memory 33 may comprise a high speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through a wired or wireless connection.
Optionally, the input device 30 may include a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; optionally, the transceiver may be a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, a sound, or other output device.
In this embodiment, the processor of the terminal device includes a module for executing the functions of the modules of the data processing apparatus in each device, and specific functions and technical effects may refer to the foregoing embodiments, which are not described herein again.
Fig. 4 is a schematic hardware structure diagram of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of fig. 3 in an implementation process. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the method in the above-described embodiments.
The memory 42 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The memory 42 may comprise a Random Access Memory (RAM) and may further comprise a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. Processing component 40 may include one or more processors 41 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 40 may include one or more modules that facilitate interaction between processing component 40 and other components. For example, the processing component 40 may include a multimedia module to facilitate interaction between the multimedia component 45 and the processing component 40.
The power supply component 44 provides power to the various components of the terminal device. The power components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a voice recognition mode. The received audio signal may further be stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 also includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing component 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 48 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor assembly 48 may detect the open/closed status of the terminal device, the relative positioning of the components, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log on to a GPRS network and establish communication with the server via the Internet.
From the above, the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 referred to in the embodiment of fig. 4 can be implemented as the input device in the embodiment of fig. 3.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (10)

1. A method for motion capture based on a decomposition metagraph, comprising:
acquiring original image data;
performing Gell conversion on the original image data to obtain a Gell image data set;
capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set, and generating target image data, wherein N is a preset image motion threshold parameter;
and inputting the target image data to an action recognition model to obtain an image screening result.
2. The method of claim 1, wherein performing the Gell conversion on the original image data to obtain the Gell image data set comprises:
converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data.
3. The method of claim 1, wherein before capturing the moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating the target image data, the method further comprises:
acquiring the number of action data sets in the Gell image data set;
and generating the parameter value N according to the weight factor and the number of action data sets.
4. The method of claim 1, wherein inputting the target image data to a motion recognition model, and obtaining image filtering results comprises:
training the action recognition model according to historical data, and defining the sensitivity of the action recognition model;
extracting action input parameters in the target image data according to the action recognition model;
generating a screening object through the action input parameters and the action recognition model;
performing mapping operation on the screening object to obtain the image screening result, wherein the image screening result comprises: motion capture parameters.
5. A motion capture device based on a decomposition metagraph, comprising:
the acquisition module is used for acquiring original image data;
the conversion module is used for carrying out the Gell conversion on the original image data to obtain a Gell image data set;
the capturing module is used for capturing moving-frame image data whose motion threshold exceeds [N] in the Gell image data set and generating target image data, wherein N is a preset image motion threshold parameter;
and the input module is used for inputting the target image data to the action recognition model to obtain an image screening result.
6. The apparatus of claim 5, wherein the conversion module comprises:
a conversion unit for converting the original image data parameters through the following formula to obtain the Gell image data set
H = f(P1, P2, n) (the exact formula appears only as an image in the source publication and is not reproduced here)
wherein H is the Gell image data set, n is a natural integer greater than 1, and P1 and P2 are image-associated pixel parameters in the original image data.
7. The apparatus of claim 5, further comprising:
the acquisition module is further used for acquiring the number of action data sets in the Gell image data set;
and the generating module is used for generating the parameter value N according to the weight factor and the number of action data sets.
8. The apparatus of claim 5, wherein the input module comprises:
the training unit is used for training the action recognition model according to historical data and defining the sensitivity of the action recognition model;
the extraction unit is used for extracting action input parameters in the target image data according to the action recognition model;
the generating unit is used for generating a screening object through the action input parameters and the action recognition model;
a mapping unit, configured to perform a mapping operation on the screening object to obtain the image screening result, where the image screening result includes: motion capture parameters.
9. A non-volatile storage medium, comprising a stored program, wherein the program when executed controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 4.
CN202211087170.9A 2022-09-07 2022-09-07 Motion capture method and device based on decomposition metagraph Pending CN115474091A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211087170.9A CN115474091A (en) 2022-09-07 2022-09-07 Motion capture method and device based on decomposition metagraph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211087170.9A CN115474091A (en) 2022-09-07 2022-09-07 Motion capture method and device based on decomposition metagraph

Publications (1)

Publication Number Publication Date
CN115474091A true CN115474091A (en) 2022-12-13

Family

ID=84368454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211087170.9A Pending CN115474091A (en) 2022-09-07 2022-09-07 Motion capture method and device based on decomposition metagraph

Country Status (1)

Country Link
CN (1) CN115474091A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0916774A (en) * 1995-07-04 1997-01-17 Sanyo Electric Co Ltd Image recognition method
US20020067864A1 (en) * 2000-11-15 2002-06-06 Masatoshi Matsuhira Image processing device and image processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J. SANZ et al.: "On the Gerchberg-Papoulis algorithm" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309523A (en) * 2023-04-06 2023-06-23 北京拙河科技有限公司 Dynamic frame image dynamic fuzzy recognition method and device

Similar Documents

Publication Publication Date Title
CN115426525B (en) High-speed dynamic frame linkage image splitting method and device
CN115631122A (en) Image optimization method and device for edge image algorithm
CN114842424A (en) Intelligent security image identification method and device based on motion compensation
CN115170818A (en) Dynamic frame image feature extraction method and device
CN115474091A (en) Motion capture method and device based on decomposition metagraph
CN116614453B (en) Image transmission bandwidth selection method and device based on cloud interconnection
CN115623336A (en) Image tracking method and device for hundred million-level camera equipment
CN114866702A (en) Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device
CN111008842B (en) Tea detection method, system, electronic equipment and machine-readable medium
CN116723298B (en) Method and device for improving transmission efficiency of camera end
CN116402935B (en) Image synthesis method and device based on ray tracing algorithm
CN115914819B (en) Picture capturing method and device based on orthogonal decomposition algorithm
CN115511735B (en) Snow field gray scale picture optimization method and device
CN108090430B (en) Face detection method and device
CN115460389B (en) Image white balance area optimization method and device
CN116468883B (en) High-precision image data volume fog recognition method and device
CN116228593B (en) Image perfecting method and device based on hierarchical antialiasing
CN116664413B (en) Image volume fog eliminating method and device based on Abbe convergence operator
CN115187570B (en) Singular traversal retrieval method and device based on DNN deep neural network
CN115205313B (en) Picture optimization method and device based on least square algorithm
CN116723419B (en) Acquisition speed optimization method and device for billion-level high-precision camera
CN117896625A (en) Picture imaging method and device based on low-altitude high-resolution analysis
CN116468751A (en) High-speed dynamic image detection method and device
CN116579965B (en) Multi-image fusion method and device
CN116758165B (en) Image calibration method and device based on array camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221213