CN115760917A - Dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium


Info

Publication number: CN115760917A
Application number: CN202211428844.7A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: simulation, dynamic, video, dimensional, data
Other languages: Chinese (zh)
Inventors: 刘建, 邹江华, 杨松, 罗颜, 谭进, 袁晓松, 邹继明, 袁学士, 熊仕磊
Current assignee: Zunyi Power Supplying Bureau of Guizhou Power Grid Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Zunyi Power Supplying Bureau of Guizhou Power Grid Co Ltd
Application filed by Zunyi Power Supplying Bureau of Guizhou Power Grid Co Ltd
Priority: CN202211428844.7A (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Published as: CN115760917A

Abstract

The invention discloses a dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium. The method includes: acquiring a video to be interacted, and simulating the objects and figures in the video to be interacted according to a target simulation model to obtain a simulation image; performing image processing on the simulation image to obtain dynamic trajectory data; and receiving action data of a user through an inertial sensing network, and adjusting the dynamic trajectory data according to the action data to obtain three-dimensional interaction data. Three-dimensional interaction can thus be performed in real time, the motion capture accuracy of objects and figures in the video is improved, the cost of three-dimensional simulation interaction and the power consumption of hardware equipment are reduced, the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved, and the user interaction experience is improved.

Description

Dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium.
Background
Motion capture technology extends the precise measurement capability of human vision and provides valuable, accurate motion data for scientific research and applications in many fields, giving it important research value. It can be applied in fields such as robot control, sports motion analysis, three-dimensional motion reconstruction, biomechanical analysis, virtual reality, and augmented reality systems.
With the rapid development of computer graphics, motion capture technology is increasingly applied to film, animation production, and game development. According to statistics, the adoption rate of motion capture systems in the current domestic three-dimensional animation industry is as high as about 60 percent, and the demand for motion capture technology in three-dimensional production is expected to keep growing.
Three-dimensional image technology realizes interaction between reality and the virtual world on the basis of a dynamic model. Existing three-dimensional image technology, however, must perform motion capture through wearable equipment and is realized through post-production, so it cannot give direct feedback to a real-time three-dimensional model, and the user interaction experience is poor.
Disclosure of Invention
The main purpose of the invention is to provide a dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium, aiming to solve the technical problem that, in the prior art, three-dimensional image technology realizes interaction between reality and the virtual world on the basis of a dynamic model but must perform motion capture through wearable equipment and rely on post-production, so that direct feedback cannot be given to a real-time three-dimensional model and the user interaction experience is poor.
In a first aspect, the present invention provides a dynamic capture three-dimensional simulation interaction method, including the following steps:
acquiring a video to be interacted, and simulating an object and a figure in the video to be interacted according to a target simulation model to obtain a simulation image;
carrying out image processing on the simulation image to obtain dynamic track data;
and receiving action data of a user according to the inertial sensing network, and adjusting the dynamic trajectory data according to the action data to obtain three-dimensional interactive data.
Optionally, before the acquiring of the video to be interacted and the simulating of the objects and figures in the video to be interacted according to the target simulation model to obtain the simulation image, the dynamic capture three-dimensional simulation interaction method further includes:
acquiring a current simulation demand, and performing three-dimensional modeling in a preset simulation system according to the current simulation demand to obtain a simulation model;
and acquiring a training sample view, and dynamically training the simulation model according to the training sample view to obtain a trained target simulation model.
Optionally, the obtaining a video to be interacted and simulating an object and a person in the video to be interacted according to a target simulation model to obtain a simulation image includes:
acquiring an original video under a target scene through video acquisition equipment, and preprocessing the original video to obtain a video to be interacted;
and identifying the motion state of the object and the figure in the video to be interacted according to the target simulation model, and simulating the identification result to obtain a simulation image.
Optionally, the obtaining, by the video capture device, an original video in a target scene, and preprocessing the original video to obtain a video to be interacted includes:
acquiring an original video under a target scene through video acquisition equipment, and performing framing processing and color modeling on the original video to obtain a modeling result;
filtering the modeling result to obtain a filtering result;
and adjusting the contrast of the filtering result to obtain the preprocessed video to be interacted.
Optionally, the performing image processing on the simulation image to obtain dynamic trajectory data includes:
carrying out gray processing and interval statistics on images without moving targets in the simulation images to obtain a gray value distribution interval;
carrying out foreground detection on the gray value distribution interval to obtain a background difference image;
performing image segmentation according to the background difference image to obtain an image segmentation result;
and capturing the track of the preset key points in the image segmentation result to obtain dynamic track data.
Optionally, the capturing a track of a preset key point in the image segmentation result to obtain dynamic track data includes:
acquiring a key point identification rule for capturing a motion state in the simulation image, and determining a preset key point in the image segmentation result according to the key point identification rule;
and performing track fitting on the preset key points by using a least square method to obtain dynamic track data.
Optionally, the receiving, through the inertial sensing network, action data of a user, and adjusting the dynamic trajectory data according to the action data to obtain three-dimensional interaction data includes:
receiving motion data fed back by a motion capture sensor worn on a user according to an inertial sensing network;
and reconstructing the dynamic trajectory data in real time according to the action data to obtain real-time reconstructed motion data, and correcting the motion data according to the body data of the user to generate three-dimensional interactive data.
In a second aspect, to achieve the above object, the present invention further provides a dynamic capture three-dimensional simulation interactive device, including:
the image acquisition module is used for acquiring a video to be interacted, and simulating an object and a figure in the video to be interacted according to a target simulation model to obtain a simulation image;
the image processing module is used for carrying out image processing on the simulation image to obtain dynamic track data;
and the adjustment interaction module is used for receiving action data of a user according to the inertial sensing network, and adjusting the dynamic trajectory data according to the action data to obtain three-dimensional interaction data.
In a third aspect, to achieve the above object, the present invention further provides a dynamic capture three-dimensional simulation interactive device, including: a memory, a processor, and a dynamic capture three-dimensional simulation interaction program stored on the memory and executable on the processor, wherein the dynamic capture three-dimensional simulation interaction program is configured to implement the steps of the dynamic capture three-dimensional simulation interaction method described above.
In a fourth aspect, to achieve the above object, the present invention further provides a storage medium on which a dynamic capture three-dimensional simulation interaction program is stored; when executed by a processor, the program implements the steps of the dynamic capture three-dimensional simulation interaction method described above.
The invention provides a dynamic capture three-dimensional simulation interaction method, which includes: acquiring a video to be interacted, and simulating the objects and figures in the video to be interacted according to a target simulation model to obtain a simulation image; performing image processing on the simulation image to obtain dynamic trajectory data; and receiving action data of a user through an inertial sensing network, and adjusting the dynamic trajectory data according to the action data to obtain three-dimensional interaction data. Three-dimensional interaction can thus be performed in real time, the motion capture accuracy of objects and figures in the video is improved, the cost of three-dimensional simulation interaction and the power consumption of hardware equipment are reduced, the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved, and the user interaction experience is improved.
Drawings
Fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of a dynamic capture three-dimensional simulation interaction method according to the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of a method for dynamic capture three-dimensional simulation interaction according to the present invention;
FIG. 4 is a schematic flowchart of a third embodiment of the dynamic capture three-dimensional simulation interaction method according to the present invention;
FIG. 5 is a schematic flowchart of a fourth embodiment of the dynamic capture three-dimensional simulation interaction method according to the present invention;
FIG. 6 is a schematic flowchart of a fifth embodiment of the dynamic capture three-dimensional simulation interaction method according to the present invention;
FIG. 7 is a functional block diagram of a first embodiment of the dynamic capture three-dimensional simulation interactive device according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The solution of the embodiment of the invention is mainly as follows: a video to be interacted is acquired, and the objects and figures in the video are simulated according to a target simulation model to obtain a simulation image; image processing is performed on the simulation image to obtain dynamic trajectory data; action data of a user are received through an inertial sensing network, and the dynamic trajectory data are adjusted according to the action data to obtain three-dimensional interaction data. In this way, three-dimensional interaction can be performed in real time, the motion capture accuracy of objects and figures in the video is improved, the cost of three-dimensional simulation interaction and the power consumption of hardware equipment are reduced, the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved, and the user interaction experience is improved, thereby solving the technical problem in the prior art that three-dimensional image technology must perform motion capture through wearable equipment and rely on post-production, cannot give direct feedback to a real-time three-dimensional model, and therefore provides a poor user interaction experience.
Referring to fig. 1, fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the apparatus may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (Non-Volatile Memory), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the apparatus shown in fig. 1 is not intended to be limiting of the apparatus and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and a dynamic capture three-dimensional simulation interaction program.
The apparatus of the present invention calls the motion capture three-dimensional simulation interactive program stored in the memory 1005 through the processor 1001, and performs the following operations:
acquiring a video to be interacted, and simulating an object and a figure in the video to be interacted according to a target simulation model to obtain a simulation image;
carrying out image processing on the simulation image to obtain dynamic track data;
and receiving action data of a user according to the inertial sensing network, and adjusting the dynamic track data according to the action data to obtain three-dimensional interaction data.
The apparatus of the present invention calls the motion capture three-dimensional simulation interactive program stored in the memory 1005 through the processor 1001, and further performs the following operations:
acquiring a current simulation demand, and performing three-dimensional modeling in a preset simulation system according to the current simulation demand to obtain a simulation model;
and acquiring a training sample view, and dynamically training the simulation model according to the training sample view to obtain a trained target simulation model.
The device of the present invention calls the motion capture three-dimensional simulation interactive program stored in the memory 1005 by the processor 1001, and further performs the following operations:
acquiring an original video under a target scene through video acquisition equipment, and preprocessing the original video to obtain a video to be interacted;
and identifying the motion state of the object and the figure in the video to be interacted according to the target simulation model, and simulating the identification result to obtain a simulation image.
The apparatus of the present invention calls the motion capture three-dimensional simulation interactive program stored in the memory 1005 through the processor 1001, and further performs the following operations:
acquiring an original video under a target scene through video acquisition equipment, and performing framing processing and color modeling on the original video to obtain a modeling result;
filtering the modeling result to obtain a filtering result;
and adjusting the contrast of the filtering result to obtain the preprocessed video to be interacted.
The apparatus of the present invention calls the motion capture three-dimensional simulation interactive program stored in the memory 1005 through the processor 1001, and further performs the following operations:
carrying out gray processing and interval statistics on images without moving targets in the simulation images to obtain a gray value distribution interval;
carrying out foreground detection on the gray value distribution interval to obtain a background difference image;
performing image segmentation according to the background difference image to obtain an image segmentation result;
and capturing the track of the preset key points in the image segmentation result to obtain dynamic track data.
The apparatus of the present invention calls the motion capture three-dimensional simulation interactive program stored in the memory 1005 through the processor 1001, and further performs the following operations:
acquiring a key point identification rule for capturing a motion state in the simulation image, and determining a preset key point in the image segmentation result according to the key point identification rule;
and performing track fitting on the preset key points by using a least square method to obtain dynamic track data.
The apparatus of the present invention calls the motion capture three-dimensional simulation interactive program stored in the memory 1005 through the processor 1001, and further performs the following operations:
receiving motion data fed back by a motion capture sensor worn on a user according to an inertial sensing network;
and reconstructing the dynamic trajectory data in real time according to the action data to obtain real-time reconstructed motion data, and correcting the motion data according to the body data of the user to generate three-dimensional interactive data.
According to the scheme, a video to be interacted is acquired, and the objects and figures in the video to be interacted are simulated according to the target simulation model to obtain a simulation image; image processing is performed on the simulation image to obtain dynamic trajectory data; and action data of the user are received through the inertial sensing network, and the dynamic trajectory data are adjusted according to the action data to obtain three-dimensional interaction data. Three-dimensional interaction can thus be performed in real time, the motion capture accuracy of objects and figures in the video is improved, the cost of three-dimensional simulation interaction and the power consumption of hardware equipment are reduced, the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved, and the user interaction experience is improved.
Based on the hardware structure, the embodiment of the dynamic capture three-dimensional simulation interaction method is provided.
Referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of a dynamic capture three-dimensional simulation interaction method according to the present invention.
In a first embodiment, the dynamic capture three-dimensional simulation interaction method comprises the following steps:
s10, obtaining a video to be interacted, simulating an object and a figure in the video to be interacted according to a target simulation model, and obtaining a simulation image.
It should be noted that the video to be interacted is a video on which dynamic capture and three-dimensional simulation interaction need to be performed; a simulation operation can be performed on the objects and figures in the video to be interacted through the target simulation model, thereby obtaining the corresponding simulation image.
And S20, carrying out image processing on the simulation image to obtain dynamic track data.
It can be understood that, by performing image processing on the simulation image, the dynamic trajectory data generated by a moving target during its motion can be obtained from the simulation image.
And S30, receiving action data of a user according to the inertial sensing network, and adjusting the dynamic trajectory data according to the action data to obtain three-dimensional interactive data.
It should be understood that the inertial sensing network is a sensor network constructed from multiple inertial sensors; the dynamic trajectory data can be adjusted according to the action data, thereby obtaining three-dimensional interaction data for interacting with the user.
According to the scheme, a video to be interacted is acquired, and the objects and figures in the video to be interacted are simulated according to the target simulation model to obtain a simulation image; image processing is performed on the simulation image to obtain dynamic trajectory data; and action data of the user are received through the inertial sensing network, and the dynamic trajectory data are adjusted according to the action data to obtain three-dimensional interaction data. Three-dimensional interaction can thus be performed in real time, the motion capture accuracy of objects and figures in the video is improved, the cost of three-dimensional simulation interaction and the power consumption of hardware equipment are reduced, the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved, and the user interaction experience is improved.
Further, fig. 3 is a schematic flowchart of a second embodiment of the dynamic capture three-dimensional simulation interaction method according to the present invention. As shown in fig. 3, the second embodiment is proposed based on the first embodiment; in this embodiment, before step S10, the dynamic capture three-dimensional simulation interaction method further includes the following steps:
and S01, acquiring a current simulation demand, and performing three-dimensional modeling in a preset simulation system according to the current simulation demand to obtain a simulation model.
It should be noted that the current simulation requirement is preset simulation design requirement information, and three-dimensional modeling can be performed in a preset simulation system according to the current simulation requirement to establish a movable simulation model.
And S02, acquiring a training sample view, and dynamically training the simulation model according to the training sample view to obtain a trained target simulation model.
It can be understood that the training sample view is a preset sample view for model training, and the trained target simulation model can be obtained by dynamically training the simulation model through the training sample view.
According to the scheme, the current simulation requirement is acquired, and three-dimensional modeling is performed in the preset simulation system according to the current simulation requirement to obtain a simulation model; a training sample view is acquired, and the simulation model is dynamically trained according to the training sample view to obtain the trained target simulation model. A trained simulation model can thus be obtained quickly, the simulation precision is guaranteed, and the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved.
Further, fig. 4 is a schematic flowchart of a third embodiment of the dynamic capture three-dimensional simulation interaction method according to the present invention. As shown in fig. 4, the third embodiment is proposed based on the first embodiment; in this embodiment, step S10 specifically includes the following steps:
s11, acquiring an original video in a target scene through video acquisition equipment, and preprocessing the original video to obtain a video to be interacted.
It should be noted that the original video in the target scene, that is, in the currently determined video capture scene, can be acquired by the video capture device, and the preprocessed video to be interacted can be obtained by preprocessing the original video.
Further, the step S11 specifically includes the following steps:
acquiring an original video under a target scene through video acquisition equipment, and performing framing processing and color modeling on the original video to obtain a modeling result;
filtering the modeling result to obtain a filtering result;
and adjusting the contrast of the filtering result to obtain the preprocessed video to be interacted.
It should be understood that the original video is first divided into frames, color modeling is performed on the framed images, the modeled images are filtered, and finally the contrast of the filtered images is adjusted to obtain the preprocessed video to be interacted.
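The preprocessing chain described above (framing, color modeling, filtering, contrast adjustment) can be sketched as follows. This is a minimal illustration, not the patent's prescribed implementation: the function names, the luminance color model, the mean filter, and the linear contrast stretch are all assumptions chosen for simplicity.

```python
import numpy as np

def preprocess_frame(frame, kernel=3, alpha=1.2, beta=10.0):
    """Illustrative preprocessing of one video frame (H x W x 3, uint8):
    color modeling (here a simple luminance model), mean filtering,
    then linear contrast adjustment."""
    # Color modeling: project RGB onto a single luminance channel.
    gray = frame.astype(np.float64) @ np.array([0.299, 0.587, 0.114])

    # Filtering: box (mean) filter to suppress random noise.
    pad = kernel // 2
    padded = np.pad(gray, pad, mode="edge")
    h, w = gray.shape
    filtered = np.zeros_like(gray)
    for dy in range(kernel):
        for dx in range(kernel):
            filtered += padded[dy:dy + h, dx:dx + w]
    filtered /= kernel * kernel

    # Contrast adjustment: linear stretch, clipped to the valid range.
    return np.clip(alpha * filtered + beta, 0, 255).astype(np.uint8)

def preprocess_video(frames):
    """Framing: apply the per-frame pipeline to every frame of the video."""
    return [preprocess_frame(f) for f in frames]
```

In practice, the framing step would read frames from the capture device; here `preprocess_video` simply iterates over already-decoded frames.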
And S12, identifying the motion state of the object and the character in the video to be interacted according to the target simulation model, and simulating the identification result to obtain a simulation image.
It can be understood that the target simulation model identifies the motion states of the objects and figures in the video to be interacted, tracks how these motion states change over time across different images to obtain a recognition result, and then simulates the recognition result to obtain the corresponding simulation image.
According to the scheme, the original video in the target scene is acquired through the video capture equipment, and the original video is preprocessed to obtain the video to be interacted; the motion states of the objects and figures in the video to be interacted are identified according to the target simulation model, and the recognition result is simulated to obtain a simulation image. A simulation image can thus be obtained quickly, further improving the speed and efficiency of dynamic capture three-dimensional simulation interaction.
Further, fig. 5 is a schematic flowchart of a fourth embodiment of the dynamic capture three-dimensional simulation interaction method according to the present invention. As shown in fig. 5, the fourth embodiment is proposed based on the first embodiment; in this embodiment, step S20 specifically includes the following steps:
and S21, carrying out gray processing and interval statistics on the images without the moving targets in the simulation images to obtain a gray value distribution interval.
It should be noted that the gray values of images that no moving target has entered are counted by interval, yielding a statistically meaningful approximate distribution interval of the initial background gray values; this reduces external factors such as random interference caused by environmental changes and camera shake.
And S22, carrying out foreground detection on the gray value distribution interval to obtain a background difference image.
It should be understood that foreground detection on the gray value distribution interval means subtracting the background from the processed image; a background difference image can be obtained through this subtraction operation.
And S23, carrying out image segmentation according to the background difference image to obtain an image segmentation result.
It can be understood that image segmentation is performed on the background difference image to distinguish the moving foreground from the background, thereby realizing the segmentation and obtaining the image segmentation result.
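Steps S21 to S23 can be sketched as follows. The percentile-based gray interval and the fixed difference threshold are illustrative assumptions; the patent does not specify how the interval statistics or the segmentation threshold are computed.

```python
import numpy as np

def background_gray_interval(background_frames, low_pct=1, high_pct=99):
    """S21: interval statistics over grayscale frames with no moving target,
    giving an approximate distribution interval of background gray values."""
    values = np.concatenate([f.ravel() for f in background_frames])
    return np.percentile(values, low_pct), np.percentile(values, high_pct)

def segment_moving_foreground(frame, background, threshold=25):
    """S22/S23: background difference followed by threshold segmentation.

    Returns the absolute-difference image and a binary foreground mask
    separating the moving foreground from the static background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold  # True where a moving target is detected
    return diff.astype(np.uint8), mask
```

The interval from `background_gray_interval` can be used to pick a robust `threshold`, making the subtraction less sensitive to camera shake and lighting drift.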
And S24, capturing the track of the preset key points in the image segmentation result to obtain dynamic track data.
It should be understood that, by capturing the motion trajectory of the preset key point in the image segmentation result, dynamic trajectory data corresponding to the motion trajectory may be obtained.
Further, the step S24 specifically includes the following steps:
acquiring a key point identification rule for capturing a motion state in the simulation image, and determining a preset key point in the image segmentation result according to the key point identification rule;
and performing track fitting on the preset key points by using a least square method to obtain dynamic track data.
It can be understood that the key point identification rule is a preset rule for selecting the key points used to capture motion states in the simulation image; the preset key points in the image segmentation result are determined through this rule, and a least squares method is then used to fit the motion trajectories of the key points, realizing dynamic capture of the key points in the video.
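The least squares trajectory fitting can be illustrated with a polynomial model over a key point's positions across frames; the quadratic degree is an assumption, since the patent does not fix the form of the fitted curve.

```python
import numpy as np

def fit_keypoint_trajectory(times, points, degree=2):
    """Least squares fit of x(t) and y(t) for one key point.

    `points` is an (N, 2) array of pixel positions observed at `times`;
    np.polyfit solves the least squares problem for each coordinate."""
    pts = np.asarray(points, dtype=float)
    cx = np.polyfit(times, pts[:, 0], degree)
    cy = np.polyfit(times, pts[:, 1], degree)
    return cx, cy

def sample_trajectory(cx, cy, times):
    """Evaluate the fitted trajectory at the given times (dynamic trajectory data)."""
    return np.column_stack([np.polyval(cx, times), np.polyval(cy, times)])
```

Sampling the fitted polynomials at intermediate or future times yields smooth dynamic trajectory data even when individual detections are noisy.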
According to the scheme, a gray value distribution interval is obtained by performing gray processing and interval statistics on the images without moving targets in the simulation image; foreground detection is performed on the gray value distribution interval to obtain a background difference image; image segmentation is performed according to the background difference image to obtain an image segmentation result; and the trajectories of the preset key points in the image segmentation result are captured to obtain dynamic trajectory data. The motion capture accuracy of objects and figures in the video is improved, and the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved.
Further, fig. 6 is a schematic flowchart of a fifth embodiment of the dynamic capture three-dimensional simulation interaction method according to the present invention. As shown in fig. 6, the fifth embodiment of the dynamic capture three-dimensional simulation interaction method is proposed based on the first embodiment. In this embodiment, the step S30 specifically includes the following steps:
and S31, receiving motion data fed back by a motion capture sensor worn on the user according to an inertial sensing network.
It should be noted that motion data fed back by the motion capture sensor can be acquired through the inertial sensor network, and the motion capture sensor is worn on the body of the user and used for sensing motion information corresponding to the real-time motion of the user.
And S32, reconstructing the dynamic trajectory data in real time according to the motion data to obtain real-time reconstructed motion data, and correcting the motion data according to the body data of the user to generate three-dimensional interactive data.
It should be understood that, the motion data may be reconstructed in real time by performing a trajectory reconstruction on the dynamic trajectory data through the motion data, so as to obtain real-time reconstructed motion data, and the motion data is corrected through the body data of the user, so as to generate corrected three-dimensional interactive data.
According to this scheme, the motion data fed back by the motion capture sensor worn on the user's body is received through the inertial sensor network; the dynamic trajectory data is reconstructed in real time according to the motion data to obtain real-time reconstructed motion data, and the motion data is corrected according to the body data of the user to generate three-dimensional interaction data. In this way, three-dimensional interaction can be performed in real time, the motion capture accuracy of objects and persons in the video is improved, the cost of three-dimensional simulation interaction is reduced, the power consumption of hardware equipment is reduced, the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved, and the user interaction experience is improved.
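Steps S31–S32 might be sketched as below, under stated assumptions: the blend weight `alpha` and the height-ratio correction rule are illustrative choices, since the patent only states that the trajectory is reconstructed from the sensor data and corrected with the user's body data.

```python
import numpy as np

def reconstruct_trajectory(video_traj, imu_traj, alpha=0.8):
    """Complementary-filter-style blend of the video-derived trajectory
    with the IMU-derived one; `alpha` weights the video estimate."""
    return alpha * np.asarray(video_traj) + (1.0 - alpha) * np.asarray(imu_traj)

def correct_for_body(traj, user_height_m, model_height_m=1.75):
    """Rescale the trajectory by the ratio of the user's height to the
    simulation model's height (a hypothetical correction rule)."""
    return traj * (user_height_m / model_height_m)

video = [0.0, 1.0, 2.0]      # positions captured from the video
imu = [0.0, 1.2, 2.1]        # positions integrated from the IMU sensors
fused = reconstruct_trajectory(video, imu)
interaction_data = correct_for_body(fused, user_height_m=1.75)
```

A production system would run this per frame on full skeleton poses rather than on scalar positions, but the structure (fuse, then correct with body data) is the same.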
Correspondingly, the invention further provides a dynamic capture three-dimensional simulation interaction device.
Referring to fig. 7, fig. 7 is a functional block diagram of a first embodiment of the dynamic capture three-dimensional simulation interaction device according to the present invention.
In the first embodiment of the dynamic capture three-dimensional simulation interaction device, the dynamic capture three-dimensional simulation interaction device comprises:
the image acquisition module 10 is configured to acquire a video to be interacted, and simulate an object and a person in the video to be interacted according to a target simulation model to obtain a simulated image.
And the image processing module 20 is configured to perform image processing on the simulation image to obtain dynamic trajectory data.
And the adjustment interaction module 30 is configured to receive motion data of a user via the inertial sensor network, and adjust the dynamic trajectory data according to the motion data to obtain three-dimensional interaction data.
The image acquisition module 10 is further configured to acquire a current simulation requirement, and perform three-dimensional modeling in a preset simulation system according to the current simulation requirement to obtain a simulation model; and acquire a training sample view, and dynamically train the simulation model according to the training sample view to obtain a trained target simulation model.
The image acquisition module 10 is further configured to acquire an original video under a target scene through video acquisition equipment, and preprocess the original video to obtain a video to be interacted; and identify the motion states of the object and the person in the video to be interacted according to the target simulation model, and simulate the identification result to obtain a simulation image.
The image acquisition module 10 is further configured to acquire an original video under a target scene through video acquisition equipment, and perform framing processing and color modeling on the original video to obtain a modeling result; filter the modeling result to obtain a filtering result; and adjust the contrast of the filtering result to obtain the preprocessed video to be interacted.
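The preprocessing chain this module describes (framing, grayscale conversion, filtering, contrast adjustment) might be sketched per frame as below. The BT.601 luminance weights are standard; the mean-filter size and the gain/offset values are assumed tuning parameters, not values given in the patent.

```python
import numpy as np

def preprocess_frame(frame_rgb, kernel=3, alpha=1.2, beta=10):
    """Sketch of per-frame preprocessing: grayscale conversion,
    mean filtering, then linear contrast adjustment."""
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = (0.299 * frame_rgb[..., 0]
            + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2])
    # Simple mean filter implemented as a sliding-window average.
    pad = kernel // 2
    padded = np.pad(gray, pad, mode="edge")
    filtered = np.zeros_like(gray)
    for dy in range(kernel):
        for dx in range(kernel):
            filtered += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    filtered /= kernel * kernel
    # Linear contrast adjustment, clipped to the valid 8-bit range.
    return np.clip(alpha * filtered + beta, 0, 255).astype(np.uint8)

frame = np.full((4, 4, 3), 100, dtype=np.uint8)   # uniform toy frame
out = preprocess_frame(frame)
```

A real pipeline would read frames from the video acquisition equipment and would likely use an optimized filter (e.g. a separable or Gaussian kernel) instead of the explicit loop shown here.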
The image processing module 20 is further configured to perform gray processing and interval statistics on the image without the moving target in the simulation image to obtain a gray value distribution interval; perform foreground detection on the gray value distribution interval to obtain a background difference image; perform image segmentation according to the background difference image to obtain an image segmentation result; and perform trajectory capture on the preset key points in the image segmentation result to obtain dynamic trajectory data.
The image processing module 20 is further configured to acquire a key point identification rule for capturing the motion state in the simulation image, and determine the preset key points in the image segmentation result according to the key point identification rule; and perform trajectory fitting on the preset key points by using a least square method to obtain dynamic trajectory data.
The adjustment interaction module 30 is further configured to receive motion data fed back by a motion capture sensor worn by the user through the inertial sensor network; and reconstruct the dynamic trajectory data in real time according to the motion data to obtain real-time reconstructed motion data, and correct the motion data according to the body data of the user to generate three-dimensional interaction data.
The steps implemented by each functional module of the dynamic capture three-dimensional simulation interaction device can refer to the embodiments of the dynamic capture three-dimensional simulation interaction method of the present invention, and are not described herein again.
In addition, an embodiment of the present invention further provides a storage medium, where a dynamic capture three-dimensional simulation interactive program is stored on the storage medium, and when executed by a processor, the dynamic capture three-dimensional simulation interactive program implements the following operations:
acquiring a video to be interacted, and simulating an object and a person in the video to be interacted according to a target simulation model to obtain a simulation image;
performing image processing on the simulation image to obtain dynamic trajectory data;
and receiving motion data of a user via the inertial sensor network, and adjusting the dynamic trajectory data according to the motion data to obtain three-dimensional interaction data.
Further, when executed by the processor, the dynamic capture three-dimensional simulation interactive program further implements the following operations:
acquiring a current simulation demand, and performing three-dimensional modeling in a preset simulation system according to the current simulation demand to obtain a simulation model;
and acquiring a training sample view, and dynamically training the simulation model according to the training sample view to obtain a trained target simulation model.
Further, when executed by the processor, the dynamic capture three-dimensional simulation interactive program further implements the following operations:
acquiring an original video under a target scene through video acquisition equipment, and preprocessing the original video to obtain a video to be interacted;
and identifying the motion states of the object and the person in the video to be interacted according to the target simulation model, and simulating the identification result to obtain a simulation image.
Further, when executed by the processor, the dynamic capture three-dimensional simulation interactive program further implements the following operations:
acquiring an original video under a target scene through video acquisition equipment, and performing framing processing and color modeling on the original video to obtain a modeling result;
filtering the modeling result to obtain a filtering result;
and adjusting the contrast of the filtering result to obtain the preprocessed video to be interacted.
Further, when executed by the processor, the dynamic capture three-dimensional simulation interactive program further implements the following operations:
carrying out gray processing and interval statistics on the image without the moving target in the simulation image to obtain a gray value distribution interval;
carrying out foreground detection on the gray value distribution interval to obtain a background difference image;
performing image segmentation according to the background difference image to obtain an image segmentation result;
and capturing the trajectories of the preset key points in the image segmentation result to obtain dynamic trajectory data.
Further, when executed by the processor, the dynamic capture three-dimensional simulation interactive program further implements the following operations:
acquiring a key point identification rule for capturing a motion state in the simulation image, and determining a preset key point in the image segmentation result according to the key point identification rule;
and performing trajectory fitting on the preset key points by using a least square method to obtain dynamic trajectory data.
Further, when executed by the processor, the dynamic capture three-dimensional simulation interactive program further implements the following operations:
receiving motion data fed back by a motion capture sensor worn by the user through the inertial sensor network;
and reconstructing the dynamic trajectory data in real time according to the motion data to obtain real-time reconstructed motion data, and correcting the motion data according to the body data of the user to generate three-dimensional interaction data.
According to this scheme, a video to be interacted is acquired, and an object and a person in the video to be interacted are simulated according to a target simulation model to obtain a simulation image; image processing is performed on the simulation image to obtain dynamic trajectory data; and motion data of a user is received via the inertial sensor network, and the dynamic trajectory data is adjusted according to the motion data to obtain three-dimensional interaction data. In this way, three-dimensional interaction can be performed in real time, the motion capture accuracy of objects and persons in the video is improved, the cost of three-dimensional simulation interaction is reduced, the power consumption of hardware equipment is reduced, the speed and efficiency of dynamic capture three-dimensional simulation interaction are improved, and the user interaction experience is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A dynamic capture three-dimensional simulation interaction method is characterized by comprising the following steps:
acquiring a video to be interacted, and simulating an object and a person in the video to be interacted according to a target simulation model to obtain a simulation image;
performing image processing on the simulation image to obtain dynamic trajectory data;
and receiving motion data of a user via the inertial sensor network, and adjusting the dynamic trajectory data according to the motion data to obtain three-dimensional interaction data.
2. The dynamic capture three-dimensional simulation interaction method according to claim 1, wherein before the acquiring a video to be interacted and simulating an object and a person in the video to be interacted according to a target simulation model, the dynamic capture three-dimensional simulation interaction method further comprises:
acquiring a current simulation demand, and performing three-dimensional modeling in a preset simulation system according to the current simulation demand to obtain a simulation model;
and acquiring a training sample view, and dynamically training the simulation model according to the training sample view to obtain a trained target simulation model.
3. The dynamic capture three-dimensional simulation interaction method according to claim 1, wherein the acquiring a video to be interacted, and simulating an object and a person in the video to be interacted according to a target simulation model to obtain a simulation image comprises:
acquiring an original video under a target scene through video acquisition equipment, and preprocessing the original video to obtain a video to be interacted;
and identifying the motion states of the object and the person in the video to be interacted according to the target simulation model, and simulating the identification result to obtain a simulation image.
4. The dynamic capture three-dimensional simulation interaction method according to claim 3, wherein the acquiring an original video under a target scene through video acquisition equipment, and preprocessing the original video to obtain a video to be interacted comprises:
acquiring an original video under a target scene through video acquisition equipment, and performing framing processing and color modeling on the original video to obtain a modeling result;
filtering the modeling result to obtain a filtering result;
and adjusting the contrast of the filtering result to obtain the preprocessed video to be interacted.
5. The dynamic capture three-dimensional simulation interaction method according to claim 1, wherein the performing image processing on the simulation image to obtain dynamic trajectory data comprises:
carrying out gray processing and interval statistics on the image without the moving target in the simulation image to obtain a gray value distribution interval;
carrying out foreground detection on the gray value distribution interval to obtain a background difference image;
performing image segmentation according to the background difference image to obtain an image segmentation result;
and capturing the trajectories of the preset key points in the image segmentation result to obtain dynamic trajectory data.
6. The dynamic capture three-dimensional simulation interaction method according to claim 5, wherein the capturing the trajectories of the preset key points in the image segmentation result to obtain dynamic trajectory data comprises:
acquiring a key point identification rule for capturing a motion state in the simulation image, and determining a preset key point in the image segmentation result according to the key point identification rule;
and performing trajectory fitting on the preset key points by using a least square method to obtain dynamic trajectory data.
7. The dynamic capture three-dimensional simulation interaction method according to claim 1, wherein the receiving motion data of a user via the inertial sensor network, and adjusting the dynamic trajectory data according to the motion data to obtain three-dimensional interaction data comprises:
receiving motion data fed back by a motion capture sensor worn by the user through the inertial sensor network;
and reconstructing the dynamic trajectory data in real time according to the motion data to obtain real-time reconstructed motion data, and correcting the motion data according to the body data of the user to generate three-dimensional interaction data.
8. A dynamic capture three-dimensional simulation interaction device, comprising:
the image acquisition module is used for acquiring a video to be interacted, and simulating an object and a person in the video to be interacted according to a target simulation model to obtain a simulation image;
the image processing module is used for performing image processing on the simulation image to obtain dynamic trajectory data;
and the adjustment interaction module is used for receiving motion data of a user via the inertial sensor network, and adjusting the dynamic trajectory data according to the motion data to obtain three-dimensional interaction data.
9. A dynamic capture three-dimensional simulation interaction device, characterized in that the dynamic capture three-dimensional simulation interaction device comprises: a memory, a processor, and a dynamic capture three-dimensional simulation interactive program stored on the memory and executable on the processor, the dynamic capture three-dimensional simulation interactive program being configured to implement the steps of the dynamic capture three-dimensional simulation interaction method according to any one of claims 1 to 7.
10. A storage medium having a dynamic capture three-dimensional simulation interactive program stored thereon, wherein the dynamic capture three-dimensional simulation interactive program, when executed by a processor, implements the steps of the dynamic capture three-dimensional simulation interaction method according to any one of claims 1 to 7.
CN202211428844.7A 2022-11-15 2022-11-15 Dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium Pending CN115760917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211428844.7A CN115760917A (en) 2022-11-15 2022-11-15 Dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211428844.7A CN115760917A (en) 2022-11-15 2022-11-15 Dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115760917A true CN115760917A (en) 2023-03-07

Family

ID=85371827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211428844.7A Pending CN115760917A (en) 2022-11-15 2022-11-15 Dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115760917A (en)

Similar Documents

Publication Publication Date Title
CN108875633B (en) Expression detection and expression driving method, device and system and storage medium
Jalal et al. A wrist worn acceleration based human motion analysis and classification for ambient smart home system
CN111417983B (en) Deformable object tracking based on event camera
CN111476871B (en) Method and device for generating video
CN112198959A (en) Virtual reality interaction method, device and system
CN109145788A (en) Attitude data method for catching and system based on video
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN112492297B (en) Video processing method and related equipment
CN111667420B (en) Image processing method and device
CN113573061A (en) Video frame extraction method, device and equipment
Kowalski et al. Holoface: Augmenting human-to-human interactions on hololens
CN110096144B (en) Interactive holographic projection method and system based on three-dimensional reconstruction
CN111179408B (en) Three-dimensional modeling method and equipment
Eom et al. Data‐Driven Reconstruction of Human Locomotion Using a Single Smartphone
CN115760917A (en) Dynamic capture three-dimensional simulation interaction method, device, equipment and storage medium
CN115048954A (en) Retina-imitating target detection method and device, storage medium and terminal
CN113255514B (en) Behavior identification method based on local scene perception graph convolutional network
CN113239848B (en) Motion perception method, system, terminal equipment and storage medium
CN108121963B (en) Video data processing method and device and computing equipment
Verma et al. Motion capture using computer vision
CN115984943B (en) Facial expression capturing and model training method, device, equipment, medium and product
CN117456558A (en) Human body posture estimation and control method based on camera and related equipment
CN117115321B (en) Method, device, equipment and storage medium for adjusting eye gestures of virtual character
US20230262350A1 (en) Information processing device and information processing method
CN117315155A (en) Prompting method, system, equipment and medium for virtual fitting scene conversion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination