CN115866354A - Interactive virtual reality-based non-material heritage iconic deduction method and device - Google Patents
- Publication number
- CN115866354A (application number CN202211497534.0A)
- Authority
- CN
- China
- Prior art keywords
- heritage
- data
- dimensional
- image data
- deduction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a method and device for the tangible deduction of intangible cultural heritage (non-material heritage) based on interactive virtual reality. The method comprises the following steps: obtaining three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage; obtaining three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step based on that three-dimensional image data; generating body movements of an avatar based on the three-dimensional coordinate information and pose information of each manufacturing step; fitting the body movements of the avatar to the three-dimensional image data of each manufacturing step to generate virtual reality deduction video data; and performing a tangible deduction playback process on the virtual reality deduction video data based on user settings. In the embodiments of the invention, the tangible deduction of intangible cultural heritage is realized, so that the general public can more easily come to know and understand it.
Description
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a method and device for the tangible deduction of intangible cultural heritage based on interactive virtual reality.
Background
Intangible cultural heritage comprises oral traditions and forms of expression, including language as a medium of intangible cultural heritage, performing arts, social practices, rituals and festive events, knowledge and practices concerning nature and the universe, traditional handicrafts, and the like. To date, the preservation and promotion of traditional handicraft manufacturing processes have been limited to brochures and promotional videos; such media cannot present the process in a vivid, tangible way, so the general public cannot deeply understand and appreciate how traditional handicraft heritage is made.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method and device for the tangible deduction of intangible cultural heritage based on interactive virtual reality, which realize the tangible deduction of intangible cultural heritage and enable the general public to more easily know and understand it.
In order to solve the above technical problem, an embodiment of the present invention provides a method for the tangible deduction of intangible cultural heritage based on interactive virtual reality, the method comprising:
obtaining three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage;
obtaining three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step based on the three-dimensional image data of the manufacturing steps;
generating body movements of an avatar based on the three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step;
fitting the body movements of the avatar to the three-dimensional image data of each manufacturing step to generate virtual reality deduction video data;
and performing a tangible deduction playback process on the virtual reality deduction video data based on user settings.
Optionally, the obtaining three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage comprises:
acquiring image data from multiple angles in each step of the intangible cultural heritage manufacturing process with camera equipment;
and performing three-dimensional modeling based on the multi-angle image data of each step to obtain the three-dimensional image data of each manufacturing step.
Optionally, the performing three-dimensional modeling based on the multi-angle image data of each step comprises:
performing pixel-level feature point extraction on the image data of each angle in each step with a digital image processing algorithm to obtain extracted image feature point data;
matching the extracted image feature point data across the angles with a matching algorithm to obtain the feature points common to the image data of all angles;
constructing a scene coordinate system in which the camera equipment is located, and obtaining the shooting position and angle of each image in the multi-angle image data by solving a system of higher-order equations established in the scene coordinate system;
calculating three-dimensional coordinate data of the common feature points with a photogrammetric measurement algorithm according to the shooting position and angle of each image;
and forming the three-dimensional image data of each manufacturing step in the intangible cultural heritage manufacturing process based on the three-dimensional coordinate data of the common feature points.
Optionally, the forming the three-dimensional image data of each manufacturing step based on the three-dimensional coordinate data of the common feature points comprises:
constructing three-dimensional point cloud data from the three-dimensional coordinate data of the common feature points;
and connecting the points of the three-dimensional point cloud data to form the three-dimensional image data of each manufacturing step in the intangible cultural heritage manufacturing process.
Optionally, the obtaining three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step based on the three-dimensional image data of the manufacturing steps comprises:
extracting the three-dimensional coordinates of each key point in the three-dimensional image data of the manufacturing steps to obtain the three-dimensional coordinate information of the intangible cultural heritage in each manufacturing step;
and determining the pose information based on that three-dimensional coordinate information.
Optionally, the generating body movements of an avatar based on the three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step comprises:
obtaining body movement parameters for the avatar based on the three-dimensional coordinate information and pose information of each manufacturing step;
and generating the body movements of the avatar based on those body movement parameters.
Optionally, the fitting the body movements of the avatar to the three-dimensional image data of each manufacturing step to generate virtual reality deduction video data comprises:
synchronously fitting the body movements of the avatar to the three-dimensional image data of each manufacturing step to obtain fitted virtual reality deduction image frame data;
matching and comparing the fitted virtual reality deduction image frame data with a preset virtual reality template to obtain comparison feedback information;
correcting the fitted virtual reality deduction image frame data based on the comparison feedback information to obtain a correction result;
and generating the virtual reality deduction video data based on the correction result.
Optionally, the matching and comparing the fitted virtual reality deduction image frame data with a preset virtual reality template comprises:
matching and comparing the fitted virtual reality deduction image frame data with the preset virtual reality template frame by frame, in one-to-one correspondence.
Optionally, the performing a tangible deduction playback process on the virtual reality deduction video data based on user settings comprises:
projecting the virtual reality deduction video data into a designated space, based on the user settings, to perform the tangible deduction playback process.
In addition, an embodiment of the invention also provides a device for the tangible deduction of intangible cultural heritage based on interactive virtual reality, the device comprising:
a first obtaining module, used for obtaining three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage;
a second obtaining module, used for obtaining three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step based on the three-dimensional image data of the manufacturing steps;
a first generation module, used for generating body movements of an avatar based on the three-dimensional coordinate information and pose information of each manufacturing step;
a fitting module, used for fitting the body movements of the avatar to the three-dimensional image data of each manufacturing step to generate virtual reality deduction video data;
and a playback module, used for performing a tangible deduction playback process on the virtual reality deduction video data based on user settings.
In the embodiments of the invention, three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage is obtained, from which the three-dimensional coordinate information and pose information of the heritage in each step are derived; virtual reality deduction video data is then generated and played back as a tangible deduction. Realizing the tangible deduction of intangible cultural heritage makes it easier for the general public to know and understand it.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow diagram of a method for the tangible deduction of intangible cultural heritage based on interactive virtual reality in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a device for the tangible deduction of intangible cultural heritage based on interactive virtual reality in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, and not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from them without creative effort shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, fig. 1 is a flow chart of a method for the tangible deduction of intangible cultural heritage based on interactive virtual reality according to an embodiment of the present invention.
As shown in fig. 1, the method comprises:
S11: obtaining three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage.
In a specific implementation of the present invention, the obtaining three-dimensional image data of each manufacturing step comprises: acquiring image data from multiple angles in each step of the intangible cultural heritage manufacturing process with camera equipment; and performing three-dimensional modeling based on the multi-angle image data of each step to obtain the three-dimensional image data of each manufacturing step.
Further, the three-dimensional modeling based on the multi-angle image data of each step includes: performing pixel-level feature point extraction on the image data of each angle in each step with a digital image processing algorithm to obtain extracted image feature point data; matching the extracted feature points across the angles with a matching algorithm to obtain the feature points common to the image data of all angles; constructing a scene coordinate system in which the camera equipment is located, and obtaining the shooting position and angle of each image by solving a system of higher-order equations established in the scene coordinate system; calculating three-dimensional coordinate data of the common feature points with a photogrammetric measurement algorithm according to the shooting position and angle of each image; and forming the three-dimensional image data of each manufacturing step based on the three-dimensional coordinate data of the common feature points.
Further, the forming the three-dimensional image data of each manufacturing step based on the three-dimensional coordinate data of the common feature points includes: constructing three-dimensional point cloud data from the three-dimensional coordinate data of the common feature points; and connecting the points of the point cloud to form the three-dimensional image data of each manufacturing step in the intangible cultural heritage manufacturing process.
Specifically, for an intangible cultural heritage item (typically a traditional handicraft), image data is captured from multiple angles in each step of the manufacturing process with camera equipment; three-dimensional modeling is then performed on the multi-angle image data of each step to obtain the three-dimensional image data of each manufacturing step.
During the three-dimensional modeling, pixel-level feature points are first extracted from the image data of each angle in each step with a digital image processing algorithm; the extracted feature points are then matched across the angles with a matching algorithm to find the feature points common to all angles. Next, a scene coordinate system in which the camera equipment is located is constructed, and the shooting position and angle of each image are obtained by solving a system of higher-order equations established in that coordinate system. Finally, the three-dimensional coordinates of the common feature points are calculated with a photogrammetric measurement algorithm from the shooting position and angle of each image, and the three-dimensional image data of each manufacturing step is formed from those coordinates.
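The coordinate-calculation step above can be sketched as follows. This is a minimal illustration rather than the patent's actual implementation: it assumes the shooting position and angle of each camera have already been recovered as 3x4 projection matrices, and recovers the 3D coordinate of one matched feature point from its pixel coordinates in two views by linear (DLT) triangulation.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature point.

    P1, P2: 3x4 camera projection matrices (shooting pose already solved).
    x1, x2: (u, v) pixel coordinates of the same feature in each view.
    Returns the 3D point in scene coordinates.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: one at the origin, one shifted 1 unit along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

X_hat = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true))  # → True
```

With noiseless synthetic observations the DLT solution recovers the point exactly; with real matched feature points it gives a least-squares estimate.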
To form the three-dimensional image data of each manufacturing step, three-dimensional point cloud data is first constructed from the three-dimensional coordinates of the common feature points; the points of the cloud are then connected to form the three-dimensional image data of the manufacturing steps.
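The patent does not specify how the points of the cloud are connected; as a purely illustrative assumption, a simple wireframe can be built by joining each point to its nearest neighbours:

```python
import numpy as np

def connect_points(points, k=2):
    """Connect each point of a cloud to its k nearest neighbours.

    points: (N, 3) array of triangulated feature-point coordinates.
    Returns a set of (i, j) index pairs (i < j) -- a simple wireframe
    standing in for the 'connecting all the points' step.
    """
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances; ignore self-distances.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    edges = set()
    for i, row in enumerate(d):
        for j in np.argsort(row)[:k]:
            edges.add((int(min(i, j)), int(max(i, j))))
    return edges

cloud = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [0, 1, 0]])
wire = sorted(connect_points(cloud, k=1))
print(wire)  # → [(0, 1), (0, 3), (1, 2)]
```

A production pipeline would more likely use a surface-reconstruction method (e.g. Delaunay or Poisson meshing); the nearest-neighbour graph is only the simplest stand-in.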
S12: obtaining three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step based on the three-dimensional image data of the manufacturing steps.
In a specific implementation of the present invention, this comprises: extracting the three-dimensional coordinates of each key point in the three-dimensional image data of the manufacturing steps to obtain the three-dimensional coordinate information of the intangible cultural heritage in each manufacturing step; and determining the pose information based on that coordinate information.
Specifically, the key points in the three-dimensional image data of each manufacturing step are first identified, and their three-dimensional coordinates are extracted to obtain the three-dimensional coordinate information of the intangible cultural heritage in that step; the position and posture of the heritage item are then determined from those coordinates, which yields the pose information.
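One simple way to derive position and posture from the extracted key-point coordinates is sketched below. The choice of centroid for position and principal axes (PCA) for orientation is an assumption for illustration; the patent does not state a specific method.

```python
import numpy as np

def pose_from_keypoints(points):
    """Estimate position and orientation of an object from its 3D key points.

    points: (N, 3) array of key-point coordinates for one manufacturing step.
    Returns (position, axes): the centroid, and a 3x3 matrix whose rows are
    the principal axes of the point set (an orientation estimate via PCA).
    """
    points = np.asarray(points, dtype=float)
    position = points.mean(axis=0)
    centered = points - position
    # Eigenvectors of the covariance matrix give the principal axes.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]  # largest spread first
    return position, eigvecs[:, order].T

# Key points of an elongated object lying roughly along the x-axis.
pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [1.5, 0.1, 0]])
pos, axes = pose_from_keypoints(pts)
print(pos)      # centroid of the key points
print(axes[0])  # dominant axis, roughly ±[1, 0, 0]
```

Tracking how the centroid and axes change from one manufacturing step to the next then gives the per-step pose information the following steps consume.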
S13: generating body movements of an avatar based on the three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step.
In a specific implementation of the present invention, this comprises: obtaining body movement parameters for the avatar based on the three-dimensional coordinate information and pose information of each manufacturing step; and generating the body movements of the avatar from those parameters.
Specifically, the three-dimensional coordinate information and pose information of each manufacturing step are input into a limb action parameter customization model, which produces the corresponding body movement parameters for the avatar; the avatar's body movements are then generated from those parameters.
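The patent does not detail the limb action parameter customization model. As a purely hypothetical sketch, the body movement parameters could be per-joint target angles for each manufacturing step, turned into a continuous movement by interpolating between consecutive key poses:

```python
import numpy as np

def generate_limb_motion(keyframes, frames_per_step=24):
    """Interpolate per-joint angle keyframes into a dense motion track.

    keyframes: (S, J) array, one row of J joint angles (radians) per
               manufacturing step -- hypothetical 'body movement parameters'.
    Returns an ((S-1)*frames_per_step, J) array of interpolated joint angles.
    """
    keyframes = np.asarray(keyframes, dtype=float)
    tracks = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        t = np.linspace(0.0, 1.0, frames_per_step, endpoint=False)[:, None]
        tracks.append(a + t * (b - a))  # linear blend between key poses
    return np.vstack(tracks)

# Two joints (e.g. shoulder, elbow) over three manufacturing steps.
key_poses = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.2]])
motion = generate_limb_motion(key_poses, frames_per_step=10)
print(motion.shape)  # → (20, 2)
```

A real character-animation system would use quaternion or spline interpolation per joint, but the step structure, key pose per manufacturing step then dense frames between them, is the same.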
S14: fitting the body movements of the avatar to the three-dimensional image data of each manufacturing step to generate virtual reality deduction video data.
In a specific implementation of the invention, this comprises: synchronously fitting the body movements of the avatar to the three-dimensional image data of each manufacturing step to obtain fitted virtual reality deduction image frame data; matching and comparing the fitted frame data with a preset virtual reality template to obtain comparison feedback information; correcting the fitted frame data based on that feedback to obtain a correction result; and generating the virtual reality deduction video data from the correction result.
Further, the matching comparison is performed frame by frame: each frame of the fitted virtual reality deduction image frame data is matched against the corresponding frame of the preset virtual reality template in one-to-one correspondence.
Specifically, the body movements of the avatar are synchronously fitted to the three-dimensional image data of each manufacturing step, yielding the fitted virtual reality deduction image frame data; that frame data is then matched and compared with the preset virtual reality template to obtain comparison feedback information; the fitted frame data is corrected according to the feedback to obtain a correction result; and finally the virtual reality deduction video data is generated from the correction result.
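The one-to-one comparison and correction loop can be illustrated with a simple error metric. The mean-squared-error measure and the blend-toward-template correction below are illustrative assumptions, not the patent's stated method:

```python
import numpy as np

def compare_and_correct(fitted_frames, template_frames, tolerance=0.1, blend=0.5):
    """Match fitted deduction frames against a preset template one-to-one.

    fitted_frames, template_frames: (F, H, W) arrays of image frame data.
    Frames whose mean squared error against the template exceeds the
    tolerance are flagged (the 'comparison feedback information') and
    corrected by blending toward the template frame.
    Returns (corrected_frames, feedback).
    """
    fitted = np.asarray(fitted_frames, dtype=float)
    template = np.asarray(template_frames, dtype=float)
    assert fitted.shape == template.shape, "one-to-one frame correspondence"
    mse = ((fitted - template) ** 2).mean(axis=(1, 2))
    feedback = mse > tolerance
    corrected = fitted.copy()
    corrected[feedback] = (1 - blend) * fitted[feedback] + blend * template[feedback]
    return corrected, feedback

template = np.zeros((3, 4, 4))
fitted = np.zeros((3, 4, 4))
fitted[1] += 1.0  # one badly fitted frame
corrected, feedback = compare_and_correct(fitted, template)
print(feedback.tolist())   # → [False, True, False]
print(corrected[1].max())  # → 0.5
```

The corrected frame sequence then stands in for the "correction result" from which the deduction video data is generated.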
S15: performing a tangible deduction playback process on the virtual reality deduction video data based on user settings.
In a specific implementation of the present invention, this comprises: projecting the virtual reality deduction video data into a designated space, based on the user settings, to perform the tangible deduction playback process.
Specifically, the virtual reality deduction video data is projected into the designated space in the manner configured by the user, and the tangible deduction playback is performed there.
In the embodiments of the invention, three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage is obtained, from which the three-dimensional coordinate information and pose information of the heritage in each step are derived; virtual reality deduction video data is then generated and played back as a tangible deduction. Realizing the tangible deduction of intangible cultural heritage makes it easier for the general public to know and understand it.
Example two
Referring to fig. 2, fig. 2 is a schematic structural diagram of a device for the tangible deduction of intangible cultural heritage based on interactive virtual reality according to an embodiment of the present invention.
As shown in fig. 2, the device comprises:
The first obtaining module 21 is used for obtaining three-dimensional image data of each manufacturing step in the manufacturing process of the intangible cultural heritage.
In a specific implementation of the present invention, the obtaining three-dimensional image data of each manufacturing step comprises: acquiring image data from multiple angles in each step of the intangible cultural heritage manufacturing process with camera equipment; and performing three-dimensional modeling based on the multi-angle image data of each step to obtain the three-dimensional image data of each manufacturing step.
Further, the three-dimensional modeling based on the multi-angle image data of each step includes: performing pixel-level feature point extraction on the image data of each angle in each step with a digital image processing algorithm to obtain extracted image feature point data; matching the extracted feature points across the angles with a matching algorithm to obtain the feature points common to the image data of all angles; constructing a scene coordinate system in which the camera equipment is located, and obtaining the shooting position and angle of each image by solving a system of higher-order equations established in the scene coordinate system; calculating three-dimensional coordinate data of the common feature points with a photogrammetric measurement algorithm according to the shooting position and angle of each image; and forming the three-dimensional image data of each manufacturing step based on the three-dimensional coordinate data of the common feature points.
Further, the forming the three-dimensional image data of each manufacturing step based on the three-dimensional coordinate data of the common feature points includes: constructing three-dimensional point cloud data from the three-dimensional coordinate data of the common feature points; and connecting the points of the point cloud to form the three-dimensional image data of each manufacturing step.
Specifically, for an intangible cultural heritage item (typically a traditional handicraft), image data is captured from multiple angles in each step of the manufacturing process with camera equipment; three-dimensional modeling is then performed on the multi-angle image data of each step to obtain the three-dimensional image data of each manufacturing step.
During the three-dimensional modeling, pixel-level feature points are first extracted from the image data of each angle in each step with a digital image processing algorithm; the extracted feature points are then matched across the angles with a matching algorithm to find the feature points common to all angles. Next, a scene coordinate system in which the camera equipment is located is constructed, and the shooting position and angle of each image are obtained by solving a system of higher-order equations established in that coordinate system. Finally, the three-dimensional coordinates of the common feature points are calculated with a photogrammetric measurement algorithm from the shooting position and angle of each image, and the three-dimensional image data of each manufacturing step is formed from those coordinates.
To form the three-dimensional image data of each manufacturing step, three-dimensional point cloud data is first constructed from the three-dimensional coordinates of the common feature points; the points of the cloud are then connected to form the three-dimensional image data of the manufacturing steps.
The second obtaining module 22 is used for obtaining three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step based on the three-dimensional image data of the manufacturing steps.
In a specific implementation of the present invention, this comprises: extracting the three-dimensional coordinates of each key point in the three-dimensional image data of the manufacturing steps to obtain the three-dimensional coordinate information of the intangible cultural heritage in each manufacturing step; and determining the pose information based on that coordinate information.
Specifically, the key points in the three-dimensional image data of each manufacturing step are first identified, and their three-dimensional coordinates are extracted to obtain the three-dimensional coordinate information of the intangible cultural heritage in that step; the position and posture of the heritage item are then determined from those coordinates, which yields the pose information.
The first generation module 23 is used for generating body movements of an avatar based on the three-dimensional coordinate information and pose information of the intangible cultural heritage in each manufacturing step.
In a specific implementation of the invention, this comprises: obtaining body movement parameters for the avatar based on the three-dimensional coordinate information and pose information of each manufacturing step; and generating the body movements of the avatar from those parameters.
Specifically, the three-dimensional coordinate information and pose information of each manufacturing step are input into a limb action parameter customization model, which produces the corresponding body movement parameters for the avatar; the avatar's body movements are then generated from those parameters.
The fitting module 24: fitting the limb action of the virtual image with the three-dimensional image data of the manufacturing step in each step to generate virtual reality deduction video data;
in the specific implementation process of the invention, the fitting of the limb actions of the virtual image with the three-dimensional image data of the manufacturing steps in each step to generate virtual reality deduction video data comprises: synchronously fitting the limb action of the virtual image and the three-dimensional image data of the manufacturing step in each step to obtain fitting virtual reality deduction image frame data; matching and comparing the fitting virtual reality deduction image frame data with a preset virtual reality template to obtain comparison feedback information; correcting the virtual fitting reality deduction image frame data based on the comparison feedback information, and obtaining a correction result; and generating virtual reality deduction video data based on the correction result.
Further, the matching and comparing the fitting virtual reality deduction image frame data with a preset virtual reality template comprises: and performing one-to-one corresponding matching comparison of the image frame data by using the fitting virtual reality deduction image frame data and a preset virtual reality template.
Specifically, the limb actions of the virtual image and the three-dimensional image data of the manufacturing step in each step are synchronously fitted to obtain the fitted virtual reality deduction image frame data; the fitted frame data are then matched and compared with a preset virtual reality template to obtain comparison feedback information; the fitted frame data are corrected based on the comparison feedback information to obtain a correction result; and finally, the virtual reality deduction video data are generated from the correction result.
The matching comparison specifically consists of comparing the fitted virtual reality deduction image frame data with the preset virtual reality template frame by frame, in one-to-one correspondence.
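The fit-compare-correct loop described above can be sketched as follows. The frame representation (a list of 3D keypoints), the index-based one-to-one frame pairing, and the snap-back correction rule are assumptions made for illustration; the specification does not define the template format or the correction strategy.

```python
from typing import List, Tuple

# One image frame = the 3D keypoints of the fitted avatar-plus-object scene.
Frame = List[Tuple[float, float, float]]

def compare_and_correct(fitted: List[Frame],
                        template: List[Frame],
                        tolerance: float = 0.05) -> List[Frame]:
    """Match fitted frames one-to-one against the preset template and correct them.

    Frames are paired by index (the one-to-one correspondence of the matching
    comparison). Any keypoint coordinate deviating from the template by more
    than `tolerance` (assumed metres) constitutes the comparison feedback and
    is snapped back to the template value, yielding the correction result.
    """
    corrected: List[Frame] = []
    for fit_frame, tpl_frame in zip(fitted, template):
        new_frame: Frame = []
        for (fx, fy, fz), (tx, ty, tz) in zip(fit_frame, tpl_frame):
            new_frame.append((
                fx if abs(fx - tx) <= tolerance else tx,
                fy if abs(fy - ty) <= tolerance else ty,
                fz if abs(fz - tz) <= tolerance else tz,
            ))
        corrected.append(new_frame)
    return corrected
```

The corrected frame sequence would then be encoded into the virtual reality deduction video data.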
The playing module 25: for performing tangible deduction playback processing on the virtual reality deduction video data based on user settings.
In a specific implementation of the invention, performing tangible deduction playback processing on the virtual reality deduction video data based on user settings comprises: projecting the virtual reality deduction video data into a designated space, based on the user settings, to carry out the tangible deduction playback.
Specifically, the virtual reality deduction video data are projected into a designated space, in the manner set by the user's operation, to perform the tangible deduction playback processing.
In the embodiment of the invention, three-dimensional image data of the manufacturing step in each step of the non-material heritage manufacturing process are obtained, and from them the three-dimensional coordinate information and posture information of the non-material heritage in each manufacturing step are derived; virtual reality deduction video data are then generated and played back as a tangible deduction. Realizing this tangible deduction of the non-material heritage makes it easier for the general public to know and understand the non-material heritage.
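The reconstruction stage summarized above recovers the three-dimensional coordinates of matched feature points from the shooting position and angle of each image. One standard way to perform that calculation is linear (DLT) triangulation from two views; the sketch below illustrates the idea under that assumption and is not necessarily the specific measurement algorithm the patent intends.

```python
import numpy as np

def triangulate_point(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Linear (DLT) triangulation of one matched feature point.

    P1 and P2 are the 3x4 projection matrices recovered from the shooting
    position and angle of two camera views; x1 and x2 are the (normalized)
    image coordinates of the same extracted feature point in those views.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```

Triangulating every matched feature point this way yields a three-dimensional point cloud, whose points can then be connected into the manufacturing-step three-dimensional image data.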
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by relevant hardware instructed by a program, which may be stored in a computer-readable storage medium; the storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The non-material heritage tangible deduction method and device based on interactive virtual reality provided by the embodiments of the invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the invention, vary the specific embodiments and the scope of application; in summary, the content of this specification should not be construed as limiting the invention.
Claims (10)
1. A non-material heritage tangible deduction method based on interactive virtual reality, the method comprising:
obtaining three-dimensional image data of manufacturing steps in each step in the manufacturing process of the non-material heritage;
obtaining three-dimensional coordinate information and posture information of the non-material heritage in each manufacturing step based on the three-dimensional image data of the manufacturing steps;
generating limb actions of a virtual image based on the three-dimensional coordinate information and the posture information of the non-material heritage in each manufacturing step;
fitting the limb actions of the virtual image with the three-dimensional image data of the manufacturing steps in each step to generate virtual reality deduction video data;
and performing tangible deduction playback processing on the virtual reality deduction video data based on user settings.
2. The non-material heritage tangible deduction method of claim 1, wherein said obtaining three-dimensional image data of the manufacturing step in each step of the non-material heritage manufacturing process comprises:
collecting, by camera equipment, image data from each angle in each step of the non-material heritage manufacturing process;
and performing three-dimensional modeling processing based on the image data of each angle in each step to obtain three-dimensional image data of the manufacturing step in each step in the manufacturing process of the non-material heritage.
3. The non-material heritage tangible deduction method of claim 2, wherein said three-dimensional modeling processing based on the image data of each angle in each step comprises:
performing pixel-level image feature point extraction processing on the image data of each angle in each step based on a digital image processing algorithm to obtain extracted image feature point data;
matching, by a matching algorithm, the extracted image feature point data of the image data of each angle, to obtain the same extracted image feature point data across the image data of the different angles;
constructing a scene coordinate system in which the camera equipment is located, and obtaining the shooting position and angle of each image in the image data of each angle by establishing and solving a system of equations based on the scene coordinate system;
calculating three-dimensional coordinate data of the same extracted image feature point data by a measurement algorithm, according to the shooting position and angle of each image;
and forming three-dimensional image data of the manufacturing step in each step in the non-material heritage manufacturing process based on the three-dimensional coordinate data of the same image feature point data.
4. The non-material heritage tangible deduction method of claim 3, wherein said forming the three-dimensional image data of the manufacturing step in each step of the non-material heritage manufacturing process based on the three-dimensional coordinate data of the same image feature point data comprises:
constructing three-dimensional point cloud data from the three-dimensional coordinate data of the same image feature point data;
and connecting the points of the three-dimensional point cloud data to form the three-dimensional image data of the manufacturing step in each step of the non-material heritage manufacturing process.
5. The non-material heritage tangible deduction method of claim 1, wherein said obtaining three-dimensional coordinate information and posture information of the non-material heritage in each manufacturing step based on the three-dimensional image data of the manufacturing step comprises:
extracting the three-dimensional coordinates of each key point in the three-dimensional image data in the manufacturing steps to obtain the three-dimensional coordinate information of the non-material heritage in each manufacturing step;
and determining attitude information based on the three-dimensional coordinate information of the non-material heritage in each manufacturing step.
6. The non-material heritage tangible deduction method of claim 1, wherein said generating the limb actions of the virtual image based on the three-dimensional coordinate information and the posture information of the non-material heritage in each manufacturing step comprises:
obtaining limb action parameters of the virtual image based on the three-dimensional coordinate information and the posture information of the non-material heritage in each manufacturing step;
and generating the limb actions of the virtual image based on the limb action parameters of the virtual image.
7. The non-material heritage tangible deduction method of claim 1, wherein said fitting the limb actions of the virtual image with the three-dimensional image data of the manufacturing step in each step to generate virtual reality deduction video data comprises:
synchronously fitting the limb actions of the virtual image with the three-dimensional image data of the manufacturing step in each step to obtain fitted virtual reality deduction image frame data;
matching and comparing the fitted virtual reality deduction image frame data with a preset virtual reality template to obtain comparison feedback information;
correcting the fitted virtual reality deduction image frame data based on the comparison feedback information to obtain a correction result;
and generating virtual reality deduction video data based on the correction result.
8. The non-material heritage tangible deduction method of claim 7, wherein said matching and comparing the fitted virtual reality deduction image frame data with a preset virtual reality template comprises:
performing a one-to-one matching comparison of the image frame data between the fitted virtual reality deduction image frame data and the preset virtual reality template.
9. The non-material heritage tangible deduction method of claim 1, wherein said performing tangible deduction playback processing on the virtual reality deduction video data based on user settings comprises:
projecting the virtual reality deduction video data into a designated space, based on the user settings, to perform the tangible deduction playback.
10. A non-material heritage tangible deduction device based on interactive virtual reality, the device comprising:
a first obtaining module: for obtaining three-dimensional image data of the manufacturing step in each step of the non-material heritage manufacturing process;
a second obtaining module: for obtaining three-dimensional coordinate information and posture information of the non-material heritage in each manufacturing step based on the three-dimensional image data of the manufacturing step;
a first generating module: for generating limb actions of a virtual image based on the three-dimensional coordinate information and the posture information of the non-material heritage in each manufacturing step;
a fitting module: for fitting the limb actions of the virtual image with the three-dimensional image data of the manufacturing step in each step to generate virtual reality deduction video data;
a playing module: for performing tangible deduction playback processing on the virtual reality deduction video data based on user settings.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211497534.0A CN115866354A (en) | 2022-11-25 | 2022-11-25 | Interactive virtual reality-based non-material heritage iconic deduction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115866354A (en) | 2023-03-28 |
Family
ID=85666949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211497534.0A Pending CN115866354A (en) | 2022-11-25 | 2022-11-25 | Interactive virtual reality-based non-material heritage iconic deduction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115866354A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107025661A (en) * | 2016-01-29 | 2017-08-08 | 成都理想境界科技有限公司 | A kind of method for realizing augmented reality, server, terminal and system |
CN108200445A (en) * | 2018-01-12 | 2018-06-22 | 北京蜜枝科技有限公司 | The virtual studio system and method for virtual image |
CN108629830A (en) * | 2018-03-28 | 2018-10-09 | 深圳臻迪信息技术有限公司 | A kind of three-dimensional environment method for information display and equipment |
CN111223187A (en) * | 2018-11-23 | 2020-06-02 | 广东虚拟现实科技有限公司 | Virtual content display method, device and system |
CN111694429A (en) * | 2020-06-08 | 2020-09-22 | 北京百度网讯科技有限公司 | Virtual object driving method and device, electronic equipment and readable storage |
CN111862348A (en) * | 2020-07-30 | 2020-10-30 | 腾讯科技(深圳)有限公司 | Video display method, video generation method, video display device, video generation device, video display equipment and storage medium |
CN112669448A (en) * | 2020-12-30 | 2021-04-16 | 中山大学 | Virtual data set development method, system and storage medium based on three-dimensional reconstruction technology |
CN112967212A (en) * | 2021-02-01 | 2021-06-15 | 北京字节跳动网络技术有限公司 | Virtual character synthesis method, device, equipment and storage medium |
CN113112612A (en) * | 2021-04-16 | 2021-07-13 | 中德(珠海)人工智能研究院有限公司 | Positioning method and system for dynamic superposition of real person and mixed reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||