CN111243067A - High-immersion interactive action content production method - Google Patents
- Publication number
- CN111243067A CN111243067A CN202010028145.8A CN202010028145A CN111243067A CN 111243067 A CN111243067 A CN 111243067A CN 202010028145 A CN202010028145 A CN 202010028145A CN 111243067 A CN111243067 A CN 111243067A
- Authority
- CN
- China
- Prior art keywords
- capture
- staff
- capture points
- points
- repaired
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a high-immersion interactive action content production method, which solves the following problem: during a performance, large-amplitude, high-frequency movements prevent some reflective balls from being collected by the cameras, so that the recorded positions of the reflective balls are wrong and the action data content is inaccurate. The method comprises the following steps: step one: acquiring text, picture and video data of Huangmei opera through the Internet and arranging the dance. The action data are corrected and repaired: a Kalman filtering algorithm inserts smoothed values to form new capture points that replace deleted ones, and the capture points are reasonably distributed to corresponding workers, who correct and repair the action data. This makes the action data convenient to repair, improves its accuracy, and thereby improves the accuracy and authenticity of the action content production.
Description
Technical Field
The invention relates to the technical field of action content production, in particular to a high-immersion interactive action content production method.
Background
Huangmei opera, originally called the Huangmei tune or the tea-picking opera, originated in Huangmei, Hubei, and developed and flourished in Anqing, Anhui. Together with Beijing opera, Yue opera, Ping opera and Yu opera, it is counted among the five great Chinese opera genres. Its singing style is simple and smooth, praised for its plain expression of feeling and its rich expressiveness; its performance style is simple and delicate, known for being true to life and lively.
Patent CN109692487A describes an augmented-reality operating method for Huangmei opera that combines augmented reality technology with Huangmei opera performance, creating a wider expressive space for performers, providing a broader creative space for directors and other creators by absorbing audience opinions, and increasing audiences' artistic knowledge and appreciation through mutual communication. Its remaining defect is as follows: cameras, sensors, computing and storage equipment, display equipment and other instruments are installed according to the stage arrangement and the performers' working space, and positioning markers are installed on the stage; but because of large-amplitude, high-frequency movements during the performance, not all of the reflective balls on a performer can be collected by the cameras. If a reflective ball cannot be captured by at least two cameras simultaneously, its recorded position is wrong, and the action data content is inaccurate.
Disclosure of Invention
The invention aims to provide a high-immersion interactive action content production method that solves the following problem: during a performance, large-amplitude, high-frequency movements prevent some of the reflective balls on the performer's body from being collected by the cameras, and if a reflective ball cannot be captured by at least two cameras simultaneously, its recorded position is wrong and the action data content is inaccurate. The method corrects and repairs the motion data: guided by the movement rules of the human body, the capture points are debugged according to the forward-kinematics and inverse-kinematics rules of each joint point; erroneous capture points that differ greatly from their neighbors are deleted; a Kalman filtering algorithm inserts smoothed values to form new capture points that replace the deleted ones; and the capture points are reasonably distributed to corresponding workers, who correct and repair the motion data manually. This makes the motion data convenient to repair, improves its accuracy, and thereby improves the accuracy and authenticity of the action content production;
the purpose of the invention can be realized by the following technical scheme: a method of high-immersion interactive action content production, the method comprising the steps of:
step one: acquiring text, picture and video data of Huangmei opera through the Internet and arranging the dance;
step two: recording dance gestures by using motion capture equipment to acquire dance motion data, and simultaneously preliminarily establishing a character model of a dancer by using three-dimensional software, wherein the motion capture equipment is a camera; the specific steps of the motion capture device for obtaining dance motion data are as follows:
s1: acquiring motion data through passive optical motion capture technology, specifically: the camera emits a light source; reflective balls serving as highlight marker points are bound to the moving subject, and the position of each reflective ball is marked as a capture point; the camera collects the reflection signals corresponding to the capture points and processes them to generate motion data;
s2: correcting and repairing the motion data: guided by the movement rules of the human body, debugging the capture points according to the forward-kinematics and inverse-kinematics rules of each joint point, deleting erroneous capture points that differ greatly from their neighbors before and after, and inserting smoothed values with a filtering algorithm to form new capture points that replace the deleted ones;
s3: acquiring the capture points that were not automatically repaired or were repaired inaccurately through the marking module, distributing them to corresponding workers for manual repair, and sending the repaired capture points to the product editing platform; the product editing platform combines the repaired capture points with the debugged capture points by position and time to obtain the dance motion data;
step three: realizing digital character animation through modeling software and establishing the action model, which comprises the character motion model, the skin-to-skeleton relationship, mesh-vertex position calculation, and animation-data driving;
step four: building a stage scene through three-dimensional modeling to obtain an application scene;
step five: importing the character model of the dancer, the action model and the application scene model into the product editing platform in a standard format; performing visual operations on the product editing platform to obtain a product; previewing and testing the product, and producing the Huangmei opera product after the test passes;
step six: the product editing platform sends the Huangmei opera product to a server, and the server is in communication connection with VR and AR terminal equipment; the VR and AR terminal equipment display the Huangmei opera product and exchange information with the server.
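To make the Kalman repair in step s2 concrete, the sketch below shows one way smoothed values can be inserted in place of deleted capture points. This is a hypothetical simplification, not the patent's actual implementation: a one-dimensional constant-velocity Kalman filter runs along a single coordinate of a reflective-ball track, and wherever a sample was deleted as erroneous the filter's prediction is inserted instead. The function name, the noise constants q and r, and the 1-D restriction are all assumptions.

```python
def repair_track(track, bad, q=1e-3, r=1e-2):
    # State: position x and velocity v of one coordinate of a reflective
    # ball; the 2x2 covariance is kept as four scalars p00..p11. dt = 1 frame.
    # The first sample is assumed good.
    x, v = float(track[0]), 0.0
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0
    out = [x]
    for t in range(1, len(track)):
        # Predict one frame ahead: position advances by velocity.
        x += v
        p00, p01, p10, p11 = (p00 + p01 + p10 + p11 + q,
                              p01 + p11,
                              p10 + p11,
                              p11 + q)
        if bad[t]:
            # Deleted capture point: insert the smooth predicted value.
            out.append(x)
        else:
            # Measurement update with the observed ball position.
            y = float(track[t]) - x          # innovation
            s = p00 + r                      # innovation variance
            k0, k1 = p00 / s, p10 / s        # Kalman gain
            x, v = x + k0 * y, v + k1 * y
            p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                                  p10 - k1 * p00, p11 - k1 * p01)
            out.append(x)
    return out
```

On a track moving at roughly constant speed, the value inserted at a deleted frame lands on the underlying trend once the filter has had a few frames to converge.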
The specific allocation steps of allocating to corresponding staff for manual repair described in S3 are as follows:
step one: the workers are marked as Gi, where i = 1, …, n;
step two: a worker submits worker information through a computer terminal and sends it to the server; the server receives and audits the worker information and stores information that passes the audit; the storage time is taken as the worker's registration time; the worker information comprises name, age, contact information and length of employment;
step three: obtaining the worker's registration duration from the registration time and the current system time, and marking it as T_Gi;
step four: marking the worker's length of employment as R_Gi and the worker's age as N_Gi; obtaining the worker's allocation value F_Gi using a preset formula in which b1, b2, b3 and b4 are preset proportionality coefficients and B_Gi is the worker's marking value;
step five: setting the number of capture points that were not automatically repaired or were repaired inaccurately to M;
step six: obtaining the capture-point value S_Gi allocated to each worker using a preset formula; rounding the capture-point value as follows: when the value has an integer part and a remainder, the remainder is judged; if the remainder is greater than or equal to 0.5, the worker's allocated number of capture points is the integer part plus one; if the remainder is less than 0.5, the allocated number is the integer part; if the value is already an integer, that number of capture points is allocated to the worker;
step seven: sorting the workers by allocated number of capture points from largest to smallest, and sending the corresponding capture points to each worker's computer terminal according to the sorted order; the worker repairs the capture points through the computer terminal; and the repaired capture points are sent to the product editing platform.
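The allocation of steps four through seven can be sketched as follows. The formulas for the allocation value F_Gi and the capture-point value S_Gi are not reproduced in this text, so this sketch assumes a simple proportional split of the M capture points by allocation value; the rounding rule is the half-up rule stated in step six, and the function name and sample values are illustrative.

```python
def allocate_points(m, f_values):
    # m: number of capture points needing manual repair (M in step five).
    # f_values: {worker_id: allocation value F_Gi}. A proportional split
    # is an assumption; the patent's exact formula for S_Gi is not given
    # in this text.
    total = sum(f_values.values())
    shares = {}
    for worker, f in f_values.items():
        s = m * f / total                # fractional capture-point value S_Gi
        whole = int(s)
        remainder = s - whole
        # Round half-up, as in step six.
        shares[worker] = whole + 1 if remainder >= 0.5 else whole
    # Step seven: sort workers by allocated count, largest first.
    return sorted(shares.items(), key=lambda kv: kv[1], reverse=True)
```

For example, `allocate_points(10, {"G1": 3.0, "G2": 1.0})` yields `[("G1", 8), ("G2", 3)]`; note that rounding each share half-up independently can make the totals differ slightly from M, which the text does not address.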
The specific marking steps of the marking module are as follows:
step one: the worker accesses the marking module through the computer terminal; the marking module acquires the repaired motion data;
step two: the worker accesses the capture points in the motion data through the computer terminal and marks capture points in the motion data; a mark indicates either that the repair could not be completed automatically or that the repair is inaccurate;
step three: the marking module counts the number of capture points the worker accessed and the number of capture points the worker marked;
step four: setting the worker's number of accessed capture points and number of marked capture points to H1_Gi and H2_Gi;
step five: obtaining the worker's marking value using the formula B_Gi = H1_Gi * b5 + H2_Gi * b6, where b5 and b6 are preset proportionality coefficients.
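The marking-value formula in step five, B_Gi = H1_Gi * b5 + H2_Gi * b6, is the one formula that survives intact in this text. A minimal worked instance follows; the coefficient values are illustrative, since b5 and b6 are preset but unspecified.

```python
def marking_value(h1, h2, b5=0.4, b6=0.6):
    # h1: number of capture points the worker accessed (H1_Gi)
    # h2: number of capture points the worker marked (H2_Gi)
    # b5, b6: preset proportionality coefficients (values assumed here)
    return h1 * b5 + h2 * b6

# A worker who accessed 10 capture points and marked 5 of them:
b = marking_value(10, 5)   # 10*0.4 + 5*0.6 = 7.0
```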
Compared with the prior art, the invention has the beneficial effects that:
1. The invention corrects and repairs the motion data: guided by the movement rules of the human body, it debugs the capture points according to the forward-kinematics and inverse-kinematics rules of each joint point, deletes erroneous capture points that differ greatly from their neighbors, inserts smoothed values with a Kalman filtering algorithm to form new capture points that replace the deleted ones, and reasonably distributes the capture points to corresponding workers, who correct and repair the motion data manually. This makes the motion data convenient to repair, improves its accuracy, and thereby improves the accuracy and authenticity of the action content production.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A method of high-immersion interactive action content production, the method comprising the steps of:
step one: acquiring text, picture and video data of Huangmei opera through the Internet and arranging the dance;
step two: recording dance gesture actions with motion capture equipment to acquire dance motion data, while preliminarily building the character model of the dancer in 3ds Max to obtain an .obj file of the three-dimensional model; an .obj file is a text file that can be viewed and edited directly in a notepad editor; the motion capture equipment is a camera, and the specific steps for acquiring the dance motion data are as follows:
s1: acquiring motion data through passive optical motion capture technology, specifically: the camera emits a light source; reflective balls serving as highlight marker points are bound to the moving subject, and the position of each reflective ball is marked as a capture point; the camera collects the reflection signals corresponding to the capture points and processes them to generate motion data;
s2: correcting and repairing the motion data: guided by the movement rules of the human body, debugging the capture points according to the forward-kinematics and inverse-kinematics rules of each joint point, deleting erroneous capture points that differ greatly from their neighbors before and after, and inserting smoothed values with a Kalman filtering algorithm to form new capture points that replace the deleted ones;
s3: acquiring the capture points that were not automatically repaired or were repaired inaccurately through the marking module; the specific marking steps are as follows:
SS1: the worker accesses the marking module through the computer terminal; the marking module acquires the repaired motion data;
SS2: the worker accesses the capture points in the motion data through the computer terminal and marks capture points in the motion data; a mark indicates either that the repair could not be completed automatically or that the repair is inaccurate;
SS3: the marking module counts the number of capture points the worker accessed and the number of capture points the worker marked;
SS4: setting the worker's number of accessed capture points and number of marked capture points to H1_Gi and H2_Gi;
SS5: obtaining the worker's marking value using the formula B_Gi = H1_Gi * b5 + H2_Gi * b6; the capture points are then distributed to corresponding workers for manual repair, with the specific distribution steps as follows:
SSS1: the workers are marked as Gi, where i = 1, …, n;
SSS2: a worker submits worker information through a computer terminal and sends it to the server; the server receives and audits the worker information and stores information that passes the audit; the storage time is taken as the worker's registration time; the worker information comprises name, age, contact information and length of employment;
SSS3: obtaining the worker's registration duration from the registration time and the current system time, and marking it as T_Gi;
SSS4: marking the worker's length of employment as R_Gi and the worker's age as N_Gi; obtaining the worker's allocation value F_Gi using a preset formula in which b1, b2, b3 and b4 are preset proportionality coefficients and B_Gi is the worker's marking value. The formula is designed so that the longer the worker's registration duration, the larger the allocation value and the more marked capture points the worker is assigned; the longer the length of employment, the larger the allocation value; the closer the worker's age is to twenty-eight, the larger the allocation value; and the smaller the worker's marking value, the larger the allocation value;
SSS5: setting the number of capture points that were not automatically repaired or were repaired inaccurately to M;
SSS6: obtaining the capture-point value S_Gi allocated to each worker using a preset formula; rounding the capture-point value as follows: when the value has an integer part and a remainder, the remainder is judged; if the remainder is greater than or equal to 0.5, the worker's allocated number of capture points is the integer part plus one; if the remainder is less than 0.5, the allocated number is the integer part; if the value is already an integer, that number of capture points is allocated to the worker;
SSS7: sorting the workers by allocated number of capture points from largest to smallest, and sending the corresponding capture points to each worker's computer terminal according to the sorted order; the worker repairs the capture points through the computer terminal; sending the repaired capture points to the product editing platform;
the product editing platform combines the repaired capture point with the debugged capture point according to the position and time to obtain dance motion data;
step three: realizing digital character animation through modeling software and establishing the action model, which comprises the character motion model, the skin-to-skeleton relationship, mesh-vertex position calculation, and animation-data driving;
step four: building a stage scene through three-dimensional modeling to obtain an application scene;
step five: importing the character model of the dancer, the action model and the application scene model into the product editing platform in a standard format; performing visual operations on the product editing platform to obtain a product; previewing and testing the product, and producing the Huangmei opera product after the test passes;
step six: the product editing platform sends the Huangmei opera product to a server, and the server is in communication connection with VR and AR terminal equipment; the VR and AR terminal equipment display the Huangmei opera product and exchange information with the server. The product editing platform covers system-platform integration work such as joint debugging of the various release-platform systems, modular packaging of program code containing the interaction logic, compatibility debugging of the various SDKs, setting of embedded algorithm interfaces, modular packaging of digital-material content management, and packaging of the database and the data-statistics bottom modules; at the same time, a complete set of flattened UI experiences is designed around the user editing interface. Immersive and interactive experiences are delivered through VR and AR devices, which are mature prior-art products disclosed, for example, in patents CN105425398B and CN208902976U;
the movement of bones in character animation follows the principles of dynamics, and positioning and animating bones include two types of dynamics: forward kinetics FK and reverse kinetics IK; FK is a method by which animators can put nodes of a hierarchy into a shape resembling a skeleton; the concept of a node; it is a general term used in the computer animation industry; in the case of a character animation skeleton. A node represents any object inside or outside the hierarchy, such as a thigh bone, or an auxiliary point, or a sphere; in one set of FK systems, the general rules are: a parent node in the hierarchy drives the motion of any child node. For example, if you move the forearm (father), the wrist (son) will move with it; the wrist joint is moved, and the forearm stays in the original position;
the process of animation using the FK method is very much like putting action modeling: when the limb (child) figure is placed, the trunk (father) of the figure and all the limbs which keep the relative positions with the father nodes can be moved;
the working principle of the invention is as follows: the camera emits a light source, a light reflecting ball of the highlight mark point is bound on the action object, and the position of the light reflecting ball is marked as a capture point; the camera collects the light reflection signals corresponding to the capture points and processes the light reflection signals to generate action data; correcting and repairing the motion data, debugging the capture points according to the forward dynamics and reverse dynamics rules of each joint point through the motion rule of the human body, deleting the wrong capture points with large difference from the surrounding front and back, and inserting a smooth numerical value by adopting a Kalman filtering algorithm to form a new capture point and replace the deleted capture point; using formulasObtaining a capture point value S corresponding to the distribution of the staffGi(ii) a Rounding the values of the capture points, specifically: when the capture point value contains an integer and a remainder; judging the remainder, and when the remainder is greater than or equal to the zero point five, adding one to the integer of the number of the distributed capture points of the worker; when the remainder is less than the zero point five, the number of the distributed capture points of the worker is an integer; when the number of the capture points is an integer, the number of the capture points is distributed to the staff by the capture point number; sequencing the staff according to the number of the capture points from big to small, and respectively sending the number of the corresponding capture points to the computer terminal of the staff according to the sequencing size; repairing the capture point by a worker through the computer terminal; and capture of the restorationThe point is sent to a product editing platform; acquiring capture points which are not automatically repaired and capture points which are not accurately 
repaired through a marking module; the method adopts the correction and repair of the motion data, debugs the capture points according to the forward dynamics and reverse dynamics rules of each joint point through the self motion rule of the human body, deletes the wrong capture points with large difference from the surrounding, inserts a smooth numerical value by adopting a Kalman filtering algorithm to form a new capture point and replace the deleted capture points, reasonably distributes the capture points to corresponding workers, corrects and repairs the motion data through the workers, and is convenient to repair the motion data, thereby improving the accuracy of the motion data and further improving the accuracy and the authenticity of the motion content production.
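The "combine by position and time" operation performed by the product editing platform can be sketched as follows, under the assumption (not specified in the text) that each capture point carries a timestamp and a marker identifier; manually repaired points replace the debugged points they correspond to, and the combined track is ordered by time:

```python
def merge_capture_points(repaired, debugged):
    # Each capture point is assumed to be a (time, marker_id, position)
    # tuple; the real data layout is not specified in the patent text.
    # Repaired points take precedence over debugged points that share the
    # same time and marker, mirroring "combine by position and time".
    merged = {(t, m): (t, m, p) for (t, m, p) in debugged}
    merged.update({(t, m): (t, m, p) for (t, m, p) in repaired})
    # Order the combined track by time, then by marker id.
    return sorted(merged.values())
```

For example, a debugged but inaccurate hip sample at frame 1 is overwritten by the manually repaired sample for the same frame and marker, while untouched frames pass through unchanged.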
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.
Claims (3)
1. A method for producing high-immersion interactive action content, the method comprising the steps of:
step one: acquiring text, picture and video data of Huangmei opera through the Internet and arranging the dance;
step two: recording dance gestures by using motion capture equipment to acquire dance motion data, and simultaneously preliminarily establishing a character model of a dancer by using three-dimensional software, wherein the motion capture equipment is a camera; the specific steps of the motion capture device for obtaining dance motion data are as follows:
s1: acquiring motion data through passive optical motion capture technology, specifically: the camera emits a light source; reflective balls serving as highlight marker points are bound to the moving subject, and the position of each reflective ball is marked as a capture point; the camera collects the reflection signals corresponding to the capture points and processes them to generate motion data;
s2: correcting and repairing the motion data: guided by the movement rules of the human body, debugging the capture points according to the forward-kinematics and inverse-kinematics rules of each joint point, deleting erroneous capture points that differ greatly from their neighbors before and after, and inserting smoothed values with a filtering algorithm to form new capture points that replace the deleted ones;
s3: acquiring the capture points that were not automatically repaired or were repaired inaccurately through the marking module, distributing them to corresponding workers for manual repair, and sending the repaired capture points to the product editing platform; the product editing platform combines the repaired capture points with the debugged capture points by position and time to obtain the dance motion data;
step three: realizing digital character animation through modeling software and establishing the action model, which comprises the character motion model, the skin-to-skeleton relationship, mesh-vertex position calculation, and animation-data driving;
step four: building a stage scene through three-dimensional modeling to obtain an application scene;
step five: importing the character model of the dancer, the action model and the application scene model into the product editing platform in a standard format; performing visual operations on the product editing platform, the visual operations comprising selection of the application scene, application platform, AR algorithm, SDK, interaction logic and UI interface; obtaining a product after the visual operations, previewing and testing the product, and producing the Huangmei opera product after the test passes;
step six: the product editing platform sends the Huangmei opera product to a server, and the server is in communication connection with VR and AR terminal equipment; the VR and AR terminal equipment display the Huangmei opera product and exchange information with the server.
2. The method for producing high-immersion interactive action content as claimed in claim 1, wherein the step of distributing the capture points to the corresponding staff for manual repair described in S3 comprises the following specific steps:
the method comprises the following steps: the staff are marked as Gi, i = 1, …, n;
step two: a staff member submits worker information through a computer terminal and sends it to a server; the server receives and audits the worker information, and worker information that passes the audit is stored in the server, the storage time serving as the staff member's registration time; the worker information comprises name, age, contact information and length of employment;
step three: obtaining the registration duration of each staff member from the registration time and the current system time, and marking the duration as T_Gi;
Step four: marking the length of employment of the staff member as R_Gi and the age of the staff member as N_Gi; obtaining the staff member's assigned value F_Gi using the formula F_Gi = T_Gi × b1 + R_Gi × b2 + N_Gi × b3 + B_Gi × b4, wherein b1, b2, b3 and b4 are all preset proportionality coefficients and B_Gi is the marking value of the staff member;
step five: setting the total number of capture points that could not be automatically repaired and capture points that were repaired inaccurately as M;
step six: obtaining the number of capture points S_Gi allocated to each staff member using the formula S_Gi = M × F_Gi / (F_G1 + F_G2 + … + F_Gn); rounding the value as follows: when S_Gi contains an integer part and a remainder, the remainder is judged; when the remainder is greater than or equal to 0.5, one is added to the integer part to give the staff member's allocated number of capture points; when the remainder is less than 0.5, the allocated number is the integer part; when S_Gi is already an integer, exactly that number of capture points is allocated to the staff member;
step seven: sorting the staff by allocated number of capture points from largest to smallest, and sending the corresponding capture points to each staff member's computer terminal according to the sorted order; the staff repair the capture points through the computer terminals; and the repaired capture points are sent to the product editing platform.
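The allocation procedure of claim 2 can be sketched as below. The patent's formulas appear only as images in the original, so the weighted score F_Gi and the proportional split S_Gi here are assumptions reconstructed from the surrounding text; only the round-half-up rule of step six is stated explicitly.

```python
# Sketch of claim 2: score each worker (assumed weighted sum of registration
# duration T, employment length R, age N and marking value B with preset
# coefficients b1..b4), then split the M unrepaired capture points in
# proportion to the scores, rounding half up as step six describes.

def assigned_value(t, r, n, b_mark, b1, b2, b3, b4):
    return t * b1 + r * b2 + n * b3 + b_mark * b4   # F_Gi (assumed form)

def allocate(m, f_values):
    total = sum(f_values)
    counts = []
    for f in f_values:
        s = m * f / total                 # S_Gi (assumed proportional share)
        integer = int(s)
        remainder = s - integer
        # step six: remainder >= 0.5 rounds up, otherwise keep integer part
        counts.append(integer + 1 if remainder >= 0.5 else integer)
    return counts

f_values = [assigned_value(24, 12, 30, 5.0, 0.4, 0.3, 0.1, 0.2),
            assigned_value(6, 3, 25, 2.0, 0.4, 0.3, 0.1, 0.2)]
print(allocate(100, f_values))  # -> [74, 26]
```

Note that half-up rounding of each share can allocate slightly more or fewer than M points in total; the claim does not say how such a mismatch is resolved.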
3. The method of producing high-immersion interactive action content according to claim 1, wherein the specific marking steps of the marking module are as follows:
the method comprises the following steps: the staff accesses the marking module through the computer terminal; the marking module acquires the repaired action data;
step two: the staff access the capture points in the motion data through the computer terminal and mark capture points in the motion data; the marks comprise "repair could not be completed automatically" and "repair inaccurate";
step three: the marking module counts the number of capture points accessed by each staff member and the number of capture points marked;
step four: the number of accessed capture points and the number of marked capture points of the staff member are set to H1_Gi and H2_Gi respectively;
Step five: obtaining the marking value B_Gi of the staff member using the formula B_Gi = H1_Gi × b5 + H2_Gi × b6.
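Step five's marking value follows the stated formula directly. A minimal sketch, with illustrative coefficient values, since the claim does not give b5 and b6 (by analogy with b1..b4 they are assumed to be preset proportionality coefficients):

```python
# Marking value from claim 3, step five: B_Gi = H1_Gi*b5 + H2_Gi*b6, where
# H1_Gi is the number of capture points the worker accessed and H2_Gi the
# number the worker marked. The coefficient values below are illustrative.

def marking_value(h1, h2, b5=0.1, b6=0.3):
    return h1 * b5 + h2 * b6

print(marking_value(40, 10))  # 40 accessed, 10 marked
```

This value feeds back into claim 2 as B_Gi when computing the worker's assigned value, so workers who review and flag more capture points receive a larger share of the next batch.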
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010028145.8A CN111243067A (en) | 2020-01-10 | 2020-01-10 | High-immersion interactive action content production method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111243067A true CN111243067A (en) | 2020-06-05 |
Family
ID=70872534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010028145.8A Pending CN111243067A (en) | 2020-01-10 | 2020-01-10 | High-immersion interactive action content production method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111243067A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100782974B1 (en) * | 2006-06-16 | 2007-12-11 | 한국산업기술대학교산학협력단 | Method for embodying 3d animation based on motion capture |
CN104658038A (en) * | 2015-03-12 | 2015-05-27 | 南京梦宇三维技术有限公司 | Method and system for producing three-dimensional digital contents based on motion capture |
CN105931283A (en) * | 2016-04-22 | 2016-09-07 | 南京梦宇三维技术有限公司 | Three-dimensional digital content intelligent production cloud platform based on motion capture big data |
CN108777081A (en) * | 2018-05-31 | 2018-11-09 | 华中师范大学 | A kind of virtual Dancing Teaching method and system |
Non-Patent Citations (2)
Title |
---|
WANG GUANGJUN, CHEN XIAOHUI, TANG QINGFENG, MA JINYU, YANG JIN: "Digitized preservation of Huangmei opera movements based on an inertial motion-capture system", Journal of Anqing Normal University (Natural Science Edition), vol. 22, no. 2, pages 7 - 8 *
ZOU HONG, LI YING, OU JIAN, LYU DESHENG: "Digitization of the Confucius Temple sacrificial ceremony based on motion-capture technology", Computer Systems & Applications, vol. 21, no. 07 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113256684A (en) * | 2021-06-11 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Method, device and equipment for restoring dynamic capture data and storage medium |
CN113256684B (en) * | 2021-06-11 | 2021-10-01 | 腾讯科技(深圳)有限公司 | Method, device and equipment for restoring dynamic capture data and storage medium |
CN116284405A (en) * | 2023-03-15 | 2023-06-23 | 中国科学技术大学 | Nanometer antibody targeting CD150 protein and application thereof |
CN116284405B (en) * | 2023-03-15 | 2024-03-19 | 中国科学技术大学 | Nanometer antibody targeting CD150 protein and application thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||