JPWO2020009800A5 - Google Patents
- Publication number
- JPWO2020009800A5 (application JP2020571504A)
- Authority
- JP
- Japan
- Prior art keywords
- component
- input
- linear
- training
- angular
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Description
In some embodiments, any suitable neural network (NN) may be used as the interpolation engine, so long as the angular and linear components are kept separate. In some embodiments, a feedforward neural network (FFNN) may be used, such as a fully connected network with a single hidden layer that uses a rectified linear unit (ReLU) activation function. In some embodiments, the hidden layer may be incorporated as a residual neural network (ResNet) block.
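By way of a non-limiting illustration, the arrangement described above — a fully connected network with a single ReLU hidden layer incorporated as a residual block, with the angular and linear components kept separate — may be sketched as follows. This is a NumPy sketch with invented dimensions and untrained random weights; the patent does not prescribe any particular implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class ResidualFFNN:
    """Single-hidden-layer fully connected network whose hidden layer acts as
    a residual (ResNet-style) block: the input, linearly projected to the
    hidden width, is added back to the hidden activation before the output
    layer."""

    def __init__(self, dim_in, dim_hidden, dim_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (dim_in, dim_hidden))
        self.b1 = np.zeros(dim_hidden)
        self.Ws = rng.normal(0.0, 0.1, (dim_in, dim_hidden))  # skip projection
        self.W2 = rng.normal(0.0, 0.1, (dim_hidden, dim_out))
        self.b2 = np.zeros(dim_out)

    def forward(self, x):
        h = relu(x @ self.W1 + self.b1) + x @ self.Ws  # residual connection
        return h @ self.W2 + self.b2

# Angular and linear components are kept separate: each gets its own network.
angular_net = ResidualFFNN(dim_in=4, dim_hidden=16, dim_out=4)  # e.g. a quaternion
linear_net = ResidualFFNN(dim_in=3, dim_hidden=16, dim_out=3)   # e.g. a translation

pose_angular = np.array([1.0, 0.0, 0.0, 0.0])
pose_linear = np.array([0.5, 1.2, -0.3])
out = np.concatenate([angular_net.forward(pose_angular),
                      linear_net.forward(pose_linear)])
```

Here the two components never mix inside a network; only the final outputs are concatenated back into a pose vector.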
The present invention provides, for example, the following.
(Item 1)
A method comprising:
receiving input data comprising at least one angular component and at least one linear component;
providing the input data as input to at least one neural network (NN) trained to evaluate the at least one angular component differently from the at least one linear component; and
receiving output data generated by the at least one NN based on the different evaluations of the at least one angular component and the at least one linear component.
(Item 2)
The method of item 1, wherein the at least one angular component is in the special orthogonal group in three-dimensional space (SO(3)).
(Item 3)
The method of item 1, wherein at least one of the neural networks is a feedforward neural network (FFNN).
(Item 4)
The method of item 3, wherein the at least one FFNN is a fully connected network.
(Item 5)
The method of item 4, wherein the FFNN comprises a single hidden layer.
(Item 6)
The method of item 5, wherein the FFNN comprises a rectified linear unit activation function.
(Item 7)
The method of item 6, wherein the hidden layer is a residual NN block.
(Item 8)
The method of item 1, wherein at least one of the NNs is a radial basis function neural network (RBFNN).
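A radial basis function network of the kind referenced in item 8 interpolates a query among stored sample nodes. The following is a minimal hypothetical sketch (Gaussian kernel, invented nodes and target values; the patent does not specify the kernel or data):

```python
import numpy as np

def rbf_interpolate(query, nodes, values, sigma=1.0):
    """Radial basis function interpolation: each sample node holds a training
    input and its target value; the query result is a blend of all targets,
    weighted by a Gaussian kernel on the distance to each node."""
    d2 = np.sum((nodes - query) ** 2, axis=1)   # squared distance to each node
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian kernel weights
    w /= w.sum()                                # normalize to a convex blend
    return w @ values                           # weighted blend of targets

# Three hypothetical training poses (2-D inputs) with scalar target values.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([0.0, 1.0, 2.0])
y = rbf_interpolate(np.array([0.0, 0.0]), nodes, values, sigma=0.5)
```

Because the weights are normalized, the result always lies within the range of the stored target values, with nearer nodes contributing more.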
(Item 9)
The method of item 1, wherein the input data describes a pose of a digital character.
(Item 10)
The method of item 1, wherein the input data represents a lower-order skeleton of a digital character and the output data represents a higher-order skeleton of the digital character.
(Item 11)
The method of item 1, wherein the output data describes a pose of a digital character.
(Item 12)
The method of item 1, wherein one or more of the input data and the output data further comprises a third component.
(Item 13)
The method of item 12, wherein the angular component, the linear component, and the third component are each different components of motion.
(Item 14)
The method of item 13, wherein:
the at least one angular component describes rotational motion;
the at least one linear component describes translational motion; and
the third component describes scale.
(Item 15)
The method of item 12, wherein the at least one NN comprises a first FFNN that evaluates the at least one angular component and a second FFNN that evaluates the at least one linear component.
(Item 16)
The method of item 15, wherein the at least one NN comprises a third FFNN that evaluates the third component.
(Item 17)
The method of item 1, wherein the at least one NN comprises a plurality of sample nodes, each sample node corresponding to a training pose, and at least one of the training poses comprises at least one angular component and a linear component.
(Item 18)
The method of item 1, wherein the at least one NN evaluates the at least one angular component differently from the at least one linear component by evaluating the at least one angular component in the special orthogonal group in three-dimensional space (SO(3)), and by evaluating the at least one linear component using a Euclidean distance formula.
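The differing evaluations referenced in item 18 can be illustrated with the standard metrics: the geodesic angle on SO(3) for the angular component, and the Euclidean norm for the linear component. This NumPy sketch uses the usual trace-based formula for the SO(3) geodesic; the particular representation and formulas are standard choices, not taken from the patent.

```python
import numpy as np

def so3_distance(R1, R2):
    """Geodesic distance on SO(3): the rotation angle of R1^T R2,
    recovered from its trace via angle = arccos((tr - 1) / 2)."""
    R = R1.T @ R2
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle)

def euclidean_distance(p1, p2):
    """Plain Euclidean distance for the linear (translation) components."""
    return np.linalg.norm(p1 - p2)

def rot_z(theta):
    """Rotation by theta about the z axis, as a 3x3 matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

ang = so3_distance(rot_z(0.0), rot_z(np.pi / 2))  # pi/2 apart about z
lin = euclidean_distance(np.array([0.0, 0.0, 0.0]),
                         np.array([3.0, 4.0, 0.0]))  # 5.0
```

Treating a rotation as if it were a point in Euclidean space distorts distances between poses; measuring angular components on SO(3) avoids that distortion while translations keep their natural Euclidean metric.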
(Item 19)
The method of item 1, further comprising training the one or more NNs, the training comprising:
receiving training data comprising training input data and training output data, wherein the training input data and the training output data represent one or more training poses, and at least one of the one or more training poses comprises an input angular component, an input linear component, an output angular component, and an output linear component;
grouping the input angular components from each of the one or more poses into an input angular component group;
grouping the input linear components from each of the one or more poses into an input linear component group; and
providing the training input data as input and training the at least one NN, wherein the input angular component group is evaluated differently from the input linear component group, and the evaluation yields the output angular component and the output linear component.
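The grouping steps of item 19 amount to collecting like components across poses so that each group can be fed to, and evaluated by, its own network. A minimal hypothetical sketch with invented pose data:

```python
import numpy as np

# Hypothetical training poses: each pose carries an angular part
# (here a quaternion) and a linear part (a translation vector).
poses = [
    {"angular": [1.0, 0.0, 0.0, 0.0], "linear": [0.0, 0.0, 0.0]},
    {"angular": [0.7, 0.7, 0.0, 0.0], "linear": [1.0, 2.0, 3.0]},
]

# Group the angular components and the linear components separately.
angular_group = np.array([p["angular"] for p in poses])  # shape (n_poses, 4)
linear_group = np.array([p["linear"] for p in poses])    # shape (n_poses, 3)
```

With the components grouped this way, the angular batch can be evaluated under an SO(3) metric while the linear batch is evaluated under a Euclidean one.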
(Item 20)
A system comprising:
at least one processor that executes at least one neural network (NN); and
a memory communicatively coupled to the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving input data comprising at least one angular component and at least one linear component;
providing the input data as input to at least one neural network (NN) trained to evaluate the at least one angular component differently from the at least one linear component; and
receiving output data generated by the at least one NN based on the different evaluations of the at least one angular component and the at least one linear component.
Claims (19)
A method comprising:
receiving input data comprising at least one angular component and at least one linear component;
providing the input data as input to at least one neural network (NN) trained to evaluate the at least one angular component differently from the at least one linear component; and
receiving output data generated by the at least one NN based on the different evaluations of the at least one angular component and the at least one linear component,
wherein the at least one angular component is in a special orthogonal group in three-dimensional space, the special orthogonal group being assigned a weight representing a relative contribution to the overall movement of a digital character.
The method of claim 12, wherein:
the at least one angular component describes rotational motion;
the at least one linear component describes translational motion; and
the third component describes scale.
The method of claim 1, further comprising training the one or more NNs, the training comprising:
receiving training data comprising training input data and training output data, wherein the training input data and the training output data represent one or more training poses, and at least one of the one or more training poses comprises an input angular component, an input linear component, an output angular component, and an output linear component;
grouping the input angular components from each of the one or more poses into an input angular component group;
grouping the input linear components from each of the one or more poses into an input linear component group; and
providing the training input data as input and training the at least one NN, wherein the input angular component group is evaluated differently from the input linear component group, and the evaluation yields the output angular component and the output linear component.
A system comprising:
at least one processor that executes at least one neural network (NN); and
a memory communicatively coupled to the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving input data comprising at least one angular component and at least one linear component;
providing the input data as input to at least one neural network (NN) trained to evaluate the at least one angular component differently from the at least one linear component; and
receiving output data generated by the at least one NN based on the different evaluations of the at least one angular component and the at least one linear component,
wherein the at least one angular component is in a special orthogonal group in three-dimensional space, the special orthogonal group being assigned a weight representing a relative contribution to the overall movement of a digital character.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862693237P | 2018-07-02 | 2018-07-02 | |
US62/693,237 | 2018-07-02 | ||
PCT/US2019/037952 WO2020009800A1 (en) | 2018-07-02 | 2019-06-19 | Methods and systems for interpolation of disparate inputs |
Publications (3)
Publication Number | Publication Date |
---|---|
JP2021529380A JP2021529380A (en) | 2021-10-28 |
JPWO2020009800A5 true JPWO2020009800A5 (en) | 2022-06-27 |
JP7437327B2 JP7437327B2 (en) | 2024-02-22 |
Family
ID=69055205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2020571504A Active JP7437327B2 (en) | 2018-07-02 | 2019-06-19 | Method and system for interpolation of heterogeneous inputs |
Country Status (5)
Country | Link |
---|---|
US (1) | US11669726B2 (en) |
EP (1) | EP3818530A4 (en) |
JP (1) | JP7437327B2 (en) |
CN (1) | CN112602090A (en) |
WO (1) | WO2020009800A1 (en) |
- 2019-06-19 US US16/446,236 patent/US11669726B2/en active Active
- 2019-06-19 CN CN201980056102.6A patent/CN112602090A/en active Pending
- 2019-06-19 JP JP2020571504A patent/JP7437327B2/en active Active
- 2019-06-19 EP EP19829805.1A patent/EP3818530A4/en active Pending
- 2019-06-19 WO PCT/US2019/037952 patent/WO2020009800A1/en unknown