WO2017099500A1 - Animation generation method and device - Google Patents

Animation generation method and device

Info

Publication number
WO2017099500A1
Authority
WO
WIPO (PCT)
Prior art keywords
animation
dimensional
person
foreground
depth image
Prior art date
2015-12-08
Application number
PCT/KR2016/014398
Other languages
English (en)
Korean (ko)
Inventor
전수영
권지용
Original Assignee
스타십벤딩머신 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2016-12-08
Publication date
2017-06-15
Application filed by 스타십벤딩머신 주식회사
Publication of WO2017099500A1 (fr)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • The present invention relates to an animation generation method and an animation generation device, and more particularly to a method and device for generating a three-dimensional animation based on a depth image in which a user is photographed.
  • The background art described above is technical information that the inventors possessed for, or acquired in the process of, deriving the present invention, and is not necessarily a publicly known technique disclosed to the general public before the filing of the present invention.
  • One embodiment of the present invention aims to provide an animation generation method and an animation generation device for generating a three-dimensional animation based on a depth image in which a person is photographed.
  • One embodiment of the present invention aims to create a natural three-dimensional animation image using a single depth image.
  • One embodiment of the present invention has the object of generating an animation immediately by integrally processing the position of the person, the joint information, the three-dimensional position of the foreground, and the camera position based on the acquired depth image.
  • One embodiment of the present invention aims to provide a fast and natural AR environment by creating an animation in which a three-dimensional character and a three-dimensional background are placed based on the depth image of the user.
  • According to one aspect, the animation generating device may include a depth image acquisition unit for acquiring a depth image in which a person and a foreground are photographed, and an animation generator for generating a three-dimensional animation by arranging a character corresponding to the person and a background corresponding to the foreground.
  • According to another aspect, an animation generation method may include obtaining a depth image in which a person and a foreground are photographed, and generating a 3D animation by arranging a character corresponding to the person and a background corresponding to the foreground.
  • According to another aspect, a computer-readable recording medium having recorded thereon a program for performing the animation generation method may be provided, the animation generation method including obtaining a depth image in which a person and a foreground are photographed, and generating a 3D animation by arranging a character corresponding to the person and a background corresponding to the foreground.
  • According to another aspect, a computer program executed by an animation generating apparatus and stored in a recording medium for performing the animation generation method may be provided, the animation generation method including obtaining a depth image in which a person and a foreground are photographed, and generating a 3D animation by arranging a character corresponding to the person and a background corresponding to the foreground.
  • An embodiment of the present invention can provide an animation generation method and an animation generation device for generating a three-dimensional animation based on a depth image in which a person is photographed.
  • An embodiment of the present invention can generate a natural three-dimensional animation image using a single depth image.
  • An embodiment of the present invention can instantly create an animation by integrally processing the position of the person, the joint information, the three-dimensional position of the foreground, and the three-dimensional position of the camera based on the acquired depth image.
  • An embodiment of the present invention can provide a fast and natural AR environment by creating an animation in which a three-dimensional character and a three-dimensional background are placed based on the photographed depth image of the user.
  • FIG. 1 is a configuration diagram schematically showing a configuration of an apparatus for generating animation according to an embodiment of the present invention.
  • FIG. 2 is a block diagram schematically illustrating a configuration of an apparatus for generating animation according to an embodiment of the present invention.
  • FIGS. 3 to 5 are flowcharts illustrating an animation generation method according to an embodiment of the present invention, and FIG. 6 is an exemplary diagram for describing the animation generation method.
  • The animation generating apparatus 100 may acquire a depth image and generate a 3D animation based on the acquired depth image.
  • In other words, the animation generating apparatus 100 may generate a 3D animation by arranging a character and a background based on a depth image in which a person is photographed.
  • The animation generating apparatus 100 may include a camera 10 and a user terminal 20, and the components included in the animation generating apparatus 100 may communicate through a network N.
  • The network N may be implemented as any kind of wired or wireless network, such as a local area network (LAN), a wide area network (WAN), a value added network (VAN), a personal area network (PAN), a mobile radio communication network, wireless broadband Internet (WiBro), mobile WiMAX, high speed downlink packet access (HSDPA), or a satellite communication network.
  • The camera 10 included in the animation generating device 100 may be any device capable of acquiring a depth image.
  • For example, an infrared-pattern-projection-based depth camera may be used as the camera 10, such as NUI devices for PCs including the Microsoft® Kinect 1 and Kinect 2, the Asus Xtion Pro, the Intel® RealSense, the Occipital® Structure Sensor, and the Google® Project Tango tablet.
  • Alternatively, the camera 10 may be implemented as an array camera that acquires a depth image by combining two or more cameras.
  • The user terminal 20 is an information processing apparatus for generating a 3D animation based on the depth image, and may include an interface for displaying the 3D animation image as the output result.
  • The user terminal 20 may be an information processing apparatus in which a client program for communicating with the camera 10 is installed.
  • The user terminal 20 may be implemented as a computer, a portable terminal, a television, a wearable device, or the like, which may be connected to a remote server or to another terminal and a server through the network N.
  • The computer may include, for example, a notebook, a desktop, or a laptop equipped with a web browser.
  • The portable terminal is, for example, a wireless communication device that guarantees portability and mobility.
  • The television may include an Internet Protocol Television (IPTV), an Internet Television, a terrestrial TV, a cable TV, or the like.
  • The wearable device is, for example, an information processing device of a type that can be worn directly on the human body, such as a watch, glasses, an accessory, clothing, or shoes, and can be connected to a remote server or another terminal via a network, either directly or through another information processing device.
  • The apparatus 100 for generating animation may include a depth image acquisition unit 101, a silhouette image generator 102, a position recognition unit 103, a body recognition unit 104, a position calculator 105, and an animation generator 106.
  • The depth image acquisition unit 101 may acquire a depth image in which a 'person' and a 'foreground' are photographed.
  • Here, the acquisition of the depth image may be performed through the camera 10 described above.
  • The 'person' may include a user who is provided with an animation generated through an embodiment of the present invention. That is, the depth image acquisition unit 101 may acquire a depth image in which the user is photographed.
  • The 'foreground' refers to the surrounding space, excluding the person, among the photographed subjects. Accordingly, the animation generating apparatus 100 may provide a 3D animation based on a depth image in which the user and the foreground, that is, the space where the user exists, are photographed together.
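  • Since the description leaves the acquisition path open, the following is only a minimal sketch of reading an already-captured depth frame into memory, assuming the sensor's frames were exported as 16-bit millimeter PNGs (a common Kinect-style convention); the file name and depth scale are illustrative, not taken from the patent.

```python
import cv2  # pip install opencv-python
import numpy as np

# Hypothetical file name; storing depth as 16-bit millimeters in PNG is a
# common convention for Kinect-style sensors, but the patent does not fix one.
DEPTH_PATH = "frame_0000_depth.png"
DEPTH_SCALE = 0.001  # millimeters -> meters (assumption)

depth_raw = cv2.imread(DEPTH_PATH, cv2.IMREAD_UNCHANGED)  # uint16, H x W
if depth_raw is None:
    raise FileNotFoundError(DEPTH_PATH)
depth_m = depth_raw.astype(np.float32) * DEPTH_SCALE  # depth in meters
print(depth_m.shape, float(depth_m.min()), float(depth_m.max()))
```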
  • The silhouette image generator 102 may recognize the silhouette of a person in the depth image and generate a silhouette image based on the recognized silhouette.
  • For example, the silhouette image generator 102 may generate the silhouette image by recognizing the person's silhouette using a visual recognition algorithm.
  • In other words, the silhouette image generator 102 may recognize the shape of the person by analyzing the depth image through a visual recognition algorithm, and may partition the area corresponding to the person's shape by distinguishing it from the foreground.
  • The silhouette image generator 102 may recognize the partitioned area corresponding to the person's shape as the silhouette, and may generate the silhouette image as a visually processed image, for example by drawing a border around the silhouette, assigning an arbitrary color to the silhouette, or overlaying a separate layer on the silhouette.
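  • The patent does not name a specific visual recognition algorithm, so the sketch below substitutes a deliberately simple stand-in: thresholding the depth band where the person is expected and keeping the largest connected component as the silhouette. The near/far band is an assumption.

```python
import cv2
import numpy as np

def silhouette_mask(depth_m, near=0.5, far=3.0):
    """Crude stand-in for the silhouette image generator 102: threshold the
    depth band where the person is expected, then keep the largest connected
    component as the person's silhouette."""
    band = ((depth_m > near) & (depth_m < far)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(band, connectivity=8)
    if n < 2:  # no foreground component found
        return np.zeros_like(band)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # skip label 0
    return (labels == largest).astype(np.uint8) * 255

# Visual processing as described above, e.g. drawing a border around the mask:
# contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# cv2.drawContours(display_image, contours, -1, (0, 255, 0), 2)
```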
  • The position recognition unit 103 may recognize the area of the person and the area of the foreground based on at least one of the depth image and the silhouette image.
  • The person's area is the area of the depth image in which the person appears, and the foreground area is the area of the depth image in which the foreground appears.
  • For example, the position recognition unit 103 may divide the depth image into the interior of the silhouette, that is, the area where the person is located, and the exterior of the silhouette, that is, the foreground area where the person is not located.
  • The body recognition unit 104 may identify the area where the person is located based on the depth image and recognize the person's body. In doing so, the body recognition unit 104 may refer to the silhouette image together with the depth image.
  • In other words, the body recognition unit 104 may recognize at least one of the joint positions, the face position, and the facial feature points of the person by identifying the person's area in the depth image based on the silhouette image.
  • The facial feature points may include specific points on the main parts constituting the face, such as the eyes, the nose, and the mouth.
  • The body recognition unit 104 may calculate three-dimensional positions relative to the camera 10 for the joint positions and the face position, and may likewise calculate three-dimensional positions relative to the camera 10 for the facial feature points.
  • Here, the three-dimensional position refers to a position in a space having three axes, and may be calculated based on the depth information included in the depth image.
  • Through this, the body recognition unit 104 may track the movement of the joints and the movement of the face of the person based on the recognized joint positions, face position, and facial feature points.
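  • As a concrete illustration of how a camera-relative 3D position can be computed from a pixel and its depth value, here is the standard pinhole back-projection; the intrinsic parameters are assumptions (calibration values of the actual camera 10 would be used in practice).

```python
import numpy as np

# Assumed pinhole intrinsics; real values come from the depth camera's
# calibration (these Kinect-like numbers are purely illustrative).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z):
    """Convert a pixel (u, v) with depth z (meters) into a camera-relative
    3D point, as done for joint positions and facial feature points."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

# e.g. a detected elbow pixel at (400, 260) observed at a depth of 1.8 m
print(backproject(400, 260, 1.8))
```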
  • The position calculator 105 may determine the area of the foreground, in which the person is not located, based on at least one of the depth image and the silhouette image, and may calculate the 3D positions of the structures included in the foreground.
  • In doing so, the position calculator 105 may rely on a structure-from-motion (SfM) algorithm to calculate the three-dimensional positions of the structures in the foreground.
  • In addition, the position calculator 105 may calculate the 3D position of the camera 10 that captured the depth image, based on at least one of the depth image and the silhouette image.
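  • The description names structure-from-motion only as an algorithm family. One typical building block of such pipelines is recovering the camera pose from 3D structure points and their image projections; the sketch below uses OpenCV's PnP solver for that step, with placeholder correspondences and assumed intrinsics.

```python
import cv2
import numpy as np

# Hypothetical correspondences: 3D foreground structure points (meters) and
# their observed pixel projections in the current frame.
object_pts = np.array([[0.0, 0.0, 2.5], [1.0, 0.0, 2.6], [0.0, 1.0, 2.4],
                       [1.0, 1.0, 2.7], [0.5, 0.5, 2.5]])
image_pts = np.array([[320.0, 240.0], [420.0, 238.0], [322.0, 140.0],
                      [424.0, 138.0], [372.0, 190.0]])
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)       # rotation taking world to camera frame
    cam_pos = (-R.T @ tvec).ravel()  # camera center in world coordinates
    print("estimated camera position:", cam_pos)
```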
  • The animation generator 106 may be connected to each of the units that acquire and process the depth image as described above, in order to generate a 3D animation.
  • Related embodiments are as follows.
  • The animation generator 106 may generate a 3D animation by arranging a character corresponding to the person and a background corresponding to the foreground, based on the depth image acquired by the depth image acquisition unit 101.
  • Here, the character and the background may be computer graphic objects produced as two-dimensional or three-dimensional illustrations, or live-action objects obtained by photographing real objects.
  • For example, a three-dimensional animation may be created in which the foreground included in the depth image acquired by the depth image acquisition unit 101 is used as the background.
  • Accordingly, the three-dimensional animation generated by the animation generator 106 may be arranged in various embodiments, for example a three-dimensional character combined with the foreground of the depth image replacing the person, a three-dimensional character combined with a two-dimensional background, or a three-dimensional character combined with a three-dimensional background.
  • In this case, the 3D graphic object or the live-action object may have depth information.
  • Accordingly, the animation generator 106 may provide a 3D animation based on the depth information corresponding to each object to be placed.
  • The animation generator 106 may arrange the 3D character and the 3D background based on the silhouette image generated by the silhouette image generator 102.
  • For example, the animation generator 106 may arrange the character and the background based on the position of the person and the position of the foreground recognized by the position recognition unit 103: the 3D character may be placed inside the silhouette included in the silhouette image, that is, in the area where the person is located, and the 3D background may be placed outside the silhouette, that is, in the area where the person is not located.
  • In addition, the animation generator 106 may place the 3D character based on at least one of the joint positions, the face position, and the facial feature points recognized by the body recognition unit 104. That is, each joint of the 3D character may be positioned to correspond to each recognized joint, for example the neck, the shoulders, the elbows, the waist, the hip joints, and the knees, and the face of the 3D character may be positioned to correspond to the face position and the facial feature points, for example both eyes, the nose, the mouth, and the ears.
  • In addition, the animation generator 106 may apply, to the 3D character, the animation calculated based on the joint movement and facial movement tracked by the body recognition unit 104.
  • Accordingly, the 3D character may be implemented as an animation having movement that follows the movement of the person included in the depth image.
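  • A minimal sketch of the motion transfer just described: tracked joint positions are copied onto matching character joints frame by frame. The joint names and the Character container are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    # joint name -> (x, y, z) position in meters; names are illustrative
    joints: dict = field(default_factory=dict)

def apply_pose(character, tracked_pose):
    """Move each character joint to the matching tracked joint so that the
    3D character mirrors the movement of the person in the depth image."""
    for name, position in tracked_pose.items():
        if name in character.joints:
            character.joints[name] = position

hero = Character(joints={"neck": (0.0, 0.0, 0.0), "l_elbow": (0.0, 0.0, 0.0)})
apply_pose(hero, {"neck": (0.0, 1.5, 2.0), "l_elbow": (0.3, 1.2, 2.0)})
print(hero.joints)
```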
  • The animation generator 106 may also arrange the three-dimensional background based on the three-dimensional positions of the foreground structures calculated by the position calculator 105.
  • Here, a 'structure of the foreground' refers to a subject other than the person included in the foreground area.
  • Through this, the animation generator 106 may immediately generate a 3D animation by arranging a preset 3D character and a preset 3D background.
  • In addition, the animation generator 106 may position a virtual camera at the three-dimensional position of the camera 10 calculated by the position calculator 105, and may render the three-dimensional scene in which the three-dimensional character and the three-dimensional background are arranged, based on the position of the virtual camera.
  • According to an exemplary embodiment, the animation generator 106 may generate the 3D animation based on the depth information corresponding to the 3D character and the 3D background.
  • In other words, the 3D animation rendered by the animation generator 106 may include the 3D character in place of the user and a virtual 3D background in place of the real foreground.
  • In addition, the animation generator 106 may provide the rendered three-dimensional animation to the user after two-dimensionalizing it through a process such as cartoon rendering.
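  • For the virtual-camera step, a renderer typically consumes the recovered camera position as a view matrix. Below is the standard look-at construction under that assumption; the target and up vectors are illustrative choices, not values specified by the patent.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 view matrix that places a virtual camera at `eye` looking
    toward `target` (right-handed, OpenGL-style convention)."""
    eye, target, up = (np.asarray(v, dtype=np.float64) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)   # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)   # right axis
    u = np.cross(s, f)       # corrected up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ -eye  # translate world into camera frame
    return view

# e.g. virtual camera at the recovered sensor position, aimed at the character
print(look_at(eye=(0.0, 1.2, 0.0), target=(0.0, 1.0, 2.5)))
```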
  • In contrast, the conventional technology using NUI devices for PCs has been used in a fixed setup in front of a TV or a computer, focusing on recognizing the position and the joint/face information of the user.
  • Meanwhile, the conventional technology using NUI devices for mobile devices focuses on recognizing the three-dimensional position of the foreground and the position of the camera 10 in order to use them in an AR environment.
  • The animation generation method according to an embodiment of the present invention combines the PC-oriented NUI technology with the mobile-oriented NUI technology, enabling immediate animation production by integrally processing the user's joint information, face information, and foreground information.
  • FIGS. 3 to 5 are flowcharts illustrating a method for generating animation according to an embodiment of the present invention.
  • An animation generation method to be described below includes steps that are processed in time series in the animation generation device 100 illustrated in FIG. 1. Therefore, even if omitted below, the above description of the system illustrated in FIGS. 1 to 2 may be applied to an animation generation method described below.
  • The apparatus 100 for generating animation may acquire a depth image in which a person and a foreground are photographed (S31), and may generate a 3D animation by arranging a character corresponding to the person and a background corresponding to the foreground based on the acquired depth image (S32).
  • The apparatus 100 for generating animation may recognize the silhouette of a person in the depth image to generate a silhouette image (S41), and may generate a 3D animation by arranging a character and a background based on the silhouette image; the placed character and background may be a three-dimensional character and a three-dimensional background, respectively (S42).
  • Based on the generated silhouette image, the apparatus 100 for generating animation may recognize the position of the person and the position of the foreground (S51).
  • The animation generating apparatus 100 may recognize at least one of a joint position, a face position, and a facial feature point based on the position of the person, and may calculate the 3D position of the structure of the foreground and the 3D position of the camera 10 that captured the depth image based on the position of the foreground (S52). To this end, the animation generating apparatus 100 may consider at least one of the silhouette image and the depth image, and likewise, in each of the steps described below, it may perform the step in consideration of at least one of the silhouette image and the depth image.
  • The animation generating apparatus 100 may arrange the three-dimensional character and the three-dimensional background based on the positions of the person and the foreground (S53). In this case, the animation generating apparatus 100 may calculate and apply an animation based on the movement of the joints and the movement of the face (S54).
  • The animation generating apparatus 100 may generate the three-dimensional animation by placing a virtual camera at the three-dimensional position of the camera 10 and rendering the three-dimensional scene in which the three-dimensional character and the three-dimensional background are arranged (S55).
  • As described above, the animation generating apparatus 100 may rely on the depth image and the silhouette image in generating the 3D animation. FIG. 6 illustrates an embodiment in which the animation generating apparatus 100 recognizes the user's joint structure, the face position, and the background structure from the depth image and from the silhouette image generated based on the depth image. In this case, the animation generating apparatus 100 may track the user's joints from the joint structure, track the facial movement from the face position, place the 3D character, and apply whole-body and facial animation.
  • In addition, a 3D animation can be generated by tracking the camera position from the background structure and placing a 3D background and a virtual camera based on the recognized background structure and camera position.
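  • Read end to end, steps S31 through S55 describe a per-frame pipeline. The sketch below only strings the stages together in that order; every helper is a placeholder standing in for the corresponding unit of the apparatus 100, not an implementation.

```python
# Placeholder stages; each stands in for a unit of the apparatus 100 and
# would be replaced by real implementations such as those sketched earlier.
def make_silhouette(depth):           return "silhouette"
def split_regions(depth, sil):        return "person-area", "foreground-area"
def recognize_body(depth, person):    return {"neck": (0.0, 1.5, 2.0)}
def locate_foreground(depth, fg):     return "structure-3d", "camera-pose"
def place_scene(pose, structure):     return {"character": None, "background": structure}
def apply_motion(scene, pose):        scene["character"] = pose
def render(scene, camera):            return f"frame rendered from {camera}"

def generate_animation(depth_frames):
    """Hypothetical end-to-end flow mirroring steps S31-S55."""
    for depth in depth_frames:                         # S31: acquire depth image
        sil = make_silhouette(depth)                   # S41: silhouette image
        person, fg = split_regions(depth, sil)         # S51: person/foreground areas
        pose = recognize_body(depth, person)           # S52: joints, face, features
        structure, cam = locate_foreground(depth, fg)  # S52: foreground + camera 3D
        scene = place_scene(pose, structure)           # S53: character + background
        apply_motion(scene, pose)                      # S54: joint/face animation
        yield render(scene, cam)                       # S55: virtual-camera render

print(list(generate_animation(["frame0", "frame1"])))
```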
  • Through this, the animation generating apparatus 100 may provide an immediate AR environment by generating and outputting an animation in which the user is replaced by a 3D character and the environment around the user is replaced by a 3D background.
  • In other words, the animation generating apparatus 100 generates a silhouette image based on the depth image in which the user is photographed, and integrally processes the three-dimensional positions of the person's joints, the three-dimensional positions of the facial feature points, and the background structure, so that a natural and instant AR environment can be provided to the user.
  • The term '~part' used in the present embodiment refers to software or a hardware component such as a field-programmable gate array (FPGA) or an ASIC, and a '~part' performs certain roles.
  • However, a '~part' is not meant to be limited to software or hardware.
  • A '~part' may be configured to reside in an addressable storage medium or may be configured to operate one or more processors.
  • Thus, as an example, a '~part' includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • The functionality provided within the components and '~parts' may be combined into a smaller number of components and '~parts' or further separated into additional components and '~parts'.
  • In addition, components and '~parts' may be implemented to operate one or more CPUs in a device or a secure multimedia card.
  • The animation generation method may be implemented as a computer program (or computer program product) including instructions executable by a computer.
  • The computer program includes programmable machine instructions processed by a processor, and may be implemented in a high-level programming language, an object-oriented programming language, an assembly language, or a machine language.
  • The computer program may also be recorded on a tangible computer-readable medium (e.g., memory, a hard disk, magnetic/optical media, or a solid-state drive).
  • Accordingly, the animation generation method may be implemented by executing the computer program described above on a computing device.
  • The computing device may include at least some of a processor, a memory, a storage device, a high-speed interface connected to the memory and a high-speed expansion port, and a low-speed interface connected to a low-speed bus and the storage device.
  • Each of these components is connected to the others using various buses, and may be mounted on a common motherboard or mounted in another suitable manner.
  • The processor may process instructions within the computing device, such as instructions stored in the memory or the storage device, for example in order to display graphical information for providing a graphical user interface (GUI) on an external input/output device such as a display connected to the high-speed interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories and appropriate memory types.
  • The processor may also be implemented as a chipset consisting of chips that include a plurality of independent analog and/or digital processors.
  • The memory stores information within the computing device.
  • As an example, the memory may consist of a volatile memory unit or a collection of volatile memory units.
  • As another example, the memory may consist of a nonvolatile memory unit or a collection of nonvolatile memory units.
  • The memory may also be another form of computer-readable medium, such as a magnetic or optical disk.
  • The storage device can provide a large amount of storage space to the computing device.
  • The storage device may be a computer-readable medium or a configuration including such a medium, and may be, for example, a device in a storage area network (SAN) or another configuration, and may include a floppy disk device, a hard disk device, an optical disk device, a tape device, flash memory, or another similar semiconductor memory device or device array.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to an animation generation method and device, and in particular to an animation generation method and device that generate a 3D animation based on depth images in which a user is photographed. According to a first aspect of the present invention, an animation generation device may comprise: a depth image acquisition unit that acquires depth images in which a person and a foreground are photographed; and an animation generation unit that arranges a character corresponding to the person and a background corresponding to the foreground and generates a 3D animation. According to one embodiment of the present invention, by arranging a 3D character and a 3D background and generating an animation based on depth images in which a user is photographed, a fast and natural AR environment can be provided.
PCT/KR2016/014398 2015-12-08 2016-12-08 Animation generation method and device WO2017099500A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2015-0174429 2015-12-08
KR20150174429 2015-12-08

Publications (1)

Publication Number Publication Date
WO2017099500A1 (fr) 2017-06-15

Family

ID=59013815

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/014398 WO2017099500A1 (fr) 2015-12-08 2016-12-08 Animation generation method and device

Country Status (2)

Country Link
KR (2) KR20170067673A (fr)
WO (1) WO2017099500A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140559A (zh) * 2021-12-15 2022-03-04 深圳市前海手绘科技文化有限公司 Animation generation method and device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102464944B1 (ko) * 2018-10-19 2022-11-09 한국과학기술원 Method and apparatus for reproducing camera work
US11645802B2 (en) 2018-12-19 2023-05-09 Anipen Co., Ltd. Method, system, and non-transitory computer-readable recording medium for generating animation sequence
KR102337020B1 (ko) * 2019-01-25 2021-12-08 주식회사 버츄얼넥스트 Augmented reality video production system using 3D scan data and method therefor
KR102185454B1 (ko) * 2019-04-17 2020-12-02 한국과학기술원 Method and apparatus for three-dimensional sketching
KR102479374B1 (ko) * 2021-12-20 2022-12-19 유진 Map generation system using 3D rendering

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130100140A1 (en) * 2011-10-25 2013-04-25 Cywee Group Limited Human body and facial animation systems with 3d camera and method thereof
KR20130067882A (ko) * 2011-12-14 2013-06-25 한국전자통신연구원 Method and apparatus for generating a real-time three-dimensional appearance reconstruction model using multiple GPUs
KR101519775B1 (ko) * 2014-01-13 2015-05-12 인천대학교 산학협력단 Method and apparatus for generating animation based on motion recognition of an object
KR20150068895A (ko) * 2013-12-12 2015-06-22 한국전자통신연구원 Apparatus and method for generating three-dimensional output data
KR20150114016A (ko) * 2014-03-28 2015-10-12 주식회사 다림비젼 Method and apparatus for generating virtual studio images using 3D object modules

Also Published As

Publication number Publication date
KR102012405B1 (ko) 2019-08-20
KR20170067673A (ko) 2017-06-16
KR20180098507A (ko) 2018-09-04

Similar Documents

Publication Publication Date Title
WO2017099500A1 (fr) 2017-06-15 Animation generation method and device
KR102541812B1 (ko) Augmented reality within a field of view that includes a mirror image
US9811894B2 (en) Image processing method and apparatus
JP6276475B2 (ja) Method, apparatus, and medium for synchronizing color video and depth video
US11398044B2 (en) Method for face modeling and related products
CN111787242B (zh) Method and apparatus for virtual fitting
CN114219878B (zh) Animation generation method and apparatus for virtual characters, storage medium, and terminal
TW201835723A (zh) Graphics processing method and apparatus, virtual reality system, and computer storage medium
CN111968207A (zh) Animation generation method, apparatus, system, and storage medium
WO2021082801A1 (fr) Augmented reality processing method and apparatus, system, storage medium, and electronic device
CN112199016B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN103578135A (zh) Stage interaction integrated system combining virtual images with real scenes and implementation method
WO2016000309A1 (fr) Wearable-device-based augmented reality method and system
US20180288387A1 (en) Real-time capturing, processing, and rendering of data for enhanced viewing experiences
JP2016530581A (ja) In-situ generation of plane-specific feature targets
US20220358662A1 (en) Image generation method and device
CN106598211A (zh) Gesture interaction system and recognition method for a multi-camera wearable helmet
CN203630822U (zh) Stage interaction integrated system combining virtual images with real scenes
TW201826223A (zh) Depth map generation device
WO2022071743A1 (fr) Body shape and pose estimation via volumetric regressor for raw three-dimensional scan models
CN113793420B (zh) Depth information processing method and apparatus, electronic device, and storage medium
JP7387001B2 (ja) Image composition method, apparatus, and storage medium
Narducci et al. Enabling consistent hand-based interaction in mixed reality by occlusions handling
JP2007102478A (ja) Image processing apparatus, image processing method, and semiconductor integrated circuit
CN103729060B (zh) Multi-environment virtual projection interactive system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16873365

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16873365

Country of ref document: EP

Kind code of ref document: A1