WO2020017668A1 - Method and apparatus for generating an avatar using multi-view image matching - Google Patents

Method and apparatus for generating an avatar using multi-view image matching

Info

Publication number
WO2020017668A1
WO2020017668A1 (PCT/KR2018/007996)
Authority
WO
WIPO (PCT)
Prior art keywords
point clouds
avatar
point
matching
view image
Prior art date
Application number
PCT/KR2018/007996
Other languages
English (en)
Korean (ko)
Inventor
신후랑
Original Assignee
주식회사 이누씨
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 이누씨 filed Critical 주식회사 이누씨
Priority to PCT/KR2018/007996 priority Critical patent/WO2020017668A1/fr
Publication of WO2020017668A1 publication Critical patent/WO2020017668A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation

Definitions

  • The present embodiment relates to a method and apparatus for generating an avatar using multi-view image matching.
  • An avatar represents a user's other self in a virtual space such as the Internet or a mobile communication environment, and can be given any expressible form, not only that of a human but also of an animal or a plant.
  • An avatar is typically made to resemble the user's own appearance and can represent the user's characteristics; it may also carry meanings such as curiosity, vicarious satisfaction, or the ideal persona an individual desires. Users have therefore become very interested in creating and using avatars that represent them.
  • Photographs and video have conventionally been used to represent an individual's appearance, but such data are enormous in size and difficult to transmit and process on the Internet or on a mobile communication terminal. Because a user cannot edit or control image or video data, the data cannot easily be presented to other users, and the user's personality cannot be properly displayed.
  • Accordingly, two-dimensional or three-dimensional avatars are configured in forms that can express an individual's personality, and avatars are exchanged between users on a network, or data exchange using avatars is performed ever more actively.
  • Conventionally, an avatar is created in one of several ways: a designer draws the avatar while looking directly at the user or at the user's photograph, the user selects a desired avatar from predefined avatars, or the avatar is assembled by combining items stored in a database.
  • In the latter methods, the user may create his or her avatar by simple operations.
  • However, although an avatar created by the above-described methods can emphasize the personality of the form the user desires, it cannot be produced in a form closely resembling the user's actual appearance.
  • In another conventional avatar generation method, the user transmits a picture of his or her face to an avatar service company, and a designer at the company creates an appropriate avatar from the provided image.
  • In this case the avatar can be made close to the user's appearance; however, when each avatar is produced by a designer, considerable time, resources, and effort are consumed.
  • Alternatively, the user's face can be recognized and modeled from a live image and the avatar completed from the modeled face image, but it is difficult to create an avatar with high similarity to the user from only one image, while generating a highly similar avatar from a plurality of images imposes a large computational load and slows avatar generation.
  • The present embodiment therefore aims to provide a multi-view image matching method and apparatus in which fast modeling minimizes the computational load when an avatar is generated using multi-view image matching, so that virtualized-view data can quickly be generated as, or transformed into, an avatar, composited with a background image, or used to replace another character or body part.
  • According to one aspect, the present embodiment provides an image matching apparatus comprising: an image acquisition unit for obtaining a plurality of multi-view image information items produced by photographing a specific object from multiple viewpoints with a plurality of cameras;
  • an extraction unit for recognizing an object from each of the plurality of multi-view image information items, extracting feature points for the object, and extracting a point cloud based on the feature points;
  • a duplication confirmation unit for performing mutual position matching between the point clouds and then extracting the point clouds in which overlap occurs to generate duplicate point data; and a matching unit for removing the point clouds corresponding to the duplicate point data from the full set of point clouds and performing modeling that minimizes the computational load among the remaining point clouds, thereby generating an avatar capable of 360° rotation to enable a 3D virtualized view.
  • According to another aspect, the present embodiment provides an image matching method comprising: obtaining a plurality of multi-view image information items produced by photographing a specific object from multiple viewpoints with a plurality of cameras; recognizing an object from each of the plurality of multi-view image information items, extracting feature points for the object, and extracting a point cloud based on the feature points; performing mutual position matching between the point clouds and then extracting the point clouds in which overlap occurs to generate duplicate point data; and removing the point clouds corresponding to the duplicate point data from the full set of point clouds and performing modeling that minimizes the computational load among the remaining point clouds, thereby generating an avatar capable of 360° rotation to enable a 3D virtualized view.
  • As described above, according to the present embodiment, fast modeling minimizes the computational load when an avatar is generated using multi-view image matching, so that virtualized-view data can quickly be generated as, or transformed into, an avatar; moreover, the avatar can be composited with a background image or used to replace another character or body part.
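As a rough illustration of the duplicate-removal idea summarized above, the following Python sketch merges several point clouds and keeps only one copy of points that overlap within a tolerance. The data layout, tolerance, and function name are assumptions for illustration, not the patent's actual implementation:

```python
import math

def merge_point_clouds(clouds, tol=1e-3):
    """Merge multi-view point clouds, dropping points that duplicate an
    already-kept point within `tol` (a stand-in for the patent's mutual
    position matching and duplicate-point removal steps)."""
    merged = []
    for cloud in clouds:
        for p in cloud:
            # Keep the point only if no already-merged point overlaps it.
            if not any(math.dist(p, q) < tol for q in merged):
                merged.append(p)
    return merged

# Two views of the same corner: one point is seen by both cameras.
view_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
view_b = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
merged = merge_point_clouds([view_a, view_b])
# The duplicate (1.0, 0.0, 0.0) is kept only once, leaving 3 points.
```

In the patent's terms, every point dropped here is one less point for the later modeling step to process, which is where the claimed reduction in computational load comes from.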
  • FIGS. 1A and 1B are block diagrams schematically illustrating a multiview image matching system according to an exemplary embodiment.
  • FIG. 2 is a block diagram schematically illustrating a user terminal for multiview image matching according to the present embodiment.
  • FIG. 3 is a block diagram schematically illustrating a multiview image matching device according to an embodiment.
  • FIG. 4 is a view for explaining mesh modeling according to an embodiment.
  • FIG. 5 is a view for explaining the appearance change and the background replacement of the avatar according to the present embodiment.
  • FIG. 6 illustrates avatar rotation according to the present embodiment.
  • FIG. 7 is a flowchart illustrating a method of generating an avatar using multi-view image registration according to the present embodiment.
  • FIGS. 1A and 1B are block diagrams schematically illustrating a multiview image matching system according to an exemplary embodiment.
  • The multi-view image matching system includes a plurality of cameras 110_1, 110_2, ..., 110_N, a plurality of control devices 120_1, 120_2, ..., 120_N, a user terminal 130 for matching, and a streaming server 140.
  • The plurality of cameras 110_1, 110_2, and 110_N are apparatuses for photographing a specific object.
  • The plurality of cameras 110_1, 110_2, and 110_N photograph a specific object (e.g., a user) from multiple viewpoints and transmit the captured images to the plurality of control devices 120_1, 120_2, and 120_N.
  • The plurality of cameras 110_1, 110_2, and 110_N are peripheral devices used in connection with the control devices 120_1, 120_2, and 120_N; they can recognize a specific object, allowing games and entertainment to be experienced without a separate controller.
  • The plurality of cameras 110_1, 110_2, and 110_N may be, for example, peripheral devices such as a Kinect.
  • The plurality of cameras may be provided with separate sensors; when so equipped, the sensors can be used to recognize the motions or gestures of a specific object (user), and a built-in microphone module can recognize voice.
  • The plurality of cameras 110_1, 110_2, and 110_N require a separate power source when connected to the plurality of control devices 120_1, 120_2, and 120_N.
  • The sensors provided in the plurality of cameras 110_1, 110_2, and 110_N are depth cameras, and provide RGB images and joint-tracking information as well as depth information in real time.
  • Using the data provided by the depth sensors, the plurality of cameras can detect the human body parts or poses required for gesture recognition and support games or human-computer interaction.
  • The plurality of control devices 120_1, 120_2, and 120_N are image-processing apparatuses; they receive the captured information of a specific object (e.g., a user) from the plurality of cameras 110_1, 110_2, and 110_N and generate multi-view image information.
  • The plurality of control devices 120_1, 120_2, and 120_N transmit the multi-view image information of the photographed object (e.g., a user) to the user terminal 130.
  • The user terminal 130 generates an avatar corresponding to a specific object (e.g., a user) by quickly matching the multi-view images so that the computational load is minimized.
  • The user terminal 130 includes an image matching program 232 and generates the avatar using the installed image matching program 232.
  • the user terminal 130 transmits the generated avatar to the streaming server 140.
  • The user terminal 130 creates an avatar covering all surfaces of a specific object (e.g., the user's body shape) using the plurality of cameras 110_1, 110_2, and 110_N (at least three cameras).
  • the streaming server 140 transmits the avatar received from the user terminal 130 to a smart phone, a tablet, a notebook, and the like.
  • The streaming server 140 transmits and plays multimedia files such as sound (music) and video.
  • Ordinarily a file is played only after it has been downloaded, and downloading a large file such as a video can take a long time; the streaming server 140 plays the file while it is still being downloaded, which can greatly reduce the waiting time.
  • The streaming server 140 may also stream the avatar received from the user terminal 130 in real time over a computer network.
  • The user terminal 130 collects the 3D virtualized-view image data captured by the plurality of cameras 110_1, 110_2, and 110_N, generates a 3D avatar, and transmits the 3D avatar to the streaming server 140.
  • The streaming server 140 may transmit the 3D avatar received from the user terminal 130 to general users' mobile devices, for example to provide an online virtual fan-meeting service.
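The play-while-downloading behavior described above can be sketched as a generator that yields a media file in fixed-size chunks, so a client can begin playback before the transfer finishes. The function name and chunk size are illustrative assumptions, not the streaming server's actual protocol:

```python
import io

def stream_chunks(fileobj, chunk_size=4):
    """Yield a file's bytes in fixed-size chunks so the receiver can
    start playing before the whole file has been transferred."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# A small in-memory 'media file' standing in for avatar/video data.
media = io.BytesIO(b"avatar-frame-data")
received = b"".join(stream_chunks(media))
assert received == b"avatar-frame-data"  # nothing lost in chunking
```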
  • FIG. 2 is a block diagram schematically illustrating a user terminal for multiview image matching according to the present embodiment.
  • The user terminal 130 includes a CPU 210, a main memory 220, a memory 230, a display unit 240, an input unit 250, and a communication unit 260. The components included in the user terminal 130 are not necessarily limited thereto.
  • the user terminal 130 refers to an electronic device that performs voice or data communication via a network according to a user's key manipulation.
  • the user terminal 130 includes a memory for storing a program or protocol for communicating with a game server via a network, a microprocessor for executing and controlling the program, and the like.
  • The user terminal 130 is preferably a personal computer (PC), but is not necessarily limited thereto; it may be an electronic device such as a smartphone, a tablet, a laptop, a personal digital assistant (PDA), a game console, a portable multimedia player (PMP), a PlayStation Portable (PSP), a wireless communication terminal, or a media player.
  • The user terminal 130 comprises various devices including (i) a communication device such as a communication modem for communicating with other devices or with wired/wireless networks, (ii) a memory for storing various programs and data, and (iii) a microprocessor for executing programs and performing computation and control.
  • The memory may be a computer-readable recording/storage medium such as random access memory (RAM), read-only memory (ROM), flash memory, an optical disk, a magnetic disk, or a solid state disk (SSD).
  • the CPU 210 loads the image registration program 232 according to the present embodiment from the memory 230 to the main memory 220.
  • The CPU 210 receives a user's commands via the input unit 250, which includes a touch screen, a mouse, and a keyboard.
  • The CPU 210 executes the image matching program 232 and outputs the result to the display unit 240.
  • The CPU 210 downloads the image registration program 232 via the communication unit 260 and stores it in the memory 230.
  • the image registration program 232 obtains a plurality of multi-view image information obtained by photographing a specific object from a plurality of cameras 110_1, 110_2, and 110_N.
  • the image registration program 232 recognizes an object from each of the plurality of multi-view image information, extracts a feature point for the object, and extracts a point cloud based on the feature point.
  • the image matching program 232 generates the overlapping point data by extracting the point clouds where the overlap between the point clouds occurs after performing mutual position matching between the point clouds.
  • The image matching program 232 removes the point clouds corresponding to the duplicate point data from the full set of point clouds and performs modeling that minimizes the computational load among the remaining point clouds, thereby enabling the 3D virtualized view.
  • The communication unit 260 performs wired and wireless communication, including near field communication (NFC), 2G, 3G, Long Term Evolution (LTE), time-division LTE (TD-LTE), wireless local area network (WLAN) including Wi-Fi, and wired LAN.
  • the communication unit 260 transmits and receives data with the plurality of control devices 120_1, 120_2, and 120_N by performing wired or wireless communication.
  • FIG. 3 is a block diagram schematically illustrating a multiview image matching device according to an embodiment.
  • the multi-view image registration device 200 refers to a device corresponding to the image registration program 232.
  • the image matching program 232 according to the present embodiment may be implemented as a separate device including hardware.
  • The multi-view image matching device 200 includes an image acquisition unit 310, an extraction unit 312, a duplication checker 314, a matching unit 316, a sensor unit 320, a composite image acquisition unit 322, and an image synthesizer 324.
  • The components included in the multi-view image registration device 200 are not necessarily limited thereto.
  • Each component included in the multi-view image registration device 200 may be connected by a communication path linking the software or hardware modules inside the device so as to operate organically. These components communicate using one or more communication buses or signal lines.
  • Each component of the multi-view image registration device 200 illustrated in FIG. 3 refers to a unit that processes at least one function or operation, and may be implemented as a software module, a hardware module, or a combination of software and hardware.
  • the image acquisition unit 310 obtains a plurality of multi-view image information obtained by photographing a specific object from a plurality of cameras.
  • the extractor 312 recognizes an object from each of the plurality of multi-view image information.
  • the extractor 312 extracts a feature point for the object.
  • the extractor 312 extracts a point cloud based on the feature points.
  • the duplication checker 314 performs mutual location matching between the point clouds.
  • the duplicate confirmation unit 314 extracts point clouds in which overlap between point clouds occurs, and generates duplicate point data.
  • The matching unit 316 removes the point clouds corresponding to the duplicate point data from the full set of point clouds and computes the point clouds that finally remain.
  • The matching unit 316 performs modeling that minimizes the computational load among the finally remaining point clouds, thereby generating an avatar capable of 360° rotation to enable a 3D virtualized view.
  • The matching unit 316 generates the avatar by matching the remaining point clouds based on grid information according to the multiple views received from the image acquisition unit 310, so that the computational load among the remaining point clouds is minimized.
  • The matching unit 316 extracts neighboring point clouds from the remaining point clouds based on the grid information and, while keeping the mesh model structure of each neighboring point cloud intact, quickly matches only the adjacent point clouds (the connecting parts) between the mesh model structures to generate the avatar.
  • The matching unit 316 extracts the x and y coordinate information contained in the grid information.
  • The matching unit 316 extracts the point clouds located on the same grid among the remaining point clouds by comparing the x and y coordinate information.
  • The matching unit 316 recognizes the point clouds located on the same grid as neighboring point clouds.
  • The matching unit 316 performs new mesh modeling only between adjacent points of the neighboring point clouds, while keeping the already obtained mesh model structures of the neighboring point clouds without further computation.
  • The matching unit 316 performs triangulation using only the adjacent points of the neighboring point clouds to carry out the new mesh modeling.
  • The matching unit 316 matches a point cloud corresponding to a specific body part of one avatar with a point cloud corresponding to a specific body part of another avatar so that the computational load is minimized.
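The grid-based neighbor search described above can be sketched as follows: points from two already-meshed clouds are binned by their x, y grid cell, and only the points that land in a shared cell are reported as the connecting parts that need new meshing. The cell size, data layout, and function names are illustrative assumptions, not the patent's actual implementation:

```python
from collections import defaultdict

def grid_cell(point, cell=1.0):
    """Map a 3D point to its (x, y) grid cell; z is ignored, mirroring
    the patent's use of x, y coordinate information on the grid."""
    x, y, _z = point
    return (int(x // cell), int(y // cell))

def find_seam_points(cloud_a, cloud_b, cell=1.0):
    """Return points of cloud_b that fall in grid cells occupied by
    cloud_a: only these 'connecting parts' get new mesh modeling,
    while each cloud's existing mesh is kept intact."""
    cells_a = defaultdict(list)
    for p in cloud_a:
        cells_a[grid_cell(p, cell)].append(p)
    return [q for q in cloud_b if grid_cell(q, cell) in cells_a]

cloud_a = [(0.2, 0.2, 0.0), (1.4, 0.3, 0.1)]
cloud_b = [(1.6, 0.4, 0.2), (5.0, 5.0, 0.0)]
seam = find_seam_points(cloud_a, cloud_b)
# Only (1.6, 0.4, 0.2) shares a grid cell, cell (1, 0), with cloud_a.
```

The point of the sketch is the asymptotic saving: triangulation runs only over the seam points, not over both full clouds.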
  • the sensor unit 320 senses or receives direction information about a specific object.
  • the composite image acquisition unit 322 obtains actual image information.
  • the image synthesizing unit 324 displays the avatar on the screen simultaneously with the actual image information, and causes the avatar to rotate based on the direction information received from the sensor unit 320.
  • FIG. 4 is a view for explaining mesh modeling according to an embodiment.
  • the multiview image matching device 200 performs mutual position matching between point clouds for matching between point clouds acquired from a multiview image.
  • the multi-view image registration device 200 removes point clouds in which overlap between point clouds occurs.
  • a point cloud refers to a set of points belonging to a certain coordinate system.
  • points are usually defined as X, Y, and Z coordinates and are often used to represent the surface of an object.
  • Point clouds can be obtained by three-dimensional scanning.
  • For the three-dimensional scanning operation, the extraction unit 312 in the multi-view image registration device 200 automatically measures a large number of points on the surface of the object and outputs the generated point cloud as a digital file.
  • Through a surface reconstruction process, point clouds are converted into polygon meshes, triangle meshes, NURBS models, or CAD models.
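As a minimal stand-in for the surface reconstruction step, the sketch below triangulates a point cloud sampled on a regular grid, splitting each grid square into two triangles of point indices. Real reconstruction pipelines are far more general; this only illustrates how a set of points becomes a triangle mesh:

```python
def grid_to_triangles(rows, cols):
    """Triangulate points sampled on a regular rows x cols grid:
    each grid square becomes two triangles, expressed as index
    triples into the flat point list (row-major order)."""
    def idx(r, c):
        return r * cols + c
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            # Two triangles per grid cell, sharing the cell's diagonal.
            triangles.append((idx(r, c), idx(r, c + 1), idx(r + 1, c)))
            triangles.append((idx(r, c + 1), idx(r + 1, c + 1), idx(r + 1, c)))
    return triangles

# A 2x2 grid of points has one cell, hence two triangles.
tris = grid_to_triangles(2, 2)
```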
  • The multi-view image registration device 200 does not perform mesh modeling over the entire set of point clouds that finally remain.
  • Instead, the multi-view image registration device 200 removes the point clouds in which overlap occurs on the grid information received from the plurality of cameras 110_1, 110_2, and 110_N, and performs matching while reusing the mesh models already defined.
  • By maintaining the mesh structure on the grid, the multi-view image registration device 200 can perform matching easily and quickly for the viewpoints of the plurality of cameras 110_1, 110_2, and 110_N.
  • When generating a graphics model for mesh modeling, the multi-view image registration device 200 defines faces by connecting neighboring point clouds.
  • Connecting points into triangles in this way is called triangulation.
  • The multi-view image registration device 200 extracts neighboring point clouds from the remaining point clouds based on the grid information and, while keeping the mesh model structures of the neighboring point clouds intact, quickly matches only the adjacent point clouds (the connecting parts) between the mesh model structures.
  • Using the grid information (x, y coordinate information) received from the plurality of cameras 110_1, 110_2, and 110_N, the multi-view image registration device 200 can reuse the obtained mesh models without additional computation and perform new mesh modeling only at the connecting part between two point clouds.
  • FIG. 5 is a view for explaining the appearance change and the background replacement of the avatar according to the present embodiment.
  • the multiview image matching device 200 may change or copy the appearance of the generated avatar.
  • the multi-view image registration device 200 generates a single avatar by matching a specific body part of the avatar with a specific body part of another avatar.
  • the multi-view image registration device 200 may generate a new avatar by matching the face (head) of the avatar with the face (head) of another avatar.
  • the multi-view image registration device 200 may generate a new avatar by matching the face (head) of the avatar with the body (body) of another avatar.
  • To generate a single avatar, the multi-view image registration device 200 matches a point cloud corresponding to a specific body part of one avatar with a point cloud corresponding to a specific body part of another avatar.
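The body-part replacement described above can be sketched as swapping one labelled part point cloud between two avatars. The dictionary layout and the 'head'/'body' labels are illustrative assumptions, not the patent's data format:

```python
def swap_part(avatar, donor, part):
    """Build a new avatar that keeps `avatar`'s point clouds but takes
    the point cloud for `part` (e.g. 'head') from `donor`."""
    merged = dict(avatar)          # shallow copy; originals untouched
    merged[part] = donor[part]     # replace just the named body part
    return merged

avatar_a = {"head": [(0.0, 1.8, 0.0)], "body": [(0.0, 1.0, 0.0)]}
avatar_b = {"head": [(0.1, 1.7, 0.0)], "body": [(0.0, 0.9, 0.0)]}
hybrid = swap_part(avatar_a, avatar_b, "head")
# hybrid keeps avatar_a's body but takes avatar_b's head.
```

In the patent's scheme, the seam between the swapped part and the rest of the body would then be stitched with the grid-based matching described earlier, rather than re-meshing both avatars.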
  • the multi-view image registration device 200 may replace or modify the background of the generated avatar.
  • the multi-view image registration device 200 may display the avatar on the screen simultaneously with the actual image information.
  • The multi-view image registration device 200 may display the avatar as an overlay on a beach background screen, on a forest background screen, or on a living-room wallpaper background.
  • FIG. 6 illustrates avatar rotation according to the present embodiment.
  • The multi-view image registration device 200 extracts the overlapping point clouds from the 3D data obtained from the plurality of cameras 110_1, 110_2, and 110_N to efficiently generate, in real time, a 3D virtualized view capable of free 360° rotation.
  • By extracting the overlapping point clouds, the multi-view image registration device 200 minimizes the computational load required to generate the virtualized view and increases efficiency, thereby minimizing the time required to generate the virtualized view.
  • The multi-view image registration device 200 displays a composite video interface in which a 3D virtualized view and a planar 2D view carrying sensor-based direction information are controlled simultaneously on one screen.
  • The multi-view image registration device 200 displays the virtual view and the real video view on the same screen in overlay form. Because the real video view includes direction information, the device provides an interface structure that rotates the avatar in the corresponding direction when the user wants to rotate it.
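Turning the avatar toward a sensor-reported direction amounts to rotating its point cloud about the vertical axis. The sketch below applies a standard y-axis rotation; the function name and angle convention are illustrative assumptions, not the patent's interface:

```python
import math

def rotate_y(points, degrees):
    """Rotate a point cloud around the vertical (y) axis by the given
    angle, e.g. to face the direction indicated by sensor-based
    direction information."""
    theta = math.radians(degrees)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Standard rotation about y: x' = x cos + z sin, z' = -x sin + z cos.
    return [(x * cos_t + z * sin_t, y, -x * sin_t + z * cos_t)
            for x, y, z in points]

front = [(1.0, 0.0, 0.0)]
turned = rotate_y(front, 90.0)
# (1, 0, 0) rotated 90 degrees about y lands (up to rounding) near (0, 0, -1).
```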
  • FIG. 7 is a flowchart illustrating a method of generating an avatar using multi-view image registration according to the present embodiment.
  • The multi-view image registration device 200 obtains a plurality of multi-view image information items produced by photographing a specific object from multiple viewpoints with a plurality of cameras (S710).
  • the multi-view image matching device 200 recognizes an object from each of the plurality of multi-view image information (S720).
  • the multi-view image registration device 200 extracts a feature point for the object and extracts a point cloud based on the feature point (S730).
  • In the multi-view image matching device 200, the duplication checker 314 performs mutual position matching between the point clouds (S740).
  • The multi-view image registration device 200 extracts the point clouds in which overlap between point clouds occurs, and generates duplicate point data (S750).
  • The matching unit 316 of the multi-view image matching device 200 removes the point clouds corresponding to the duplicate point data from the full set of point clouds and computes the finally remaining point clouds (S760).
  • The multi-view image registration device 200 generates an avatar capable of 360° rotation by performing modeling that minimizes the computational load among the remaining point clouds, thereby enabling a 3D virtualized view (S770).
  • The multi-view image matching device 200 generates the avatar by matching the finally remaining point clouds based on grid information according to the multiple views received from the image acquisition unit 310, so that the computational load among them is minimized.
  • The multi-view image registration device 200 extracts neighboring point clouds from the remaining point clouds based on the grid information and, while keeping the mesh model structures of the neighboring point clouds intact, quickly matches only the adjacent point clouds (the connecting parts) between the mesh model structures to generate the avatar.
  • The multi-view image registration device 200 extracts the x and y coordinate information contained in the grid information.
  • The multi-view image registration device 200 compares the x and y coordinate information and extracts the point clouds located on the same grid among the remaining point clouds.
  • The multi-view image registration device 200 recognizes the point clouds located on the same grid as neighboring point clouds.
  • The multi-view image registration device 200 performs new mesh modeling only between adjacent points of the neighboring point clouds, while keeping the already obtained mesh model structures of the neighboring point clouds without further computation.
  • The multi-view image registration device 200 performs triangulation using only the adjacent points of the neighboring point clouds to carry out the new mesh modeling.
  • Although steps S710 to S770 are described as being executed sequentially, they are not necessarily limited to this order. The steps described in FIG. 7 may be reordered, or one or more steps may be executed in parallel, so FIG. 7 is not limited to a time-series order.
  • the avatar generating method using the multi-view image registration according to the present embodiment described in FIG. 7 may be implemented in a program and recorded on a computer-readable recording medium.
  • the computer-readable recording medium having recorded thereon a program for implementing an avatar generating method using multi-view image matching according to the present embodiment includes all kinds of recording devices storing data that can be read by a computer system.
  • The present invention can be applied to the field of producing characters from photographic information, and thus has industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are an apparatus and method for generating an avatar using multi-view image matching. An embodiment of the invention provides a method and apparatus for matching multi-view images, the method and apparatus performing fast modeling so as to minimize the computational load when an avatar is generated using multi-view image matching, thereby making it possible to quickly generate virtualized-view data as an avatar or transform the data into an avatar, as well as to composite the avatar with a background image or replace a body part with another character.
PCT/KR2018/007996 2018-07-16 2018-07-16 Procédé et appareil permettant de générer un avatar à l'aide d'une correspondance d'images multivues WO2020017668A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2018/007996 WO2020017668A1 (fr) 2018-07-16 2018-07-16 Procédé et appareil permettant de générer un avatar à l'aide d'une correspondance d'images multivues

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2018/007996 WO2020017668A1 (fr) 2018-07-16 2018-07-16 Procédé et appareil permettant de générer un avatar à l'aide d'une correspondance d'images multivues

Publications (1)

Publication Number Publication Date
WO2020017668A1 true WO2020017668A1 (fr) 2020-01-23

Family

ID=69164732

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/007996 WO2020017668A1 (fr) 2018-07-16 2018-07-16 Procédé et appareil permettant de générer un avatar à l'aide d'une correspondance d'images multivues

Country Status (1)

Country Link
WO (1) WO2020017668A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320521A (zh) * 2023-03-24 2023-06-23 吉林动画学院 一种基于人工智能的三维动画直播方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120000437A (ko) * 2010-06-25 2012-01-02 국민대학교산학협력단 증강현실기반의 인터페이스 에이전트 시스템 장치 및 방법
KR101325926B1 (ko) * 2012-05-22 2013-11-07 동국대학교 산학협력단 실시간 3d 데이터 송수신을 위한 3d 데이터 처리 장치 및 방법
KR101747951B1 (ko) * 2016-02-15 2017-06-15 동서대학교 산학협력단 멀티뷰 촬영기반 3d 휴먼 캐릭터 모델링 제공 장치
KR20170112267A (ko) * 2016-03-31 2017-10-12 삼성전자주식회사 이미지 합성 방법 및 그 전자장치
KR20170130150A (ko) * 2016-05-18 2017-11-28 광운대학교 산학협력단 3차원 데이터를 획득하기 위한 카메라 리그 방법, 이를 수행하는 카메라 리그 시스템, 및 이를 저장하는 기록매체
CN107507127A (zh) * 2017-08-04 2017-12-22 深圳市易尚展示股份有限公司 多视点三维点云的全局匹配方法和系统



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18926994

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18926994

Country of ref document: EP

Kind code of ref document: A1