US20220343613A1 - Method and apparatus for virtually moving real object in augmented reality - Google Patents

Method and apparatus for virtually moving real object in augmented reality

Info

Publication number
US20220343613A1
US20220343613A1 · Application US17/725,126 (US202217725126A)
Authority
US
United States
Prior art keywords
region
information
moving
augmented reality
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/725,126
Other languages
English (en)
Inventor
Yong Sun Kim
Hyun Kang
Kap Kee Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics and Telecommunications Research Institute ETRI
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute ETRI filed Critical Electronics and Telecommunications Research Institute ETRI
Assigned to ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE reassignment ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KANG, HYUN, KIM, KAP KEE, KIM, YONG SUN
Publication of US20220343613A1 publication Critical patent/US20220343613A1/en
Abandoned legal-status Critical Current

Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/001Image restoration
    • G06T5/002Denoising; Smoothing
    • G06T5/60
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/04Architectural design, interior design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Definitions

  • the present disclosure relates to a method and an apparatus for virtually moving a real object in an augmented reality.
  • An existing augmented reality may add a virtual object and additional information to an image of a real environment, or provide an interaction between the virtual object and a user.
  • A 2-dimensional (2D) augmented reality shows a virtual image, or an image acquired by rendering the virtual object, on top of a camera image.
  • In this case, information on the real environment in the image is not utilized and the virtual image is simply overlaid on the real environment, so occlusion between a real object and the virtual object is not reflected and a sense of spatial incongruity arises.
  • Since a 3D augmented reality expresses occlusion between the real object and the virtual object by rendering the virtual object in a 3D space, the sense of incongruity between the real object and the virtual object may be reduced.
  • However, in the existing 3D augmented reality, only the interaction between the virtual object and the user is possible, while the environment of the real object remains fixed.
  • For example, plane or depth information of the real environment is extracted, and virtual furniture is arranged on a background having the plane or depth information.
  • The user can change the location of the virtual furniture or rotate it, but even in this case, only the interaction between the user and the virtual furniture is possible, with no interaction for the real furniture. As a result, experiences such as replacing or rearranging the real furniture are impossible.
  • In short, the existing augmented reality merely adds the virtual object to the real environment, and only the interaction between the virtual object and the user is performed.
  • To overcome this, the interaction between the real object and the virtual object is required. The user needs to be able to interact without distinguishing the real object from the virtual object, for example by removing, moving, or manipulating the real object in the augmented reality.
  • the present invention has been made in an effort to provide a method and an apparatus for virtually moving a real object in an augmented reality.
  • An exemplary embodiment of the present disclosure may provide a method for moving, by an apparatus for moving a real object, the real object in a 3D augmented reality.
  • the method may include: dividing a region of the real object in the 3D augmented reality; generating a 3D object model by using first information corresponding to the region of the real object; and moving the real object on the 3D augmented reality by using the 3D object model.
  • the first information may be 3D information and texture information corresponding to the region of the real object.
  • the generating may include estimating a 3D shape of an invisible region of the real object by using the 3D information, and generating the 3D object model by using the texture information and the 3D shape.
  • the method may further include synthesizing a region at which the real object is positioned before moving by using second information which is surrounding background information for the region of the real object.
  • the synthesizing may include deleting the region of the real object in the 3D augmented reality, and performing inpainting for the deleted region by using the second information.
  • the synthesizing may further include estimating a shadow region generated by the real object, and the deleting may include deleting the region of the real object and the shadow region in the 3D augmented reality.
  • the second information may be 3D information and texture information for a surrounding background for the region of the real object.
  • the estimating may include estimating the 3D shape by using the 3D information through a deep learning network constituted by an encoder and a decoder.
  • the performing of the inpainting may include performing the inpainting for the deleted region by using the second information through a deep learning network constituted by a generator and a discriminator.
  • the method may further include selecting, by a user, the real object to be moved in the 3D augmented reality.
  • Another exemplary embodiment of the present disclosure provides an apparatus for moving a real object in a 3D augmented reality.
  • the apparatus may include: an environment reconstruction thread unit performing 3D reconstruction of a real environment for the 3D augmented reality; a moving object selection unit receiving, from a user, a moving object which is a real object to be moved in a 3D reconstruction image; an object region division unit dividing a region corresponding to the moving object in the 3D reconstruction image; an object model generation unit generating a 3D object model for the divided moving object; and an object movement unit moving the moving object in the 3D augmented reality by using the 3D object model.
  • the object model generation unit may generate the 3D object model by using 3D information and texture information corresponding to the divided moving object.
  • the object model generation unit may estimate a 3D shape for an invisible region for the moving object by using the 3D information, and generate the 3D object model by using the texture information and the 3D shape.
  • the apparatus may further include an object region background synthesis unit synthesizing a region at which the moving object is positioned before moving by using surrounding background information corresponding to the divided moving object.
  • the object region background synthesis unit may delete the region corresponding to the moving object, and perform inpainting for the deleted region by using the surrounding background information.
  • the object region background synthesis unit may estimate a shadow region generated by the moving object, and delete the region corresponding to the moving object and the shadow region in the 3D augmented reality.
  • the object region background synthesis unit may include a generator receiving the 3D reconstruction image including the deleted region and outputting an inpainted image, and a discriminator discriminating the output of the generator.
  • the apparatus may further include: an object rendering unit rendering the 3D object model; and a synthesis background rendering unit rendering the synthesized region.
  • the object model generation unit may include a 2D encoder receiving the 3D information and outputting a shape feature vector, and a 3D decoder receiving the shape feature vector and outputting the 3D shape.
  • According to exemplary embodiments, a real object may be moved while experiencing an augmented reality, providing an interaction between the user and the real object.
  • Further, after the real object is moved, the vacated region is synthesized by using surrounding background information, so that a virtual object may be arranged as if the real object were not originally present.
  • FIG. 1 is a block diagram illustrating a real object moving apparatus according to one exemplary embodiment.
  • FIG. 2 is a flowchart illustrating a 3D object model generating method for a moving object according to one exemplary embodiment.
  • FIG. 3 is a diagram illustrating a deep learning network structure for estimating a 3D shape according to one exemplary embodiment.
  • FIG. 4 is a flowchart illustrating a method for synthesizing a moving object region.
  • FIG. 5 is a diagram illustrating a deep learning network structure for inpainting according to one exemplary embodiment.
  • FIG. 6 is a conceptual view for a schematic operation of a real object moving apparatus according to one exemplary embodiment.
  • FIG. 7 is a diagram illustrating a computer system according to one exemplary embodiment.
  • the real object moving method may reconstruct 3D information of a real object to be moved and generate a 3D model based on 3D information (e.g., depth information) for a real environment.
  • the real object viewed through a camera has depth information for a visible region, but there is no information on an invisible region (e.g., a back of an object) which is not visible through the camera.
  • the real object moving method according to the exemplary embodiment estimates and reconstructs the 3D information of the invisible region based on the 3D information of the visible region.
  • the real object moving method according to the exemplary embodiment generates the 3D model of the object by using the reconstructed 3D information and color image information.
  • Since the generated 3D model may be regarded as a virtual object, manipulations such as movement and rotation may be performed on it, and the result may be presented in the augmented reality through rendering.
  • In addition, the region vacated by the moved real object may be replaced with background by inpainting the real object region. That is, in the object moving method according to the exemplary embodiment, the object region is deleted by using the 3D information (depth information) and the texture (color image) corresponding to the real object, and the deleted region is inpainted and synthesized from the depth and color of the surrounding background. The synthesized background is then rendered, which gives the effect that the real object is virtually moved and removed from its original location, as outlined in the sketch below.
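  • The following is a minimal, high-level orchestration sketch of this pipeline. It is an illustration only: the stage functions (segment, complete_shape, build_model, estimate_shadow, inpaint, render) are hypothetical callables standing in for the units described below, not part of the disclosure.

```python
from typing import Callable
import numpy as np

# Hypothetical orchestration of the real-object-moving pipeline described above.
# Each stage is passed in as a callable; concrete sketches of individual stages
# appear later in this description.
def move_real_object(rgb: np.ndarray, depth: np.ndarray, selection,
                     segment: Callable, complete_shape: Callable, build_model: Callable,
                     estimate_shadow: Callable, inpaint: Callable, render: Callable):
    obj_mask, obj_points = segment(rgb, depth, selection)    # divide the object region (S 210)
    full_shape = complete_shape(obj_points)                  # estimate the invisible region (S 220)
    model = build_model(full_shape, rgb, obj_mask)           # full 3D mesh/texture model (S 230)
    hole_mask = obj_mask | estimate_shadow(rgb, obj_mask)    # object region + shadow region (S 420)
    bg_rgb, bg_depth = inpaint(rgb, depth, hole_mask)        # synthesize the background (S 440)
    return render(model, bg_rgb, bg_depth)                   # moved object over synthesized background
```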
  • FIG. 1 is a block diagram illustrating a real object moving apparatus 100 according to one exemplary embodiment.
  • the real object moving apparatus 100 may include an environment reconstruction thread unit 110 , a moving object selection unit 120 , an object region division unit 130 , an object model generation unit 140 , an object movement unit 150 , an object rendering unit 160 , an object region background synthesis unit 170 , a synthesized background rendering unit 180 , and a synthesis unit 190 .
  • the environment reconstruction thread unit 110 performs 3D reconstruction of the real environment.
  • a method in which the environment reconstruction thread unit 110 implements a 3D reconstruction, i.e., 3D augmented reality corresponding to the real environment may be known by those skilled in the art, so a detailed description thereof will be omitted.
  • For the 3D augmented reality, 6 degree of freedom (DOF) tracking for estimating the camera pose may also be performed in real time.
  • The 6 DOF tracking may be performed by a camera tracking thread unit (not illustrated), which runs concurrently with the environment reconstruction through multi-threading, as in the sketch below.
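  • As a minimal illustration of this multi-threaded design (the worker bodies below are hypothetical no-op placeholders), environment reconstruction and 6 DOF camera tracking can run on separate threads:

```python
import threading

def run_reconstruction(stop_event: threading.Event, reconstruct_step):
    # Environment reconstruction thread: integrate each new frame into the 3D reconstruction.
    while not stop_event.is_set():
        reconstruct_step()

def run_tracking(stop_event: threading.Event, track_step):
    # Camera tracking thread: estimate the 6 DOF camera pose for the latest frame.
    while not stop_event.is_set():
        track_step()

stop = threading.Event()
threads = [
    threading.Thread(target=run_reconstruction, args=(stop, lambda: None), daemon=True),
    threading.Thread(target=run_tracking, args=(stop, lambda: None), daemon=True),
]
for t in threads:
    t.start()
# ... run the augmented reality session, then stop the worker threads.
stop.set()
```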
  • the 3D augmented reality reconstructed by the environment reconstruction thread unit 110 includes 3D information indicating the depth information and texture information indicating the color information.
  • the environment reconstruction thread unit 110 may output the 3D information and the texture information.
  • the 3D information may be expressed as PointCloud or Voxel.
  • In addition to PointCloud or Voxel, a depth image in which the 3D points are projected onto 2D image coordinates may also be used jointly.
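  • The following is a minimal sketch of that projection: 3D points in the camera frame are projected onto 2D image coordinates with an assumed pinhole intrinsic matrix K to form a depth image (nearest point per pixel).

```python
import numpy as np

def pointcloud_to_depth(points_cam: np.ndarray, K: np.ndarray, h: int, w: int) -> np.ndarray:
    """points_cam: (N, 3) 3D points in the camera frame; returns an (h, w) depth image."""
    depth = np.full((h, w), np.inf)
    z = points_cam[:, 2]
    valid = z > 0
    uvw = (K @ points_cam[valid].T).T                    # perspective projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, zv = u[inside], v[inside], z[valid][inside]
    np.minimum.at(depth, (v, u), zv)                     # keep the nearest point per pixel
    depth[np.isinf(depth)] = 0.0                         # 0 marks pixels without a measurement
    return depth

# Example with assumed intrinsics and a toy point cloud.
K = np.array([[525.0, 0.0, 319.5], [0.0, 525.0, 239.5], [0.0, 0.0, 1.0]])
points = np.random.rand(1000, 3) + np.array([0.0, 0.0, 1.0])
depth_image = pointcloud_to_depth(points, K, 480, 640)
```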
  • The moving object selection unit 120 receives the moving object from the user.
  • The moving object is the part corresponding to the real object to be moved in the 3D augmented reality, and it is selected by the user. That is, the user selects the real object to be moved in the 3D augmented reality.
  • the real object to be moved which is selected by the user will be referred to as ‘moving object’.
  • the object region division unit 130 divides the moving object input from the moving object selection unit 120 in the 3D augmented reality.
  • The divided moving object includes the 3D information and the texture information corresponding to the moving object.
  • the user may perform interactive segmentation by adding points in the moving object region and a point in the background region other than the object.
  • the following method may be used.
  • the object region division unit 130 divides the region of the moving object selected by the user in a 2D color image (texture information).
  • The object region division unit 130 then separates the foreground and the background in both 2D and 3D by using the 2D-3D correspondence, so that the region of the moving object is also divided in 3D, as in the sketch below.
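  • A minimal sketch of this 2D-to-3D step follows, assuming the 2D object mask from the interactive segmentation, the depth image, and a pinhole intrinsic matrix K: the masked pixels are back-projected so the moving object region is obtained in 3D as well.

```python
import numpy as np

def lift_mask_to_3d(mask: np.ndarray, depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """mask: (H, W) bool object mask, depth: (H, W) metric depth; returns (N, 3) object points."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - cx) * z / fx                                # back-project pixel (u, v) with depth z
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)                   # foreground (moving object) points in 3D

# Example with a toy mask and constant depth.
K = np.array([[525.0, 0.0, 319.5], [0.0, 525.0, 239.5], [0.0, 0.0, 1.0]])
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:380] = True
depth = np.full((480, 640), 1.5)
object_points = lift_mask_to_3d(mask, depth, K)
```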
  • the object model generation unit 140 generates the 3D object model for the moving object divided by the object region division unit 130 .
  • This information, i.e., the 3D information and the texture information of the moving object, covers only the visible region obtained through the camera.
  • the object model generation unit 140 estimates 3D information for the invisible region of the object which is not obtained through the camera, such as the back of the moving object or a part hidden by another object, and generates a full 3D mesh/texture model for an outer shape of the moving object.
  • a method in which the object model generation unit 140 generates the 3D object model for the moving object will be described in more detail in FIG. 2 below.
  • The object movement unit 150 performs movement, rotation, etc., of the moving object in the augmented reality in response to the manipulation of the user, by using the 3D object model generated by the object model generation unit 140. That is, since the 3D object model for the moving object has been generated, the object movement unit 150 may freely perform movement and rotation by regarding the moving object as a virtual object.
  • the object rendering unit 160 may render the 3D object model and express the rendered 3D object model in the augmented reality when the object movement unit 150 moves the moving object in the augmented reality.
  • Information from the camera tracking thread unit, i.e., the viewing direction of the camera, may be used at the time of rendering the 3D object model.
  • a method for rendering the 3D object model and implementing the rendered 3D object model in the augmented reality may be known by those skilled in the art in the technical field to which the present disclosure belongs, so a detailed description will be omitted.
  • the object region background synthesis unit 170 deletes a region corresponding to the moving object divided by the object region division unit 130 from the background, and performs inpainting for the deleted region by using surrounding background information.
  • The synthesis background rendering unit 180 renders the part inpainted with the surrounding background information by the object region background synthesis unit 170.
  • The information from the camera tracking thread unit, i.e., the viewing direction of the camera, may be used at the time of rendering the inpainted part.
  • the synthesis unit 190 implements the 3D augmented reality in which the real object is finally moved by synthesizing the moving object rendered by the object rendering unit 160 and the background rendered by the synthesis background rendering unit 180 .
  • FIG. 2 is a flowchart illustrating a 3D object model generating method for a moving object according to one exemplary embodiment. That is, FIG. 2 illustrates a method for generating, by the object model generation unit 140 , the 3D object model of the moving object by estimating the 3D information for the invisible region by using the data of the visible region viewed through the camera.
  • the object region division unit 130 divides the region of the moving object corresponding to the moving object selected by the user in the 3D augmented reality (S 210 ).
  • the divided moving object may include the 3D information and the texture information.
  • the object model generation unit 140 estimates a 3D shape by using the 3D information for the moving object divided in S 210 (S 220 ).
  • the 3D information for the moving object divided in S 210 is 3D data for the visible region viewed by the camera.
  • The object model generation unit 140 outputs the complete 3D shape by estimating the 3D shape of the invisible region from the 3D data of the visible region.
  • For example, the object model generation unit 140 may output the complete 3D shape by estimating the 3D shape of the invisible region of the moving object from the PointCloud of the visible region.
  • To this end, an autoencoder, which is one of the deep learning-based methods, may be used. That is, the object model generation unit 140 may be implemented with a deep learning network structure for estimating the 3D shape.
  • FIG. 3 is a diagram illustrating a deep learning network structure 300 for estimating a 3D shape according to one exemplary embodiment.
  • the deep learning network structure 300 may include a 2D encoder 310 and a 3D decoder 320 .
  • The deep learning network structure 300 may further include a 3D encoder 330 used for pretraining.
  • The deep learning network structure 300 of FIG. 3 may be trained in three steps.
  • First, the 3D encoder 330 and the 3D decoder 320 are trained on a training data set of 3D models.
  • The 3D encoder 330 receives the training data set of 3D models and describes the feature of a shape.
  • The 3D encoder 330 outputs a shape feature vector.
  • The 3D decoder 320 receives the shape feature vector output from the 3D encoder 330 and outputs the 3D shape (3D shape model).
  • Second, the 2D encoder 310 receives the 3D information of the visible region for training (i.e., a training data set including only the visible region), and outputs a shape feature vector.
  • The 2D encoder 310 is trained so that the shape feature vector output from the 2D encoder 310 is similar to the shape feature vector output from the 3D encoder 330.
  • Third, the 2D encoder 310 and the 3D decoder 320 are trained together.
  • The shape feature vector output from the 2D encoder 310 is input into the 3D decoder 320, and the 3D decoder 320 outputs the 3D shape information.
  • At the estimation stage, the 3D information (data) of the visible region is input to the 2D encoder 310.
  • the 2D encoder 310 generates the shape feature vector for the input 3D information (the 3D information for the visible region), and outputs the generated shape feature vector to the 3D decoder 320 .
  • The 3D decoder 320 receives the shape feature vector output from the 2D encoder 310, and finally outputs the 3D shape (3D shape information) of the moving object in which the invisible region is estimated.
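  • The following is a minimal PyTorch sketch of the structure in FIG. 3. The layer sizes, the depth-image input, and the voxel-grid output are assumptions made for illustration; the three-step training outline in the comments follows the description above.

```python
import torch
import torch.nn as nn

class Encoder2D(nn.Module):
    """2D encoder: visible-region 3D information (here a 64x64 depth image) -> shape feature vector."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),          # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),         # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),        # 16 -> 8
            nn.Flatten(), nn.Linear(128 * 8 * 8, feat_dim),
        )
    def forward(self, depth):
        return self.net(depth)

class Encoder3D(nn.Module):
    """3D encoder used for pretraining: complete 32^3 voxel model -> shape feature vector."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 4, 2, 1), nn.ReLU(),          # 32 -> 16
            nn.Conv3d(32, 64, 4, 2, 1), nn.ReLU(),         # 16 -> 8
            nn.Flatten(), nn.Linear(64 * 8 * 8 * 8, feat_dim),
        )
    def forward(self, voxels):
        return self.net(voxels)

class Decoder3D(nn.Module):
    """3D decoder: shape feature vector -> full 3D shape (32^3 occupancy grid)."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 64 * 8 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid(),  # 16 -> 32
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 8, 8, 8))

# Three-step training outline (loss choices are assumptions):
# 1) train Encoder3D + Decoder3D to reconstruct complete 3D models,
# 2) train Encoder2D so its feature vector matches Encoder3D's vector for the same object,
# 3) fine-tune Encoder2D + Decoder3D together; at inference only these two are used.
enc2d, dec3d = Encoder2D(), Decoder3D()
visible_depth = torch.rand(1, 1, 64, 64)                  # 3D information of the visible region
full_shape = dec3d(enc2d(visible_depth))                  # estimated shape including the invisible region
```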
  • The object model generation unit 140 generates the 3D model of the moving object based on the 3D shape estimated in step S 220 and the texture information of the divided moving object (S 230). That is, the object model generation unit 140 generates the complete 3D model of the moving object by using the PointCloud completed in step S 220 and the texture information of the moving object obtained in step S 210, as illustrated below.
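  • As a simple illustration of combining the completed shape with the texture information (a naive sketch; the projection, the visibility mask, and the nearest-neighbour fallback are assumptions, not the disclosed method), colors can be attached to the completed points as follows:

```python
import numpy as np

def texture_points(points: np.ndarray, rgb: np.ndarray, mask: np.ndarray, K: np.ndarray) -> np.ndarray:
    """points: (N, 3) completed object points; rgb: (H, W, 3) uint8 image; mask: (H, W) bool object mask."""
    h, w = mask.shape
    colors = np.zeros((len(points), 3))
    uvw = (K @ points.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    visible = np.zeros(len(points), dtype=bool)
    visible[inside] = mask[v[inside], u[inside]]           # points that project into the visible object
    colors[visible] = rgb[v[visible], u[visible]] / 255.0
    vis_pts, vis_col = points[visible], colors[visible]
    if len(vis_pts) > 0:
        for i in np.nonzero(~visible)[0]:                  # naive fill for the estimated invisible region
            colors[i] = vis_col[np.argmin(np.linalg.norm(vis_pts - points[i], axis=1))]
    return colors
```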
  • FIG. 4 is a flowchart illustrating a method for synthesizing a moving object region. That is, FIG. 4 illustrates a method for synthesizing, by the object region background synthesis unit 170 , a background region screened by the object (moving object).
  • the object region division unit 130 divides the region of the moving object corresponding to the moving object selected by the user in the 3D augmented reality (S 410 ).
  • the divided moving object may include the 3D information and the texture information.
  • The object region background synthesis unit 170 also estimates the corresponding shadow region (S 420).
  • Since the moving object casts a shadow on the background, a natural synthesized background may be acquired only when the shadow of the moving object is removed together with the object.
  • Accordingly, the object region background synthesis unit 170 estimates the shadow region generated by the moving object.
  • As a method for estimating the shadow region, a method similar to the existing Mask R-CNN may be used.
  • For example, the method of the paper 'Instance Shadow Detection' (Tianyu Wang, Xiaowei Hu, et al.) may be used.
  • a detailed method thereof may be known by those skilled in the art in the technical field to which the present disclosure belongs, so a detailed description will be omitted.
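  • As an illustrative stand-in (not the disclosed method), a generic instance segmentation network from torchvision can produce per-instance masks; an instance shadow detection model as in the cited paper would additionally pair each object with its shadow mask:

```python
import torch
import torchvision

# Pretrained Mask R-CNN as a Mask R-CNN-like mask producer (torchvision >= 0.13).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)                       # placeholder RGB image in [0, 1]
with torch.no_grad():
    output = model([image])[0]                        # dict with 'boxes', 'labels', 'scores', 'masks'

keep = output["scores"] > 0.5
instance_masks = output["masks"][keep] > 0.5          # (N, 1, H, W) boolean instance masks
# In the setting above, the shadow mask paired with the selected moving object would be
# chosen here and merged with the object mask before deletion.
```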
  • The object region background synthesis unit 170 deletes the region of the moving object divided in step S 410 and the shadow region estimated in step S 420 (S 430).
  • The object region background synthesis unit 170 deletes, from the 3D augmented reality, the texture information (color information) corresponding to the region of the moving object divided in step S 410 and the texture information (color information) corresponding to the shadow region estimated in step S 420.
  • the object region background synthesis unit 170 deletes the 3D information (i.e., depth information) corresponding to the region of the moving object divided in step S 410 from the 3D augmented reality.
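  • A minimal sketch of this deletion step (S 430), following the description above (the color of both the object region and the shadow region is deleted, while the depth is deleted only for the object region):

```python
import numpy as np

def delete_regions(rgb: np.ndarray, depth: np.ndarray,
                   object_mask: np.ndarray, shadow_mask: np.ndarray):
    """Remove the moving object (and its shadow in the color image) before inpainting."""
    rgb_deleted = rgb.copy()
    depth_deleted = depth.copy()
    rgb_deleted[object_mask | shadow_mask] = 0        # delete texture of the object + shadow regions
    depth_deleted[object_mask] = 0                    # delete 3D (depth) information of the object region
    return rgb_deleted, depth_deleted
```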
  • the object region background synthesis unit 170 performs inpainting for the region deleted in step S 430 by using surrounding background information (S 440 ). That is, the object region background synthesis unit 170 performs inpainting (filling) for the deleted region by using the surrounding background information (including both the texture information and the 3D information) for the region deleted in step S 430 .
  • a deep learning network may be used for the inpainting method using the surrounding background information.
  • FIG. 5 is a diagram illustrating a deep learning network structure 500 for inpainting according to one exemplary embodiment.
  • the object region background synthesis unit 170 may perform the inpainting for the deleted region by using the deep learning network structure 500 illustrated in FIG. 5 .
  • the deep learning network structure 500 includes a generator 510 and a discriminator 520 . That is, the object region background synthesis unit 170 may include the generator 510 and the discriminator 520 .
  • An image (i.e., surrounding background information) including the region deleted in step S 430 is input into the generator 510, and the generator 510 outputs an image in which the deleted region is synthesized (inpainted).
  • The input image of the generator 510, i.e., the surrounding background information in which the deleted region is reflected, includes the 3D information and the texture information.
  • The discriminator 520 discriminates whether the image synthesized by the generator 510 is plausible, which drives the generator 510 to synthesize a plausible image that could exist in the real world.
  • Detailed operations of the generator 510 and the discriminator 520 may be known by those skilled in the art in the technical field to which the present disclosure belongs, so a detailed description will be omitted.
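  • The following is a minimal PyTorch sketch of a generator/discriminator pair of the kind described in FIG. 5. The layer sizes and the input layout (color and depth of the scene with the deleted region, plus a mask channel) are assumptions for illustration, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Receives the scene with the deleted region and outputs an inpainted color + depth image."""
    def __init__(self):
        super().__init__()
        # Assumed input channels: 3 (color) + 1 (depth) + 1 (mask of the deleted region).
        self.net = nn.Sequential(
            nn.Conv2d(5, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 4, 4, 2, 1), nn.Sigmoid(),   # inpainted color (3) + depth (1)
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Discriminates whether the synthesized image is plausible (adversarial training)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),                     # real/fake score
        )
    def forward(self, x):
        return self.net(x)

generator, discriminator = Generator(), Discriminator()
masked_scene = torch.rand(1, 5, 128, 128)        # color + depth with the deleted region, plus mask
inpainted = generator(masked_scene)              # (1, 4, 128, 128) synthesized color + depth
score = discriminator(inpainted)                 # used in the adversarial loss
```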
  • FIG. 6 is a conceptual view for a schematic operation of a real object moving apparatus 100 according to one exemplary embodiment.
  • The moving object selection unit 120 receives, from the user, the selection of a real object 611 to be moved.
  • the object region division unit 130 divides the moving object input from the moving object selection unit 120 in the 3D augmented reality.
  • the object model generation unit 140 generates a 3D object model 621 for the moving object divided by the object region division unit 130 .
  • the object movement unit 150 moves the moving object in the augmented reality.
  • The original location of the moving object is deleted, and in reference numeral 630 the deleted part is marked in black 631.
  • the object region background synthesis unit 170 synthesizes the deleted part by using the surrounding background information of the deleted part. Through this, a real object 641 may be virtually moved in the augmented reality.
  • FIG. 7 is a diagram illustrating a computer system 700 according to one exemplary embodiment.
  • The real object moving apparatus 100 may be implemented by the computer system 700 illustrated in FIG. 7.
  • each component of the real object moving apparatus 100 may be implemented by the computer system 700 illustrated in FIG. 7 .
  • the computer system 700 may include at least one of a processor 710 , a memory 730 , a user interface input device 740 , a user interface output device 750 , and a storage device 760 which communicate through a bus 720 .
  • the processor 710 may be a central processing unit (CPU), or a semiconductor device executing a command stored in the memory 730 or the storage device 760 .
  • the processor 710 may be configured to implement functions and methods described in FIGS. 1 to 6 above.
  • the memory 730 and the storage device 760 may be various types of volatile or non-volatile storage media.
  • the memory 730 may include a read-only memory (ROM) 731 and a random access memory (RAM) 732 .
  • The memory 730 may be positioned inside or outside the processor 710 and connected to the processor 710 through various known means.
US17/725,126 2021-04-26 2022-04-20 Method and apparatus for virtually moving real object in augmented reality Abandoned US20220343613A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210053648A KR102594258B1 (ko) 2021-04-26 2021-04-26 증강현실에서 실제 객체를 가상으로 이동하는 방법 및 장치
KR10-2021-0053648 2021-04-26

Publications (1)

Publication Number Publication Date
US20220343613A1

Family

ID=83693346

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/725,126 Abandoned US20220343613A1 (en) 2021-04-26 2022-04-20 Method and apparatus for virtually moving real object in augmented reality

Country Status (2)

Country Link
US (1) US20220343613A1 (ko)
KR (1) KR102594258B1 (ko)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102333768B1 (ko) * 2018-11-16 2021-12-01 Alchera Inc. Deep learning-based hand recognition augmented reality interaction apparatus and method

Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020094189A1 (en) * 2000-07-26 2002-07-18 Nassir Navab Method and system for E-commerce video editing
US20060073454A1 (en) * 2001-01-24 2006-04-06 Anders Hyltander Method and system for simulation of surgical procedures
US6759979B2 (en) * 2002-01-22 2004-07-06 E-Businesscontrols Corp. GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
US20080218515A1 (en) * 2007-03-07 2008-09-11 Rieko Fukushima Three-dimensional-image display system and displaying method
US20090113349A1 (en) * 2007-09-24 2009-04-30 Mark Zohar Facilitating electronic commerce via a 3d virtual environment
US20120019612A1 (en) * 2008-06-12 2012-01-26 Spandan Choudury non virtual 3d video/photo generator rendering relative physical proportions of image in display medium (and hence also of the display medium itself) the same as the relative proportions at the original real life location
US20120264510A1 (en) * 2011-04-12 2012-10-18 Microsoft Corporation Integrated virtual environment
US9129083B2 (en) * 2011-06-29 2015-09-08 Dassault Systems Solidworks Corporation Automatic computation of reflected mass and reflected inertia
US9443353B2 (en) * 2011-12-01 2016-09-13 Qualcomm Incorporated Methods and systems for capturing and moving 3D models and true-scale metadata of real world objects
US8994558B2 (en) * 2012-02-01 2015-03-31 Electronics And Telecommunications Research Institute Automotive augmented reality head-up display apparatus and method
US20140208272A1 (en) * 2012-07-19 2014-07-24 Nitin Vats User-controlled 3d simulation for providing realistic and enhanced digital object viewing and interaction experience
US9384395B2 (en) * 2012-10-19 2016-07-05 Electronic And Telecommunications Research Institute Method for providing augmented reality, and user terminal and access point using the same
US20150220244A1 (en) * 2014-02-05 2015-08-06 Nitin Vats Panel system for use as digital showroom displaying life-size 3d digital objects representing real products
US20160307357A1 (en) * 2014-03-15 2016-10-20 Nitin Vats Texturing of 3d-models using photographs and/or video for use in user-controlled interactions implementation
US20180033210A1 (en) * 2014-03-17 2018-02-01 Nitin Vats Interactive display system with screen cut-to-shape of displayed object for realistic visualization and user interaction
US10185463B2 (en) * 2015-02-13 2019-01-22 Nokia Technologies Oy Method and apparatus for providing model-centered rotation in a three-dimensional user interface
US20160350973A1 (en) * 2015-05-28 2016-12-01 Microsoft Technology Licensing, Llc Shared tactile interaction and user safety in shared space multi-person immersive virtual reality
US20180239514A1 (en) * 2015-08-14 2018-08-23 Nitin Vats Interactive 3d map with vibrant street view
US20170200313A1 (en) * 2016-01-07 2017-07-13 Electronics And Telecommunications Research Institute Apparatus and method for providing projection mapping-based augmented reality
US20200368616A1 (en) * 2017-06-09 2020-11-26 Dean Lindsay DELAMONT Mixed reality gaming system
US20200349699A1 (en) * 2017-09-15 2020-11-05 Multus Medical, Llc System and method for segmentation and visualization of medical image data
US20200357157A1 (en) * 2017-11-15 2020-11-12 Cubic Motion Limited A method of generating training data
US10380803B1 (en) * 2018-03-26 2019-08-13 Verizon Patent And Licensing Inc. Methods and systems for virtualizing a target object within a mixed reality presentation
US11259874B1 (en) * 2018-04-17 2022-03-01 Smith & Nephew, Inc. Three-dimensional selective bone matching
US20210035346A1 (en) * 2018-08-09 2021-02-04 Beijing Microlive Vision Technology Co., Ltd Multi-Plane Model Animation Interaction Method, Apparatus And Device For Augmented Reality, And Storage Medium
US20210012558A1 (en) * 2018-08-28 2021-01-14 Tencent Technology (Shenzhen) Company Limited Method and apparatus for reconstructing three-dimensional model of human body, and storage medium
US11263815B2 (en) * 2018-08-28 2022-03-01 International Business Machines Corporation Adaptable VR and AR content for learning based on user's interests
US20200082641A1 (en) * 2018-09-10 2020-03-12 MinD in a Device Co., Ltd. Three dimensional representation generating system
US20200184217A1 (en) * 2018-12-07 2020-06-11 Microsoft Technology Licensing, Llc Intelligent agents for managing data associated with three-dimensional objects
US20210299827A1 (en) * 2020-03-31 2021-09-30 Guangdong University Of Technology Optimization method and system based on screwdriving technology in mobile phone manufacturing
US11282404B1 (en) * 2020-12-11 2022-03-22 Central China Normal University Method for generating sense of reality of virtual object in teaching scene
US20220292543A1 (en) * 2021-03-09 2022-09-15 Alexandra Valentina Henderson Pop-up retial franchising and complex econmic system
US11199903B1 (en) * 2021-03-26 2021-12-14 The Florida International University Board Of Trustees Systems and methods for providing haptic feedback when interacting with virtual objects

Also Published As

Publication number Publication date
KR102594258B1 (ko) 2023-10-26
KR20220146865A (ko) 2022-11-02

Similar Documents

Publication Publication Date Title
Weiss et al. Volumetric isosurface rendering with deep learning-based super-resolution
JP5108893B2 (ja) System and method for recovering a three-dimensional particle system from a two-dimensional image
Rematas et al. Image-based synthesis and re-synthesis of viewpoints guided by 3d models
US10460505B2 (en) Systems and methods for lightfield reconstruction utilizing contribution regions
CN112465938A (zh) Three-dimensional (3D) rendering method and apparatus
EP3511908B1 (en) Hybrid interactive rendering of medical images with physically based rendering and direct volume rendering
EP2899689A1 (en) Method for inpainting a target area in a target video
EP3767592A1 (en) Techniques for feature-based neural rendering
BR112019027116A2 (pt) Apparatus for generating an image, apparatus for generating an image signal, method for generating an image, method for generating an image signal, and image signal
US20230394740A1 (en) Method and system providing temporary texture application to enhance 3d modeling
JP2006526834A (ja) Adaptive image interpolation for volume rendering
Franke et al. Enhancing realism of mixed reality applications through real-time depth-imaging devices in x3d
JP2020109652A (ja) Optimization of volume rendering using known transfer functions
Ye et al. In situ depth maps based feature extraction and tracking
US20220343613A1 (en) Method and apparatus for virtually moving real object in augmented reality
Nicolet et al. Repurposing a relighting network for realistic compositions of captured scenes
KR102493401B1 (ko) Method and apparatus for erasing a real object in augmented reality
Somraj et al. Temporal view synthesis of dynamic scenes through 3D object motion estimation with multi-plane images
KR101428577B1 (ko) Method for providing a natural user interface-based stereoscopic globe on a screen using an infrared motion recognition camera
US9519997B1 (en) Perfect bounding for optimized evaluation of procedurally-generated scene data
WO2022167537A1 (en) Method and computer program product for producing a 3d representation of an object
Lechlek et al. Interactive hdr image-based rendering from unstructured ldr photographs
Mildenhall Neural Scene Representations for View Synthesis
US20240062345A1 (en) Method, apparatus, and computer-readable medium for foreground object deletion and inpainting
Scheuing Real-time hiding of physical objects in augmented reality

Legal Events

Date Code Title Description
AS Assignment

Owner name: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, YONG SUN;KANG, HYUN;KIM, KAP KEE;REEL/FRAME:059653/0795

Effective date: 20211026

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION