US20200082633A1 - Virtual item placement system - Google Patents
- Publication number
- US20200082633A1 (application Ser. No. 16/567,518)
- Authority
- US
- United States
- Prior art keywords
- virtual
- wall
- vertical
- item
- physical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G06F17/5009—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/04—Architectural design, interior design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/16—Using real world measurements to influence rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
Definitions
- the present disclosure generally relates to special-purpose machines that manage data processing and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines for generating virtual item simulations.
- a user would like to simulate an object (e.g., chair, door, lamp) in a physical room without having access to the object.
- a user may be browsing a web store and see a floor lamp that may or may not match the style of the user's living room.
- the user may take a picture of his living room and overlay an image of the floor lamp in the picture to simulate what the floor lamp would look like in the living room.
- it can be difficult to adjust the floor lamp within the modeling environment using a mobile client device, which has limited resources (e.g., a small screen, limited processing power).
- FIG. 1 is a block diagram showing an example network architecture in which embodiments of the virtual item placement system can be implemented, according to some example embodiments.
- FIG. 2 illustrates example functional engines of a virtual item placement system, according to some example embodiments.
- FIG. 3 shows an example flow diagram for placement and simulation of virtual items, according to some example embodiments.
- FIG. 4 shows an example user interface displaying live video for virtual item placement, according to some example embodiments.
- FIG. 5 shows an example user interface including a guide for adding points, according to some example embodiments.
- FIG. 6 shows an example user interface for implementing virtual item placement, according to some example embodiments.
- FIG. 7 shows an example user interface for implementing virtual item placement, according to some example embodiments.
- FIG. 8 shows an example user interface for updating virtual item placements, according to some example embodiments.
- FIGS. 9A and 9B show example user interfaces for locking and rendering a virtual item, according to some example embodiments.
- FIG. 10 shows an example user interface for implementing unconstrained virtual item placement, according to some example embodiments.
- FIGS. 11A and 11B show example virtual item anchors and groupings, according to some example embodiments.
- FIG. 12 is a block diagram illustrating a representative software architecture, which may be used in conjunction with various hardware architectures herein described.
- FIG. 13 is a block diagram illustrating components of a machine, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
- a virtual item placement system can generate virtual floors, and virtual walls that intersect with the floors, based on user inputs.
- the user can input points onto a detected floor surface, and a vertical wall can be created as a vertical plane that is orthogonal (e.g., at 90 degrees) to the floor surface.
- the vertical wall can be created in this way with two constraints: the wall is aligned with the point placements and further constrained by orthogonality to the floor.
- Virtual items can then be modeled on the virtual wall, where the virtual wall is kept transparent and the virtual items are rendered on the virtual wall so that they appear as if they are applied directly to a real-world wall.
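The two constraints above (alignment with the placed points, orthogonality to the floor) can be sketched as a simple plane construction. The function names and the y-up convention below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def wall_plane_from_floor_points(p0, p1, floor_normal):
    """Build a vertical wall plane through two points placed on the floor.

    The plane contains the base segment p0 -> p1 and the floor normal
    (up vector), so it is orthogonal to the floor by construction.
    Returns (point_on_plane, unit_normal).
    """
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    up = np.asarray(floor_normal, dtype=float)
    up /= np.linalg.norm(up)
    along = p1 - p0                # direction of the wall's base line
    normal = np.cross(along, up)   # perpendicular to both base and up
    normal /= np.linalg.norm(normal)
    return p0, normal

# Floor is y = 0 (y-up); wall base runs from the origin to (2, 0, 0)
point, n = wall_plane_from_floor_points([0, 0, 0], [2, 0, 0], [0, 1, 0])
```

Because the wall normal is a cross product with the up vector, it is always horizontal, which is exactly the orthogonality constraint described above.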
- lightweight primitives of the virtual items to be placed are used instead of full texture 3-D models of the items.
- the primitives can include a simple geometric shape with a lightweight uniform texture (e.g., one color), a mesh of the model, or a collection of vertices connected by lines that outline the shape of the virtual model.
- the placed primitives are anchored or otherwise constrained with the generated virtual wall to enable rapid and accurate placement of the item to be modeled.
- a door primitive can be anchored at the bottom side of the virtual wall and slide along the wall in response to client device movement (e.g., a user moving a client device from right to left as detected by inertial sensors of the client device, such as an accelerometer and gyroscope).
- the user can select a lock element (e.g., button) that locks the item primitive in place, and the system generates a full render of the object with realistic textures and lighting (e.g., an oak door with a wood texture with virtual rays reflected off the wood texture, as calculated by graphics processing unit shaders on the client device).
- the system can lock primitive sub-components to other primitive sub-components to enable the user to more readily manipulate a complex primitive model (e.g., a table) on the user's mobile device.
- leg primitives can be anchored to a table surface primitive, which can then be modified or snapped to a vertical wall as viewed through the mobile device.
- the user can rapidly generate complex 3-D models that conventionally would be modeled using higher-power computational devices (e.g., a desktop workstation with a high-powered CPU and one or more dedicated graphics cards).
- a networked system 102 in the example forms of a network-based rendering platform that can provide server-side rendering via a network 104 (e.g., the Internet or wide area network (WAN)) to one or more client devices 110 .
- the client device 110 may execute the system 150 as a local application or a cloud-based application (e.g., through an Internet browser).
- the client device 110 comprises a computing device that includes at least a display and communication capabilities that provide access to the networked system 102 via the network 104 .
- the client device 110 comprises, but is not limited to, a remote device, work station, computer, general purpose computer, Internet appliance, hand-held device, wireless device, portable device, wearable computer, cellular or mobile phone, personal digital assistant (PDA), smart phone, tablet, ultrabook, netbook, laptop, desktop, multi-processor system, microprocessor-based or programmable consumer electronic, game consoles, set-top box, network personal computer (PC), mini-computer, and so forth.
- the client device 110 comprises one or more of a touch screen, accelerometer, gyroscope, biometric sensor, camera, microphone, Global Positioning System (GPS) device, and the like.
- the client device 110 is the recording device that generates the video recording and also the playback device that plays the modified video recording during a playback mode.
- the recording device is a different client device than the playback device, and both have instances of the virtual item placement system 150 installed.
- a first client device using a first instance of a dynamic virtual room modeler may generate a simulation
- a second client device using a second instance of a dynamic virtual room modeler may receive the simulation over a network and display the simulation via a display screen.
- the instances may be platform specific to the operating system or device in which they are installed.
- the first instance may be an iOS application and the second instance may be an Android application.
- the client device 110 communicates with the network 104 via a wired or wireless connection.
- the network 104 comprises an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a Wireless Fidelity (WI-FI®) network, a Worldwide Interoperability for Microwave Access (WiMax) network, another type of network, or any suitable combination thereof.
- Users comprise a person, a machine, or other means of interacting with the client device 110 .
- the user 106 is not part of the network architecture 100 , but interacts with the network architecture 100 via the client device 110 or another means.
- the user 106 provides input (e.g., touch screen input or alphanumeric input) to the client device 110 and the input is communicated to the networked system 102 via the network 104 .
- the networked system 102 in response to receiving the input from the user 106 , communicates information to the client device 110 via the network 104 to be presented to the user 106 . In this way, the user 106 can interact with the networked system 102 using the client device 110 .
- the API server 120 and the web server 122 are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers 140 .
- the application server 140 can host a dynamic virtual environment modeler server 151 , which can comprise one or more modules or applications and each of which can be embodied as hardware, software, firmware, or any combination thereof.
- the application server 140 is, in turn, shown to be coupled to a database server 124 that facilitates access to one or more information storage repositories, such as database 126 .
- the database 126 comprises one or more storage devices that store information to be accessed by the virtual item placement system 150 .
- the model data may be cached locally on the client device 110 .
- although the client-server-based network architecture 100 shown in FIG. 1 employs a client-server architecture, the present inventive subject matter is, of course, not limited to such an architecture, and can equally well find application in a distributed or peer-to-peer architecture system, for example.
- FIG. 2 illustrates example functional engines of a virtual item placement system 150 , according to some example embodiments.
- the virtual item placement system 150 comprises a capture engine 210 , a movement engine 220 , a render engine 230 , a placement engine 240 , a position engine 250 , and a display engine 260 .
- the capture engine 210 manages capturing one or more images, such as an image or an image sequence (e.g., a video, live video displayed in real-time on a mobile device).
- the movement engine 220 is configured to implement one or more inertial sensors (e.g., a gyroscope or accelerometer) to detect physical movement of the client device 110 .
- the render engine 230 is configured to generate and manage a 3-D modeling environment in which virtual items can be placed and rendered for output on a display (e.g., overlaid on top of images generated by the capture engine 210 to provide an augmented reality experience for the viewer of the user device).
- the placement engine 240 is configured to receive point placements from a user of the client device. The point placements can be used to construct virtual items in the virtual item environment by the render engine 230 . For example, the placement engine 240 can receive placements of corners of the physical environment for use in generating a 3-D model of the environment (e.g., virtual walls of the environment).
- the placement engine 240 is configured to detect a ground surface of the physical environment being depicted in the image(s) captured by the capture engine 210 .
- the placement engine 240 can detect image features of a physical ground depicted in the images, determine that the image features are trackable across images of the live video (e.g., using scale-invariant feature transform, SIFT), determine or assume that the detected image features are coplanar, and thus determine an orientation of the real-world ground surface depicted in the images.
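The coplanarity assumption above amounts to fitting a single plane to the tracked feature points. A minimal least-squares sketch is below; the function name and the y-up convention are illustrative assumptions, and a production system would likely add RANSAC-style outlier rejection:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit to tracked 3-D feature points assumed coplanar.

    Returns (centroid, unit_normal) of the best-fit plane; the normal is the
    right singular vector with the smallest singular value.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    if normal[1] < 0:          # orient the normal upward (y-up convention)
        normal = -normal
    return centroid, normal

# Four tracked features lying exactly on the y = 0 floor
c, n = fit_ground_plane([[0, 0, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1]])
```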
- the position engine 250 is configured to manage positional updates of a virtual item in the 3-D modeling environment.
- the position engine 250 can move a virtual door geometric primitive along a virtual wall in response to physical movement detected by the movement engine 220 , as discussed in further detail below.
- the display engine 260 is configured to generate a user interface to display images (e.g., a live video view), receive user inputs (e.g., user input of points), receive manipulations of the virtual item, and render a composite augmented reality display that dynamically updates the virtual item to simulate that the virtual item actually exists in the depicted environment of the images generated by the capture engine 210 .
- FIG. 3 shows a flow diagram of an example method 300 for implementing virtual item placement, according to some example embodiments.
- the capture engine 210 initiates a view on the client device.
- the capture engine 210 generates a live video view using an image sensor on a backside of a client device.
- the render engine 230 generates a 3-D model of a room environment.
- the placement engine 240 can first detect a ground surface using image feature analysis as discussed above, and then generate a virtual horizontal plane in the 3-D model of the room environment to correspond to the detected real-world ground. Further, the placement engine 240 can receive placements of points that indicate one or more physical walls. The point placements can be used to construct virtual walls as vertical planes in the 3-D modeling environment, as discussed in further detail below.
- the position engine 250 places a primitive in the 3-D modeling environment according to placement instructions received on the client device. For example, the user of the client device can drag-and-drop a door image onto the live view video. In response to dragging and dropping the door image onto the live view, the placement engine 240 places a door primitive on the virtual wall that coincides with the physical wall onto which the user drag-and-dropped the door image.
- the placement engine 240 receives one or more manipulations or modifications to the primitive. For example, at operation 320 , the placement engine 240 receives an instruction to scale the size of the door by receiving a drag gesture on the door depicted on the client device. Responsive to the gesture, the placement engine 240 scales the size of the door so that it is larger or smaller in response to the user's gestures.
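A drag-to-scale manipulation of this kind can be sketched as a pixel-to-metres conversion with clamping. The constants, units, and names below are illustrative assumptions, not values from the patent:

```python
def scale_from_drag(height_m, drag_dy_px, px_per_metre=500.0,
                    min_h=0.5, max_h=4.0):
    """Map a vertical drag gesture (in screen pixels) to a new item height.

    Dragging up (positive dy) grows the item; the result is clamped to a
    plausible range so the primitive cannot collapse or explode.
    """
    new_h = height_m + drag_dy_px / px_per_metre
    return max(min_h, min(max_h, new_h))

h = scale_from_drag(2.0, drag_dy_px=250)   # drag up 250 px: 2.0 m -> 2.5 m
```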
- the primitive is moved in response to the client device movement.
- the movement engine 220 detects physical movement of the client device using one or more inertial sensors, such as a gyroscope or accelerometer that are integrated into the client device.
- the movement is detected using image analysis (e.g., detecting movement of wall image features between different frames of the video sequence, as in a SIFT algorithm).
- the virtual item is moved in the environment.
- the virtual item slides along a virtual wall in the leftward direction with the virtual item locked at the bottom of the wall, according to some example embodiments.
- a virtual camera used to render the 3-D environment is moved so that the perspective of the imaged physical environment matches the perspective of the 3-D model environment rendered by the virtual camera.
- the position engine 250 receives a lock instruction to save the placed primitive at the current location. For example, after the user finishes rotating the client device counterclockwise and the door slides in the leftward direction, the user can select a save instruction to save the coordinates of the virtual item at the current position on the virtual wall.
- the render engine 230 renders an augmented display of the virtual item and the physical environment depicted in the one or more images (e.g., the live video generated at operation 305 ).
- the method 300 is performed continuously so that in response to new physical movements of the client device, the virtual item is moved a corresponding amount, the virtual camera is likewise moved a corresponding amount, and a new augmented reality frame is displayed on a display device of the client device, thereby enabling a user viewing the client device to simulate the placed virtual item in the physical environment as viewed through the client device.
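One hypothetical iteration of this continuous loop can be sketched as a mapping from a gyroscope yaw change to a slide of the primitive along the wall base direction. The gain constant and all names here are illustrative assumptions:

```python
import numpy as np

def update_frame(item_pos, wall_dir, yaw_delta, gain=2.0):
    """One step of the placement loop: slide the anchored primitive along
    the wall base line in proportion to the device's yaw change.

    yaw_delta is in radians (from the gyroscope); gain converts rotation
    to metres of travel along the wall. The virtual camera would be updated
    with the same pose change so the AR overlay stays registered.
    """
    slide = gain * yaw_delta
    return item_pos + slide * np.asarray(wall_dir, dtype=float)

pos = np.array([1.0, 0.0, 0.0])
wall_dir = np.array([1.0, 0.0, 0.0])   # unit vector along the wall/floor line
pos = update_frame(pos, wall_dir, yaw_delta=-0.1)   # counterclockwise turn
# the door slides left along the wall: pos is now [0.8, 0.0, 0.0]
```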
- FIG. 4 shows an example user interface for implementing virtual item placement, according to some example embodiments.
- a client device 110 displays a user interface 400 in which an image 405 (e.g., a frame of a live video generated by a backside camera on the client device 110 ) of the physical room is depicted.
- the physical room includes a first wall 410 and a floor 415 .
- the system 150 has detected the floor 415 using image feature analysis and a notification 420 is displayed on the user interface 400 to indicate that the floor 415 has been detected.
- FIG. 5 shows an example user interface including a guide for adding points, according to some example embodiments.
- the user (not depicted) has moved the client device 110 so that the image now depicts the first wall 410 , the floor 415 , and a second wall 500 that intersect at a corner.
- the user interface 400 includes a guide 505 , which is displayed as a reticule that the user can position over the corner of the room and select the add point button 510 .
- the user moves the client device 110 counterclockwise, which creates a guideline (e.g., the arrow) extending from the point 600 .
- the guideline coincides with the interface or corner of the first wall 410 and the floor 415 (although, in the example illustrated in FIG. 6 , the guideline is shown offset from the corner of the first wall 410 and the floor 415 ).
- the user moves the client device 110 so that the reticule (guide 505 ) is over each point (e.g., corner) of the room, after which the user selects the add point button 510 so that each corner of the room can be received, and virtual walls generated for the 3-D virtual environment.
- the user only defines a single wall upon which virtual items are simulated.
- the user can place point 600 then drag the guide counterclockwise and add point 605 to create a line between the points 605 and 600 .
- the line is then used to demarcate a virtual wall upon a virtual ground (e.g., a virtual ground in the 3-D model of the room), where the virtual wall is set as a vertical plane that orthogonally intersects the virtual ground (e.g., at 90 degrees).
- the user can then place virtual items on the virtual wall, and it will appear as if the virtual items are on the physical vertical wall as discussed in further detail below.
- FIG. 7 shows an example user interface for implementing virtual item placement, according to some example embodiments.
- the user has placed a virtual door 700 (e.g., virtual door primitive) by dragging and dropping the virtual door 700 anywhere onto the image 405 .
- in some example embodiments, the virtual items are displayed as primitives (e.g., wireframe, collection of vertices).
- the virtual items are displayed as fully textured virtual items (e.g., a door with an oak wood virtual texture).
- the virtual item placement system 150 snaps the virtual door 700 onto the first wall 410 (e.g., onto the virtual wall constructed as discussed above).
- the edge of the virtual door is anchored to the edge or end of the vertical wall in the 3-D modeling environment managed by the render engine 230 .
- the virtual door is anchored to the intersection of the virtual ground and the virtual wall.
- the virtual wall can be generated as an infinite vertical plane and the ground is generated as an infinite horizontal plane, and the intersection of the planes is a line to which the bottom edge of the virtual door is locked, such that the virtual door 700 slides along the intersection line as displayed in FIGS. 8 and 9 .
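The intersection line of the two infinite planes, and the bottom-edge lock, can be sketched as follows; each plane is given as a point/normal pair, and all names are illustrative:

```python
import numpy as np

def plane_intersection_line(p1, n1, p2, n2):
    """Line of intersection of two non-parallel planes (point, normal each).
    Returns (point_on_line, unit_direction)."""
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    d = np.cross(n1, n2)
    d = d / np.linalg.norm(d)
    # A point on the line satisfies both plane equations; the extra
    # constraint d . x = 0 picks one point and makes the system square.
    A = np.vstack([n1, n2, d])
    b = np.array([np.dot(n1, p1), np.dot(n2, p2), 0.0])
    return np.linalg.solve(A, b), d

def anchor_to_line(x, point, d):
    """Project x onto the line so the door's bottom edge stays on it."""
    x = np.asarray(x, dtype=float)
    return point + np.dot(x - point, d) * d

# Ground plane y = 0, wall plane z = 0: their intersection is the x-axis
p, d = plane_intersection_line([0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1])
snapped = anchor_to_line([2.0, 1.5, 0.7], p, d)   # lands on the x-axis
```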
- FIG. 8 shows an example user interface for updating virtual item placement, according to some example embodiments.
- the client device 110 has been rotated counterclockwise. Responsive to the counterclockwise rotation, the virtual door 700 is moved along the first wall 410 by sliding the virtual door 700 such that the bottom side of the door coincides with the corner of the first wall 410 and the floor 415 .
- FIG. 9A shows a further update to the user interface 400 in response to the client device 110 being further rotated counterclockwise.
- the virtual door 700 is moved further to the left on the first wall 410 .
- where two walls are defined (e.g., via point placements), the virtual door 700 snaps from the first virtual wall to the second virtual wall that is aligned with the second real-world wall of the room. In this way, the user of the client device 110 can more easily place and model the door at different places of the depicted virtual room.
- the anchoring to the virtual wall allows the user to efficiently place virtual wall items (e.g., doors, windows) within the simulation environment on the client device, which has limited input/output controls and limited screen size.
- in conventional approaches, the user may manually align the door using mouse clicks or by dragging the door to align with the wall; in the example embodiment of FIG. 7 , the anchoring allows the user of the client device 110 to efficiently place and move the virtual door 700 in the augmented reality display.
- the user may further select a lock button 900 which then locks the virtual door at the new position on the virtual wall.
- the virtual door is initially displayed as a lightweight primitive (e.g., mesh, door outline) that slides along the real-world wall (via the primitive being constrained to a transparent virtual wall).
- in response to the lock button 900 being selected, the render engine 230 renders the door as a realistic virtual item 905 , with image textures and virtual light rays reflected off the door, to accurately create an augmented reality simulation of the door on the real-world wall, as viewed through the client device 110 .
- FIG. 10 shows an example user interface for implementing a virtual item placement in which the virtual item is unconstrained, according to some example embodiments.
- the virtual item placed is a window, such as virtual window 1000 .
- the virtual window 1000 can be moved up and down along the vertical dimension (a first degree of freedom), and left and right along a horizontal dimension (a second degree of freedom).
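The two degrees of freedom can be sketched (illustratively only; names and wall dimensions are hypothetical) as a position (u, v) in the wall plane's own 2-D coordinates, clamped so the item stays on a finite wall — in contrast to the door, which has a single degree of freedom along the floor line.

```python
def move_on_wall(u, v, du, dv, wall_w, wall_h, item_w, item_h):
    """Move a wall item with two degrees of freedom in the wall's own
    2-D coordinates, clamping so the whole item stays on the wall."""
    u = min(max(u + du, 0.0), wall_w - item_w)
    v = min(max(v + dv, 0.0), wall_h - item_h)
    return u, v
```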
- the virtual item placement can be modified.
- a user of the client device 110 can perform a drag gesture over element 1005 to increase or decrease the width of the virtual window 1000 .
- similarly, a user of the client device 110 can perform a drag gesture over element 1010 to increase or decrease the height of the virtual window 1000 .
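The drag-to-resize behavior can be sketched as a mapping from screen-pixel drag deltas to item dimensions. This is an illustration only; the `scale` factor (metres per pixel) and the minimum size are hypothetical tuning constants, not values from the disclosure.

```python
def resize(width, height, drag_dx, drag_dy, scale=0.01, min_size=0.1):
    """New item width/height after a drag gesture over a resize handle.

    drag_dx/drag_dy are in screen pixels; `scale` converts pixels to
    world units, and min_size prevents the item collapsing to nothing.
    """
    new_w = max(width + drag_dx * scale, min_size)
    new_h = max(height + drag_dy * scale, min_size)
    return new_w, new_h
```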
- a virtual table can be placed in the augmented reality environment by placing four separate cylinder primitives 1105 A-D on a surface ground 1100 (e.g., dragging shapes onto an image depicting the physical ground, where the surface ground is a virtual horizontal plane).
- the separate cylinder primitives 1105 A-D correspond to four legs of a table to be modeled.
- a virtual rectangle 1110 can be snapped to the four cylinder primitives 1105 A-D to rapidly delineate an approximate space in which a virtual table is to be modeled.
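An illustrative sketch of the snap operation (function names hypothetical): the rectangle delineating the table-top can be taken as the axis-aligned bounds of the placed leg centers.

```python
def snap_rectangle(leg_centers):
    """Axis-aligned rectangle (min_x, min_y, max_x, max_y) snapped to the
    centers of the leg primitives placed on the virtual floor plane."""
    xs = [x for x, _ in leg_centers]
    ys = [y for _, y in leg_centers]
    return min(xs), min(ys), max(xs), max(ys)

# Four cylinder legs placed roughly at the corners of the table to model.
legs = [(0.0, 0.0), (1.2, 0.0), (0.0, 0.8), (1.2, 0.8)]
```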
- the placed primitives can be locked into groups so that in response to client device movement the group of primitives move around the 3-D modeling environment as a unit.
- one side of the table group 1115 can be locked to virtual wall 1103 , and when the client device is moved counterclockwise the table group 1115 slides left while being anchored to the wall (e.g., the side of the virtual rectangle 1110 is anchored or constrained to the virtual wall 1103 ).
- the placed primitives are locked or constrained in relation to each other.
- the bottom side of the virtual rectangle 1110 can be locked to the top surface of the four cylinder primitives 1105 A-D, and in response to client device movement, the virtual rectangle 1110 can slide on top of the cylinder primitives 1105 A-D but not be separated from the cylinder primitives 1105 A-D.
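Moving a locked group as a unit can be sketched as applying one translation to every member, so the table-top rectangle can never be separated from the legs. This is an illustration under assumed data shapes (leg centers as 2-D points, the top as a rectangle in the floor plane); none of the names come from the disclosure.

```python
def move_group(legs, top, dx, dy):
    """Translate a locked group (leg positions plus a table-top rectangle
    (min_x, min_y, max_x, max_y)) as a single unit in the floor plane."""
    new_legs = [(x + dx, y + dy) for x, y in legs]
    x0, y0, x1, y1 = top
    new_top = (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
    return new_legs, new_top
```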
- a user of the virtual item placement system 150 can use pre-grouped and pre-locked primitives to efficiently model complex objects, such as tables, chairs, and lamps, in a room.
- FIG. 12 is a block diagram 1200 illustrating an architecture of software 1202 , which can be installed on any one or more of the devices described above.
- FIG. 12 is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein.
- the software 1202 is implemented by hardware such as a machine 1300 of FIG. 13 that includes processors 1310 , memory 1330 , and I/O components 1350 .
- the software 1202 can be conceptualized as a stack of layers where each layer may provide a particular functionality.
- the software 1202 includes layers such as an operating system 1204 , libraries 1206 , frameworks 1208 , and applications 1210 .
- the applications 1210 invoke application programming interface (API) calls 1212 through the software stack and receive messages 1214 in response to the API calls 1212 , consistent with some embodiments.
- the operating system 1204 manages hardware resources and provides common services.
- the operating system 1204 includes, for example, a kernel 1220 , services 1222 , and drivers 1224 .
- the kernel 1220 acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments.
- the kernel 1220 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality.
- the services 1222 can provide other common services for the other software layers.
- the drivers 1224 are responsible for controlling or interfacing with the underlying hardware, according to some embodiments.
- the drivers 1224 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth.
- the libraries 1206 provide a low-level common infrastructure utilized by the applications 1210 .
- the libraries 1206 can include system libraries 1230 (e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like.
- the libraries 1206 can include API libraries 1232 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like.
- the libraries 1206 can also include a wide variety of other libraries 1234 to provide many other APIs to the applications 1210 .
- the frameworks 1208 provide a high-level common infrastructure that can be utilized by the applications 1210 , according to some embodiments.
- the frameworks 1208 provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth.
- the frameworks 1208 can provide a broad spectrum of other APIs that can be utilized by the applications 1210 , some of which may be specific to a particular operating system or platform.
- the applications 1210 include a home application 1250 , a contacts application 1252 , a browser application 1254 , a book reader application 1256 , a location application 1258 , a media application 1260 , a messaging application 1262 , a game application 1264 , and a broad assortment of other applications such as a third-party application 1266 .
- the applications 1210 are programs that execute functions defined in the programs.
- Various programming languages can be employed to create one or more of the applications 1210 , structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language).
- the third-party application 1266 may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system.
- the third-party application 1266 can invoke the API calls 1212 provided by the operating system 1204 to facilitate functionality described herein.
- FIG. 13 illustrates a diagrammatic representation of a machine 1300 in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, according to an example embodiment.
- FIG. 13 shows a diagrammatic representation of the machine 1300 in the example form of a computer system, within which instructions 1316 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 1300 to perform any one or more of the methodologies discussed herein may be executed.
- the instructions 1316 transform the general, non-programmed machine 1300 into a particular machine 1300 programmed to carry out the described and illustrated functions in the manner described.
- the machine 1300 operates as a standalone device or may be coupled (e.g., networked) to other machines.
- the machine 1300 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine 1300 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 1316 , sequentially or otherwise, that specify actions to be taken by the machine 1300 .
- the term “machine” shall also be taken to include a collection of machines 1300 that individually or jointly execute the instructions 1316 to perform any one or more of the methodologies discussed herein.
- the machine 1300 may include processors 1310 , memory 1330 , and I/O components 1350 , which may be configured to communicate with each other such as via a bus 1302 .
- the processors 1310 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 1312 and a processor 1314 that may execute the instructions 1316 .
- the term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously.
- although FIG. 13 shows multiple processors 1310 , the machine 1300 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
- the memory 1330 may include a main memory 1332 , a static memory 1334 , and a storage unit 1336 , each accessible to the processors 1310 such as via the bus 1302 .
- the main memory 1332 , the static memory 1334 , and the storage unit 1336 store the instructions 1316 embodying any one or more of the methodologies or functions described herein.
- the instructions 1316 may also reside, completely or partially, within the main memory 1332 , within the static memory 1334 , within the storage unit 1336 , within at least one of the processors 1310 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 1300 .
- the I/O components 1350 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on.
- the specific I/O components 1350 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 1350 may include many other components that are not shown in FIG. 13 .
- the I/O components 1350 are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components 1350 may include output components 1352 and input components 1354 .
- the output components 1352 may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth.
- the input components 1354 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.
- the I/O components 1350 may include biometric components 1356 , motion components 1358 , environmental components 1360 , or position components 1362 , among a wide array of other components.
- the biometric components 1356 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like.
- the motion components 1358 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth.
- the environmental components 1360 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment.
- the position components 1362 may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
- the I/O components 1350 may include communication components 1364 operable to couple the machine 1300 to a network 1380 or devices 1370 via a coupling 1382 and a coupling 1372 , respectively.
- the communication components 1364 may include a network interface component or another suitable device to interface with the network 1380 .
- the communication components 1364 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities.
- the devices 1370 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
- the communication components 1364 may detect identifiers or include components operable to detect identifiers.
- the communication components 1364 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals).
- a variety of information may be derived via the communication components 1364 , such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
- the various memories may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 1316 ), when executed by processor(s) 1310 , cause various operations to implement the disclosed embodiments.
- machine-storage medium As used herein, the terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure.
- the terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data.
- the terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors.
- machine-storage media examples include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- one or more portions of the network 1380 may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks.
- the network 1380 or a portion of the network 1380 may include a wireless or cellular network, and the coupling 1382 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling.
- the coupling 1382 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, Third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.
- the instructions 1316 may be transmitted or received over the network 1380 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 1364 ) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 1316 may be transmitted or received using a transmission medium via the coupling 1372 (e.g., a peer-to-peer coupling) to the devices 1370 .
- the terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
- the terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 1316 for execution by the machine 1300 , and include digital or analog communications signals or other intangible media to facilitate communication of such software.
- the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth.
- the term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- the terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure.
- the terms are defined to include both machine-storage media and transmission media.
- the terms include both storage devices/media and carrier waves/modulated data signals.
Description
- This application claims the benefit of priority to U.S. Provisional Application Ser. No. 62/729,930, filed Sep. 11, 2018, the content of which is incorporated herein by reference in its entirety.
- The present disclosure generally relates to special-purpose machines that manage data processing and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines for generating virtual item simulations.
- Increasingly, users would like to simulate an object (e.g., chair, door, lamp) in a physical room without having access to the object. For example, a user may be browsing a web store and see a floor lamp that may or may not match the style of the user's living room. The user may take a picture of his living room and overlay an image of the floor lamp in the picture to simulate what the floor lamp would look like in the living room. However, it can be difficult to adjust the floor lamp within the modeling environment using a mobile client device, which has limited resources (e.g., a small screen, limited processing power).
- To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure (“FIG.”) number in which that element or act is first introduced.
-
FIG. 1 is a block diagram showing an example network architecture in which embodiments of the virtual item placement system can be implemented, according to some example embodiments. -
FIG. 2 illustrates example functional engines of a virtual item placement system, according to some example embodiments. -
FIG. 3 shows an example flow diagram for placement and simulation of virtual items, according to some example embodiments. -
FIG. 4 shows an example user interface displaying live video for virtual item placement, according to some example embodiments. -
FIG. 5 shows an example user interface including a guide for adding points, according to some example embodiments. -
FIG. 6 shows an example user interface for implementing virtual item placement, according to some example embodiments. -
FIG. 7 shows an example user interface for implementing virtual item placement, according to some example embodiments. -
FIG. 8 shows an example user interface for updating virtual item placements, according to some example embodiments. -
FIGS. 9A and 9B show example user interfaces for locking and rendering a virtual item, according to some example embodiments. -
FIG. 10 shows an example user interface for implementing unconstrained virtual item placement, according to some example embodiments. -
FIGS. 11A and 11B show example virtual item anchors and groupings, according to some example embodiments. -
FIG. 12 is a block diagram illustrating a representative software architecture, which may be used in conjunction with various hardware architectures herein described. -
FIG. 13 is a block diagram illustrating components of a machine, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. - The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.
- As discussed, it can be difficult to simulate items on mobile devices due to their finite or limited resources (e.g., low processor power and low memory as compared to desktop rendering stations, small screen size, lack of input/output controls). One type of difficulty includes simulation of items or textures (e.g., wallpaper) on vertical walls because, whereas floors have image features that can be detected, walls are often uniform surfaces with little to no image features that can be used to generate and align a virtual vertical wall upon which vertical items can be placed and rendered. To this end, a virtual item placement system can generate virtual floors, and virtual walls that intersect with the floors, based on user inputs. For example, the user can input points onto a detected floor surface, and a vertical wall can be created as a vertical plane that is orthogonal (e.g., at 90 degrees) to the floor surface. The vertical wall is thus created with two constraints: the wall is aligned with the point placements and further constrained by orthogonality to the floor. Virtual items can then be modeled on the virtual wall, where the virtual wall is kept transparent and the virtual items are rendered on the virtual wall so that they appear as if they are applied directly to a real-world wall. In some example embodiments, to conserve mobile device resources, lightweight primitives of the virtual items to be placed are used instead of full-texture 3-D models of the items. The primitives can include a simple geometric shape with a lightweight uniform texture (e.g., one color), a mesh of the model, or a collection of vertices connected by lines that outline the shape of the virtual model. In some example embodiments, the placed primitives are anchored or otherwise constrained to the generated virtual wall to enable rapid and accurate placement of the item to be modeled. 
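The two constraints above — alignment with the user's point placements and orthogonality to the floor — can be sketched as follows. This is an illustrative construction only (the disclosure does not prescribe an implementation); the floor is assumed to be the plane z = 0 and all names are hypothetical.

```python
import math

def wall_from_points(p1, p2):
    """Build the vertical wall plane through two user-placed floor points.

    The floor is the horizontal plane z = 0; the returned plane contains
    both points and is orthogonal to the floor. Plane form: dot(n, p) = d,
    returned as (normal, offset).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    # A horizontal normal perpendicular to the segment between the points
    # guarantees the wall is vertical and aligned with the placements.
    normal = (-dy / length, dx / length, 0.0)
    offset = normal[0] * p1[0] + normal[1] * p1[1]
    return normal, offset
```

Because the normal's z component is zero, the wall plane is automatically orthogonal to the floor, and items placed on it can be rendered as if applied to the real-world wall behind the (transparent) virtual plane.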
For example, a door primitive can be anchored at the bottom side of the virtual wall and slide along the wall in response to client device movement (e.g., a user moving a client device from right to left as detected by inertial sensors of the client device, such as an accelerometer and gyroscope). In some example embodiments, the user can select a lock element (e.g., button) that locks the item primitive in place and the system generates a full render of the object with realistic textures and lighting (e.g., an Oak door with a wood texture with virtual rays reflected off the wood texture, as calculated by graphical processing unit shaders on the client device). In this way, resource limited mobile devices can simulate virtual items on surfaces of a real-world room, such as a bedroom wall.
- Further, in some example embodiments, the system can lock primitive sub-components to other primitive sub-components to enable the user to more readily manipulate a complex primitive model (e.g., a table) on the user's mobile device. For example, leg primitives can be anchored to a table surface primitive, which can then be modified or snapped to a vertical wall as viewed through the mobile device. In this way, the user can rapidly generate complex 3-D models that conventionally would be modeled using higher-power computational devices (e.g., a desktop workstation with a high-powered CPU and one or more dedicated graphics cards).
- With reference to
FIG. 1 , an example embodiment of a high-level client-server-based network architecture 100 is shown. A networked system 102 , in the example form of a network-based rendering platform, can provide server-side rendering via a network 104 (e.g., the Internet or a wide area network (WAN)) to one or more client devices 110 . In some implementations, a user (e.g., user 106 ) interacts with the networked system 102 using the client device 110 . The client device 110 may execute the system 150 as a local application or a cloud-based application (e.g., through an Internet browser). - In various implementations, the
client device 110 comprises a computing device that includes at least a display and communication capabilities that provide access to the networked system 102 via the network 104 . The client device 110 comprises, but is not limited to, a remote device, work station, computer, general purpose computer, Internet appliance, hand-held device, wireless device, portable device, wearable computer, cellular or mobile phone, personal digital assistant (PDA), smart phone, tablet, ultrabook, netbook, laptop, desktop, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box, network personal computer (PC), mini-computer, and so forth. In an example embodiment, the client device 110 comprises one or more of a touch screen, accelerometer, gyroscope, biometric sensor, camera, microphone, Global Positioning System (GPS) device, and the like. In some embodiments, the client device 110 is the recording device that generates the video recording and also the playback device that plays the modified video recording during a playback mode. In some embodiments, the recording device is a different client device than the playback device, and both have instances of the virtual item placement system 150 installed. For example, a first client device using a first instance of a dynamic virtual room modeler may generate a simulation, and a second client device using a second instance of a dynamic virtual room modeler may receive the simulation over a network and display the simulation via a display screen. The instances may be platform specific to the operating system or device in which they are installed. For example, the first instance may be an iOS application and the second instance may be an Android application. - The
client device 110 communicates with the network 104 via a wired or wireless connection. For example, one or more portions of the network 104 comprise an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a Wireless Fidelity (WI-FI®) network, a Worldwide Interoperability for Microwave Access (WiMax) network, another type of network, or any suitable combination thereof. - Users (e.g., the user 106) comprise a person, a machine, or other means of interacting with the
client device 110. In some example embodiments, the user 106 is not part of the network architecture 100, but interacts with the network architecture 100 via the client device 110 or another means. For instance, the user 106 provides input (e.g., touch screen input or alphanumeric input) to the client device 110, and the input is communicated to the networked system 102 via the network 104. In this instance, the networked system 102, in response to receiving the input from the user 106, communicates information to the client device 110 via the network 104 to be presented to the user 106. In this way, the user 106 can interact with the networked system 102 using the client device 110. - The
API server 120 and the web server 122 are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers 140. The application server 140 can host a dynamic virtual environment modeler server 151, which can comprise one or more modules or applications, each of which can be embodied as hardware, software, firmware, or any combination thereof. The application server 140 is, in turn, shown to be coupled to a database server 124 that facilitates access to one or more information storage repositories, such as database 126. In an example embodiment, the database 126 comprises one or more storage devices that store information to be accessed by the virtual item placement system 150. Additionally, in some embodiments, the model data may be cached locally on the client device 110. Further, while the client-server-based network architecture 100 shown in FIG. 1 employs a client-server architecture, the present inventive subject matter is, of course, not limited to such an architecture, and can equally well find application in a distributed, or peer-to-peer, architecture system, for example. -
FIG. 2 illustrates example functional engines of a virtual item placement system 150, according to some example embodiments. As illustrated, the virtual item placement system 150 comprises a capture engine 210, a movement engine 220, a render engine 230, a placement engine 240, a position engine 250, and a display engine 260. The capture engine 210 manages capturing one or more images, such as an image or an image sequence (e.g., a video, or live video displayed in real time on a mobile device). The movement engine 220 is configured to implement one or more inertial sensors (e.g., gyroscope, accelerometer) to detect movement of a user device (e.g., a client device, a mobile device, a smartphone) that is executing an instance of the virtual item placement system 150, according to some example embodiments. The render engine 230 is configured to generate and manage a 3-D modeling environment in which virtual items can be placed and rendered for output on a display (e.g., overlaid on top of images generated by the capture engine 210 to provide an augmented reality experience for the viewer of the user device). The placement engine 240 is configured to receive point placements from a user of the client device. The point placements can be used by the render engine 230 to construct virtual items in the virtual item environment. For example, the placement engine 240 can receive placements of corners of the physical environment for use in generating a 3-D model of the environment (e.g., virtual walls of the environment). - In some example embodiments, the
placement engine 240 is configured to detect a ground surface of the physical environment depicted in the image(s) captured by the capture engine 210. For example, the placement engine 240 can detect image features of a physical ground depicted in the images, determine that the image features are trackable across images of the live video (e.g., using the scale-invariant feature transform (SIFT)), determine or assume that the detected image features are coplanar, and thus determine an orientation of the real-world ground surface depicted in the images. The position engine 250 is configured to manage positional updates of a virtual item in the 3-D modeling environment. For example, the position engine 250 can move a virtual door geometric primitive along a virtual wall in response to physical movement detected by the movement engine 220, as discussed in further detail below. The display engine 260 is configured to generate a user interface to display images (e.g., a live video view), receive user inputs (e.g., user input of points), receive manipulations of the virtual item, and render a composite augmented reality display that dynamically updates the virtual item to simulate that the virtual item actually exists in the depicted environment of the images generated by the capture engine 210. -
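The coplanar-feature approach to ground detection can be illustrated with a least-squares plane fit. The sketch below assumes the tracked features have already been triangulated into 3-D points; the function name and conventions are illustrative, not from the disclosure:

```python
import numpy as np

def fit_ground_plane(points):
    """Fit a plane to tracked 3-D feature points assumed to be coplanar.

    Returns (centroid, unit normal). The normal is oriented to point
    upward (positive y) by convention.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e., the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    if normal[1] < 0:  # keep "up" consistent
        normal = -normal
    return centroid, normal

# Noisy samples of the plane y = 0 (standing in for the physical floor).
rng = np.random.default_rng(0)
xz = rng.uniform(-2, 2, size=(50, 2))
floor_pts = np.column_stack([xz[:, 0], 0.01 * rng.standard_normal(50), xz[:, 1]])
origin, n = fit_ground_plane(floor_pts)
```

The recovered normal then defines the virtual horizontal plane that the render engine aligns with the real-world ground.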
FIG. 3 shows a flow diagram of an example method 300 for implementing virtual item placement, according to some example embodiments. At operation 305, the capture engine 210 initiates a view on the client device. For example, at operation 305, the capture engine 210 generates a live video view using an image sensor on a backside of a client device. - At
operation 310, the render engine 230 generates a 3-D model of a room environment. For example, the placement engine 240 can first detect a ground surface using image feature analysis as discussed above, and then generate a virtual horizontal plane in the 3-D model of the room environment to correspond to the detected real-world ground. Further, the placement engine 240 can receive placements of points that indicate one or more physical walls. The point placements can be used to construct virtual walls as vertical planes in the 3-D modeling environment, as discussed in further detail below. - At
operation 315, the position engine 250 places a primitive in the 3-D modeling environment according to placement instructions received on the client device. For example, the user of the client device can drag and drop a door image onto the live video view. In response to the door image being dragged and dropped onto the live view, the placement engine 240 places a door primitive on the virtual wall that coincides with the physical wall onto which the user dragged and dropped the door image. - At
operation 320, the placement engine 240 receives one or more manipulations or modifications to the primitive. For example, at operation 320, the placement engine 240 receives an instruction to scale the size of the door by receiving a drag gesture on the door depicted on the client device. Responsive to the gesture, the placement engine 240 scales the size of the door so that it is larger or smaller in response to the user's gestures. - At
operation 325, the primitive is moved in response to the client device movement. For example, at operation 325, the movement engine 220 detects physical movement of the client device using one or more inertial sensors, such as a gyroscope or accelerometer, that are integrated into the client device. In some example embodiments, the movement is detected using image analysis (e.g., detecting movement of wall image features between different frames of the video sequence, as in a SIFT algorithm). In response to the movement detected using image analysis or inertial sensors, the virtual item is moved in the environment. For example, in response to the user rotating the client device counterclockwise (e.g., sweeping the client device from the user's right to the user's left), the virtual item slides along a virtual wall in the leftward direction with the virtual item locked at the bottom of the wall, according to some example embodiments. In some example embodiments, in addition to movement of the primitive in the 3-D environment, a virtual camera used to render the 3-D environment is moved so that the perspective of the imaged physical environment matches the perspective of the 3-D model environment rendered by the virtual camera. - At
operation 330, the position engine 250 receives a lock instruction to save the placed primitive at the current location. For example, after the user finishes rotating the client device counterclockwise and the door slides in the leftward direction, the user can select a save instruction to save the coordinates of the virtual item at the current position on the virtual wall. - At
operation 335, the render engine 230 renders an augmented display of the virtual item and the physical environment depicted in the one or more images (e.g., the live video generated at operation 305). In some example embodiments, the method 300 is performed continuously so that, in response to new physical movements of the client device, the virtual item is moved a corresponding amount, the virtual camera is likewise moved a corresponding amount, and a new augmented reality frame is displayed on a display device of the client device, thereby enabling a user viewing the client device to simulate the placed virtual item in the physical environment as viewed through the client device. -
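The continuous loop of operations 325–335 can be sketched as a per-frame update in which the same motion delta drives both the item position and the virtual camera, keeping the rendered perspective aligned with the imaged physical environment. This is a minimal illustration with hypothetical names and toy geometry, not the disclosed implementation:

```python
def update_frame(item_x, camera_yaw, yaw_delta, wall_extent):
    """One iteration of the continuous AR update sketched by method 300.

    A device yaw change moves the virtual camera by the same amount and
    slides the wall-anchored item along the wall (clamped to its extent).
    """
    camera_yaw += yaw_delta  # virtual camera tracks the physical device
    item_x = max(0.0, min(wall_extent, item_x - yaw_delta))  # item slides left on CCW turn
    return item_x, camera_yaw

x, yaw = 2.0, 0.0
for _ in range(3):  # three consecutive counterclockwise movements
    x, yaw = update_frame(x, yaw, 0.1, 4.0)
```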
FIG. 4 shows an example user interface for implementing virtual item placement, according to some example embodiments. In FIG. 4, a client device 110 displays a user interface 400 in which an image 405 (e.g., a frame of a live video generated by a backside camera on the client device 110) of the physical room is depicted. The physical room includes a first wall 410 and a floor 415. As illustrated, the system 150 has detected the floor 415 using image feature analysis, and a notification 420 is displayed on the user interface 400 to indicate that the floor 415 has been detected. -
FIG. 5 shows an example user interface including a guide for adding points, according to some example embodiments. In FIG. 5, the user (not depicted) has moved the client device 110 so that the image now depicts the first wall 410, the floor 415, and a second wall 500 that intersect at a corner. The user interface 400 includes a guide 505, which is displayed as a reticle that the user can position over the corner of the room before selecting the add point button 510. - Moving to
FIG. 6, after the user adds the point 600, the user moves the client device 110 counterclockwise, which creates a guideline (e.g., the arrow) extending from the point 600. The guideline coincides with the interface or corner of the first wall 410 and the floor 415 (although, in the example illustrated in FIG. 6, the guideline is shown offset from the corner of the first wall 410 and the floor 415). In some example embodiments, the user moves the client device 110 so that the reticle (guide 505) is over each point (e.g., corner) of the room, after which the user selects the add point button 510 so that each corner of the room can be received and virtual walls generated for the 3-D virtual environment. - In some example embodiments, the user only defines a single wall upon which virtual items are simulated. For example, the user can place
point 600, then drag the guide counterclockwise and add point 605 to create a line between the points 600 and 605. -
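Constructing a virtual wall from two placed points can be sketched as follows, assuming the points lie on the detected ground plane and "up" is the ground normal (the function name and conventions are illustrative, not from the disclosure):

```python
import numpy as np

def wall_plane_from_corners(p0, p1, up=(0.0, 1.0, 0.0)):
    """Build an (infinite) vertical wall plane from two user-placed
    corner points lying on the ground.

    Returns (point_on_plane, unit normal): the plane contains the base
    segment p0 -> p1 and the up direction.
    """
    p0, p1, up = (np.asarray(v, dtype=float) for v in (p0, p1, up))
    along = p1 - p0                # direction along the wall base
    normal = np.cross(along, up)   # horizontal, perpendicular to the wall
    normal /= np.linalg.norm(normal)
    return p0, normal

# Two corners one meter apart along the x-axis yield a wall whose
# normal points along the z-axis.
pt, n = wall_plane_from_corners((0, 0, 0), (1, 0, 0))
```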
FIG. 7 shows an example user interface for implementing virtual item placement, according to some example embodiments. In FIG. 7, the user has placed a virtual door 700 (e.g., a virtual door primitive) by dragging and dropping the virtual door 700 anywhere onto the image 405. Although primitives (e.g., a wireframe, a collection of vertices) are discussed here as an example, in some example embodiments the virtual items are displayed as fully textured virtual items (e.g., a door with an oak wood virtual texture). Continuing, in response to receiving the drag-and-drop instruction, the virtual item placement system 150 snaps the virtual door 700 onto the first wall 410 (e.g., onto the virtual wall constructed in FIG. 6) such that the bottom side of the virtual door 700 is anchored 705 to the intersection or corner of the first wall 410 and the floor 415. In some example embodiments, the edge of the virtual door is anchored to the edge or end of the vertical wall in the 3-D modeling environment managed by the render engine 230. In some example embodiments, the virtual door is anchored to the intersection of the virtual ground and the virtual wall. For example, the virtual wall can be generated as an infinite vertical plane and the ground as an infinite horizontal plane, and the intersection of the planes is a line to which the bottom edge of the virtual door is locked, such that the virtual door 700 slides along the intersection line as displayed in FIGS. 8 and 9. -
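The intersection line of the infinite vertical wall plane and the infinite horizontal ground plane can be computed directly. A minimal sketch, assuming each plane is expressed in the form n·x = d (a representation chosen here for illustration):

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersect two planes given as n . x = d.

    Returns (point_on_line, unit direction). Assumes the planes are not
    parallel (e.g., an infinite vertical wall and the horizontal ground).
    """
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    direction /= np.linalg.norm(direction)
    # A third equation (direction . x = 0) pins down one point on the line.
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, [d1, d2, 0.0])
    return point, direction

# Ground y = 0 meets wall z = 3: the door's bottom edge would be locked
# to the line running along x at z = 3.
p, d = plane_intersection_line((0, 1, 0), 0.0, (0, 0, 1), 3.0)
```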
FIG. 8 shows an example user interface for updating virtual item placement, according to some example embodiments. In FIG. 8, the client device 110 has been rotated counterclockwise. Responsive to the counterclockwise rotation, the virtual door 700 is moved along the first wall 410 by sliding the virtual door 700 such that the bottom side of the door coincides with the corner of the first wall 410 and the floor 415. -
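The sliding behavior can be approximated by mapping the device's yaw change to a lateral offset along the wall, clamped to the wall's extent; the small-angle scene geometry below is an assumption for illustration, not specified by the disclosure:

```python
import math

def slide_along_wall(x, yaw_delta_rad, distance_to_wall, x_min, x_max):
    """Slide a wall-anchored item left/right in response to device yaw.

    A counterclockwise rotation (positive yaw delta) moves the item left
    along the wall; the offset approximates the arc the view ray sweeps
    across the wall at the given distance, clamped to the wall extent.
    """
    x = x - distance_to_wall * math.tan(yaw_delta_rad)
    return max(x_min, min(x_max, x))

# Rotating ~5.7 degrees CCW while 3 m from the wall slides the door ~0.3 m left.
new_x = slide_along_wall(2.0, 0.1, 3.0, 0.0, 4.0)
```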
FIG. 9A shows a further update to the user interface 400 in response to the client device 110 being further rotated counterclockwise. As displayed in FIG. 9A, in response to further rotation (as detected by inertial sensor analysis or image feature analysis), the virtual door 700 is moved further to the left on the first wall 410. Further, if two walls are defined (e.g., via point placements), when the client device 110 rotates to view the other real-world wall, the virtual door 700 snaps from the first virtual wall to the second virtual wall that is aligned with the second real-world wall of the room. In this way, the user of the client device 110 can more easily place and model the door at different places in the depicted virtual room. That is, the anchoring to the virtual wall allows the user to efficiently place virtual wall items (e.g., doors, windows) within the simulation environment on the client device, which has limited input/output controls and limited screen size. For example, whereas in conventional approaches the user may manually align the door using mouse clicks or by dragging the door to align with the wall, in the example embodiment of FIG. 7 the anchoring allows the user of the client device 110 to efficiently place and move the virtual door 700 in the augmented reality display. The user may further select a lock button 900, which then locks the virtual door at the new position on the virtual wall. In some example embodiments, the virtual door is initially displayed as a lightweight primitive (e.g., a mesh, a door outline) that slides along the real-world wall (via the primitive being constrained to a transparent virtual wall). Continuing to FIG. 9B, in response to the lock button 900 being selected, the render engine 230 then renders the door as a realistic virtual item 905, with image textures and virtual light rays reflected off the door, to accurately create an augmented reality simulation of the door on the real-world wall, as viewed through the client device 110. -
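Snapping between virtual walls can be sketched as a ray-plane test that picks whichever wall the camera currently faces; representing each wall as a (point, inward unit normal) pair is an assumption made here for illustration:

```python
import numpy as np

def snap_to_wall(view_origin, view_dir, walls):
    """Pick the wall the camera is currently looking at.

    `walls` is a list of (point_on_plane, inward unit normal) pairs.
    Returns the index of the nearest wall intersected in front of the
    camera, or None if the view ray hits no wall.
    """
    view_origin = np.asarray(view_origin, float)
    view_dir = np.asarray(view_dir, float)
    best, best_t = None, np.inf
    for i, (p, n) in enumerate(walls):
        denom = np.dot(n, view_dir)
        if abs(denom) < 1e-9:  # looking parallel to this wall
            continue
        t = np.dot(n, np.asarray(p, float) - view_origin) / denom
        if 0 < t < best_t:     # in front of the camera and nearest so far
            best, best_t = i, t
    return best

# Two perpendicular walls; the camera looks toward the wall at z = 3,
# so the item would snap to wall 0.
walls = [((0, 0, 3), (0, 0, -1)), ((-2, 0, 0), (1, 0, 0))]
looking_at = snap_to_wall((0, 1.5, 0), (0, 0, 1), walls)
```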
FIG. 10 shows an example user interface for implementing virtual item placement in which the virtual item is unconstrained, according to some example embodiments. As illustrated in FIG. 10, instead of a virtual door, the virtual item placed is a window, such as virtual window 1000. In contrast to the virtual door, which was anchored to the wall's bottom side, the virtual window 1000 can be moved up and down along the vertical dimension (a first degree of freedom), and left and right along a horizontal dimension (a second degree of freedom). Further, according to some example embodiments, the placed virtual item can be modified. For example, a user of the client device 110 can perform a drag gesture over element 1005 to increase or decrease the width of the virtual window 1000. Likewise, a user of the client device 110 can perform a drag gesture over element 1010 to increase or decrease the height of the virtual window 1000. - Although in the above examples two-dimensional virtual items are placed (e.g., a virtual door, a virtual window), in some example embodiments the virtual items placed using the above approaches are three-dimensional. For example, with reference to
FIG. 11A, a virtual table can be placed in the augmented reality environment by placing four separate cylinder primitives 1105A-D on a ground surface 1100 (e.g., by dragging shapes onto an image depicting the physical ground, where the ground surface is a virtual horizontal plane). The separate cylinder primitives 1105A-D correspond to four legs of a table to be modeled. Further, a virtual rectangle 1110 can be snapped to the four cylinder primitives 1105A-D to rapidly delineate an approximate space in which a virtual table is to be modeled. In some example embodiments, the placed primitives can be locked into groups so that, in response to client device movement, the group of primitives moves around the 3-D modeling environment as a unit. For example, with reference to FIG. 11B, one side of the table group 1115 can be locked to virtual wall 1103, and when the client device is moved counterclockwise, the table group 1115 slides left while remaining anchored to the wall (e.g., the side of the virtual rectangle 1110 is anchored or constrained to the virtual wall 1103). - In some example embodiments, the placed primitives are locked or constrained in relation to each other. For example, the bottom side of the
virtual rectangle 1110 can be locked to the top surfaces of the four cylinder primitives 1105A-D, and in response to client device movement, the virtual rectangle 1110 can slide on top of the cylinder primitives 1105A-D but not be separated from them. In this way, a user of the virtual item placement system 150 can use pre-grouped and pre-locked primitives to efficiently model complex objects, such as tables, chairs, and lamps, in a room. -
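Pre-grouped, pre-locked primitives can be modeled by storing each primitive as a fixed offset from a shared group origin, so the relative locks (e.g., tabletop on legs) hold by construction when the group moves as a unit. A minimal sketch; the class and method names are illustrative, not from the disclosure:

```python
import numpy as np

class PrimitiveGroup:
    """A set of primitives locked together so they move as one unit."""

    def __init__(self, offsets):
        # Each primitive is a fixed offset from the group origin, so
        # relative constraints are preserved under any group translation.
        self.origin = np.zeros(3)
        self.offsets = [np.asarray(o, float) for o in offsets]

    def translate(self, delta):
        """Move the whole group (e.g., sliding the table along a wall)."""
        self.origin = self.origin + np.asarray(delta, float)

    def positions(self):
        return [tuple(self.origin + o) for o in self.offsets]

# Four legs and a tabletop grouped as one table; sliding the group
# left along the wall moves every primitive by the same amount.
table = PrimitiveGroup([(0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1), (0.5, 0.8, 0.5)])
table.translate((-0.5, 0, 0))
```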
FIG. 12 is a block diagram 1200 illustrating an architecture of software 1202, which can be installed on any one or more of the devices described above. FIG. 12 is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software 1202 is implemented by hardware such as a machine 1300 of FIG. 13 that includes processors 1310, memory 1330, and I/O components 1350. In this example architecture, the software 1202 can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software 1202 includes layers such as an operating system 1204, libraries 1206, frameworks 1208, and applications 1210. Operationally, the applications 1210 invoke application programming interface (API) calls 1212 through the software stack and receive messages 1214 in response to the API calls 1212, consistent with some embodiments. - In various implementations, the
operating system 1204 manages hardware resources and provides common services. The operating system 1204 includes, for example, a kernel 1220, services 1222, and drivers 1224. The kernel 1220 acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel 1220 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services 1222 can provide other common services for the other software layers. The drivers 1224 are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers 1224 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. - In some embodiments, the
libraries 1206 provide a low-level common infrastructure utilized by the applications 1210. The libraries 1206 can include system libraries 1230 (e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries 1206 can include API libraries 1232 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 1206 can also include a wide variety of other libraries 1234 to provide many other APIs to the applications 1210. - The
frameworks 1208 provide a high-level common infrastructure that can be utilized by the applications 1210, according to some embodiments. For example, the frameworks 1208 provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks 1208 can provide a broad spectrum of other APIs that can be utilized by the applications 1210, some of which may be specific to a particular operating system or platform. - In an example embodiment, the
applications 1210 include a home application 1250, a contacts application 1252, a browser application 1254, a book reader application 1256, a location application 1258, a media application 1260, a messaging application 1262, a game application 1264, and a broad assortment of other applications such as a third-party application 1266. According to some embodiments, the applications 1210 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 1210, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 1266 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 1266 can invoke the API calls 1212 provided by the operating system 1204 to facilitate functionality described herein. -
FIG. 13 illustrates a diagrammatic representation of a machine 1300 in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically, FIG. 13 shows a diagrammatic representation of the machine 1300 in the example form of a computer system, within which instructions 1316 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 1300 to perform any one or more of the methodologies discussed herein may be executed. The instructions 1316 transform the general, non-programmed machine 1300 into a particular machine 1300 programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine 1300 operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 1300 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 1300 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 1316, sequentially or otherwise, that specify actions to be taken by the machine 1300.
Further, while only a single machine 1300 is illustrated, the term "machine" shall also be taken to include a collection of machines 1300 that individually or jointly execute the instructions 1316 to perform any one or more of the methodologies discussed herein. - The
machine 1300 may include processors 1310, memory 1330, and I/O components 1350, which may be configured to communicate with each other such as via a bus 1302. In an example embodiment, the processors 1310 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 1312 and a processor 1314 that may execute the instructions 1316. The term "processor" is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as "cores") that may execute instructions contemporaneously. Although FIG. 13 shows multiple processors 1310, the machine 1300 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. - The
memory 1330 may include a main memory 1332, a static memory 1334, and a storage unit 1336, all accessible to the processors 1310 such as via the bus 1302. The main memory 1332, the static memory 1334, and the storage unit 1336 store the instructions 1316 embodying any one or more of the methodologies or functions described herein. The instructions 1316 may also reside, completely or partially, within the main memory 1332, within the static memory 1334, within the storage unit 1336, within at least one of the processors 1310 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 1300. - The I/
O components 1350 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 1350 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 1350 may include many other components that are not shown in FIG. 13. The I/O components 1350 are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components 1350 may include output components 1352 and input components 1354. The output components 1352 may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components 1354 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. - In further example embodiments, the I/
O components 1350 may include biometric components 1356, motion components 1358, environmental components 1360, or position components 1362, among a wide array of other components. For example, the biometric components 1356 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 1358 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components 1360 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 1362 may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. - Communication may be implemented using a wide variety of technologies. The I/
O components 1350 may include communication components 1364 operable to couple the machine 1300 to a network 1380 or devices 1370 via a coupling 1382 and a coupling 1372, respectively. For example, the communication components 1364 may include a network interface component or another suitable device to interface with the network 1380. In further examples, the communication components 1364 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 1370 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). - Moreover, the
communication components 1364 may detect identifiers or include components operable to detect identifiers. For example, the communication components 1364 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as the Universal Product Code (UPC) bar code, multi-dimensional bar codes such as the Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, and UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 1364, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. - The various memories (i.e., 1330, 1332, 1334, and/or memory of the processor(s) 1310) and/or
storage unit 1336 may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 1316), when executed by the processor(s) 1310, cause various operations to implement the disclosed embodiments. - As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below.
- In various example embodiments, one or more portions of the
network 1380 may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 1380 or a portion of the network 1380 may include a wireless or cellular network, and the coupling 1382 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 1382 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, Third Generation Partnership Project (3GPP) technology including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. - The
instructions 1316 may be transmitted or received over the network 1380 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 1364) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 1316 may be transmitted or received using a transmission medium via the coupling 1372 (e.g., a peer-to-peer coupling) to the devices 1370. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 1316 for execution by the machine 1300, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. - The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals.
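The HTTP-based transfer described above can be sketched minimally as follows. This is an illustrative sketch, not the patent's implementation; the function name is an assumption, and any endpoint URL passed in would be deployment-specific.

```python
import urllib.request

def fetch_instructions(url: str, timeout: float = 10.0) -> bytes:
    """Receive an instruction payload over a network via HTTP (a well-known
    transfer protocol) through a network interface, returning its raw bytes."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read()
```

A machine could then load the received bytes into memory for execution, at which point they are held on a machine-storage medium rather than a transmission medium.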
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/567,518 US11004270B2 (en) | 2018-09-11 | 2019-09-11 | Virtual item placement system |
US17/228,802 US11645818B2 (en) | 2018-09-11 | 2021-04-13 | Virtual item placement system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862729930P | 2018-09-11 | 2018-09-11 | |
US16/567,518 US11004270B2 (en) | 2018-09-11 | 2019-09-11 | Virtual item placement system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/228,802 Continuation US11645818B2 (en) | 2018-09-11 | 2021-04-13 | Virtual item placement system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200082633A1 true US20200082633A1 (en) | 2020-03-12 |
US11004270B2 US11004270B2 (en) | 2021-05-11 |
Family
ID=69718975
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/567,518 Active US11004270B2 (en) | 2018-09-11 | 2019-09-11 | Virtual item placement system |
US17/228,802 Active US11645818B2 (en) | 2018-09-11 | 2021-04-13 | Virtual item placement system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/228,802 Active US11645818B2 (en) | 2018-09-11 | 2021-04-13 | Virtual item placement system |
Country Status (1)
Country | Link |
---|---|
US (2) | US11004270B2 (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200126317A1 (en) * | 2018-10-17 | 2020-04-23 | Siemens Schweiz Ag | Method for determining at least one region in at least one input model for at least one element to be placed |
CN111841014A (en) * | 2020-07-22 | 2020-10-30 | 腾讯科技(深圳)有限公司 | Virtual article display method and device, electronic equipment and storage medium |
US11010500B2 (en) * | 2018-09-17 | 2021-05-18 | Bricsy Nv | Direct room modeling in computer-aided design |
US11157740B1 (en) * | 2020-08-11 | 2021-10-26 | Amazon Technologies, Inc. | Augmented reality object model configuration based on placement location |
CN113721911A (en) * | 2021-08-25 | 2021-11-30 | 网易(杭州)网络有限公司 | Control method, medium, and apparatus of display scale of virtual scene |
CN113792358A (en) * | 2021-09-22 | 2021-12-14 | 深圳须弥云图空间科技有限公司 | Automatic interaction layout method and device for three-dimensional furniture and electronic equipment |
US20220189127A1 (en) * | 2019-04-16 | 2022-06-16 | Nippon Telegraph And Telephone Corporation | Information processing system, information processing terminal device, server device, information processing method and program thereof |
US11481930B2 (en) * | 2020-01-21 | 2022-10-25 | Trimble Inc. | Accurately positioning augmented reality models within images |
US11645818B2 (en) | 2018-09-11 | 2023-05-09 | Houzz, Inc. | Virtual item placement system |
US20230186569A1 (en) * | 2021-12-09 | 2023-06-15 | Qualcomm Incorporated | Anchoring virtual content to physical surfaces |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220319059A1 (en) * | 2021-03-31 | 2022-10-06 | Snap Inc | User-defined contextual spaces |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170352188A1 (en) * | 2011-03-24 | 2017-12-07 | Pantomime Corporation | Support Based 3D Navigation |
JP5960796B2 (en) * | 2011-03-29 | 2016-08-02 | クアルコム,インコーポレイテッド | Modular mobile connected pico projector for local multi-user collaboration |
WO2014097181A1 (en) * | 2012-12-19 | 2014-06-26 | Basf Se | Detector for optically detecting at least one object |
US20150042640A1 (en) * | 2013-08-07 | 2015-02-12 | Cherif Atia Algreatly | Floating 3d image in midair |
US20180225885A1 (en) * | 2013-10-01 | 2018-08-09 | Aaron Scott Dishno | Zone-based three-dimensional (3d) browsing |
US10504231B2 (en) * | 2014-05-21 | 2019-12-10 | Millennium Three Technologies, Inc. | Fiducial marker patterns, their automatic detection in images, and applications thereof |
US10373381B2 (en) * | 2016-03-30 | 2019-08-06 | Microsoft Technology Licensing, Llc | Virtual object manipulation within physical environment |
AU2018261328B2 (en) * | 2017-05-01 | 2022-08-25 | Magic Leap, Inc. | Matching content to a spatial 3D environment |
GB201709199D0 (en) * | 2017-06-09 | 2017-07-26 | Delamont Dean Lindsay | IR mixed reality and augmented reality gaming system |
IL253432A0 (en) * | 2017-07-11 | 2017-09-28 | Elbit Systems Ltd | A system and method for correcting a rolling display effect |
US10304254B2 (en) * | 2017-08-08 | 2019-05-28 | Smart Picture Technologies, Inc. | Method for measuring and modeling spaces using markerless augmented reality |
US10155166B1 (en) * | 2017-09-08 | 2018-12-18 | Sony Interactive Entertainment Inc. | Spatially and user aware second screen projection from a companion robot or device |
US10026218B1 (en) * | 2017-11-01 | 2018-07-17 | Pencil and Pixel, Inc. | Modeling indoor scenes based on digital images |
US10636214B2 (en) * | 2017-12-22 | 2020-04-28 | Houzz, Inc. | Vertical plane object simulation |
US11567627B2 (en) * | 2018-01-30 | 2023-01-31 | Magic Leap, Inc. | Eclipse cursor for virtual content in mixed reality displays |
US11875012B2 (en) * | 2018-05-25 | 2024-01-16 | Ultrahaptics IP Two Limited | Throwable interface for augmented reality and virtual reality environments |
US10475250B1 (en) * | 2018-08-30 | 2019-11-12 | Houzz, Inc. | Virtual item simulation using detected surfaces |
US11004270B2 (en) | 2018-09-11 | 2021-05-11 | Houzz, Inc. | Virtual item placement system |
US11544900B2 (en) * | 2019-07-25 | 2023-01-03 | General Electric Company | Primitive-based 3D building modeling, sensor simulation, and estimation |
- 2019
  - 2019-09-11 US US16/567,518 patent/US11004270B2/en active Active
- 2021
  - 2021-04-13 US US17/228,802 patent/US11645818B2/en active Active
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11645818B2 (en) | 2018-09-11 | 2023-05-09 | Houzz, Inc. | Virtual item placement system |
US11010500B2 (en) * | 2018-09-17 | 2021-05-18 | Bricsy Nv | Direct room modeling in computer-aided design |
US20200126317A1 (en) * | 2018-10-17 | 2020-04-23 | Siemens Schweiz Ag | Method for determining at least one region in at least one input model for at least one element to be placed |
US11748964B2 (en) * | 2018-10-17 | 2023-09-05 | Siemens Schweiz Ag | Method for determining at least one region in at least one input model for at least one element to be placed |
US11721078B2 (en) * | 2019-04-16 | 2023-08-08 | Nippon Telegraph And Telephone Corporation | Information processing system, information processing terminal device, server device, information processing method and program thereof |
US20220189127A1 (en) * | 2019-04-16 | 2022-06-16 | Nippon Telegraph And Telephone Corporation | Information processing system, information processing terminal device, server device, information processing method and program thereof |
US11481930B2 (en) * | 2020-01-21 | 2022-10-25 | Trimble Inc. | Accurately positioning augmented reality models within images |
CN111841014A (en) * | 2020-07-22 | 2020-10-30 | 腾讯科技(深圳)有限公司 | Virtual article display method and device, electronic equipment and storage medium |
US11157740B1 (en) * | 2020-08-11 | 2021-10-26 | Amazon Technologies, Inc. | Augmented reality object model configuration based on placement location |
CN113721911A (en) * | 2021-08-25 | 2021-11-30 | 网易(杭州)网络有限公司 | Control method, medium, and apparatus of display scale of virtual scene |
CN113792358A (en) * | 2021-09-22 | 2021-12-14 | 深圳须弥云图空间科技有限公司 | Automatic interaction layout method and device for three-dimensional furniture and electronic equipment |
US20230186569A1 (en) * | 2021-12-09 | 2023-06-15 | Qualcomm Incorporated | Anchoring virtual content to physical surfaces |
US11682180B1 (en) * | 2021-12-09 | 2023-06-20 | Qualcomm Incorporated | Anchoring virtual content to physical surfaces |
Also Published As
Publication number | Publication date |
---|---|
US11004270B2 (en) | 2021-05-11 |
US20210233320A1 (en) | 2021-07-29 |
US11645818B2 (en) | 2023-05-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11645818B2 (en) | Virtual item placement system | |
KR102534637B1 (en) | augmented reality system | |
US10909768B2 (en) | Virtual item simulation using detected surfaces | |
US11263457B2 (en) | Virtual item display simulations | |
US10755460B2 (en) | Generating enhanced images using dimensional data | |
US10846938B2 (en) | User device augmented reality based item modeling | |
US10636214B2 (en) | Vertical plane object simulation | |
US11989837B2 (en) | Method and system for matching conditions for digital objects in augmented reality | |
US11631216B2 (en) | Method and system for filtering shadow maps with sub-frame accumulation | |
CN113168735A (en) | Method and system for processing and partitioning parts of the real world for visual digital authoring in a mixed reality environment | |
KR20240006669A (en) | Dynamic over-rendering with late-warping | |
KR20240005953A (en) | Reduce startup time for augmented reality experiences | |
EP4100918B1 (en) | Method and system for aligning a digital model of a structure with a video stream | |
KR20240008370A (en) | Late warping to minimize latency for moving objects | |
KR20240007245A (en) | Augmented Reality Guided Depth Estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment |
Owner name: HERCULES CAPITAL, INC., AS COLLATERAL AND ADMINISTRATIVE AGENT, CALIFORNIA Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:HOUZZ INC.;REEL/FRAME:050928/0333 Effective date: 20191029 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
AS | Assignment |
Owner name: HOUZZ, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROM, SHAY;KONKY, ELI;REEL/FRAME:055892/0021 Effective date: 20190923 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
AS | Assignment |
Owner name: HOUZZ INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HERCULES CAPITAL, INC., AS COLLATERAL AND ADMINISTRATIVE AGENT;REEL/FRAME:059191/0501 Effective date: 20220216 |