US20190156410A1 - Systems and methods for translating user signals into a virtual environment having a visually perceptible competitive landscape - Google Patents
- Publication number
- US20190156410A1 (U.S. application Ser. No. 16/189,849)
- Authority
- US
- United States
- Prior art keywords
- participants
- item
- avatar
- individual
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G06F15/18—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/532—Query formulation, e.g. graphical querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
- G06Q10/087—Inventory or stock management, e.g. order filling, procurement or balancing against orders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0267—Wireless devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0278—Product appraisal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0603—Catalogue ordering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0611—Request for offers or quotes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0621—Item configuration or customization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0623—Item investigation
- G06Q30/0625—Directed, with specific intent or strategy
- G06Q30/0627—Directed, with specific intent or strategy using item specifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0633—Lists, e.g. purchase orders, compilation or processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/08—Auctions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/16—Using real world measurements to influence rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/09—Recognition of logos
Definitions
- a conventional online auction system may provide users with a mere indication of how many other users have previously viewed a particular item that is currently being auctioned and/or how many other users have previously added the particular item to their “watch lists” (e.g., to receive updates after bids are submitted).
- merely knowing how many other users have previously viewed and/or “watch-listed” a particular item does not provide meaningful insight into whether any specific users are likely to competitively bid on the particular item. This is because many users casually browse through and even “watch-list” a multitude of online auctions without any intention whatsoever of actually submitting competitive bids in an aggressive effort to win a particular online auction.
- the disclosed technologies can efficiently translate user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape.
- a plurality of avatars can be rendered in a virtual environment such as, for example, a three-dimensional (3D) immersive environment that is associated with the online auction for an item.
- Individual avatars may be rendered in accordance with avatar modification states that specifically correspond to acquisition interest levels for participants of the online auction. Acquisition interest levels may be determined for individual participants based on user activity of these individual participants in association with the online auction for the item.
- this particular participant's avatar can be rendered in the virtual environment in a manner such that the particular participant's interest in the item is visually perceptible to other participants.
- an avatar that represents the particular participant within the virtual environment may be rendered with excited and/or enthusiastic facial expressions directed toward the item being auctioned.
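A minimal sketch of mapping an acquisition interest level to one of these avatar modification states; the numeric thresholds and state names are illustrative assumptions, since the disclosure does not specify a scoring scale:

```python
# Thresholds are descending; first match wins. Values are assumed,
# not taken from the disclosure.
MODIFICATION_STATES = [
    (0.75, "enthusiastic"),  # e.g., excited facial expression toward the item
    (0.40, "interested"),    # e.g., attentive posture
    (0.00, "neutral"),       # default idle appearance
]

def modification_state(interest_level: float) -> str:
    """Map an acquisition interest level in [0.0, 1.0] to an
    avatar modification state."""
    for threshold, state in MODIFICATION_STATES:
        if interest_level >= threshold:
            return state
    return "neutral"  # guards against out-of-range inputs
```

A renderer could then pose each participant's 3D model according to the returned state.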
- the disclosed techniques can effectively retain participants' interests in an online auction by providing meaningful insight into the competitive landscape of the online auction. This can reduce or even eliminate the lure for these participants to aimlessly browse through other online auctions.
- the disclosed technologies tangibly improve computing efficiencies with respect to a wide variety of computing resources that would otherwise be wastefully consumed and/or utilized. This is because reducing the lure for participants to leave a “competitive” auction that is currently being viewed in order to browse through other auctions directly results in reduced network bandwidth usage and processing cycles consumed by server(s) that are hosting the online auctions.
- Technical benefits other than those specifically identified herein might also be realized through implementations of the disclosed technologies.
- activity data that defines user activity that various participants perform in association with an online auction for an item is received.
- the online auction may be conducted by an online auctioneer to facilitate competitive bidding by the various participants for the item.
- the online auctioneer may utilize one or both of a client-server computing architecture or a peer-to-peer computing architecture.
- the activity data may define the user activity on a per-user basis. For example, the activity data may indicate that a particular participant has viewed the online auction for the item several times per hour for the last several hours whereas one or more other participants have viewed the online auction only once and have not returned thereto.
- the activity data may further indicate that the particular participant has added the item to their “watch list” to trigger updates any time a bid is submitted for the item whereas the one or more other participants are not “watching” the item.
- An analysis of the activity data may be performed to identify, on a per-user basis, user signals that are indicative of acquisition interest levels for the various participants.
- individual acquisition interest levels may indicate strengths of intentions of corresponding participants to acquire the item through the competitive bidding.
- the particular participant having added the item to their watchlist and continuing to view the online auction for the item several times per hour may indicate that the particular participant has very strong intentions of entering a winning bid toward the end of the auction. Therefore, based on these user signals, an acquisition interest level may be determined for the particular participant that is relatively higher than for other participants whose corresponding user signals indicate that they are relatively less motivated to acquire the item through the competitive bidding.
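The combination of such per-user signals into a single acquisition interest level might be sketched as below; the weights and caps are hypothetical, as the disclosure describes the signals but not a formula:

```python
def acquisition_interest(watchlisted: bool, views_per_hour: int,
                         hours_of_repeat_viewing: int) -> float:
    """Fold per-user signals (watch-listing, repeated viewing) into a
    score in [0.0, 1.0]. Weights are illustrative assumptions."""
    score = 0.4 if watchlisted else 0.0
    # Frequent viewing within the hour signals stronger intent, capped.
    score += min(views_per_hour * 0.05, 0.3)
    # Sustained re-viewing over several hours adds further weight.
    score += min(hours_of_repeat_viewing * 0.1, 0.3)
    return min(score, 1.0)
```

Under these assumed weights, a watch-listing participant who views the item several times per hour for several hours scores well above a one-time viewer.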
- Avatar profile data that defines avatar profiles for the various participants may also be received and utilized to determine how to graphically render avatars for the various participants within the virtual environment.
- the avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants. For example, a 3D model for a particular user may be dynamically modified as user signals are received that indicate that the particular user is more (or less) motivated to acquire the item being auctioned.
- individual participants may be enabled to define or otherwise control certain aspects of their corresponding avatars.
- individual participants may be enabled to define various parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter.
- avatar modification states can be determined for the various participants' avatars that correspond on a per-user basis to the various participants' acquisition interest levels.
- an avatar modification state may be determined for the particular participant's avatar to make the particular participant's intentions visually perceptible to others via the appearance of the particular participant's avatar.
- if user activity associated with another participant indicates that this other participant is generally interested in the item but does not yet indicate a strong intention to acquire the item, a different modification state can be determined for another avatar that represents this other participant in the virtual environment.
- one or more computing devices may be caused to display the avatars for the various participants in accordance with the avatar modification states that correspond to the various participants' acquisition interest levels.
- the avatars may be displayed within the virtual environment alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction are made immediately and visually apparent.
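A sketch of that display step, building a simple draw list with the item first and each avatar posed per its participant's interest level; the class, thresholds, and state names are hypothetical, since no rendering API is specified in the disclosure:

```python
from dataclasses import dataclass

@dataclass
class AvatarRender:
    participant_id: int
    state: str  # avatar modification state, e.g., "enthusiastic"

def build_draw_list(item_id: str, interest_levels: dict) -> list:
    """Return the objects to render: the auctioned item first, then one
    avatar per participant, posed per that participant's acquisition
    interest level (thresholds are illustrative)."""
    scene = [item_id]
    for pid, level in sorted(interest_levels.items()):
        state = ("enthusiastic" if level >= 0.75
                 else "interested" if level >= 0.4
                 else "neutral")
        scene.append(AvatarRender(pid, state))
    return scene
```

The draw list makes the competitive landscape immediately visible: an "enthusiastic" avatar beside the item signals a probable competitive bidder.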
- the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction.
- a wearable computing device such as an augmented reality (“AR”) device or virtual reality (“VR”) device.
- a participant of an online auction might don the wearable computing device to view the virtual reality environment associated with the online auction. Then, the wearable device can render the avatars of the various participants of the online auction so that the excitement and/or motivation of the various participants—as indicated by their corresponding user activities—is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems.
- FIG. 1 illustrates aspects of an exemplary system for analyzing activity data that is received in association with online auctions to render a virtual environment that has a visually perceptible competitive landscape.
- FIG. 2A illustrates an exemplary virtual environment in which an avatar that represents a participant is rendered to visually communicate an acquisition interest level of the participant.
- FIG. 2B illustrates the exemplary virtual environment 106 of FIG. 2A with an additional avatar being rendered to represent another participant that has performed user activity consistent with a high probability of competitively bidding on the item.
- FIG. 2C illustrates the exemplary virtual environment of FIGS. 2A and 2B with the avatar that is initially shown in FIG. 2A being rendered in accordance with an avatar modification state corresponding to a “heightened” acquisition interest level.
- FIG. 3 illustrates an alternate embodiment of a virtual environment via which aspects of an online auction are made to be visually perceptible to a participant of the online auction.
- FIG. 4 is a flow diagram that illustrates an example process describing aspects of the technologies disclosed herein for translating user signals into a virtual environment having a visually perceptible competitive landscape.
- FIG. 5 shows an illustrative configuration of a wearable device capable of implementing aspects of the technologies disclosed herein.
- FIG. 6 illustrates additional details of an example computer architecture for a computer capable of implementing aspects of the technologies described herein.
- This Detailed Description describes technologies for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape.
- avatars are rendered in a virtual environment that is generated to communicate a competitive landscape associated with an online auction that facilitates competitive bidding for an item.
- Various participants' acquisition interest levels may be determined by analyzing the participants' user activity in association with the online auction.
- an acquisition interest level for a particular participant in association with an online auction is indicative of a probability that the particular participant will competitively bid for an item being auctioned off in the online auction.
- the participants' avatars may be rendered differently based on the participants' level of interest in (e.g., motivation toward) acquiring the item that is being auctioned.
- the individual participants' avatars can be rendered in a three-dimensional (“3D”) immersive environment in a manner such that the individual participants' level of interest in acquiring the item is visually perceptible.
- avatars may be rendered to appear more (or less) excited about the item as their corresponding user activity indicates that they are more (or less) likely to competitively bid on the item in a genuine attempt to win the online auction.
- the disclosed techniques provide meaningful insight into the competitive landscape of the online auction and, by doing so, excite the participants' competitive nature so as to effectively retain participants' interests in the online auction. This can reduce or even eliminate the lure for these participants to aimlessly browse through other online auctions.
- the disclosed technologies tangibly improve human interaction with computing devices in a manner that improves computing efficiencies with respect to a wide variety of computing resources that would otherwise be wastefully consumed and/or utilized. This is because reducing the lure for participants to leave a “competitive” auction that is currently being viewed in order to browse through other auctions directly reduces both the network bandwidth and processing cycles consumed by server(s) that are hosting the online auctions.
- Technical benefits other than those specifically identified herein might also be realized through implementations of the disclosed technologies.
- a wearable computing device such as, for example, an augmented reality (“AR”) device or virtual reality (“VR”) device.
- a participant of an online auction might don a wearable computing device to view a virtual reality environment that is specifically tailored to visually communicate aspects of the competitive landscape of the online auction.
- the wearable device can render avatars of various participants of the online auction so that the excitement and/or motivation of the various participants is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems.
- virtual environment refers to any environment in which one or more user perceptible objects (e.g., avatars, display menus, price icons, etc.) are rendered virtually as opposed to existing within a real-world environment surrounding a user.
- an AR device may be effective at generating a virtual environment within the context of the present disclosure—even if some real-world objects remain perceptible to a user.
- the technologies described herein can be implemented on a variety of different types of wearable devices configured with a variety of different operating systems, hardware components, and/or installed applications.
- the wearable device can be implemented by the following example wearable devices: GOOGLE GLASS, MAGIC LEAP ONE, MICROSOFT HOLOLENS, META 2, SONY SMART EYEGLASS, HTC VIVE, OCULUS GO, PLAYSTATION VR, or WINDOWS mixed reality headsets.
- embodiments of the present disclosure can be implemented in any AR-capable device, which is different from goggles or glasses that obstruct a user's view of real-world objects, i.e., actual reality.
- the techniques described herein are device and/or operating system agnostic.
- FIG. 1 illustrates various aspects of an exemplary system for analyzing activity data 102 that is received in association with a first online auction 104(1) to render a virtual environment 106 that has a visually perceptible competitive landscape.
- activity data 102 is received from client devices 108 that correspond to various individual participants 110 of the first online auction 104(1). More specifically, first activity data 102(1) is received via a first client device 108(1) that is being used by a first participant 110(1), second activity data 102(2) is received via a second client device 108(2) that is being used by a second participant 110(2), and so on.
- an online auctioneer system 112 is utilizing at least one database 114 to host a plurality of online auctions 104 (e.g., online auctions 104(1) through 104(N)).
- the online auctioneer system 112 is configured in accordance with a client-server computing architecture in which activity data 102 is transferred between the online auctioneer system 112 and one or more client devices via at least one network 116.
- the online auctioneer system 112 may be configured in accordance with a peer-to-peer computing architecture.
- the online auctioneer system 112 may monitor various instances of the activity data 102 on a per-user basis. For example, first activity data 102(1) may be monitored for the first participant 110(1), second activity data 102(2) may be monitored for the second participant 110(2), and so on. For purposes of the discussion of FIG. 1, presume that the first activity data 102(1) indicates that the first participant 110(1) has added the item associated with the first auction 104(1) to her watchlist and that she has also opened a web browser to view the item several times per hour for the last several hours.
- the second activity data 102(2) indicates that the second participant 110(2) has added the item associated with the first auction 104(1) to his watchlist and that he has also periodically opened a web browser to view the item, albeit not as frequently as the first participant 110(1).
- the online auctioneer system 112 may then analyze the activity data 102 on a per-user basis to identify user signals that are indicative of acquisition interest levels for the various participants 110.
- the acquisition interest level determined for each particular participant may generally indicate a strength of that user's intentions to acquire the item through the competitive bidding.
- the user activity of the first participant 110(1) having added the item associated with the first auction 104(1) to her watchlist may be identified as a user signal that indicates an intention of the first participant 110(1) to acquire the item through the competitive bidding. That is, the first participant 110(1) having added the item to her watchlist serves as evidence that the first participant 110(1) will competitively bid on the item, in the sense that her “watching” the item makes it objectively appear more probable that she intends to bid on the item than it would had she not “watched” the item.
- the user activity of the first participant 110(1) having frequently opened the web browser to view the item over the last several hours may be identified as another user signal that indicates an intention of the first participant 110(1) to acquire the item through the competitive bidding.
- a “first” acquisition interest level may be determined for the first participant 110(1).
- the user activities of the second participant 110(2) having added the item associated with the first auction 104(1) to his watchlist and also having periodically opened the web browser to view the item may be identified as user signals that indicate an intention of the second participant 110(2) to acquire the item through the competitive bidding.
- a “second” acquisition interest level may be determined for the second participant 110(2).
- the “second” acquisition interest level that is determined for the second participant 110(2) may be slightly lower than the “first” acquisition interest level that is determined for the first participant 110(1).
- the identified user signals may indicate that both the first participant 110(1) and the second participant 110(2) intend to competitively bid on the item but that the first participant is slightly more enthusiastic and/or motivated to do so.
- determining the acquisition interest levels for the various participants may be based on historical activity data 120 associated with the individual participants 110 .
- the historical activity data 120 may define historical user activities of at least some of the plurality of participants in association with previous online auctions 104 —i.e., online auctions that have already occurred.
- the historical user activity data 120 may indicate trends of how users (either individually or as a general populace) tend to behave with respect to particular online auctions prior to bidding on those online auctions.
- the historical activity data 120 may reveal that it is commonplace for users to add an item to their watchlist and then somewhat compulsively view and re-view the item prior to beginning to enter competitive bids on the item.
- the historical activity data 120 may be “user specific” historical activity data that defines historical user activities of a specific participant in association with previous online auctions 104 —i.e., online auctions that have already occurred. For example, if historical user activity that is stored in association with a specific user profile 125 indicates that this particular user frequently adds items to her watchlist without later bidding on the item, then this particular user adding an item to her watchlist may be given little or no weight with respect to determining this particular user's acquisition interest level for this item.
- this particular user adding an item to her watchlist may be weighed heavily in determining this particular user's acquisition interest level for this item.
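The per-user weighting described above can be sketched as a weighted sum in which a participant's historical follow-through rate discounts or amplifies each identified signal. The signal names, weights, and scoring formula below are illustrative assumptions and are not part of the disclosed system:

```python
# Illustrative sketch (not part of the disclosure) of weighting user signals
# by a participant's historical follow-through rate: a watchlist addition
# counts for little when the participant's history shows she rarely bids on
# items she watches, and for more when she usually does.

BASE_WEIGHTS = {
    "watchlist_add": 0.4,
    "frequent_views": 0.3,
    "scheduled_bid": 0.8,
}

def acquisition_interest(signals, follow_through_rate):
    """Score in [0, 1]; the historical rate scales each signal's base weight."""
    score = sum(BASE_WEIGHTS[s] * follow_through_rate for s in signals)
    return min(score, 1.0)
```

Under this sketch, the same two signals (a watchlist addition plus frequent views) yield a score of 0.63 for a participant who historically follows through 90% of the time, but only 0.07 for one who follows through 10% of the time.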
- the online auctioneer system 112 may utilize a machine learning engine 124 to identify correlations between certain types of user activities and competitively bidding on an item.
- the machine learning engine 124 may build and/or continually refine an acquisition interest model 118 based upon the identified correlations. For example, as illustrated, the online auctioneer system 112 may provide the activity data 102 (and/or the historical activity data 120 ) to the machine learning engine 124 . The machine learning engine 124 may then use this data to build the acquisition interest model 118 which is a model that is usable to predict and/or output acquisition interest levels for individual participants based on the types of user activities that those participants perform in association with an online auction 104 .
- Exemplary types of user activities that the machine learning engine 124 might identify as correlating with users competitively bidding on an item may include, but are not limited to, users adding items to their watchlists, users frequently checking a status of particular auctions, users leaving a particular auction open (e.g., in a web browser or other application) on their client device for long durations of time, users monitoring a particular auction without browsing through other auctions, and/or any other suitable activity that might be generally indicative of an increased likelihood of a participant competitively bidding on an item.
- any appropriate machine learning techniques may also be utilized, such as unsupervised learning, semi-supervised learning, classification analysis, regression analysis, clustering, etc.
- One or more predictive models may also be utilized, such as a group method of data handling, Naïve Bayes, k-nearest neighbor algorithm, majority classifier, support vector machines, random forests, boosted trees, Classification and Regression Trees (CART), neural networks, ordinary least square, and so on.
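As one minimal sketch of an acquisition interest model, the k-nearest neighbor approach named above can score a participant by the fraction of similar historical participants who ultimately bid. The feature choices and historical data here are hypothetical:

```python
# Minimal k-nearest-neighbor sketch of an acquisition interest model: each
# historical participant is a feature vector (watchlisted?, views per hour,
# hours the auction was left open) labeled with whether they ultimately bid.
# Features and data are hypothetical stand-ins for the activity data 102/120.
import math

HISTORY = [
    # (watchlisted, views_per_hour, hours_open) -> did_bid
    ((1, 5.0, 3.0), 1),
    ((1, 4.0, 2.5), 1),
    ((0, 0.2, 0.1), 0),
    ((0, 1.0, 0.5), 0),
    ((1, 0.1, 0.2), 0),
]

def predict_interest(features, k=3):
    """Fraction of the k nearest historical participants who bid."""
    nearest = sorted(HISTORY, key=lambda row: math.dist(row[0], features))[:k]
    return sum(label for _, label in nearest) / k
```

A participant who watchlisted an item and views it several times per hour lands near the historical bidders and receives a high score, while an inactive participant scores near zero.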
- the user profiles 125 may further include avatar profile data 122 that defines avatar profiles for the participants 110 .
- the avatar profile data 122 may be utilized by the online auctioneer system 112 to determine how to graphically render avatars for the participants 110 within the virtual environment 106 .
- the avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants 110 .
- each of the first participant 110 ( 1 ), the second participant 110 ( 2 ), and the N-th participant 110 (N) may have corresponding 3D models that may be rendered to graphically represent these participants' presence within the virtual environment 106 that is associated with the first auction 104 ( 1 ).
- individual participants may define or otherwise control certain aspects of their corresponding avatars.
- individual participants may be enabled to define a variety of parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter.
- the first participant 110 ( 1 ) may define parameters within her avatar profile so that the avatar that graphically represents her presence within the virtual environment generally resembles how she appears in real life.
- the other participants 110 may also define parameters within their own avatar profiles so that their respective avatars also resemble them or, if they so choose, some sort of alternate ego.
- various users may define parameters to cause their avatar to appear as a dinosaur, a team mascot for a college football team, a robot, or any other suitable configuration.
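The user-controllable parameters enumerated above might be captured in a profile record along the following lines; the field names and default values are illustrative assumptions, not part of the avatar profile data 122 as disclosed:

```python
# Hypothetical avatar profile record holding the user-controllable parameters
# enumerated above; defaults are arbitrary placeholders.
from dataclasses import dataclass

@dataclass
class AvatarProfile:
    hair_color: str = "brown"
    gender: str = "unspecified"
    skin_tone: str = "medium"
    height_cm: int = 170
    build: str = "average"       # e.g., "muscular", "average", "slender"
    wardrobe: str = "casual"
    voice_profile: str = "default"
    model: str = "humanoid"      # or "dinosaur", "mascot", "robot", ...
```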
- the online auctioneer system 112 may use the avatar profile data 122 to determine avatar modification states for the various participants' avatars.
- the determined avatar modification states may correspond on a per-user basis to the various participants' acquisition interest levels.
- a “first” avatar modification state may be determined for use with the first participant's 110 ( 1 ) avatar profile based on the first participant 110 ( 1 ) having added the item to her watchlist and also frequently checking the status of the first auction 104 ( 1 ).
- a first avatar 126 ( 1 ) that represents the first participant 110 ( 1 ) within the virtual environment 106 may be rendered so that the first participant's 110 ( 1 ) acquisition interest level relative to other participants is visually perceptible.
- the other participants' 110 avatars may also be rendered according to those participants' acquisition interest levels so that their relative acquisition interest levels are also visually perceptible.
- a second avatar 126 ( 2 ) that represents the second participant 110 ( 2 ) is rendered so as to appear highly motivated—albeit slightly less so than the first avatar 126 ( 1 )—to acquire the item.
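One way to realize the per-user correspondence between acquisition interest levels and avatar modification states is a simple threshold mapping. The numeric thresholds and state names below are assumptions for illustration only:

```python
# Illustrative thresholds (assumed, not from the disclosure) mapping a numeric
# acquisition interest level onto a named avatar modification state.
def modification_state(interest_level):
    if interest_level >= 0.8:
        return "heightened"   # e.g., visibly excited expression and posture
    if interest_level >= 0.5:
        return "motivated"    # e.g., attentive posture, interested gesture
    if interest_level >= 0.2:
        return "interested"   # e.g., mild interest in facial expression
    return "neutral"
```

Under this mapping, a first participant scoring 0.63 would be rendered "motivated" while a second participant scoring slightly lower, say 0.55, would share the same state but could be distinguished by continuous cues such as render scale.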
- an N-th participant 110 (N) is viewing the virtual environment 106 via a wearable device 128 such as, for example, an augmented reality (“AR”) device or virtual reality (“VR”) device. More specifically, the N-th participant 110 (N) is wearing the wearable device 128 on his head and is viewing the virtual environment 106 associated with the first online auction 104 ( 1 ).
- the virtual environment 106 is a VR environment in which the wearable device 128 is rendering the first avatar 126 ( 1 ) and the second avatar 126 ( 2 ) that represent the presence of the first participant 110 ( 1 ) and the second participant 110 ( 2 ), respectively. It can be appreciated from the illustrated avatars 126 that the excitement and/or motivation of the various participants 110 —as indicated by their corresponding user activities—is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems.
- the virtual environment 106 is rendered in accordance with a first-person view.
- For example, as illustrated in FIG. 1 , when viewing the virtual environment 106 the N-th participant is able to see avatars associated with the other participants of the auction (e.g., the first participant 110 ( 1 ) and the second participant 110 ( 2 )) but not an avatar associated with himself.
- the virtual environment 106 is rendered in accordance with a second-person view or a third-person view.
- the avatars 126 may be displayed within the virtual environment 106 alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars 126 in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants 110 , aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction 104 are made immediately and visually apparent. Thus, in stark contrast to conventional online auctions, in accordance with the techniques described herein the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction.
- FIGS. 2A through 2C illustrate aspects of an implementation of the techniques described herein in which the virtual environment is a three-dimensional immersive environment.
- the participant that is viewing (e.g., peering into) the virtual environment may be enabled to walk around similar to a virtual gaming environment.
- Turning to FIG. 2A , illustrated is an exemplary virtual environment 106 in which an avatar 126 that represents a participant 110 is rendered to visually communicate an acquisition interest level of the participant 110 .
- the illustrated avatar is the second avatar 126 ( 2 ) that represents the second participant 110 ( 2 ) of FIG. 1 .
- the second avatar 126 ( 2 ) is being rendered in an avatar modification state that is designed to communicate that the second participant 110 ( 2 ) is generally interested in acquiring an item 202 .
- a graphical representation of the item 202 may be shown within the virtual environment alongside the second avatar 126 ( 2 ).
- the graphical representation of the item 202 may be a two-dimensional image of the item 202 .
- a seller may take a picture of the item for sale and upload the picture onto the online auctioneer system 112 .
- the graphical representation of the item 202 may be a three-dimensional model of the item 202 .
- the seller may generate or otherwise obtain object data that defines a 3D model that is associated with the item that is being auctioned.
- Exemplary object data may include, but is not limited to, STEP files (i.e., 3D model files formatted according to the “Standard for the Exchange of Product Data”), IGES files (i.e., 3D model files formatted according to the “Initial Graphics Exchange Specification”), glTF files (i.e., 3D model files formatted according to the “GL Transmission Format”), and/or any other suitable format for defining 3D models.
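A system ingesting such object data might dispatch on the file extension to select the appropriate parser. The extension-to-format mapping below is an illustrative sketch; no actual parsing is shown:

```python
# Hypothetical dispatch from an object-data file's extension to the 3D model
# formats named above; parser selection only, no parsing performed.
from pathlib import Path

MODEL_FORMATS = {
    ".step": "STEP", ".stp": "STEP",
    ".iges": "IGES", ".igs": "IGES",
    ".gltf": "glTF", ".glb": "glTF",
}

def model_format(filename):
    """Return which of the named formats a file's extension suggests."""
    ext = Path(filename).suffix.lower()
    if ext not in MODEL_FORMATS:
        raise ValueError(f"unsupported 3D model format: {ext}")
    return MODEL_FORMATS[ext]
```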
- the second avatar 126 ( 2 ) is being rendered in accordance with a particular acquisition interest level that indicates that the second participant 110 ( 2 ) has performed some user activity with respect to the item 202 which indicates he is at least generally interested in the item 202 .
- the second participant 110 ( 2 ) has not performed user activity that indicates he is strongly motivated to acquire the item 202 through the competitive bidding.
- the avatar modification state causes the second avatar 126 ( 2 ) to be rendered with a facial expression and a hand gesture that visually communicates at least some interest in the item 202 .
- the N-th participant is donning the wearable device 128 which is rendering the virtual environment 106 .
- the N-th participant can “peer” into a virtual auction hall that corresponds to at least the online auction for the item 202 .
- the N-th participant can immediately obtain insight into the competitive landscape of the online auction for the item 202 .
- the landscape is not as competitive with respect to the item 202 as compared to the points in time illustrated in FIGS. 2B and 2C .
- the second participant 110 ( 2 ) has performed user activities which indicate an interest in the item 202 .
- no participants 110 have performed user activities that indicate a high probability that they will aggressively bid on the item 202 .
- individual virtual environments may be specifically tailored to individual participants 110 .
- the virtual environment 106 illustrated in FIG. 2A is shown to include the “Sports Tickets” item 202 along with a “Watch” item 204 and a “Shoe” item 206 .
- the virtual environment 106 may be uniquely generated for the N-th participant 110 (N) to include items that the N-th participant 110 (N) has demonstrated an interest in.
- the N-th participant 110 (N) may have added each of the “Sports Tickets” item 202 , the “Watch” item 204 , and the “Shoe” item 206 to his watchlist.
- the virtual environment 106 is generated to include all of the items which the N-th participant is currently “watching.”
- the virtual environment 106 for any particular participant 110 may include an indication of that virtual environment 106 being at least partially customized or tailored to the particular participant 110 .
- the virtual environment 106 includes the text of “Steve's Virtual Auction Hall” to indicate to the N-th participant 110 (N) that the virtual environment 106 is his own.
- the virtual environment 106 may include one or more text fields 208 that display various types of information regarding the online auction.
- a first text field 208 ( 1 ) displays some specific information regarding the item 202 such as, for example, notes from the seller, which teams will compete, a section of the seats, how many tickets are included, a date that the event will take place, and any other suitable type of information.
- the virtual environment 106 includes a second text field 208 ( 2 ) that displays information regarding the competitive landscape of the online auction.
- the text field 208 ( 2 ) indicates that 5 participants have added the item 202 to their watchlists, 121 participants have viewed the item, and furthermore that a particular participant (i.e., the second participant 110 ( 2 )) with a username of “Super_Fan_#1” has performed user activity that demonstrates a general interest in the item. It can be appreciated that the second avatar 126 ( 2 ) corresponds to this user.
- the various modification states may be designed to dynamically change a size with which the avatars 126 are rendered based on the acquisition interest levels of the participants being represented. For example, as illustrated in FIG. 2B , the first avatar 126 ( 1 ) is rendered relatively larger than the second avatar 126 ( 2 ) due to the analyzed user activity indicating that the first participant 110 ( 1 ) is more likely to competitively bid on the item 202 than the second participant 110 ( 2 ). Thus, the first avatar 126 ( 1 ) appears more prominent within the virtual environment 106 than the second avatar 126 ( 2 ).
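The size-based modification states described above can be sketched as a linear mapping from the acquisition interest level to a render scale. The scale range here is an arbitrary assumption:

```python
# Assumed linear mapping from an acquisition interest level in [0, 1] to the
# scale at which an avatar is rendered, so that more-motivated participants
# appear more prominent; the 1.0-2.0 range is an arbitrary illustrative choice.
def render_scale(interest_level, min_scale=1.0, max_scale=2.0):
    level = max(0.0, min(1.0, interest_level))  # clamp out-of-range inputs
    return min_scale + (max_scale - min_scale) * level
```

With this mapping, a first participant at interest level 1.0 renders at twice the size of a disinterested participant at level 0.0, making the relative competitive pressure immediately visible.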
- the excitement of the first participant 110 ( 1 ) toward the item may be contagious within the virtual environment 106 and “rub off” on the other participants by sparking various feelings such as, for example, urgency for the item 202 , scarcity of the item 202 , or competition for the item 202 .
- In FIG. 2C , the exemplary virtual environment 106 of FIGS. 2A and 2B is illustrated with the second avatar 126 ( 2 ) being rendered in accordance with an avatar modification state corresponding to a “heightened” acquisition interest level. That is, the acquisition interest level of the second participant 110 ( 2 ) has been heightened in comparison to FIGS. 2A and 2B .
- the second participant 110 ( 2 ) may have also been viewing the virtual environment 106 and, therefore, may have seen the first avatar 126 ( 1 ) that represents how excited the first participant 110 ( 1 ) is about acquiring the “Sports Tickets” item 202 .
- the N-th participant 110 is enabled to visually perceive the increasingly competitive landscape of the online auction over time.
- the N-th participant's interest in the “Sports Tickets” item 202 may be better acquired and retained as opposed to conventional online auction systems for which a competitive landscape is visually imperceptible.
- the online auctioneer system 112 may monitor a status of the online auction in order to determine which participant currently holds a high bid for the item being auctioned.
- the online auctioneer system 112 may control and/or modify various aspects of the virtual environment 106 based upon which particular participant currently has the highest bid.
- an acquisition interest level for a particular participant may be determined based on that participant currently and/or frequently having the high bid for an item.
- various aspects of a modification state for an avatar of a particular participant may be determined based on that particular participant currently and/or frequently having the high bid.
- an avatar for the current high bidder may be rendered so as to appear more excited and/or happier than other avatars associated with participants that have not bid on the item and/or have lost “high bidder” status.
- the online auctioneer system 112 may exclusively provide a current high bidder with various avatar abilities with respect to the item. Such abilities may be exclusively provided to the particular participant in the sense that the other participants are not provided with the same abilities.
- a participant that is currently the high bidder for the item may be provided with the ability for their avatar to hold the item within the virtual environment and/or to taunt the other participants' avatars with the item.
- the particular participant's avatar may hold a 3D model of the item and may walk up to other participants' avatars and hold the 3D model up to their face and then pull it away quickly while laughing.
- These avatar abilities that are exclusively provided to the particular participant may, in some implementations, be revoked in the event that some other participant outbids the particular participant. Then, these avatar abilities may be provided to this other participant so long as that participant continues to have the high bid.
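The grant-and-revoke behavior described above can be sketched as a small state holder that transfers the exclusive abilities whenever a new bid takes the lead. The ability names are illustrative:

```python
# Sketch of granting/revoking exclusive avatar abilities as "high bidder"
# status changes hands; ability names are hypothetical examples.
class HighBidderAbilities:
    EXCLUSIVE = {"hold_item", "taunt_with_item"}

    def __init__(self):
        self.high_bidder = None
        self.high_bid = 0

    def record_bid(self, participant, amount):
        """Transfer the exclusive abilities when a bid takes the lead."""
        if amount > self.high_bid:
            self.high_bid = amount
            self.high_bidder = participant  # the prior holder is implicitly revoked

    def abilities_for(self, participant):
        return self.EXCLUSIVE if participant == self.high_bidder else set()
```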
- Turning to FIG. 3 , illustrated is an alternate embodiment of a virtual environment 302 via which aspects of an online auction are made visually perceptible to a participant 110 (N) of the online auction.
- a participant 110 that is wearing the wearable device 128 enters a virtual live auction experience in which an item (e.g., a guitar 304 ) is being auctioned.
- the virtual live auction experience displays images and/or identifiers of other participants bidding on the item 304 .
- the virtual live auction experience illustrates an avatar, a user photo, or some other representation for the other participants that are watching, bidding, or are likely to bid on the item 304 .
- each representation may be displayed along with a username (e.g., Jane D., Sam W., Beth L., John M., Expert Bidder, Music Guy86, Strummin Gal, etc.).
- the virtual live auction experience also displays the guitar 304 along with item information 306 about the guitar 304 (e.g., a manufacturer, a model, a description, etc.).
- the virtual auction experience further displays auction information 308 such as: a minimum bid (e.g., $150), a minimum bid increment (e.g., $5), a current high bid (e.g., $245 which belongs to Strummin Gal), time remaining in the live auction (e.g., two minutes and twenty-four seconds), total number of bids (e.g., 16), total number of bidders (e.g., 8—the seven shown in the virtual auction experience and the participant 110 ), a bid history, and so forth.
- Beth L. is in the process of placing or has recently placed a new high bid of $250—which is not yet, but will soon be, reflected within the auction information 308 once it is updated.
- the wearable device 128 provides the participant with an option 310 to bid $255 or another amount that is higher than the current high bid.
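The suggested bid amounts in such an option can be derived from the current high bid and the minimum bid increment. Generating a short list of quick-bid options is one possible sketch of this derivation:

```python
# Illustrative derivation of quick-bid options from the current high bid and
# the minimum bid increment (values match the FIG. 3 example: Beth L.'s new
# $250 high bid and a $5 increment yield a $255 option).
def quick_bid_options(current_high_bid, min_increment, count=3):
    """Suggested bids: the minimum valid next bid plus larger steps."""
    return [current_high_bid + min_increment * i for i in range(1, count + 1)]
```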
- the virtual live auction experience can provide sound and other effects (e.g., a visual celebration for a winning bidder), as well.
- a computer system can receive activity data defining user activity from other members of an auction.
- the activity data can include computing activity such as watching, frequently viewing, or even bidding on the item.
- the activity data may also include gesture indicators that can be translated into an audio, haptic, and/or visual indicator for the participant.
- Each gesture of other participants can be ranked or can be related to a priority value and/or an acquisition interest level. For instance, when activity data received from other participants of an auction indicates that another participant is talking with a loud voice or indicating a high bid, a high priority signal may be generated and translated into a computer-generated voice that is played to the participant.
- the computer-generated voice may indicate the intent of other participants. As described in more detail in relation to FIGS. , such signals and associated priority values can be translated into body language that is displayed via an avatar to the participant on a graphical user interface.
- These types of signals are not usually present in online auctions and the techniques of this invention enable participants and auction platforms to benefit from these types of signals that are only available at live auction environments.
- the participants of an online auction can also be ranked or prioritized with respect to one another. For instance, if a first participant is in an auction with three highly ranked participants, then the techniques described herein may cause audio, visual, and/or haptic indicators generated by the highly ranked participants to be more prominent to the user than audio, visual, and/or haptic indicators generated by lower ranked participants.
- the rankings may be based on prior user history and activity levels.
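The rank-weighted prominence described above can be sketched as a gain applied to each indicator based on a participant's activity-derived score. The score model and gain range are illustrative assumptions:

```python
# Assumed prominence weighting: indicators from higher-scored participants
# (score derived from prior history and activity levels) play at higher gain
# than indicators from lower-scored participants. The [0.5, 1.0] range is an
# arbitrary illustrative choice.
def indicator_gain(participant_score, top_score):
    """Gain in [0.5, 1.0]: the top-scored participant plays at full prominence."""
    return 0.5 + 0.5 * (participant_score / top_score)
```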
- FIG. 4 is a flow diagram that illustrates an example process 400 describing aspects of the technologies presented herein with reference to FIGS. 1-3 for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape.
- the process 400 is illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof.
- the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations.
- a system receives activity data that defines user activity of a plurality of participants in association with an online auction for an item.
- the user activity may indicate on a per-user basis which participants of the online auction have viewed the online auction, a frequency with which the participants have viewed the online auction, which participants have bid on the item, which users have scheduled bids for the item, which participants have added the item to their watchlist, and any other suitable user activity that may be indicative of whether specific participants will likely attempt to acquire the item via competitive bidding.
- the activity data may define the user activity on a per-user basis. For example, the activity data may indicate that a particular participant has viewed the online auction for the item several times per hour for the last several hours whereas one or more other participants have viewed the online auction only once and have not returned thereto. The activity data may further indicate that the particular participant has added the item to their “watch list” to trigger updates any time a bid is submitted for the item whereas the one or more other participants are not “watching” the item.
- the activity data may define physical activities performed by the individual participants within their corresponding real-world environments. For example, suppose that a particular participant is donning a virtual reality headset to become immersed into a three-dimensional immersive environment as described herein. Further suppose that while immersed therein, the participant verbally states “Wow, I'd give anything for those ‘Sports Event’ Tickets.”
- the virtual reality headset may detect and analyze the participant's statement. Additionally, or alternatively, the virtual reality device may detect gestures (e.g., via a camera or other type of sensor) of the participant and analyze these gestures to determine how excited and/or motivated the participant is to acquire the item.
- the activity data may be analyzed to identify user signals that indicate acquisition interest levels for the plurality of participants.
- the analysis of the activity data may be performed on a per-user basis so that identified user signals are indicative of acquisition interest levels for the various participants on a per-user basis.
- individual acquisition interest levels may indicate strengths of intentions of corresponding participants to acquire the item through the competitive bidding. For example, the particular participant having added the item to their watchlist and continuing to view the online auction for the item several times per hour may indicate that the particular participant has very strong intentions of entering a winning bid toward the end of the auction. Therefore, based on these user signals, an acquisition interest level may be determined for the particular participant that is relatively higher than for other participants for which corresponding user signals indicate to be relatively less motivated to acquire the item through the competitive bidding.
- avatar profile data may be received that defines avatar profiles for the various participants.
- the avatar profile data may be utilized to determine how to graphically render avatars for the various participants within the virtual environment.
- the avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants. For example, a 3D model for a particular user may be dynamically modified as user signals are received that indicate that the particular user is more (or less) motivated to acquire the item being auctioned.
- individual participants may be enabled to define or otherwise control certain aspects of their corresponding avatars.
- individual participants may be enabled to define various parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter.
- avatar modification states are determined for the various participants' avatars.
- the avatar modification states may specifically correspond on a per-user basis to the various participants' acquisition interest levels. For example, due to the particular participant having the very strong intentions to acquire the item via the competitive bidding, an avatar modification state may be determined for the particular participant's avatar to make the particular participant's intentions visually perceptible to others via the appearance of the particular participant's avatar.
- one or more computing devices may be caused to display the avatars for the various participants in accordance with the avatar modification states that correspond to the various participants' acquisition interest levels.
- the avatars may be displayed within the virtual environment alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction are made immediately and visually apparent.
- the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction.
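The blocks of process 400 can be summarized as a small pipeline from received activity data to per-avatar render states. The scoring rule, thresholds, and function names below are illustrative stand-ins for the operations described above, not the disclosed implementation:

```python
# Illustrative end-to-end sketch of process 400: receive per-user activity
# data, derive acquisition interest levels, determine avatar modification
# states, and emit per-avatar render instructions. Scoring and thresholds
# are assumptions for illustration.

def interest_level(activity):
    """Toy scoring: watchlisting counts 0.5; each view-per-hour adds 0.1."""
    score = 0.5 * activity.get("watchlisted", 0)
    score += 0.1 * activity.get("views_per_hour", 0)
    return min(score, 1.0)

def state_for(level):
    if level >= 0.8:
        return "heightened"
    return "interested" if level >= 0.3 else "neutral"

def render_instructions(activity_data):
    """activity_data: {participant_id: activity dict} -> per-avatar states."""
    return {pid: state_for(interest_level(act))
            for pid, act in activity_data.items()}
```

For example, a participant who watchlisted the item and views it four times per hour would be rendered in a "heightened" state, while a participant with a single view per hour would remain "neutral".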
- FIG. 5 shows an illustrative configuration of a wearable device 500 (e.g., a headset system, a head-mounted display, etc.) capable of implementing aspects of the technologies disclosed herein.
- the wearable device 500 includes an optical system 502 with an illumination engine 504 to generate electro-magnetic (“EM”) radiation that includes both a first bandwidth for generating computer-generated (“CG”) images and a second bandwidth for tracking physical objects.
- the first bandwidth may include some or all of the visible-light portion of the EM spectrum whereas the second bandwidth may include any portion of the EM spectrum that is suitable to deploy a desired tracking protocol.
- the optical system 502 further includes an optical assembly 506 that is positioned to receive the EM radiation from the illumination engine 504 and to direct the EM radiation (or individual bandwidths thereof) along one or more predetermined optical paths.
- the illumination engine 504 may emit the EM radiation into the optical assembly 506 along a common optical path that is shared by both the first bandwidth and the second bandwidth.
- the optical assembly 506 may also include one or more optical components that are configured to separate the first bandwidth from the second bandwidth (e.g., by causing the first and second bandwidths to propagate along different image-generation and object-tracking optical paths, respectively).
- the optical assembly 506 includes components that are configured to direct the EM radiation with respect to one or more components of the optical assembly 506 and, more specifically, to direct the first bandwidth for image-generation purposes and to direct the second bandwidth for object-tracking purposes.
- the optical system 502 further includes a sensor 508 to generate object data in response to a reflected-portion of the second bandwidth, i.e. a portion of the second bandwidth that is reflected off an object that exists within a real-world environment.
- the wearable device 500 may utilize the optical system 502 to generate a composite view (e.g., from a perspective of a user 128 that is wearing the wearable device 500 ) that includes both one or more CG images and a view of at least a portion of the real-world environment that includes the object.
- the optical system 502 may utilize various technologies such as, for example, AR technologies to generate composite views that include CG images superimposed over a real-world view 126 .
- the optical system 502 may be configured to generate CG images via a display panel.
- the display panel can include separate right eye and left eye transparent display panels.
- the display panel can include a single transparent display panel that is viewable with both eyes and/or a single transparent display panel that is viewable by a single eye only. Therefore, it can be appreciated that the technologies described herein may be deployed within a single-eye Near Eye Display (“NED”) system (e.g., GOOGLE GLASS) and/or a dual-eye NED system (e.g., OCULUS RIFT).
- the wearable device 500 is an example device that is used to provide context and illustrate various features and aspects of the user interface display technologies and systems disclosed herein. Other devices and systems, such as VR systems, may also use the interface display technologies and systems disclosed herein.
- the display panel may be a waveguide display that includes one or more diffractive optical elements (“DOEs”) for in-coupling incident light into the waveguide, expanding the incident light in one or more directions for exit pupil expansion, and/or out-coupling the incident light out of the waveguide (e.g., toward a user's eye).
- the wearable device 500 may further include an additional see-through optical component.
- a controller 510 is operatively coupled to each of the illumination engine 504 , the optical assembly 506 (and/or scanning devices thereof), and the sensor 508 .
- the controller 510 includes one or more logic devices and one or more computer memory devices storing instructions executable by the logic device(s) to deploy functionalities described herein with relation to the optical system 502 .
- the controller 510 can comprise one or more processing units 512 , one or more computer-readable media 514 for storing an operating system 516 and data such as, for example, image data that defines one or more CG images and/or tracking data that defines one or more object tracking protocols.
- the computer-readable media 514 may further include an image-generation engine 518 that generates output signals to modulate generation of the first bandwidth of EM radiation by the illumination engine 504 and also to control the scanner(s) to direct the first bandwidth within the optical assembly 506 .
- the scanner(s) direct the first bandwidth through a display panel to generate CG images that are perceptible to a user, such as a user interface.
- the computer-readable media 514 may further include an object-tracking engine 520 that generates output signals to modulate generation of the second bandwidth of EM radiation by the illumination engine 504 and also to control the scanner(s) to direct the second bandwidth along an object-tracking optical path to irradiate an object.
- the object tracking engine 520 communicates with the sensor 508 to receive the object data that is generated based on the reflected portion of the second bandwidth.
- the object tracking engine 520 then analyzes the object data to determine one or more characteristics of the object such as, for example, a depth of the object with respect to the optical system 502 , an orientation of the object with respect to the optical system 502 , a velocity and/or acceleration of the object with respect to the optical system 502 , or any other desired characteristic of the object.
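The characteristics listed above can be estimated from timestamped depth samples by finite differences. The sketch below is a minimal illustration, not the disclosed implementation; the `DepthSample` structure, the three-sample window, and the sample values are assumptions made for demonstration:

```python
from dataclasses import dataclass

@dataclass
class DepthSample:
    t: float      # timestamp, in seconds
    depth: float  # distance of the object from the optical system, in meters

def object_kinematics(samples):
    """Estimate depth, velocity, and acceleration of a tracked object
    from the last three timestamped depth samples (finite differences)."""
    s0, s1, s2 = samples[-3:]
    v1 = (s1.depth - s0.depth) / (s1.t - s0.t)  # velocity over first interval
    v2 = (s2.depth - s1.depth) / (s2.t - s1.t)  # velocity over second interval
    accel = (v2 - v1) / (s2.t - s1.t)
    return {"depth": s2.depth, "velocity": v2, "acceleration": accel}

samples = [DepthSample(0.0, 2.0), DepthSample(0.1, 1.9), DepthSample(0.2, 1.7)]
k = object_kinematics(samples)
# depth 1.7 m; velocity approximately -2.0 m/s (approaching);
# acceleration approximately -10 m/s^2
```

In practice the sensor 508 would supply a continuous stream of object data, and a real tracker would smooth sensor noise (e.g., with a Kalman filter) rather than differentiate raw samples directly.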
- the components of the wearable device 500 are operatively connected, for example, via a bus 522 , which can include one or more of a system bus, a data bus, an address bus, a PCI bus, a Mini-PCI bus, and any variety of local, peripheral, and/or independent buses.
- the wearable device 500 may further include various other components, for example cameras (e.g., camera 524 ), microphones (e.g., microphone 526 ), accelerometers, gyroscopes, magnetometers, temperature sensors, touch sensors, biometric sensors, other image sensors, energy-storage components (e.g. battery), a communication facility, a GPS receiver, etc.
- the wearable device 500 can include one or more eye gaze sensors 528 .
- an eye gaze sensor 528 is user facing and is configured to track the position of at least one eye of a user.
- eye position data (e.g., determined via use of the eye gaze sensor 528 ), image data (e.g., determined via use of the camera 524 ), and/or other data can be processed to identify a gaze path of the user. That is, it can be determined that the user is looking at a particular section of a hardware display surface, a particular real-world object or part of a real-world object in the view of the user, and/or a rendered object or part of a rendered object displayed on a hardware display surface.
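Resolving a gaze path to a particular section of a hardware display surface can be illustrated with a simple grid lookup. The normalized gaze coordinates and the 3x3 sectioning below are assumptions for the sake of the sketch:

```python
def gaze_section(gaze_x, gaze_y, rows=3, cols=3):
    """Map a normalized gaze point (0..1 on each axis) on a hardware
    display surface to the grid section the user is looking at."""
    # clamp so a gaze at the far edge still falls in the last section
    col = min(int(gaze_x * cols), cols - 1)
    row = min(int(gaze_y * rows), rows - 1)
    return row, col

# a gaze slightly right of center on a display divided into 3x3 sections
section = gaze_section(0.6, 0.5)
# section == (1, 1): middle row, middle column
```

A full gaze-tracking pipeline would first fuse eye position data and head pose into a gaze ray before projecting it onto the display surface; this sketch assumes that projection has already produced normalized coordinates.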
- the wearable device 500 can include an actuator 529 .
- the processing units 512 can cause the generation of a haptic signal associated with a generated haptic effect to actuator 529 , which in turn outputs haptic effects such as vibrotactile haptic effects, electrostatic friction haptic effects, or deformation haptic effects.
- Actuator 529 includes an actuator drive circuit.
- the actuator 529 may be, for example, an electric motor, an electro-magnetic actuator, a voice coil, a shape memory alloy, an electro-active polymer, a solenoid, an eccentric rotating mass motor (“ERM”), a linear resonant actuator (“LRA”), a piezoelectric actuator, a high bandwidth actuator, an electroactive polymer (“EAP”) actuator, an electrostatic friction display, or an ultrasonic vibration generator.
- wearable device 500 can include one or more additional actuators 529 .
- the actuator 529 is an example of a haptic output device, where a haptic output device is a device configured to output haptic effects, such as vibrotactile haptic effects, electrostatic friction haptic effects, or deformation haptic effects, in response to a drive signal.
- the actuator 529 can be replaced by some other type of haptic output device.
- the wearable device 500 may not include the actuator 529 ; in that case, a device separate from the wearable device 500 may include an actuator, or other haptic output device, that generates the haptic effects, and the wearable device 500 may send generated haptic signals to that device through a communication device.
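As a sketch of how a drive signal for a vibrotactile haptic effect might be synthesized, the function below generates a sine-wave sample buffer scaled by the effect amplitude. The function name, the 1 kHz sample rate, and the 175 Hz drive frequency (a typical LRA resonance) are illustrative assumptions, not details from the disclosure:

```python
import math

def vibrotactile_drive_signal(amplitude, frequency_hz, duration_s, sample_rate=1000):
    """Generate a drive-signal sample buffer for an LRA-style actuator:
    a sine wave at the requested frequency, scaled by effect amplitude."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * frequency_hz * i / sample_rate)
            for i in range(n)]

buf = vibrotactile_drive_signal(amplitude=0.8, frequency_hz=175, duration_s=0.05)
# 50 samples; peak magnitude bounded by the 0.8 amplitude
```

An actuator drive circuit would convert such a buffer into the voltage or current waveform appropriate for the specific actuator type (ERM, LRA, piezoelectric, etc.).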
- the processing unit(s) 512 can represent, for example, a CPU-type processing unit, a GPU-type processing unit, a field-programmable gate array (“FPGA”), another class of digital signal processor (“DSP”), or other hardware logic components that may, in some instances, be driven by a CPU.
- illustrative types of hardware logic components that can be used include Application-Specific Integrated Circuits (“ASICs”), Application-Specific Standard Products (“ASSPs”), System-on-a-Chip Systems (“SOCs”), Complex Programmable Logic Devices (“CPLDs”), etc.
- computer-readable media, such as the computer-readable media 514 , can store instructions executable by the processing unit(s) 512 . Computer-readable media can also store instructions executable by external processing units such as an external CPU, an external GPU, and/or an external accelerator, such as an FPGA-type accelerator, a DSP-type accelerator, or any other internal or external accelerator.
- at least one CPU, GPU, and/or accelerator is incorporated in a computing device, while in some examples one or more of a CPU, GPU, and/or accelerator is external to a computing device.
- the wearable device 500 is configured to interact, via network communications, with a network device (e.g., a network server or a cloud server) to implement the configurations described herein.
- the wearable device 500 may collect data and send the data over network(s) to the network device.
- the network device may then implement some of the functionality described herein (e.g., analyze passive signals, determine user interests, select a recommended item, etc.). Subsequently, the network device can cause the wearable device 500 to display an item and/or instruct the wearable device 500 to perform a task.
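The collect-analyze-instruct round trip described above might look roughly like the following. The payload fields, the view-count threshold, and the handler names are hypothetical; a real deployment would also involve network transport, authentication, and richer signal analysis:

```python
import json

def collect_signals(device_id, views, watchlisted):
    """On the wearable: package passive user signals for transmission."""
    return json.dumps({"device": device_id, "views": views,
                       "watchlisted": watchlisted})

def network_device_handler(payload):
    """On the network device: analyze the signals and instruct the wearable.
    The 3-view threshold is an illustrative stand-in for real analysis."""
    signals = json.loads(payload)
    interested = signals["watchlisted"] or signals["views"] >= 3
    return {"action": "display_item" if interested else "no_op"}

reply = network_device_handler(collect_signals("wearable-500", views=5, watchlisted=False))
# reply["action"] == "display_item"
```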
- Computer-readable media can include computer storage media and/or communication media.
- Computer storage media can include one or more of volatile memory, nonvolatile memory, and/or other persistent and/or auxiliary computer storage media, removable and non-removable computer storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
- computer storage media includes tangible and/or physical forms of media included in a device and/or hardware component that is part of a device or external to a device, including but not limited to random access memory (“RAM”), static random-access memory (“SRAM”), dynamic random-access memory (“DRAM”), phase change memory (“PCM”), read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), flash memory, rotating media, optical cards or other optical storage media, magnetic storage, magnetic cards or other magnetic storage devices or media, solid-state memory devices, storage arrays, network attached storage, storage area networks, hosted computer storage or any other storage memory, storage device, and/or storage medium that can be used to store and maintain information for access by a computing device.
- communication media can embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism.
- computer storage media does not include communication media. That is, computer storage media does not include communications media consisting solely of a modulated data signal, a carrier wave, or a propagated signal, per se.
- FIG. 6 shows additional details of an example computer architecture for a computer capable of executing the functionalities described herein such as, for example, those described with reference to FIGS. 1A-4 , or any program components thereof as described herein.
- the computer architecture 600 illustrated in FIG. 6 is an architecture suitable for a server computer, a network of server computers, or any other type of computing device capable of implementing the functionality described herein.
- the computer architecture 600 may be utilized to execute any aspects of the software components presented herein, such as software components for implementing the e-commerce system 116 and the item listing tool 102 .
- the computer architecture 600 illustrated in FIG. 6 includes a central processing unit 602 (“CPU”), a system memory 604 , including a random-access memory 606 (“RAM”) and a read-only memory (“ROM”) 608 , and a system bus 610 that couples the memory 604 to the CPU 602 .
- the computer architecture 600 further includes a mass storage device 612 for storing an operating system 614 , other data, and one or more application programs.
- the mass storage device 612 may further include one or more of the activity data 102 , auction data 104 , user profiles 125 , or the machine learning engine 125 , and/or any of the other software or data components described herein.
- the mass storage device 612 is connected to the CPU 602 through a mass storage controller (not shown) connected to the bus 610 .
- the mass storage device 612 and its associated computer-readable media provide non-volatile storage for the computer architecture 600 .
- computer-readable media can be any available computer storage media or communication media that can be accessed by the computer architecture 600 .
- the computer architecture 600 may operate in a networked environment using logical connections to remote computers through a network 650 and/or another network (not shown).
- the computer architecture 600 may connect to the network 650 through a network interface unit 616 connected to the bus 610 . It should be appreciated that the network interface unit 616 also may be utilized to connect to other types of networks and remote computer systems.
- the computer architecture 600 also may include an input/output controller 618 for receiving and processing input from a number of other devices, including a keyboard, mouse, or electronic stylus (not shown in FIG. 6 ). Similarly, the input/output controller 618 may provide output to a display screen, a printer, or other type of output device (also not shown in FIG. 6 ). It should also be appreciated that a computing system implemented using the disclosed computer architecture 600 may communicate with other computing systems.
- the software components described herein may, when loaded into the CPU 602 and executed, transform the CPU 602 and the overall computer architecture 600 from a general-purpose computing system into a special-purpose computing system customized to facilitate the functionality presented herein.
- the CPU 602 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the CPU 602 may operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the CPU 602 by specifying how the CPU 602 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the CPU 602 .
- Encoding the software modules presented herein also may transform the physical structure of the computer-readable media presented herein.
- the specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable media, whether the computer-readable media is characterized as primary or secondary storage, and the like.
- the computer-readable media is implemented as semiconductor-based memory
- the software disclosed herein may be encoded on the computer-readable media by transforming the physical state of the semiconductor memory.
- the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory.
- the software also may transform the physical state of such components in order to store data thereupon.
- the computer-readable media disclosed herein may be implemented using magnetic or optical technology.
- the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion.
- the computer architecture 600 may include other types of computing devices, including smartphones, embedded computer systems, tablet computers, other types of wearable computing devices, and other types of computing devices known to those skilled in the art. It is also contemplated that the computer architecture 600 may not include all of the components shown in FIG. 6 , may include other components that are not explicitly shown in FIG. 6 , or may utilize an architecture completely different than that shown in FIG. 6 .
- Example Clause A a computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item, wherein the online auction is being conducted by an online auctioneer system to facilitate competitive bidding by the plurality of participants for the item; analyzing the activity data to identify user signals that indicate acquisition interest levels for the plurality of participants, wherein individual acquisition interest levels are indicative of intentions of individual participants to acquire the item through the competitive bidding; receiving avatar profile data defining avatar profiles that facilitate dynamic modifications of three-dimensional models for the plurality of participants; determining, based on the avatar profile data, avatar modification states that correspond to the individual acquisition interest levels for the individual participants; and causing at least one computing device to display, in a virtual environment associated with the online auction, a graphical representation of the item and a plurality of avatars, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent the individual acquisition interest levels for the individual participants.
- Example Clause B the computer-implemented method of Example Clause A, further comprising receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions, wherein identifying the user signals that indicate the acquisition interest levels is based at least in part on the historical activity data.
- Example Clause C the computer-implemented method of any one of Example Clauses A through B, further comprising: receiving user specific historical activity data defining historical user activity of a particular participant, of the plurality of participants, in association with previous online auctions, wherein a particular interest acquisition level for the particular participant is determined based at least in part on the user specific historical activity data.
- Example Clause D the computer-implemented method of any one of Example Clauses A through C, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
- Example Clause E the computer-implemented method of any one of Example Clauses A through D, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and determining at least some aspects for a particular modification state for a particular avatar that corresponds to the particular participant, based on the particular participant currently having the high bid for the item.
- Example Clause F the computer-implemented method of any one of Example Clauses A through E, wherein the virtual environment is a three-dimensional immersive environment in which one or more three-dimensional objects are rendered.
- Example Clause G the computer-implemented method of any one of Example Clauses A through F, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and providing the particular participant with at least some avatar abilities with respect to the item within the three-dimensional immersive environment.
- Example Clause H the computer-implemented method of any one of Example Clauses A through G, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
- Example Clause I a system, comprising: one or more processors; and a memory in communication with the one or more processors, the memory having computer-readable instructions stored thereupon that, when executed by the one or more processors, cause the one or more processors to: receive activity data defining user activity of a plurality of participants in association with an online auction that facilitates competitive bidding, by the plurality of participants, for an item; analyze the activity data to identify user signals that indicate acquisition interest levels associated with intentions of the plurality of participants to acquire the item through the competitive bidding; receive avatar profile data defining avatar profiles associated with the plurality of participants; determine, based on the avatar profile data, a plurality of avatar modification states that correspond to the acquisition interest levels for individual participants; and cause at least one computing device to display a virtual environment that includes a plurality of avatars being rendered adjacent to a graphical representation of the item, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent individual acquisition interest levels for the individual participants.
- Example Clause J the system of Example Clause I, wherein the computer-readable instructions further cause the one or more processors to: monitor the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and control at least some abilities of a particular avatar that corresponds to the particular participant based on the particular participant currently having the high bid for the item.
- Example Clause K the system of any one of Example Clauses I through J, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
- Example Clause L the system of any one of Example Clauses I through K, wherein the computer-readable instructions further cause the one or more processors to: receive historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploy a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
- Example Clause M the system of any one of Example Clauses I through L, wherein the acquisition interest levels are determined by analyzing the activity data with respect to the acquisition interest model.
- Example Clause N the system of any one of Example Clauses I through M, wherein the virtual environment is a three-dimensional immersive environment.
- Example Clause O the system of any one of Example Clauses I through N, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
- Example Clause P a computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item; analyzing the activity data to identify user signals that indicate acquisition interest levels for individual participants of the plurality of participants; determining avatar modification states that correspond to individual acquisition interest levels for individual participants of the plurality of participants; and causing at least one computing device to display a virtual environment that includes a graphical representation of the item and a plurality of avatars that are rendered in accordance with the avatar modification states that correspond to the individual acquisition interest levels for individual participants.
- Example Clause Q the computer-implemented method of Example Clause P, wherein the individual acquisition interest levels for the individual participants are determined based on an acquisition interest model.
- Example Clause R the computer-implemented method of any one of Example Clauses P through Q, further comprising: receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploying a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
- Example Clause S the computer-implemented method of any one of Example Clauses P through R, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with different expressive characteristics based on the acquisition interest levels.
- Example Clause T the computer-implemented method of any one of Example Clauses P through S, wherein the virtual environment is a three-dimensional immersive environment.
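Example Clauses L and R describe deploying a machine learning engine to generate an acquisition interest model from historical activity data. A deliberately simple sketch follows, in which the "model" is just the empirical probability that a participant exhibiting a given signal went on to bid; this is a hypothetical stand-in for whatever learning method an actual implementation would use, and all names and data are illustrative:

```python
def train_acquisition_interest_model(history):
    """Fit per-signal weights from historical (signals, did_bid) records:
    each signal's weight is the empirical probability that a participant
    exhibiting it went on to bid competitively."""
    counts, bids = {}, {}
    for signals, did_bid in history:
        for s in signals:
            counts[s] = counts.get(s, 0) + 1
            bids[s] = bids.get(s, 0) + (1 if did_bid else 0)
    return {s: bids[s] / counts[s] for s in counts}

def score(model, signals):
    """Acquisition interest level: mean weight of the observed signals."""
    weights = [model.get(s, 0.0) for s in signals]
    return sum(weights) / len(weights) if weights else 0.0

history = [({"watchlisted", "repeat_views"}, True),
           ({"repeat_views"}, True),
           ({"single_view"}, False)]
model = train_acquisition_interest_model(history)
# score(model, {"watchlisted", "repeat_views"}) == 1.0
```

Consistent with Clause M, the model trained on historical data is then used to score the live activity data of current participants.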
- any reference to “first,” “second,” etc. items and/or abstract concepts within the Summary and/or Detailed Description is not intended to and should not be construed to necessarily correspond to any reference of “first,” “second,” etc. elements of the claims.
- items and/or abstract concepts such as, for example, modification states and/or avatars and/or acquisition interest levels may be distinguished by numerical designations without such designations corresponding to the claims or even other paragraphs of the Summary and/or Detailed Description.
- any designation of a “first acquisition interest level” and “second acquisition interest level” of the participants within any specific paragraph of the Summary and/or Detailed Description is used solely to distinguish two different acquisition interest levels within that specific paragraph—not any other paragraph and particularly not the claims.
Abstract
Techniques are disclosed for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. Various participants' acquisition interest levels are determined by analyzing the participants' user activity in association with the online auction. Avatars that represent the participants are rendered differently based on the participants' level of interest in (e.g., motivation toward) acquiring the item that is being auctioned. In this way, the individual participants' avatars are rendered in the virtual environment in a manner such that the individual participants' level of interest in acquiring the item is visually perceptible. As a specific example, avatars may be rendered to appear more (or less) excited about the item as their corresponding user activity indicates that they are more (or less) likely to competitively bid on the item in a genuine attempt to win the online auction.
Description
- This application claims the benefit of and priority to U.S. Provisional Application No. 62/588,189, filed Nov. 17, 2017 and entitled “Augmented Reality, Mixed Reality, and Virtual Reality Experiences,” the entire contents of which are incorporated herein by reference.
- Conventional online auction systems fail to provide meaningful and real-time insight into the competitive landscape for active online auctions. As an example, a conventional online auction system may provide users with a mere indication of how many other users have previously viewed a particular item that is currently being auctioned and/or how many other users have previously added the particular item to their “watch lists” (e.g., to receive updates after bids are submitted). However, merely knowing how many other users have previously viewed and/or “watch-listed” a particular item does not provide meaningful insight into whether any specific users are likely to competitively bid on the particular item. This is because many users casually browse through and even “watch-list” a multitude of online auctions without any intention whatsoever of actually submitting competitive bids in an aggressive effort to win a particular online auction.
- The lack of insight into the competitive landscape surrounding particular online auctions unfortunately leads some users to lose interest in the particular auctions and then continue to browse through other auctions. For example, since each auction appears similar to other auctions, particular auctions may fail to grab the users' attention regardless of how competitive those particular auctions may actually be. Furthermore, even users that remain interested in the particular item are all too often lured into browsing through other online auctions for fungible or similar items in a futile effort to ascertain how aggressively they should be bidding for the particular item.
- The unfortunate result of users lacking insight into the competitive landscape of online auctions is a significant increase in web traffic as users continue to browse—often aimlessly—through a multitude of auction webpages. The increased web traffic that stems from the aforementioned scenarios of course results in increased network bandwidth usage. For example, each additional auction webpage that users view while browsing through an online auctioneer's website results in an incremental increase in an amount of data that is transferred over various networks to and from the server(s) that are hosting the online auctioneer's website. This increased web traffic also results in unnecessary utilization of other computing resources such as processing cycles, memory, and battery.
- It is with respect to these and other technical challenges that the disclosure made herein is presented.
- In order to address the technical problems described briefly above, and potentially others, the disclosed technologies can efficiently translate user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. Through implementations of the disclosed technologies, a plurality of avatars can be rendered in a virtual environment such as, for example, a three-dimensional (3D) immersive environment that is associated with the online auction for an item. Individual avatars may be rendered in accordance with avatar modification states that specifically correspond to acquisition interest levels for participants of the online auction. Acquisition interest levels may be determined for individual participants based on user activity of these individual participants in association with the online auction for the item. Thus, if a particular participant exhibits user activity that indicates a high probability of an intention to competitively bid on the item in an aggressive effort to win the online auction, then this particular participant's avatar can be rendered in the virtual environment in a manner such that the particular participant's interest in the item is visually perceptible to other participants. As a specific example, an avatar that represents the particular participant within the virtual environment may be rendered with excited and/or enthusiastic facial expressions directed toward the item being auctioned.
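The flow just described (user activity analyzed into acquisition interest levels, which in turn select avatar modification states) can be sketched as a simple threshold mapping. The state names and view-count thresholds below are illustrative assumptions, not values from the disclosure:

```python
def determine_modification_states(activity, thresholds=(2, 5)):
    """Map each participant's raw activity count (e.g., auction views)
    to an avatar modification state: 'neutral', 'interested', or 'excited'.
    Thresholds are illustrative; a real system would use richer signals."""
    low, high = thresholds
    states = {}
    for participant, views in activity.items():
        if views >= high:
            states[participant] = "excited"
        elif views >= low:
            states[participant] = "interested"
        else:
            states[participant] = "neutral"
    return states

states = determine_modification_states({"alice": 7, "bob": 3, "carol": 0})
# {'alice': 'excited', 'bob': 'interested', 'carol': 'neutral'}
```

A renderer would then apply each state to the corresponding avatar, e.g., by selecting facial-expression blend shapes or animations keyed to the state.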
- The disclosed techniques can effectively retain participants' interests in an online auction by providing meaningful insight into the competitive landscape of the online auction. This can reduce or even eliminate the lure for these participants to aimlessly browse through other online auctions. Thus, by improving human-computer interaction with computing devices, the disclosed technologies tangibly improve computing efficiencies with respect to a wide variety of computing resources that would otherwise be wastefully consumed and/or utilized. This is because reducing the lure for participants to leave a “competitive” auction that is currently being viewed in order to browse through other auctions directly results in reduced network bandwidth usage and processing cycles consumed by server(s) that are hosting the online auctions. Technical benefits other than those specifically identified herein might also be realized through implementations of the disclosed technologies.
- In one illustrative example, activity data that defines user activity that various participants perform in association with an online auction for an item is received. In some instances, the online auction may be conducted by an online auctioneer to facilitate competitive bidding by the various participants for the item. The online auctioneer may utilize one or both of a client-server computing architecture or a peer-to-peer computing architecture. Unlike conventional online auction systems which monitor user activity in the aggregate for multiple users, in accordance with the present techniques the activity data may define the user activity on a per-user basis. For example, the activity data may indicate that a particular participant has viewed the online auction for the item several times per hour for the last several hours whereas one or more other participants have viewed the online auction only once and have not returned thereto. The activity data may further indicate that the particular participant has added the item to their “watch list” to trigger updates any time a bid is submitted for the item whereas the one or more other participants are not “watching” the item.
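Per-user activity data of the kind described above might be organized as in the following sketch. The schema is a hypothetical assumption for illustration (the field names, keying, and helper functions are not taken from the disclosure):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ActivityRecord:
    """Per-user, per-auction activity (hypothetical schema)."""
    view_timestamps: list = field(default_factory=list)  # e.g., epoch hours of each view
    on_watchlist: bool = False

# Keyed by (participant id, auction id) so activity stays on a per-user
# basis, rather than being aggregated across all users.
activity_data = defaultdict(ActivityRecord)

def record_view(user_id, auction_id, hour):
    """Log one view of an auction page by one participant."""
    activity_data[(user_id, auction_id)].view_timestamps.append(hour)

def record_watch(user_id, auction_id):
    """Log that a participant added the auctioned item to their watch list."""
    activity_data[(user_id, auction_id)].on_watchlist = True
```

Keying records per user and per auction, rather than keeping one aggregate counter, is what allows the later analysis to distinguish a participant who checks an auction hourly from one who viewed it once and never returned.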
- An analysis of the activity data may be performed to identify, on a per-user basis, user signals that are indicative of acquisition interest levels for the various participants. Stated in plain terms, individual acquisition interest levels may indicate strengths of intentions of corresponding participants to acquire the item through the competitive bidding. Continuing with the example from above, the particular participant having added the item to their watchlist and continuing to view the online auction for the item several times per hour may indicate that the particular participant has very strong intentions of entering a winning bid toward the end of the auction. Therefore, based on these user signals, an acquisition interest level may be determined for the particular participant that is relatively higher than for other participants whose corresponding user signals indicate that they are relatively less motivated to acquire the item through the competitive bidding.
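One simple way to combine such user signals into a numeric acquisition interest level is a weighted score. The weights and saturation point below are illustrative assumptions only, not values from the disclosure:

```python
def acquisition_interest_level(views_per_hour, on_watchlist):
    """Combine user signals into an acquisition interest level in [0.0, 1.0].

    Illustrative weighting: watching the item contributes up to 0.4, and
    sustained viewing contributes up to 0.6, saturating at 6 views/hour.
    """
    score = 0.4 if on_watchlist else 0.0
    score += min(views_per_hour / 6.0, 1.0) * 0.6
    return round(min(score, 1.0), 2)
```

Under this sketch, a participant who is watching the item and viewing it several times per hour scores near 1.0, while a participant who viewed the auction once and never returned scores near 0.0, mirroring the example in the preceding paragraph.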
- Avatar profile data that defines avatar profiles for the various participants may also be received and utilized to determine how to graphically render avatars for the various participants within the virtual environment. In some embodiments, the avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants. For example, a 3D model for a particular user may be dynamically modified as user signals are received that indicate that the particular user is more (or less) motivated to acquire the item being auctioned. In some implementations, individual participants may be enabled to define or otherwise control certain aspects of their corresponding avatars. For example, individual participants may be enabled to define various parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter. It can be appreciated, therefore, that an individual participant may define parameters for their corresponding avatar to cause the avatar to generally resemble what the individual participant looks like in real life.
- Based on the avatar profile data, avatar modification states can be determined for the various participants' avatars that correspond on a per-user basis to the various participants' acquisition interest levels. Continuing again with the example from above, due to the particular participant having the very strong intentions to acquire the item via the competitive bidding, an avatar modification state may be determined for the particular participant's avatar to make the particular participant's intentions visually perceptible to others via the appearance of the particular participant's avatar. Furthermore, if user activity associated with another participant indicates that this other participant is generally interested in the item but does not yet indicate a strong intention to acquire the item, a different modification state can be determined for another avatar that represents this other participant in the virtual environment.
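A minimal sketch of mapping a numeric acquisition interest level onto a named avatar modification state might look like the following; the threshold values and state names are assumptions for illustration:

```python
def avatar_modification_state(interest_level):
    """Map a 0.0-1.0 acquisition interest level onto a named avatar
    modification state (hypothetical thresholds and state names).
    """
    if interest_level >= 0.75:
        return "heightened"   # e.g., excited expression, gesturing toward the item
    if interest_level >= 0.40:
        return "interested"   # e.g., attentive expression directed at the item
    return "neutral"          # default idle pose
```

A renderer could then select facial expressions, poses, or animations keyed to the returned state, so that a strongly motivated participant's avatar visibly differs from a casually interested one.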
- Then, one or more computing devices may be caused to display the avatars for the various participants in accordance with the avatar modification states that correspond to the various participants' acquisition interest levels. In some embodiments, the avatars may be displayed within the virtual environment alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction are made immediately and visually apparent. Thus, in stark contrast to conventional online auctions, in accordance with the techniques described herein the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction.
- Aspects of the technologies disclosed herein can be implemented by a wearable computing device, such as an augmented reality (“AR”) device or virtual reality (“VR”) device. For example, a participant of an online auction might don the wearable computing device to view the virtual reality environment associated with the online auction. Then, the wearable device can render the avatars of the various participants of the online auction so that the excitement and/or motivation of the various participants—as indicated by their corresponding user activities—is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems.
- The above-described subject matter can be implemented as a computer-controlled apparatus, a computer-implemented method, a computing device, or as an article of manufacture such as a computer-readable medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
- The Detailed Description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.
-
FIG. 1 illustrates aspects of an exemplary system for analyzing activity data that is received in association with online auctions to render a virtual environment that has a visually perceptible competitive landscape. -
FIG. 2A illustrates an exemplary virtual environment in which an avatar that represents a participant is rendered to visually communicate an acquisition interest level of the participant. -
FIG. 2B illustrates the exemplary virtual environment 106 of FIG. 2A with an additional avatar being rendered to represent another participant that has performed user activity consistent with a high probability of competitively bidding on the item. -
FIG. 2C illustrates the exemplary virtual environment of FIGS. 2A and 2B with the avatar that is initially shown in FIG. 2A being rendered in accordance with an avatar modification state corresponding to a “heightened” acquisition interest level. -
FIG. 3 illustrates an alternate embodiment of a virtual environment via which aspects of an online auction are made to be visually perceptible to a participant of the online auction. -
FIG. 4 is a flow diagram that illustrates an example process describing aspects of the technologies disclosed herein for translating user signals into a virtual environment having a visually perceptible competitive landscape. -
FIG. 5 shows an illustrative configuration of a wearable device capable of implementing aspects of the technologies disclosed herein. -
FIG. 6 illustrates additional details of an example computer architecture for a computer capable of implementing aspects of the technologies described herein. - This Detailed Description describes technologies for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. In various implementations, avatars are rendered in a virtual environment that is generated to communicate a competitive landscape associated with an online auction that facilitates competitive bidding for an item. Various participants' acquisition interest levels may be determined by analyzing the participants' user activity in association with the online auction. In general terms, an acquisition interest level for a particular participant in association with an online auction is indicative of a probability that the particular participant will competitively bid for an item being auctioned off in the online auction. By determining the participants' acquisition interest levels, the participants' avatars may be rendered differently based on the participants' level of interest in (e.g., motivation toward) acquiring the item that is being auctioned. In this way, the individual participants' avatars can be rendered in a three-dimensional (“3D”) immersive environment in a manner such that the individual participants' level of interest in acquiring the item is visually perceptible. As a specific example, avatars may be rendered to appear more (or less) excited about the item as their corresponding user activity indicates that they are more (or less) likely to competitively bid on the item in a genuine attempt to win the online auction.
- The disclosed techniques provide meaningful insight into the competitive landscape of the online auction and, by doing so, excite the participants' competitive nature so as to effectively retain participants' interests in the online auction. This can reduce or even eliminate the lure for these participants to aimlessly browse through other online auctions. In this way, the disclosed technologies tangibly improve human interaction with computing devices in a manner that improves computing efficiencies with respect to a wide variety of computing resources that would otherwise be wastefully consumed and/or utilized. This is because reducing the lure for participants to leave a “competitive” auction that is currently being viewed in order to browse through other auctions directly reduces both the network bandwidth and processing cycles consumed by server(s) that are hosting the online auctions. Technical benefits other than those specifically identified herein might also be realized through implementations of the disclosed technologies.
- As described in more detail below, aspects of the technologies disclosed herein can be implemented by a wearable computing device such as, for example, an augmented reality (“AR”) device or virtual reality (“VR”) device. For example, a participant of an online auction might don a wearable computing device to view a virtual reality environment that is specifically tailored to visually communicate aspects of the competitive landscape of the online auction. For example, the wearable device can render avatars of various participants of the online auction so that the excitement and/or motivation of the various participants is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems. As used herein, the term “virtual environment” refers to any environment in which one or more user perceptible objects (e.g., avatars, display menus, price icons, etc.) are rendered virtually as opposed to existing within a real-world environment surrounding a user. Thus, it can be appreciated that an AR device may be effective at generating a virtual environment within the context of the present disclosure—even if some real-world objects remain perceptible to a user.
- It is to be further appreciated that the technologies described herein can be implemented on a variety of different types of wearable devices configured with a variety of different operating systems, hardware components, and/or installed applications. In various configurations, for example, the wearable device can be implemented by the following example wearable devices: GOOGLE GLASS, MAGIC LEAP ONE, MICROSOFT HOLOLENS,
META 2, SONY SMART EYEGLASS, HTC VIVE, OCULUS GO, PLAYSTATION VR, or WINDOWS mixed reality headsets. Thus, embodiments of the present disclosure can be implemented in any AR-capable device, which is different from goggles or glasses that obstruct a user's view of real-world objects (i.e., actual reality). The techniques described herein are device and/or operating system agnostic. - Turning now to
FIG. 1, various aspects are illustrated of an exemplary system for analyzing activity data 102 that is received in association with a first online auction 104(1) to render a virtual environment 106 that has a visually perceptible competitive landscape. As illustrated, activity data 102 is received from client devices 108 that correspond to various individual participants 110 of the first online auction 104(1). More specifically, first activity data 102(1) is received via a first client device 108(1) that is being used by a first participant 110(1), second activity data 102(2) is received via a second client device 108(2) that is being used by a second participant 110(2), and so on. - In the illustrated embodiment, an
online auctioneer system 112 is utilizing at least one database 114 to host a plurality of online auctions 104 (e.g., online auctions 104(1) through 104(N)). In this embodiment, the online auctioneer system 112 is configured in accordance with a client-server computing architecture in which activity data 102 is transferred between the online auctioneer system 112 and one or more client devices via at least one network 116. In some embodiments, the online auctioneer system 112 may be configured in accordance with a peer-to-peer computing architecture. - The
online auctioneer system 112 may monitor various instances of the activity data 102 on a per-user basis. For example, first activity data 102(1) may be monitored for the first participant 110(1), second activity data 102(2) may be monitored for the second participant 110(2), and so on. For purposes of the discussion of FIG. 1, presume that the first activity data 102(1) indicates that the first participant 110(1) has added the item associated with the first auction 104(1) to her watchlist and that she has also opened a web browser to view the item several times per hour for the last several hours. Further presume that the second activity data 102(2) indicates that the second participant 110(2) has added the item associated with the first auction 104(1) to his watchlist and that he has also periodically opened a web browser to view the item—albeit not as frequently as the first participant 110(1). - The
online auctioneer system 112 may then analyze the activity data 102 on a per-user basis to identify user signals that are indicative of acquisition interest levels for the various participants 110. The acquisition interest level determined for each particular participant may generally indicate a strength of that user's intentions to acquire the item through the competitive bidding. - As a specific but nonlimiting example, the user activities of the first participant 110(1) having added the item associated with the first auction 104(1) to her watchlist may be identified as a user signal(s) that indicates an intention of the first participant 110(1) to acquire the item through the competitive bidding. That is, the first participant 110(1) having added the item to her watchlist serves as evidence that the first participant 110(1) will competitively bid on the item in the sense that her “watching” the item makes it objectively appear more probable that she intends to bid on the item than it would objectively appear had she not “watched” the item. Furthermore, the user activities of the first participant 110(1) having frequently opened the web browser to view the item over the last several hours may be identified as another user signal that indicates an intention of the first participant 110(1) to acquire the item through the competitive bidding. Thus, based on these identified user signals, a “first” acquisition interest level may be determined for the first participant 110(1).
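The per-user signal identification described above can be sketched as a function from raw activity counts to named user signals. The signal names and thresholds here are hypothetical, chosen only to make the idea concrete:

```python
def identify_user_signals(views_last_hour, on_watchlist, hours_since_last_view):
    """Translate raw per-user activity counts into named user signals.

    The signal names and threshold values are illustrative assumptions.
    """
    signals = []
    if on_watchlist:
        signals.append("watching_item")
    if views_last_hour >= 3:
        signals.append("frequent_viewing")
    elif views_last_hour >= 1:
        signals.append("periodic_viewing")
    if hours_since_last_view > 24:
        signals.append("gone_cold")   # viewed once, never returned
    return signals
```

In the running example, the first participant would yield both "watching_item" and "frequent_viewing", while the second participant, viewing less often, would yield "watching_item" and "periodic_viewing".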
- Similarly, the user activities of the second participant 110(2) having added the item associated with the first auction 104(1) to his watchlist and also having frequently opened the web browser to view the item may be identified as user signals that indicate an intention of the second participant 110(2) to acquire the item through the competitive bidding. Thus, based on these identified user signals, a “second” acquisition interest level may be determined for the second participant 110(2). However, since the second participant 110(2) has viewed the item with a slightly lower frequency than the first participant 110(1), the “second” acquisition interest level that is determined for the second participant 110(2) may be slightly lower than the “first” acquisition interest level that is determined for the first participant 110(1). Stated plainly, the identified user signals may indicate that both the first participant 110(1) and the second participant 110(2) intend to competitively bid on the item but that the first participant is slightly more enthusiastic and/or motivated to do so.
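Turning identified user signals into an acquisition interest level can also be framed as a prediction problem. The sketch below uses a k-nearest-neighbor vote, one of the predictive model families named later in this description, over hypothetical historical examples; the feature choice ((views_per_hour, watchlist flag)) and the data are assumptions for illustration:

```python
import math

def knn_bid_probability(history, candidate, k=3):
    """Estimate the probability that a participant will competitively bid
    using a k-nearest-neighbor vote over historical examples.

    history:   list of ((views_per_hour, on_watchlist_flag), did_bid) pairs
    candidate: feature tuple for the participant being scored
    """
    def dist(a, b):
        # Euclidean distance between two feature tuples.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Vote among the k historical participants most similar to the candidate.
    nearest = sorted(history, key=lambda example: dist(example[0], candidate))[:k]
    return sum(1 for _, did_bid in nearest if did_bid) / len(nearest)
```

For a candidate who watches the item and views it often, the nearest historical neighbors are past bidders, so the estimated probability is high; for a candidate with sparse activity, the neighbors are mostly non-bidders and the estimate drops accordingly.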
- In some embodiments, determining the acquisition interest levels for the various participants may be based on
historical activity data 120 associated with the individual participants 110. The historical activity data 120 may define historical user activities of at least some of the plurality of participants in association with previous online auctions 104—i.e., online auctions that have already occurred. For example, the historical user activity data 120 may indicate trends of how users (either individually or as a general populace) tend to behave with respect to particular online auctions prior to bidding on those online auctions. As a specific but non-limiting example, the historical activity data 120 may reveal that it is commonplace for users to add an item to their watchlist and then somewhat compulsively view and re-view the item prior to beginning to enter competitive bids on the item. - In some embodiments, the
historical activity data 120 may be “user specific” historical activity data that defines historical user activities of a specific participant in association with previous online auctions 104—i.e., online auctions that have already occurred. For example, if historical user activity that is stored in association with a specific user profile 125 indicates that this particular user frequently adds items to her watchlist without later bidding on the item, then this particular user adding an item to her watchlist may be given little or no weight with respect to determining this particular user's acquisition interest level for this item. In contrast, however, if the historical user activity indicates that this particular user rarely adds items to her watchlist and always submits competitive bids for such items toward the end of the associated auctions, then this particular user adding an item to her watchlist may be weighed heavily in determining this particular user's acquisition interest level for this item. - In some embodiments, the
online auctioneer system 112 may utilize a machine learning engine 124 to identify correlations between certain types of user activities and competitively bidding on an item. The machine learning engine 124 may build and/or continually refine an acquisition interest model 118 based upon the identified correlations. For example, as illustrated, the online auctioneer system 112 may provide the activity data 102 (and/or the historical activity data 120) to the machine learning engine 124. The machine learning engine 124 may then use this data to build the acquisition interest model 118, which is a model that is usable to predict and/or output acquisition interest levels for individual participants based on the types of user activities that those participants perform in association with an online auction 104. Exemplary types of user activities that the machine learning engine 124 might identify as correlating with users competitively bidding on an item may include, but are not limited to, users adding items to their watchlists, users frequently checking a status of particular auctions, users leaving a particular auction open (e.g., in a web browser or other application) on their client device for long durations of time, users monitoring a particular auction without browsing through other auctions, and/or any other suitable activity that might be generally indicative of an increased likelihood of a participant competitively bidding on an item. - It should be appreciated that any appropriate machine learning techniques may also be utilized, such as unsupervised learning, semi-supervised learning, classification analysis, regression analysis, clustering, etc. One or more predictive models may also be utilized, such as a group method of data handling, Naïve Bayes, k-nearest neighbor algorithm, majority classifier, support vector machines, random forests, boosted trees, Classification and Regression Trees (CART), neural networks, ordinary least squares, and so on. - The user profiles 125 may further include
avatar profile data 122 that defines avatar profiles for the participants 110. The avatar profile data 122 may be utilized by the online auctioneer system 112 to determine how to graphically render avatars for the participants 110 within the virtual environment 106. In some embodiments, the avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants 110. For example, each of the first participant 110(1), the second participant 110(2), and the N-th participant 110(N) may have corresponding 3D models that may be rendered to graphically represent these participants' presence within the virtual environment 106 that is associated with the first auction 104(1). - In some embodiments, individual participants may define or otherwise control certain aspects of their corresponding avatars. For example, individual participants may be enabled to define a variety of parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter. It can be appreciated, therefore, that an individual participant may define parameters for their corresponding avatar to cause the avatar to generally resemble what the individual participant looks like in real life. For example, as described in more detail below, the first participant 110(1) may define parameters within her avatar profile so that the avatar that graphically represents her presence within the virtual environment generally resembles how she appears in real life. Similarly, the
other participants 110 may also define parameters within their own avatar profiles so that their respective avatars also resemble them or, if they so choose, some sort of alter ego. For example, various users may define parameters to cause their avatar to appear as a dinosaur, a team mascot for a college football team, a robot, or any other suitable configuration. - The
online auctioneer system 112 may use the avatar profile data 122 to determine avatar modification states for the various participants' avatars. The determined avatar modification states may correspond on a per-user basis to the various participants' acquisition interest levels. For example, a “first” avatar modification state may be determined for use with the first participant's 110(1) avatar profile based on the first participant 110(1) having added the item to her watchlist and also frequently checking the status of the first auction 104(1). In this way, a first avatar 126(1) that represents the first participant 110(1) within the virtual environment 106 may be rendered so that the first participant's 110(1) acquisition interest level relative to other participants is visually perceptible. The other participants' 110 avatars may also be rendered according to those participants' acquisition interest levels so that their relative acquisition interest levels are also visually perceptible. As illustrated, for example, a second avatar 126(2) that represents the second participant 110(2) is rendered so as to appear highly motivated—albeit slightly less so than the first avatar 126(1)—to acquire the item. - In the illustrated embodiment, an N-th participant 110(N) is viewing the
virtual environment 106 via a wearable device 128 such as, for example, an augmented reality (“AR”) device or virtual reality (“VR”) device. More specifically, the N-th participant 110(N) is wearing the wearable device 128 on his head and is viewing the virtual environment 106 associated with the first online auction 104(1). In the illustrated example, the virtual environment 106 is a VR environment in which the wearable device 128 is rendering the first avatar 126(1) and the second avatar 126(2) that represent the presence of the first participant 110(1) and the second participant 110(2), respectively. It can be appreciated from the illustrated avatars 126 that the excitement and/or motivation of the various participants 110—as indicated by their corresponding user activities—is readily and visually perceptible in a manner that is clearly lacking in conventional online auction systems. - In various embodiments illustrated herein, the
virtual environment 106 is rendered in accordance with a first-person view. For example, as illustrated in FIG. 1, when viewing the virtual environment 106 the N-th participant is able to see avatars associated with the other participants of the auction (e.g., the first participant 110(1) and the second participant 110(2)) but not an avatar associated with himself. In various other embodiments, however, the virtual environment 106 is rendered in accordance with a second-person view or a third-person view. - In some embodiments, the
avatars 126 may be displayed within the virtual environment 106 alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars 126 in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants 110, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction 104 are made immediately and visually apparent. Thus, in stark contrast to conventional online auctions, in accordance with the techniques described herein the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction. This can lessen the lure for the participants 110 to leave the first auction 104(1) to aimlessly browse through the other auctions 104(2) through 104(N). As described above, this provides a marked improvement to various computing resources by reducing unnecessary web browsing and, therefore, reducing network bandwidth usage. -
FIGS. 2A through 2C illustrate aspects of an implementation of the techniques described herein in which the virtual environment is a three-dimensional immersive environment. For example, the participant that is viewing (e.g., peering into) the virtual environment may be enabled to walk around similar to a virtual gaming environment. - Turning now to
FIG. 2A, illustrated is an exemplary virtual environment 106 in which an avatar 126 that represents a participant 110 is rendered to visually communicate an acquisition interest level of the participant 110. For purposes of FIG. 2A, the illustrated avatar is the second avatar 126(2) that represents the second participant 110(2) of FIG. 1. As illustrated, the second avatar 126(2) is being rendered in an avatar modification state that is designed to communicate that the second participant 110(2) is generally interested in acquiring an item 202. - In some embodiments, a graphical representation of the
item 202 may be shown within the virtual environment alongside the second avatar 126(2). The graphical representation of the item 202 may be a two-dimensional image of the item 202. For example, a seller may take a picture of the item for sale and upload the picture onto the online auctioneer system 112. Alternatively, the graphical representation of the item 202 may be a three-dimensional model of the item 202. For example, the seller may generate or otherwise obtain object data that defines a 3D model that is associated with the item that is being auctioned. Exemplary object data may include, but is not limited to, STEP files (i.e., 3D model files formatted according to the “Standard for the Exchange of Product Data”), IGES files (i.e., 3D model files formatted according to the “Initial Graphics Exchange Specification”), glTF files (i.e., 3D model files formatted according to the “GL Transmission Format”), and/or any other suitable format for defining 3D models. - In
FIG. 2A, the second avatar 126(2) is being rendered in accordance with a particular acquisition interest level that indicates that the second participant 110(2) has performed some user activity with respect to the item 202 which indicates he is at least generally interested in the item 202. However, as of this point, the second participant 110(2) has not performed user activity that indicates he is strongly motivated to acquire the item 202 through the competitive bidding. For example, perhaps the second participant 110(2) has viewed the item 202 a few times and maybe has even added the item to his watchlist but is not viewing the item 202 with a frequency that is high enough to indicate a high probability of aggressively bidding on the item 202. In this specific but nonlimiting example, the avatar modification state causes the second avatar 126(2) to be rendered with a facial expression and a hand gesture that visually communicates at least some interest in the item 202. - In
FIG. 2A, the N-th participant is donning the wearable device 128 which is rendering the virtual environment 106. In this way, the N-th participant can “peer” into a virtual auction hall that corresponds to at least the online auction for the item 202. Upon “peering” into the virtual auction hall, the N-th participant can immediately obtain insight into the competitive landscape of the online auction for the item 202. Furthermore, it can be appreciated that as of the point of time illustrated in FIG. 2A, the landscape is not as competitive with respect to the item 202 as compared to the points in time illustrated in FIGS. 2B and 2C. For example, only the second participant 110(2) has performed user activities which indicate an interest in the item 202. Furthermore, relative to FIGS. 2B and 2C discussed below, no participants 110 have performed user activities that indicate a high probability that they will aggressively bid on the item 202. - In some embodiments, individual virtual environments may be specifically tailored to
individual participants 110. For example, the virtual environment 106 illustrated in FIG. 2A is shown to include the “Sports Tickets” item 202 along with a “Watch” item 204 and a “Shoe” item 206. In some embodiments, the virtual environment 106 may be uniquely generated for the N-th participant 110(N) to include items that the N-th participant 110(N) has demonstrated an interest in. As a specific example, the N-th participant 110(N) may have added each of the “Sports Tickets” item 202, the “Watch” item 204, and the “Shoe” item 206 to his watchlist. Then, the virtual environment 106 is generated to include all of the items which the N-th participant is currently “watching.” In some embodiments, the virtual environment 106 for any particular participant 110 may include an indication of that virtual environment 106 being at least partially customized or tailored to the particular participant 110. For example, in the illustrated embodiment the virtual environment 106 includes the text of “Steve's Virtual Auction Hall” to indicate to the N-th participant 110(N) that the virtual environment 106 is his own. - In various embodiments, the
virtual environment 106 may include one or more text fields 208 that display various types of information regarding the online auction. As illustrated, for example, a first text field 208(1) displays some specific information regarding the item 202 such as, for example, notes from the seller, which teams will compete, a section of the seats, how many tickets are included, a date that the event will take place, and any other suitable type of information. As further illustrated, the virtual environment 106 includes a second text field 208(2) that displays information regarding the competitive landscape of the online auction. For example, as illustrated, the text field 208(2) indicates that 5 participants have added the item 202 to their watchlists, 121 participants have viewed the item, and furthermore that a particular participant (i.e., the second participant 110(2)) with a username of “Super_Fan_#1” has performed user activity that demonstrates a general interest in the item. It can be appreciated that the second avatar 126(2) corresponds to this user. - Turning now to
FIG. 2B, the exemplary virtual environment 106 of FIG. 2A is illustrated with an additional avatar being rendered to represent another participant that has performed user activity consistent with a high probability of competitively bidding on the item 202. For purposes of FIG. 2B, the additional avatar is the first avatar 126(1) that represents the first participant 110(1) of FIG. 1. As illustrated, the first avatar 126(1) is being rendered in an avatar modification state that is designed to communicate that the first participant 110(1) is highly motivated to acquire the item 202 by submitting competitive bids within the online auction. In the specific but nonlimiting example illustrated, the first avatar 126(1) is being rendered to represent the first participant 110(1) having her fingers crossed in hopes that she will “win” the online auction. - In some embodiments, the various modification states may be designed to dynamically change a size with which the
avatars 126 are rendered based on the acquisition interest levels of the participants being represented. For example, as illustrated in FIG. 2B, the first avatar 126(1) is rendered relatively larger than the second avatar 126(2) due to the analyzed user activity indicating that the first participant 110(1) is more likely to competitively bid on the item 202 than the second participant 110(2). Thus, the first avatar 126(1) appears more prominent within the virtual environment 106 than the second avatar 126(2). In this way, the excitement of the first participant 110(1) toward the item may be contagious within the virtual environment 106 and “rub off” on the other participants by sparking various feelings such as, for example, urgency for the item 202, scarcity of the item 202, or competition for the item 202. - Turning now to
FIG. 2C, the exemplary virtual environment 106 of FIGS. 2A and 2B is illustrated with the second avatar 126(2) being rendered in accordance with an avatar modification state corresponding to a “heightened” acquisition interest level. That is, the acquisition interest level of the second participant 110(2) has been heightened in comparison to FIGS. 2A and 2B. For example, the second participant 110(2) may have also been viewing the virtual environment 106 and, therefore, may have seen the first avatar 126(1) that represents how excited the first participant 110(1) is about acquiring the “Sports Tickets” item 202. As a result, the second participant 110(2) may have become more excited about the “Sports Tickets” item 202 and begun to perform certain user activities that are consistent with a high probability that the second participant 110(2) would competitively bid against the first participant 110(1) in an effort to “win” the online auction. - By donning the
wearable device 128 and “peering into” the virtual environment 106, the N-th participant 110(N) is enabled to visually perceive the increasingly competitive landscape of the online auction over time. In this way, the N-th participant's interest in the “Sports Tickets” item 202 may be better acquired and retained as opposed to conventional online auction systems for which a competitive landscape is visually imperceptible. - In some embodiments, the
online auctioneer system 112 may monitor a status of the online auction in order to determine which participant currently holds a high bid for the item being auctioned. The online auctioneer system 112 may control and/or modify various aspects of the virtual environment 106 based upon which particular participant currently has the highest bid. As an example, an acquisition interest level for a particular participant may be determined based on that participant currently and/or frequently having the high bid for an item. Additionally, or alternatively, various aspects of a modification state for an avatar of a particular participant may be determined based on that particular participant currently and/or frequently having the high bid. For example, an avatar for the current high bidder may be rendered so as to appear more excited and/or happier than other avatars associated with participants that have not bid on the item and/or have lost “high bidder” status. - In some embodiments, the
online auctioneer system 112 may exclusively provide a current high bidder with various avatar abilities with respect to the item. Such abilities may be exclusively provided to the particular participant in the sense that the other participants are not provided with the same abilities. As a specific but non-limiting example, a participant that is currently the high bidder for the item may be provided with the ability for their avatar to hold the item within the virtual environment and/or to taunt the other participants' avatars with the item. For example, the particular participant's avatar may hold a 3D model of the item and may walk up to other participants' avatars and hold the 3D model up to their face and then pull it away quickly while laughing. These avatar abilities that are exclusively provided to the particular participant may, in some implementations, be revoked in the event that some other participant outbids the particular participant. Then, these avatar abilities may be provided to this other participant so long as that participant continues to have the high bid. - Turning now to
FIG. 3, illustrated is an alternate embodiment of a virtual environment 302 via which aspects of an online auction are made to be visually perceptible to a participant 110(N) of the online auction. In this alternate embodiment, a participant 110 that is wearing the wearable device 128 enters a virtual live auction experience in which an item (e.g., a guitar 304) is being auctioned. In various implementations, the virtual live auction experience displays images and/or identifiers of other participants bidding on the item 304. For example, as illustrated, the virtual live auction experience illustrates an avatar, a user photo, or some other representation for the other participants that are watching, bidding, or are likely to bid on the item 304. In some embodiments, a username (e.g., Jane D., Sam W., Beth L., John M, Expert Bidder, Music Guy86, Strummin Gal, etc.) may be illustrated adjacent to the various representations of the other participants. - As illustrated, the virtual live auction experience also displays the
guitar 304 along with item information 306 about the guitar 304 (e.g., a manufacturer, a model, a description, etc.). The virtual auction experience further displays auction information 308 such as: a minimum bid (e.g., $150), a minimum bid increment (e.g., $5), a current high bid (e.g., $245 which belongs to Strummin Gal), time remaining in the live auction (e.g., two minutes and twenty-four seconds), total number of bids (e.g., 16), total number of bidders (e.g., 8—the seven shown in the virtual auction experience and the participant 110), a bid history, and so forth. - As illustrated, Beth L. is in the process of placing or has recently placed a new high bid of $250—which has yet to but will soon be reflected within the
auction information 308 once it is updated. Moreover, the wearable device 128 provides the participant with an option 310 to bid $255 or another amount that is higher than the current high bid. The virtual live auction experience can provide sound and other effects (e.g., a visual celebration for a winning bidder), as well. - In some embodiments, a computer system can receive activity data defining user activity from other members of an auction. The activity data can include computing activity such as watching, frequently viewing, or even bidding on the item. The activity data may also include gesture indicators that can be translated into an audio, haptic, and/or visual indicator for the participant. Each gesture of other participants can be ranked or can be related to a priority value and/or an acquisition interest level. For instance, when activity data received from other participants of an auction indicates that another participant is talking with a loud voice or indicating a high bid, a high priority signal may be generated and translated into a computer-generated voice that is played to the participant. The computer-generated voice may indicate the intent of other participants. As described in more detail in relation to
FIGS. 1 through 2C, such signals and associated priority values can be translated into body language that is displayed via an avatar to the participant on a graphical user interface. These types of signals are not usually present in online auctions, and the techniques of this invention enable participants and auction platforms to benefit from these types of signals, which are otherwise only available in live auction environments. - In some embodiments, the participants of an online auction can also be ranked or prioritized with respect to one another. For instance, if a first participant is in an auction with three highly ranked participants, then the techniques described herein may cause audio, visual, and/or haptic indicators generated by the highly ranked participants to be more prominent to the user than audio, visual, and/or haptic indicators generated by lower ranked participants. The rankings may be based on prior user history and activity levels.
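The rank-based prominence just described can be sketched as a simple weighting function. This is an illustrative sketch only; the linear weighting and the rank cap are assumptions introduced here, not values from the disclosure.

```python
# Illustrative sketch: scale the prominence of an audio, visual, or haptic
# indicator by the rank of the participant who generated it. The linear
# weighting and the max_rank cap are assumptions, not from the disclosure.

def indicator_prominence(base_intensity: float, rank: int, max_rank: int = 10) -> float:
    """Higher-ranked participants (rank 1 is highest) yield more prominent cues."""
    rank = min(max(rank, 1), max_rank)      # clamp the rank into [1, max_rank]
    weight = 1.0 - (rank - 1) / max_rank    # rank 1 -> 1.0, rank 10 -> 0.1
    return base_intensity * weight
```

With this weighting, a cue from a top-ranked participant passes through at full intensity, while cues from lower-ranked participants are attenuated toward a small floor.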
-
FIG. 4 is a flow diagram that illustrates an example process 400 describing aspects of the technologies presented herein with reference to FIGS. 1-3 for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. The process 400 is illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. - The particular implementation of the technologies disclosed herein is a matter of choice dependent on the performance and other requirements of the computing device. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These states, operations, structural devices, acts, and modules can be implemented in hardware, software (i.e., computer-executable instructions), firmware, in special-purpose digital logic, and any combination thereof. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform or implement particular functions. It should be appreciated that more or fewer operations can be performed than shown in the figures and described herein. These operations can also be performed in a different order than those described herein. Other processes described throughout this disclosure shall be interpreted accordingly.
- At
block 401, a system receives activity data that defines user activity of a plurality of participants in association with an online auction for an item. The user activity may indicate on a per-user basis which participants of the online auction have viewed the online auction, a frequency with which the participants have viewed the online auction, which participants have bid on the item, which participants have scheduled bids for the item, which participants have added the item to their watchlist, and any other suitable user activity that may be indicative of whether specific participants will likely attempt to acquire the item via competitive bidding. - Unlike conventional online auction systems which monitor user activity in the aggregate for multiple users, in accordance with the present techniques the activity data may define the user activity on a per-user basis. For example, the activity data may indicate that a particular participant has viewed the online auction for the item several times per hour for the last several hours whereas one or more other participants have viewed the online auction only once and have not returned thereto. The activity data may further indicate that the particular participant has added the item to their “watch list” to trigger updates any time a bid is submitted for the item whereas the one or more other participants are not “watching” the item.
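The per-user activity record received at block 401 might be structured along the following lines. The field names and schema are hypothetical; the disclosure enumerates only the kinds of activity tracked (views, view frequency, bids, scheduled bids, watchlist additions), not a data layout.

```python
# Illustrative sketch of per-user activity data received at block 401.
# Field names are hypothetical; the disclosure lists only the kinds of
# tracked activity, not a schema.

from dataclasses import dataclass

@dataclass
class ParticipantActivity:
    participant_id: str
    views: int = 0               # total views of the auction listing
    views_last_hour: int = 0     # recent viewing frequency
    bids: int = 0                # bids already submitted
    scheduled_bid: bool = False  # whether a future bid is scheduled
    watching: bool = False       # whether the item is on the watchlist

# Activity is kept on a per-user basis rather than aggregated across users.
activity = {
    "Super_Fan_#1": ParticipantActivity("Super_Fan_#1", views=12,
                                        views_last_hour=4, watching=True),
    "one_time_viewer": ParticipantActivity("one_time_viewer", views=1),
}
```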
- In some implementations, the activity data may define physical activities performed by the individual participants within their corresponding real-world environments. For example, suppose that a particular participant is donning a virtual reality headset to become immersed into a three-dimensional immersive environment as described herein. Further suppose that while immersed therein, the participant verbally states “Wow, I'd give anything for those ‘Sports Event’ Tickets.” In some implementations, the virtual reality headset may detect and analyze the participant's statement. Additionally, or alternatively, the virtual reality device may detect gestures (e.g., via a camera or other type of sensor) of the participant and analyze these gestures to determine how excited and/or motivated the participant is to acquire the item.
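Signals such as repeated views, watchlist additions, prior bids, and detected verbal excitement could be reduced to a single per-user score along the following lines. The weights, caps, and the 0-to-1 scale are assumptions for illustration; the disclosure does not specify a scoring formula.

```python
# Illustrative sketch: combine per-user computing and physical-activity
# signals into one acquisition interest level. All weights and caps are
# assumptions; no formula is specified in the disclosure.

def acquisition_interest_level(views_last_hour: int, watching: bool,
                               bids: int, excited_utterance: bool) -> float:
    """Reduce a participant's signals to a score in [0.0, 1.0]."""
    score = 0.1 * min(views_last_hour, 5)       # frequent viewing, capped
    score += 0.2 if watching else 0.0           # item added to the watchlist
    score += 0.1 * min(bids, 2)                 # bids already placed
    score += 0.3 if excited_utterance else 0.0  # e.g., a detected excited remark
    return min(score, 1.0)
```

Under this sketch, a participant who views the item several times an hour and is watching it scores well above a one-time viewer, matching the per-user distinction drawn above.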
- At
block 403, the activity data may be analyzed to identify user signals that indicate acquisition interest levels for the plurality of participants. The analysis of the activity data may be performed on a per-user basis so that identified user signals are indicative of acquisition interest levels for the various participants on a per-user basis. Stated in plain terms, individual acquisition interest levels may indicate strengths of intentions of corresponding participants to acquire the item through the competitive bidding. For example, the particular participant having added the item to their watchlist and continuing to view the online auction for the item several times per hour may indicate that the particular participant has very strong intentions of entering a winning bid toward the end of the auction. Therefore, based on these user signals, an acquisition interest level may be determined for the particular participant that is relatively higher than for other participants whose corresponding user signals indicate that they are relatively less motivated to acquire the item through the competitive bidding. - At
block 405, avatar profile data may be received that defines avatar profiles for the various participants. The avatar profile data may be utilized to determine how to graphically render avatars for the various participants within the virtual environment. The avatar profiles may facilitate dynamic modifications for three-dimensional (“3D”) models of the various participants. For example, a 3D model for a particular user may be dynamically modified as user signals are received that indicate that the particular user is more (or less) motivated to acquire the item being auctioned. - In some implementations, individual participants may be enabled to define or otherwise control certain aspects of their corresponding avatars. For example, individual participants may be enabled to define various parameters for their avatar such as a hair color, a gender, a skin tone, a height, a build (e.g., a muscular body type, an average body type, a slender body type, etc.), a wardrobe, a voice profile, and/or any other suitable parameter. It can be appreciated, therefore, that an individual participant may define parameters for their corresponding avatar to cause the avatar to generally resemble what the individual participant looks like in real life.
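The user-controllable avatar parameters enumerated above could be carried in a profile record such as the following. The field names and defaults are hypothetical; only the parameter list itself comes from the description.

```python
# Illustrative sketch of avatar profile data (block 405). Field names and
# defaults are assumptions; the disclosure lists only the parameters a
# participant may control (hair color, gender, skin tone, height, build,
# wardrobe, voice profile).

from dataclasses import dataclass

@dataclass
class AvatarProfile:
    hair_color: str = "brown"
    gender: str = "unspecified"
    skin_tone: str = "medium"
    height_cm: int = 170
    build: str = "average"         # e.g., "muscular", "average", "slender"
    wardrobe: str = "casual"
    voice_profile: str = "default"

# A participant tunes the parameters so the avatar roughly resembles them.
profile = AvatarProfile(hair_color="black", height_cm=183, build="slender")
```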
- At
block 407, avatar modification states are determined for the various participants' avatars. The avatar modification states may specifically correspond on a per-user basis to the various participants' acquisition interest levels. For example, due to the particular participant having the very strong intentions to acquire the item via the competitive bidding, an avatar modification state may be determined for the particular participant's avatar to make the particular participant's intentions visually perceptible to others via the appearance of the particular participant's avatar. - At
block 409, one or more computing devices may be caused to display the avatars for the various participants in accordance with the avatar modification states that correspond to the various participants' acquisition interest levels. In some embodiments, the avatars may be displayed within the virtual environment alongside a graphical representation of the item being auctioned. It can be appreciated that by rendering the individual avatars in accordance with avatar modification states that graphically represent the acquisition interest levels for the various participants, aspects of the competitive landscape (e.g., degree of probable competition for acquiring the item) of the online auction are made immediately and visually apparent. Thus, in stark contrast to conventional online auctions, in accordance with the techniques described herein the competitive landscape for online auctions is made visually perceptible within a virtual environment associated with the online auction to acquire and retain users' interest in the online auction. -
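Blocks 407 and 409 can be sketched together as a mapping from each participant's acquisition interest level to a renderable modification state. The state names, thresholds, and scale factors here are assumptions, not values from the disclosure.

```python
# Illustrative sketch of blocks 407 and 409: choose an avatar modification
# state per participant from their acquisition interest level, then hand the
# states to the renderer. State names, thresholds, and scales are assumptions.

def modification_state(interest_level: float) -> dict:
    """Select a modification state for an interest level in [0.0, 1.0]."""
    if interest_level >= 0.7:
        # Strong intent: excited expression, crossed fingers, prominent size.
        return {"expression": "excited", "gesture": "fingers_crossed", "scale": 1.5}
    if interest_level >= 0.3:
        # General interest: mildly interested expression and gesture.
        return {"expression": "interested", "gesture": "hand_on_chin", "scale": 1.0}
    return {"expression": "neutral", "gesture": "none", "scale": 0.9}

def states_for_participants(interest_levels: dict) -> dict:
    """Determine a per-user modification state for every participant."""
    return {pid: modification_state(level) for pid, level in interest_levels.items()}
```

Rendering avatars with states like these makes the more motivated bidders visibly larger and more animated, so the competitive landscape is perceptible at a glance.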
FIG. 5 shows an illustrative configuration of a wearable device 500 (e.g., a headset system, a head-mounted display, etc.) capable of implementing aspects of the technologies disclosed herein. The wearable device 500 includes an optical system 502 with an illumination engine 504 to generate electro-magnetic (“EM”) radiation that includes both a first bandwidth for generating computer-generated (“CG”) images and a second bandwidth for tracking physical objects. The first bandwidth may include some or all of the visible-light portion of the EM spectrum whereas the second bandwidth may include any portion of the EM spectrum that is suitable to deploy a desired tracking protocol. - In the example configuration, the
optical system 502 further includes an optical assembly 506 that is positioned to receive the EM radiation from the illumination engine 504 and to direct the EM radiation (or individual bandwidths thereof) along one or more predetermined optical paths. For example, the illumination engine 504 may emit the EM radiation into the optical assembly 506 along a common optical path that is shared by both the first bandwidth and the second bandwidth. The optical assembly 506 may also include one or more optical components that are configured to separate the first bandwidth from the second bandwidth (e.g., by causing the first and second bandwidths to propagate along different image-generation and object-tracking optical paths, respectively). - The
optical assembly 506 includes components that are configured to direct the EM radiation with respect to one or more components of the optical assembly 506 and, more specifically, to direct the first bandwidth for image-generation purposes and to direct the second bandwidth for object-tracking purposes. In this example, the optical system 502 further includes a sensor 508 to generate object data in response to a reflected portion of the second bandwidth, i.e., a portion of the second bandwidth that is reflected off an object that exists within a real-world environment. - In various configurations, the
wearable device 500 may utilize the optical system 502 to generate a composite view (e.g., from a perspective of a user 128 that is wearing the wearable device 500) that includes both one or more CG images and a view of at least a portion of the real-world environment that includes the object. For example, the optical system 502 may utilize various technologies such as, for example, AR technologies to generate composite views that include CG images superimposed over a real-world view 126. As such, the optical system 502 may be configured to generate CG images via a display panel. The display panel can include separate right eye and left eye transparent display panels. - Alternatively, the display panel can include a single transparent display panel that is viewable with both eyes and/or a single transparent display panel that is viewable by a single eye only. Therefore, it can be appreciated that the technologies described herein may be deployed within a single-eye Near Eye Display (“NED”) system (e.g., GOOGLE GLASS) and/or a dual-eye NED system (e.g., OCULUS RIFT). The
wearable device 500 is an example device that is used to provide context and illustrate various features and aspects of the user interface display technologies and systems disclosed herein. Other devices and systems, such as VR systems, may also use the interface display technologies and systems disclosed herein. - The display panel may be a waveguide display that includes one or more diffractive optical elements (“DOEs”) for in-coupling incident light into the waveguide, expanding the incident light in one or more directions for exit pupil expansion, and/or out-coupling the incident light out of the waveguide (e.g., toward a user's eye). In some examples, the
wearable device 500 may further include an additional see-through optical component. - In the illustrated example of
FIG. 5, a controller 510 is operatively coupled to each of the illumination engine 504, the optical assembly 506 (and/or scanning devices thereof), and the sensor 508. The controller 510 includes one or more logic devices and one or more computer memory devices storing instructions executable by the logic device(s) to deploy functionalities described herein with relation to the optical system 502. The controller 510 can comprise one or more processing units 512, one or more computer-readable media 514 for storing an operating system 516 and data such as, for example, image data that defines one or more CG images and/or tracking data that defines one or more object tracking protocols. - The computer-
readable media 514 may further include an image-generation engine 518 that generates output signals to modulate generation of the first bandwidth of EM radiation by the illumination engine 504 and also to control the scanner(s) to direct the first bandwidth within the optical assembly 506. Ultimately, the scanner(s) direct the first bandwidth through a display panel to generate CG images that are perceptible to a user, such as a user interface. - The computer-
readable media 514 may further include an object-tracking engine 520 that generates output signals to modulate generation of the second bandwidth of EM radiation by the illumination engine 504 and also to control the scanner(s) to direct the second bandwidth along an object-tracking optical path to irradiate an object. The object-tracking engine 520 communicates with the sensor 508 to receive the object data that is generated based on the reflected portion of the second bandwidth. - The
object-tracking engine 520 then analyzes the object data to determine one or more characteristics of the object such as, for example, a depth of the object with respect to the optical system 502, an orientation of the object with respect to the optical system 502, a velocity and/or acceleration of the object with respect to the optical system 502, or any other desired characteristic of the object. The components of the wearable device 500 are operatively connected, for example, via a bus 522, which can include one or more of a system bus, a data bus, an address bus, a PCI bus, a Mini-PCI bus, and any variety of local, peripheral, and/or independent buses. - The
wearable device 500 may further include various other components, for example cameras (e.g., camera 524), microphones (e.g., microphone 526), accelerometers, gyroscopes, magnetometers, temperature sensors, touch sensors, biometric sensors, other image sensors, energy-storage components (e.g., battery), a communication facility, a GPS receiver, etc. Furthermore, the wearable device 500 can include one or more eye gaze sensors 528. In at least one example, an eye gaze sensor 528 is user facing and is configured to track the position of at least one eye of a user. Accordingly, eye position data (e.g., determined via use of eye gaze sensor 528), image data (e.g., determined via use of the camera 524), and other data can be processed to identify a gaze path of the user. That is, it can be determined that the user is looking at a particular section of a hardware display surface, a particular real-world object or part of a real-world object in the view of the user, and/or a rendered object or part of a rendered object displayed on a hardware display surface. - In some configurations, the
wearable device 500 can include an actuator 529. The processing units 512 can cause the generation of a haptic signal associated with a generated haptic effect to actuator 529, which in turn outputs haptic effects such as vibrotactile haptic effects, electrostatic friction haptic effects, or deformation haptic effects. Actuator 529 includes an actuator drive circuit. The actuator 529 may be, for example, an electric motor, an electro-magnetic actuator, a voice coil, a shape memory alloy, an electro-active polymer, a solenoid, an eccentric rotating mass motor (“ERM”), a linear resonant actuator (“LRA”), a piezoelectric actuator, a high bandwidth actuator, an electroactive polymer (“EAP”) actuator, an electrostatic friction display, or an ultrasonic vibration generator. - In alternate configurations,
wearable device 500 can include one or more additional actuators 529. The actuator 529 is an example of a haptic output device, where a haptic output device is a device configured to output haptic effects, such as vibrotactile haptic effects, electrostatic friction haptic effects, or deformation haptic effects, in response to a drive signal. In alternate configurations, the actuator 529 can be replaced by some other type of haptic output device. Further, in other alternate configurations, the wearable device 500 may not include actuator 529, and a separate device from wearable device 500 includes an actuator, or other haptic output device, that generates the haptic effects, and wearable device 500 sends generated haptic signals to that device through a communication device. - The processing unit(s) 512 can represent, for example, a CPU-type processing unit, a GPU-type processing unit, a field-programmable gate array (“FPGA”), another class of digital signal processor (“DSP”), or other hardware logic components that may, in some instances, be driven by a CPU. For example, and without limitation, illustrative types of hardware logic components that can be used include Application-Specific Integrated Circuits (“ASICs”), Application-Specific Standard Products (“ASSPs”), System-on-a-Chip Systems (“SOCs”), Complex Programmable Logic Devices (“CPLDs”), etc.
- As used herein, computer-readable media, such as computer-
readable media 514, can store instructions executable by the processing unit(s) 512. Computer-readable media can also store instructions executable by external processing units such as by an external CPU, an external GPU, and/or executable by an external accelerator, such as an FPGA-type accelerator, a DSP-type accelerator, or any other internal or external accelerator. In various examples, at least one CPU, GPU, and/or accelerator is incorporated in a computing device, while in some examples one or more of a CPU, GPU, and/or accelerator is external to a computing device. - In various examples, the
wearable device 500 is configured to interact, via network communications, with a network device (e.g., a network server or a cloud server) to implement the configurations described herein. For instance, the wearable device 500 may collect data and send the data over network(s) to the network device. The network device may then implement some of the functionality described herein (e.g., analyze passive signals, determine user interests, select a recommended item, etc.). Subsequently, the network device can cause the wearable device 500 to display an item and/or instruct the wearable device 500 to perform a task. - Computer-readable media can include computer storage media and/or communication media. Computer storage media can include one or more of volatile memory, nonvolatile memory, and/or other persistent and/or auxiliary computer storage media, removable and non-removable computer storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Thus, computer storage media includes tangible and/or physical forms of media included in a device and/or hardware component that is part of a device or external to a device, including but not limited to random access memory (“RAM”), static random-access memory (“SRAM”), dynamic random-access memory (“DRAM”), phase change memory (“PCM”), read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), flash memory, rotating media, optical cards or other optical storage media, magnetic storage, magnetic cards or other magnetic storage devices or media, solid-state memory devices, storage arrays, network attached storage, storage area networks, hosted computer storage or any other storage memory, storage device, and/or storage medium that can be used to store and maintain information for access by a computing device.
- In contrast to computer storage media, communication media can embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media. That is, computer storage media does not include communications media consisting solely of a modulated data signal, a carrier wave, or a propagated signal, per se.
-
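The wearable-device exchange described above (the device collects signals, a network device analyzes them and instructs the device) can be sketched as follows. This is a minimal illustrative sketch: the class names, the `analyze`/`collect`/`sync` methods, and the threshold heuristic are assumptions for illustration, not details from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class NetworkDevice:
    """Simulated network (cloud) device that analyzes passive signals."""

    def analyze(self, signals: List[float]) -> str:
        # Illustrative heuristic: a strong average signal implies interest,
        # so the network device instructs the wearable to display an item.
        interest = sum(signals) / len(signals)
        return "display_recommended_item" if interest > 0.5 else "no_action"


@dataclass
class WearableDevice:
    server: NetworkDevice
    collected: List[float] = field(default_factory=list)

    def collect(self, signal: float) -> None:
        # The wearable device collects data locally...
        self.collected.append(signal)

    def sync(self) -> str:
        # ...and sends it to the network device, which returns an instruction.
        return self.server.analyze(self.collected)


device = WearableDevice(server=NetworkDevice())
for s in (0.7, 0.9, 0.6):
    device.collect(s)
print(device.sync())  # -> display_recommended_item
```

In a real deployment the `sync` step would be a network call; here it is a direct method call so the division of labor (collection on the device, analysis on the server) stays visible.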
FIG. 6 shows additional details of an example computer architecture for a computer capable of executing the functionalities described herein such as, for example, those described with reference to FIGS. 1A-4, or any program components thereof as described herein. Thus, the computer architecture 600 illustrated in FIG. 6 represents an architecture for a server computer, a network of server computers, or any other type of computing device suitable for implementing the functionality described herein. The computer architecture 600 may be utilized to execute any aspects of the software components presented herein, such as software components for implementing the e-commerce system 116 and the item listing tool 102. - The
computer architecture 600 illustrated in FIG. 6 includes a central processing unit 602 (“CPU”), a system memory 604, including a random-access memory 606 (“RAM”) and a read-only memory (“ROM”) 608, and a system bus 610 that couples the memory 604 to the CPU 602. A basic input/output system containing the basic routines that help to transfer information between elements within the computer architecture 600, such as during startup, is stored in the ROM 608. The computer architecture 600 further includes a mass storage device 612 for storing an operating system 614, other data, and one or more application programs. The mass storage device 612 may further include one or more of the activity data 102, auction data 104, user profiles 125, or the machine learning engine 125, and/or any of the other software or data components described herein. - The
mass storage device 612 is connected to the CPU 602 through a mass storage controller (not shown) connected to the bus 610. The mass storage device 612 and its associated computer-readable media provide non-volatile storage for the computer architecture 600. Although the description of computer-readable media contained herein refers to a mass storage device, such as a solid-state drive, a hard disk, or a CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available computer storage media or communication media that can be accessed by the computer architecture 600. - According to various implementations, the
computer architecture 600 may operate in a networked environment using logical connections to remote computers through a network 650 and/or another network (not shown). The computer architecture 600 may connect to the network 650 through a network interface unit 616 connected to the bus 610. It should be appreciated that the network interface unit 616 also may be utilized to connect to other types of networks and remote computer systems. The computer architecture 600 also may include an input/output controller 618 for receiving and processing input from a number of other devices, including a keyboard, mouse, or electronic stylus (not shown in FIG. 6). Similarly, the input/output controller 618 may provide output to a display screen, a printer, or other type of output device (also not shown in FIG. 6). It should also be appreciated that a computing system implemented using the disclosed computer architecture 600 may communicate with other computing systems. - It should be appreciated that the software components described herein may, when loaded into the
CPU 602 and executed, transform the CPU 602 and the overall computer architecture 600 from a general-purpose computing system into a special-purpose computing system customized to facilitate the functionality presented herein. The CPU 602 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the CPU 602 may operate as a finite-state machine in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the CPU 602 by specifying how the CPU 602 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the CPU 602. - Encoding the software modules presented herein also may transform the physical structure of the computer-readable media presented herein. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable media, whether the computer-readable media is characterized as primary or secondary storage, and the like. For example, if the computer-readable media is implemented as semiconductor-based memory, the software disclosed herein may be encoded on the computer-readable media by transforming the physical state of the semiconductor memory. For example, the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software also may transform the physical state of such components in order to store data thereupon.
- As another example, the computer-readable media disclosed herein may be implemented using magnetic or optical technology. In such implementations, the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion.
- In light of the above, it should be appreciated that many types of physical transformations take place in the
computer architecture 600 in order to store and execute the software components presented herein. It also should be appreciated that the computer architecture 600 may include other types of computing devices, including smartphones, embedded computer systems, tablet computers, other types of wearable computing devices, and other types of computing devices known to those skilled in the art. It is also contemplated that the computer architecture 600 may not include all of the components shown in FIG. 6, may include other components that are not explicitly shown in FIG. 6, or may utilize an architecture completely different than that shown in FIG. 6. - The following clauses describe multiple possible configurations for implementing the features described in this disclosure. The various configurations described herein are not limiting, nor is every feature from any given configuration required to be present in another configuration. Any two or more of the configurations may be combined unless the context clearly indicates otherwise. As used in this document, “or” means and/or. For example, “A or B” means A without B, B without A, or A and B. As used herein, “comprising” means including all listed features and potentially including the addition of other features that are not listed. “Consisting essentially of” means including the listed features and those additional features that do not materially affect the basic and novel characteristics of the listed features. “Consisting of” means only the listed features to the exclusion of any feature not listed.
- The disclosure presented herein also encompasses the subject matter set forth in the following clauses:
- Example Clause A, a computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item, wherein the online auction is being conducted by an online auctioneer system to facilitate competitive bidding by the plurality of participants for the item; analyzing the activity data to identify user signals that indicate acquisition interest levels for the plurality of participants, wherein individual acquisition interest levels are indicative of intentions of individual participants to acquire the item through the competitive bidding; receiving avatar profile data defining avatar profiles that facilitate dynamic modifications of three-dimensional models for the plurality of participants; determining, based on the avatar profile data, avatar modification states that correspond to the individual acquisition interest levels for the individual participants; and causing at least one computing device to display, in a virtual environment associated with the online auction, a graphical representation of the item and a plurality of avatars, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent the individual acquisition interest levels for the individual participants.
- Example Clause B, the computer-implemented method of Example Clause A, further comprising receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions, wherein identifying the user signals that indicate the acquisition interest levels is based at least in part on the historical activity data.
- Example Clause C, the computer-implemented method of any one of Example Clauses A through B, further comprising: receiving user specific historical activity data defining historical user activity of a particular participant, of the plurality of participants, in association with previous online auctions, wherein a particular acquisition interest level for the particular participant is determined based at least in part on the user specific historical activity data.
- Example Clause D, the computer-implemented method of any one of Example Clauses A through C, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
- Example Clause E, the computer-implemented method of any one of Example Clauses A through D, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and determining at least some aspects of a particular modification state for a particular avatar that corresponds to the particular participant, based on the particular participant currently having the high bid for the item.
- Example Clause F, the computer-implemented method of any one of Example Clauses A through E, wherein the virtual environment is a three-dimensional immersive environment in which one or more three-dimensional objects are rendered.
- Example Clause G, the computer-implemented method of any one of Example Clauses A through F, further comprising: monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and providing the particular participant with at least some avatar abilities with respect to the item within the three-dimensional immersive environment.
- Example Clause H, the computer-implemented method of any one of Example Clauses A through G, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
- Example Clause I, a system, comprising: one or more processors; and a memory in communication with the one or more processors, the memory having computer-readable instructions stored thereupon that, when executed by the one or more processors, cause the one or more processors to: receive activity data defining user activity of a plurality of participants in association with an online auction that facilitates competitive bidding, by the plurality of participants, for an item; analyze the activity data to identify user signals that indicate acquisition interest levels associated with intentions of the plurality of participants to acquire the item through the competitive bidding; receive avatar profile data defining avatar profiles associated with the plurality of participants; determine, based on the avatar profile data, a plurality of avatar modification states that correspond to the acquisition interest levels for individual participants; and cause at least one computing device to display a virtual environment that includes a plurality of avatars being rendered adjacent to a graphical representation of the item, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent individual acquisition interest levels for the individual participants.
- Example Clause J, the system of Example Clause I, wherein the computer-readable instructions further cause the one or more processors to: monitor the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and control at least some abilities of a particular avatar that corresponds to the particular participant based on the particular participant currently having the high bid for the item.
- Example Clause K, the system of any one of Example Clauses I through J, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
- Example Clause L, the system of any one of Example Clauses I through K, wherein the computer-readable instructions further cause the one or more processors to: receive historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploy a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
- Example Clause M, the system of any one of Example Clauses I through L, wherein the acquisition interest levels are determined by analyzing the activity data with respect to the acquisition interest model.
- Example Clause N, the system of any one of Example Clauses I through M, wherein the virtual environment is a three-dimensional immersive environment.
- Example Clause O, the system of any one of Example Clauses I through N, wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
- Example Clause P, a computer-implemented method, comprising: receiving activity data defining user activity of a plurality of participants in association with an online auction for an item; analyzing the activity data to identify user signals that indicate acquisition interest levels for individual participants of the plurality of participants; determining avatar modification states that correspond to individual acquisition interest levels for individual participants of the plurality of participants; and causing at least one computing device to display a virtual environment that includes a graphical representation of the item and a plurality of avatars that are rendered in accordance with the avatar modification states that correspond to the individual acquisition interest levels for individual participants.
- Example Clause Q, the computer-implemented method of Example Clause P, wherein the individual acquisition interest levels for the individual participants are determined based on an acquisition interest model.
- Example Clause R, the computer-implemented method of any one of Example Clauses P through Q, further comprising: receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and deploying a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
- Example Clause S, the computer-implemented method of any one of Example Clauses P through R, wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with different expressive characteristics based on the acquisition interest levels.
- Example Clause T, the computer-implemented method of any one of Example Clauses P through S, wherein the virtual environment is a three-dimensional immersive environment.
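The method of Example Clause A (receive activity data, identify user signals that indicate acquisition interest levels, determine corresponding avatar modification states, and render the avatars accordingly) can be sketched as follows. The event kinds, signal weights, thresholds, and state names here are illustrative assumptions for the sketch, not values prescribed by the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ActivityEvent:
    """One unit of activity data for a participant in the online auction."""
    participant_id: str
    kind: str  # e.g. "bid", "watch", "view" (illustrative event kinds)


# Illustrative per-signal weights; the disclosure does not prescribe values.
SIGNAL_WEIGHTS = {"bid": 1.0, "watch": 0.5, "view": 0.1}


def acquisition_interest_levels(events: List[ActivityEvent]) -> Dict[str, float]:
    """Aggregate weighted user signals per participant (analysis step)."""
    levels: Dict[str, float] = {}
    for e in events:
        levels[e.participant_id] = (
            levels.get(e.participant_id, 0.0) + SIGNAL_WEIGHTS.get(e.kind, 0.0)
        )
    return levels


def avatar_modification_state(level: float) -> str:
    """Map an acquisition interest level to an expressive avatar state."""
    if level >= 1.0:
        return "eager"
    if level >= 0.5:
        return "attentive"
    return "idle"


events = [
    ActivityEvent("alice", "bid"),
    ActivityEvent("bob", "watch"),
    ActivityEvent("carol", "view"),
]
states = {
    pid: avatar_modification_state(lvl)
    for pid, lvl in acquisition_interest_levels(events).items()
}
print(states)  # -> {'alice': 'eager', 'bob': 'attentive', 'carol': 'idle'}
```

The final `states` mapping stands in for the rendering step: each avatar would be drawn in accordance with its modification state so that the competitive landscape is visually perceptible.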
- For ease of understanding, the processes discussed in this disclosure are delineated as separate operations represented as independent blocks. However, these separately delineated operations should not be construed as necessarily order dependent in their performance. The order in which the process is described is not intended to be construed as a limitation, and any number of the described process blocks may be combined in any order to implement the process or an alternate process. Moreover, one or more of the provided operations may be modified or omitted.
- Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts are disclosed as example forms of implementing the claims.
- The terms “a,” “an,” “the” and similar referents used in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural unless otherwise indicated herein or clearly contradicted by context. The terms “based on,” “based upon,” and similar referents are to be construed as meaning “based at least in part” which includes being “based in part” and “based in whole” unless otherwise indicated or clearly contradicted by context.
- It should be appreciated that any reference to “first,” “second,” etc. items and/or abstract concepts within the Summary and/or Detailed Description is not intended to and should not be construed to necessarily correspond to any reference of “first,” “second,” etc. elements of the claims. In particular, within the Summary and/or Detailed Description, items and/or abstract concepts such as, for example, modification states and/or avatars and/or acquisition interest levels may be distinguished by numerical designations without such designations corresponding to the claims or even other paragraphs of the Summary and/or Detailed Description. For example, any designation of a “first acquisition interest level” and “second acquisition interest level” of the participants within any specific paragraph of the Summary and/or Detailed Description is used solely to distinguish two different acquisition interest levels within that specific paragraph—not any other paragraph and particularly not the claims.
- Certain configurations are described herein, including the best mode known to the inventors for carrying out the invention. Of course, variations on these described configurations will become apparent to those of ordinary skill in the art upon reading the foregoing description. Skilled artisans will know how to employ such variations as appropriate, and the configurations disclosed herein may be practiced otherwise than specifically described. Accordingly, all modifications and equivalents of the subject matter recited in the claims appended hereto are included within the scope of this disclosure. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
Claims (20)
1. A computer-implemented method, comprising:
receiving activity data defining user activity of a plurality of participants in association with an online auction for an item, wherein the online auction is being conducted by an online auctioneer system to facilitate competitive bidding by the plurality of participants for the item;
analyzing the activity data to identify user signals that indicate acquisition interest levels for the plurality of participants, wherein individual acquisition interest levels are indicative of intentions of individual participants to acquire the item through the competitive bidding;
receiving avatar profile data defining avatar profiles that facilitate dynamic modifications of three-dimensional models for the plurality of participants;
determining, based on the avatar profile data, avatar modification states that correspond to the individual acquisition interest levels for the individual participants; and
causing at least one computing device to display, in a virtual environment associated with the online auction, a graphical representation of the item and a plurality of avatars, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent the individual acquisition interest levels for the individual participants.
2. The computer-implemented method of claim 1 , further comprising receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions, wherein identifying the user signals that indicate the acquisition interest levels is based at least in part on the historical activity data.
3. The computer-implemented method of claim 1 , further comprising:
receiving user specific historical activity data defining historical user activity of a particular participant, of the plurality of participants, in association with previous online auctions, wherein a particular acquisition interest level for the particular participant is determined based at least in part on the user specific historical activity data.
4. The computer-implemented method of claim 1 , wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
5. The computer-implemented method of claim 1 , further comprising:
monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and
determining at least some aspects of a particular modification state for a particular avatar that corresponds to the particular participant, based on the particular participant currently having the high bid for the item.
6. The computer-implemented method of claim 1 , wherein the virtual environment is a three-dimensional immersive environment in which one or more three-dimensional objects are rendered.
7. The computer-implemented method of claim 6 , further comprising:
monitoring the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and
providing the particular participant with at least some avatar abilities with respect to the item within the three-dimensional immersive environment.
8. The computer-implemented method of claim 1 , wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
9. A system, comprising:
one or more processors; and
a memory in communication with the one or more processors, the memory having computer-readable instructions stored thereupon that, when executed by the one or more processors, cause the one or more processors to:
receive activity data defining user activity of a plurality of participants in association with an online auction that facilitates competitive bidding, by the plurality of participants, for an item;
analyze the activity data to identify user signals that indicate acquisition interest levels associated with intentions of the plurality of participants to acquire the item through the competitive bidding;
receive avatar profile data defining avatar profiles associated with the plurality of participants;
determine, based on the avatar profile data, a plurality of avatar modification states that correspond to the acquisition interest levels for individual participants; and
cause at least one computing device to display a virtual environment that includes a plurality of avatars being rendered adjacent to a graphical representation of the item, wherein individual avatars are rendered in accordance with individual avatar modification states to graphically represent individual acquisition interest levels for the individual participants.
10. The system of claim 9 , wherein the computer-readable instructions further cause the one or more processors to:
monitor the online auction to determine which particular participant, of the plurality of participants, currently has a high bid for the item; and
control at least some abilities of a particular avatar that corresponds to the particular participant based on the particular participant currently having the high bid for the item.
11. The system of claim 9 , wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with a plurality of expressive characteristics based on the acquisition interest levels.
12. The system of claim 9 , wherein the computer-readable instructions further cause the one or more processors to:
receive historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and
deploy a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
13. The system of claim 12 , wherein the acquisition interest levels are determined by analyzing the activity data with respect to the acquisition interest model.
14. The system of claim 9 , wherein the virtual environment is a three-dimensional immersive environment.
15. The system of claim 9 , wherein the at least one computing device comprises an augmented reality (AR) device or a virtual reality (VR) device.
16. A computer-implemented method, comprising:
receiving activity data defining user activity of a plurality of participants in association with an online auction for an item;
analyzing the activity data to identify user signals that indicate acquisition interest levels for individual participants of the plurality of participants;
determining avatar modification states that correspond to individual acquisition interest levels for individual participants of the plurality of participants; and
causing at least one computing device to display a virtual environment that includes a graphical representation of the item and a plurality of avatars that are rendered in accordance with the avatar modification states that correspond to the individual acquisition interest levels for individual participants.
17. The computer-implemented method of claim 16 , wherein the individual acquisition interest levels for the individual participants are determined based on an acquisition interest model.
18. The computer-implemented method of claim 16 , further comprising:
receiving historical activity data defining historical user activity of at least some of the plurality of participants in association with previous online auctions; and
deploying a machine learning engine to generate an acquisition interest model based at least in part on the historical activity data.
19. The computer-implemented method of claim 16 , wherein the avatar modification states facilitate rendering the plurality of avatars in accordance with different expressive characteristics based on the acquisition interest levels.
20. The computer-implemented method of claim 16 , wherein the virtual environment is a three-dimensional immersive environment.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/189,849 US20190156410A1 (en) | 2017-11-17 | 2018-11-13 | Systems and methods for translating user signals into a virtual environment having a visually perceptible competitive landscape |
PCT/US2018/061154 WO2019099593A1 (en) | 2017-11-17 | 2018-11-14 | Systems and methods for translating user signals into a virtual environment having a visually perceptible competitive landscape |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762588189P | 2017-11-17 | 2017-11-17 | |
US16/189,849 US20190156410A1 (en) | 2017-11-17 | 2018-11-13 | Systems and methods for translating user signals into a virtual environment having a visually perceptible competitive landscape |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190156410A1 true US20190156410A1 (en) | 2019-05-23 |
Family
ID=66532446
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/189,776 Pending US20190156377A1 (en) | 2017-11-17 | 2018-11-13 | Rendering virtual content based on items recognized in a real-world environment |
US16/189,720 Active US10891685B2 (en) | 2017-11-17 | 2018-11-13 | Efficient rendering of 3D models using model placement metadata |
US16/189,674 Active 2039-03-12 US11080780B2 (en) | 2017-11-17 | 2018-11-13 | Method, system and computer-readable media for rendering of three-dimensional model data based on characteristics of objects in a real-world environment |
US16/189,817 Active 2039-01-13 US11556980B2 (en) | 2017-11-17 | 2018-11-13 | Method, system, and computer-readable storage media for rendering of object data based on recognition and/or location matching |
US16/189,849 Abandoned US20190156410A1 (en) | 2017-11-17 | 2018-11-13 | Systems and methods for translating user signals into a virtual environment having a visually perceptible competitive landscape |
US17/102,283 Active US11200617B2 (en) | 2017-11-17 | 2020-11-23 | Efficient rendering of 3D models using model placement metadata |
US17/358,615 Abandoned US20210319502A1 (en) | 2017-11-17 | 2021-06-25 | Method, System and Computer-Readable Media for Rendering of Three-Dimensional Model Data Based on Characteristics of Objects in A Real-World Environment |
US18/064,358 Pending US20230109329A1 (en) | 2017-11-17 | 2022-12-12 | Rendering of Object Data Based on Recognition and/or Location Matching |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/189,776 Pending US20190156377A1 (en) | 2017-11-17 | 2018-11-13 | Rendering virtual content based on items recognized in a real-world environment |
US16/189,720 Active US10891685B2 (en) | 2017-11-17 | 2018-11-13 | Efficient rendering of 3D models using model placement metadata |
US16/189,674 Active 2039-03-12 US11080780B2 (en) | 2017-11-17 | 2018-11-13 | Method, system and computer-readable media for rendering of three-dimensional model data based on characteristics of objects in a real-world environment |
US16/189,817 Active 2039-01-13 US11556980B2 (en) | 2017-11-17 | 2018-11-13 | Method, system, and computer-readable storage media for rendering of object data based on recognition and/or location matching |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/102,283 Active US11200617B2 (en) | 2017-11-17 | 2020-11-23 | Efficient rendering of 3D models using model placement metadata |
US17/358,615 Abandoned US20210319502A1 (en) | 2017-11-17 | 2021-06-25 | Method, System and Computer-Readable Media for Rendering of Three-Dimensional Model Data Based on Characteristics of Objects in A Real-World Environment |
US18/064,358 Pending US20230109329A1 (en) | 2017-11-17 | 2022-12-12 | Rendering of Object Data Based on Recognition and/or Location Matching |
Country Status (5)
Country | Link |
---|---|
US (8) | US20190156377A1 (en) |
EP (1) | EP3711011A1 (en) |
KR (1) | KR102447411B1 (en) |
CN (1) | CN111357029A (en) |
WO (5) | WO2019099591A1 (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11348147B2 (en) * | 2020-04-17 | 2022-05-31 | At&T Intellectual Property I, L.P. | Facilitation of value-based sorting of objects |
CN106547769B (en) | 2015-09-21 | 2020-06-02 | Alibaba Group Holding Limited | DOI display method and device
US11200692B2 (en) * | 2017-08-07 | 2021-12-14 | Standard Cognition, Corp | Systems and methods to check-in shoppers in a cashier-less store |
KR102418992B1 (en) * | 2017-11-23 | 2022-07-11 | 삼성전자주식회사 | Electronic device and the Method for providing Augmented Reality Service thereof |
US20200065879A1 (en) * | 2018-08-22 | 2020-02-27 | Midea Group Co., Ltd. | Methods and systems for home device recommendation |
JP6589038B1 (en) * | 2018-12-19 | 2019-10-09 | 株式会社メルカリ | Wearable terminal, information processing terminal, program, and product information display method |
US20220124294A1 (en) * | 2019-02-15 | 2022-04-21 | Xliminal, Inc. | System and method for interactively rendering and displaying 3d objects |
EP3948840A4 (en) * | 2019-03-18 | 2023-07-19 | Geomagical Labs, Inc. | Virtual interaction with three-dimensional indoor room imagery |
US20220215641A1 (en) * | 2019-05-22 | 2022-07-07 | Nec Corporation | Model generation apparatus, model generation system, model generation method |
US11470017B2 (en) * | 2019-07-30 | 2022-10-11 | At&T Intellectual Property I, L.P. | Immersive reality component management via a reduced competition core network component |
CN110717963B (en) * | 2019-08-30 | 2023-08-11 | Hangzhou Qunhe Information Technology Co., Ltd. | WebGL-based hybrid rendering display method, system, and storage medium for a replaceable model
US11341558B2 (en) * | 2019-11-21 | 2022-05-24 | Shopify Inc. | Systems and methods for recommending a product based on an image of a scene |
US11145117B2 (en) * | 2019-12-02 | 2021-10-12 | At&T Intellectual Property I, L.P. | System and method for preserving a configurable augmented reality experience |
EP3872770A1 (en) | 2020-02-28 | 2021-09-01 | Inter Ikea Systems B.V. | A computer implemented method, a device and a computer program product for augmenting a first image with image data from a second image |
US11810595B2 (en) | 2020-04-16 | 2023-11-07 | At&T Intellectual Property I, L.P. | Identification of life events for virtual reality data and content collection |
CN111580670B (en) * | 2020-05-12 | 2023-06-30 | Heilongjiang Institute of Technology | Garden landscape implementation method based on virtual reality
US20210365673A1 (en) * | 2020-05-19 | 2021-11-25 | Board Of Regents, The University Of Texas System | Method and apparatus for discreet person identification on pocket-size offline mobile platform with augmented reality feedback with real-time training capability for usage by universal users |
US20220108000A1 (en) * | 2020-10-05 | 2022-04-07 | Lenovo (Singapore) Pte. Ltd. | Permitting device use based on location recognized from camera input |
IT202100025055A1 (en) * | 2021-09-30 | 2023-03-30 | Geckoway S R L | Scanning system for virtualizing real objects and related method of use for the digital representation of such objects
CN114513647B (en) * | 2022-01-04 | 2023-08-29 | Juhaokan Technology Co., Ltd. | Method and device for transmitting data in three-dimensional virtual scene
US20230418430A1 (en) * | 2022-06-24 | 2023-12-28 | Lowe's Companies, Inc. | Simulated environment for presenting virtual objects and virtual resets |
WO2024036510A1 (en) * | 2022-08-17 | 2024-02-22 | Mak Kwun Yiu | System and method for providing virtual premises |
Family Cites Families (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5347306A (en) | 1993-12-17 | 1994-09-13 | Mitsubishi Electric Research Laboratories, Inc. | Animated electronic meeting place |
US7937312B1 (en) | 1995-04-26 | 2011-05-03 | Ebay Inc. | Facilitating electronic commerce transactions through binding offers |
JP2003529851A (en) | 2000-03-31 | 2003-10-07 | Project B, LLC | Virtual standing room system
US6813612B1 (en) | 2000-05-25 | 2004-11-02 | Nancy J. Rabenold | Remote bidding supplement for traditional live auctions |
US8601386B2 (en) | 2007-04-20 | 2013-12-03 | Ingenio Llc | Methods and systems to facilitate real time communications in virtual reality |
US7945482B2 (en) * | 2007-08-23 | 2011-05-17 | Ebay Inc. | Viewing shopping information on a network-based social platform |
US20100125525A1 (en) | 2008-11-18 | 2010-05-20 | Inamdar Anil B | Price alteration through buyer affected aggregation of purchasers |
US20100191770A1 (en) | 2009-01-27 | 2010-07-29 | Apple Inc. | Systems and methods for providing a virtual fashion closet |
US8261158B2 (en) | 2009-03-13 | 2012-09-04 | Fusion-Io, Inc. | Apparatus, system, and method for using multi-level cell solid-state storage as single level cell solid-state storage |
EP2259225A1 (en) | 2009-06-01 | 2010-12-08 | Alcatel Lucent | Automatic 3D object recommendation device in a personal physical environment |
US20110295722A1 (en) | 2010-06-09 | 2011-12-01 | Reisman Richard R | Methods, Apparatus, and Systems for Enabling Feedback-Dependent Transactions |
US8749557B2 (en) | 2010-06-11 | 2014-06-10 | Microsoft Corporation | Interacting with user interface via avatar |
US20120084170A1 (en) | 2010-09-30 | 2012-04-05 | Adair Aaron J | Cumulative point system and scoring of an event based on user participation in the event |
US20120084168A1 (en) | 2010-09-30 | 2012-04-05 | Adair Aaron J | Indication of the remaining duration of an event with a duration recoil feature |
US9996972B1 (en) | 2011-06-10 | 2018-06-12 | Amazon Technologies, Inc. | User/object interactions in an augmented reality environment |
US9921641B1 (en) | 2011-06-10 | 2018-03-20 | Amazon Technologies, Inc. | User/object interactions in an augmented reality environment |
US10008037B1 (en) | 2011-06-10 | 2018-06-26 | Amazon Technologies, Inc. | User/object interactions in an augmented reality environment |
US20150070347A1 (en) | 2011-08-18 | 2015-03-12 | Layar B.V. | Computer-vision based augmented reality system |
US9449342B2 (en) | 2011-10-27 | 2016-09-20 | Ebay Inc. | System and method for visualization of items in an environment using augmented reality |
US10223710B2 (en) | 2013-01-04 | 2019-03-05 | Visa International Service Association | Wearable intelligent vision device apparatuses, methods and systems |
US20130297460A1 (en) | 2012-05-01 | 2013-11-07 | Zambala Lllp | System and method for facilitating transactions of a physical product or real life service via an augmented reality environment |
US20130293530A1 (en) | 2012-05-04 | 2013-11-07 | Kathryn Stone Perez | Product augmentation and advertising in see through displays |
US20140040004A1 (en) | 2012-08-02 | 2014-02-06 | Google Inc. | Identifying a deal in shopping results |
US9336541B2 (en) | 2012-09-21 | 2016-05-10 | Paypal, Inc. | Augmented reality product instructions, tutorials and visualizations |
US11397462B2 (en) | 2012-09-28 | 2022-07-26 | Sri International | Real-time human-machine collaboration using big data driven augmented reality technologies |
US9830632B2 (en) | 2012-10-10 | 2017-11-28 | Ebay Inc. | System and methods for personalization and enhancement of a marketplace |
US20140130076A1 (en) | 2012-11-05 | 2014-05-08 | Immersive Labs, Inc. | System and Method of Media Content Selection Using Adaptive Recommendation Engine |
US20140164282A1 (en) * | 2012-12-10 | 2014-06-12 | Tibco Software Inc. | Enhanced augmented reality display for use by sales personnel |
US20140172570A1 (en) | 2012-12-14 | 2014-06-19 | Blaise Aguera y Arcas | Mobile and augmented-reality advertisements using device imaging |
US20140214547A1 (en) | 2013-01-25 | 2014-07-31 | R4 Technologies, Llc | Systems and methods for augmented retail reality |
US20140279263A1 (en) * | 2013-03-13 | 2014-09-18 | Truecar, Inc. | Systems and methods for providing product recommendations |
US20140267228A1 (en) | 2013-03-14 | 2014-09-18 | Microsoft Corporation | Mapping augmented reality experience to various environments |
US20140282220A1 (en) | 2013-03-14 | 2014-09-18 | Tim Wantland | Presenting object models in augmented reality images |
US9286727B2 (en) | 2013-03-25 | 2016-03-15 | Qualcomm Incorporated | System and method for presenting true product dimensions within an augmented real-world setting |
US9390561B2 (en) | 2013-04-12 | 2016-07-12 | Microsoft Technology Licensing, Llc | Personal holographic billboard |
US9904946B2 (en) * | 2013-07-18 | 2018-02-27 | Paypal, Inc. | Reverse showrooming and merchant-customer engagement system |
US20150294385A1 (en) | 2014-04-10 | 2015-10-15 | Bank Of America Corporation | Display of the budget impact of items viewable within an augmented reality display |
US9588342B2 (en) * | 2014-04-11 | 2017-03-07 | Bank Of America Corporation | Customer recognition through use of an optical head-mounted display in a wearable computing device |
CN105320931B (en) * | 2014-05-26 | 2019-09-20 | Kyocera Document Solutions Inc. | Item information providing device and item information providing system
US9959675B2 (en) | 2014-06-09 | 2018-05-01 | Microsoft Technology Licensing, Llc | Layout design using locally satisfiable proposals |
US20150379460A1 (en) | 2014-06-27 | 2015-12-31 | Kamal Zamer | Recognizing neglected items |
US10438229B1 (en) * | 2014-06-30 | 2019-10-08 | Groupon, Inc. | Systems and methods for providing dimensional promotional offers |
US20160012475A1 (en) | 2014-07-10 | 2016-01-14 | Google Inc. | Methods, systems, and media for presenting advertisements related to displayed content upon detection of user attention |
US9728010B2 (en) | 2014-12-30 | 2017-08-08 | Microsoft Technology Licensing, Llc | Virtual representations of real-world objects |
US20160217157A1 (en) | 2015-01-23 | 2016-07-28 | Ebay Inc. | Recognition of items depicted in images |
US20170061700A1 (en) * | 2015-02-13 | 2017-03-02 | Julian Michael Urbach | Intercommunication between a head mounted display and a real world object |
US20160275723A1 (en) | 2015-03-20 | 2016-09-22 | Deepkaran Singh | System and method for generating three dimensional representation using contextual information |
CA2989894A1 (en) | 2015-06-24 | 2016-12-29 | Magic Leap, Inc. | Augmented reality devices, systems and methods for purchasing |
US20170083954A1 (en) * | 2015-08-10 | 2017-03-23 | Reviews From Friends, Inc. | Obtaining Referral Using Customer Database |
US10311493B2 (en) * | 2015-09-15 | 2019-06-04 | Facebook, Inc. | Managing commerce-related communications within a social networking system |
US10049500B2 (en) * | 2015-09-22 | 2018-08-14 | 3D Product Imaging Inc. | Augmented reality e-commerce for home improvement |
US10497043B2 (en) | 2015-09-24 | 2019-12-03 | Intel Corporation | Online clothing e-commerce systems and methods with machine-learning based sizing recommendation |
US20170256096A1 (en) | 2016-03-07 | 2017-09-07 | Google Inc. | Intelligent object sizing and placement in a augmented / virtual reality environment |
US10163271B1 (en) * | 2016-04-04 | 2018-12-25 | Occipital, Inc. | System for multimedia spatial annotation, visualization, and recommendation |
US10395435B2 (en) * | 2016-04-04 | 2019-08-27 | Occipital, Inc. | System for multimedia spatial annotation, visualization, and recommendation |
US10356028B2 (en) | 2016-05-25 | 2019-07-16 | Alphabet Communications, Inc. | Methods, systems, and devices for generating a unique electronic communications account based on a physical address and applications thereof |
US10134190B2 (en) | 2016-06-14 | 2018-11-20 | Microsoft Technology Licensing, Llc | User-height-based rendering system for augmented reality objects |
US20180006990A1 (en) * | 2016-06-30 | 2018-01-04 | Jean Alexandera Munemann | Exclusive social network based on consumption of luxury goods |
US10068379B2 (en) | 2016-09-30 | 2018-09-04 | Intel Corporation | Automatic placement of augmented reality models |
US10332317B2 (en) | 2016-10-25 | 2019-06-25 | Microsoft Technology Licensing, Llc | Virtual reality and cross-device experiences |
KR20190075988A (en) | 2016-10-26 | 2019-07-01 | 오캠 테크놀로지스 리미티드 | Wearable devices and methods that analyze images and provide feedback |
EP3336805A1 (en) | 2016-12-15 | 2018-06-20 | Thomson Licensing | Method and device for a placement of a virtual object of an augmented or mixed reality application in a real-world 3d environment |
US11037202B2 (en) | 2016-12-27 | 2021-06-15 | Paypal, Inc. | Contextual data in augmented reality processing for item recommendations |
WO2018136536A1 (en) | 2017-01-17 | 2018-07-26 | Fair Ip, Llc | Data processing system and method for rules/machine learning model-based screening of inventory |
CA3005051A1 (en) * | 2017-05-16 | 2018-11-16 | Michael J. Schuster | Augmented reality task identification and assistance in construction, remodeling, and manufacturing |
US10949667B2 (en) | 2017-09-14 | 2021-03-16 | Ebay Inc. | Camera platform and object inventory control |
US20190156377A1 (en) | 2017-11-17 | 2019-05-23 | Ebay Inc. | Rendering virtual content based on items recognized in a real-world environment |
- 2018
- 2018-11-13 US US16/189,776 patent/US20190156377A1/en active Pending
- 2018-11-13 US US16/189,720 patent/US10891685B2/en active Active
- 2018-11-13 US US16/189,674 patent/US11080780B2/en active Active
- 2018-11-13 US US16/189,817 patent/US11556980B2/en active Active
- 2018-11-13 US US16/189,849 patent/US20190156410A1/en not_active Abandoned
- 2018-11-14 WO PCT/US2018/061152 patent/WO2019099591A1/en active Application Filing
- 2018-11-14 CN CN201880074250.6A patent/CN111357029A/en active Pending
- 2018-11-14 WO PCT/US2018/061145 patent/WO2019099585A1/en active Application Filing
- 2018-11-14 KR KR1020207013835A patent/KR102447411B1/en active IP Right Grant
- 2018-11-14 EP EP18814749.0A patent/EP3711011A1/en active Pending
- 2018-11-14 WO PCT/US2018/061154 patent/WO2019099593A1/en active Application Filing
- 2018-11-14 WO PCT/US2018/061139 patent/WO2019099581A1/en unknown
- 2018-11-14 WO PCT/US2018/061151 patent/WO2019099590A1/en active Application Filing
- 2020
- 2020-11-23 US US17/102,283 patent/US11200617B2/en active Active
- 2021
- 2021-06-25 US US17/358,615 patent/US20210319502A1/en not_active Abandoned
- 2022
- 2022-12-12 US US18/064,358 patent/US20230109329A1/en active Pending
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030126068A1 (en) * | 1999-11-18 | 2003-07-03 | Eric Hauk | Virtual trading floor system |
US20020128952A1 (en) * | 2000-07-06 | 2002-09-12 | Raymond Melkomian | Virtual interactive global exchange |
US20050119963A1 (en) * | 2002-01-24 | 2005-06-02 | Sung-Min Ko | Auction method for real-time displaying bid ranking |
US8965460B1 (en) * | 2004-01-30 | 2015-02-24 | Ip Holdings, Inc. | Image and augmented reality based networks using mobile devices and intelligent electronic glasses |
US8285638B2 (en) * | 2005-02-04 | 2012-10-09 | The Invention Science Fund I, Llc | Attribute enhancement in virtual world environments |
US20080091692A1 (en) * | 2006-06-09 | 2008-04-17 | Christopher Keith | Information collection in multi-participant online communities |
US20080147566A1 (en) * | 2006-12-18 | 2008-06-19 | Bellsouth Intellectual Property Corporation | Online auction analysis and recommendation tool |
US20080208749A1 (en) * | 2007-02-20 | 2008-08-28 | Andrew Wallace | Method and system for enabling commerce using bridge between real world and proprietary environments |
US20090063983A1 (en) * | 2007-08-27 | 2009-03-05 | Qurio Holdings, Inc. | System and method for representing content, user presence and interaction within virtual world advertising environments |
US9111285B2 (en) * | 2007-08-27 | 2015-08-18 | Qurio Holdings, Inc. | System and method for representing content, user presence and interaction within virtual world advertising environments |
US8223156B2 (en) * | 2008-09-26 | 2012-07-17 | International Business Machines Corporation | Time dependent virtual universe avatar rendering |
US20100079467A1 (en) * | 2008-09-26 | 2010-04-01 | International Business Machines Corporation | Time dependent virtual universe avatar rendering |
US20110040645A1 (en) * | 2009-08-14 | 2011-02-17 | Rabenold Nancy J | Virtual world integrated auction |
US20110072367A1 (en) * | 2009-09-24 | 2011-03-24 | etape Partners, LLC | Three dimensional digitally rendered environments |
US20110270701A1 (en) * | 2010-04-30 | 2011-11-03 | Benjamin Joseph Black | Displaying active recent bidders in a bidding fee auction |
US20120084169A1 (en) * | 2010-09-30 | 2012-04-05 | Adair Aaron J | Online auction optionally including multiple sellers and multiple auctioneers |
US20120246036A1 (en) * | 2011-03-22 | 2012-09-27 | Autonig, LLC | System, method and computer readable medium for conducting a vehicle auction, automatic vehicle condition assessment and automatic vehicle acquisition attractiveness determination |
US20130159110A1 (en) * | 2011-12-14 | 2013-06-20 | Giridhar Rajaram | Targeting users of a social networking system based on interest intensity |
US20130257877A1 (en) * | 2012-03-30 | 2013-10-03 | Videx, Inc. | Systems and Methods for Generating an Interactive Avatar Model |
US20140058812A1 (en) * | 2012-08-17 | 2014-02-27 | Augme Technologies, Inc. | System and method for interactive mobile ads |
US20140143081A1 (en) * | 2012-11-16 | 2014-05-22 | Nextlot, Inc. | Interactive Online Auction System |
US9870716B1 (en) * | 2013-01-26 | 2018-01-16 | Ip Holdings, Inc. | Smart glasses and smart watches for real time connectivity and health |
US20140279164A1 (en) * | 2013-03-15 | 2014-09-18 | Auction.Com, Llc | Virtual online auction forum |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10891685B2 (en) | 2017-11-17 | 2021-01-12 | Ebay Inc. | Efficient rendering of 3D models using model placement metadata |
US11080780B2 (en) | 2017-11-17 | 2021-08-03 | Ebay Inc. | Method, system and computer-readable media for rendering of three-dimensional model data based on characteristics of objects in a real-world environment |
US11200617B2 (en) | 2017-11-17 | 2021-12-14 | Ebay Inc. | Efficient rendering of 3D models using model placement metadata |
US11556980B2 (en) | 2017-11-17 | 2023-01-17 | Ebay Inc. | Method, system, and computer-readable storage media for rendering of object data based on recognition and/or location matching |
US10636253B2 (en) * | 2018-06-15 | 2020-04-28 | Max Lucas | Device to execute a mobile application to allow musicians to perform and compete against each other remotely |
Also Published As
Publication number | Publication date |
---|---|
KR102447411B1 (en) | 2022-09-28 |
WO2019099581A1 (en) | 2019-05-23 |
US11080780B2 (en) | 2021-08-03 |
KR20200073256A (en) | 2020-06-23 |
WO2019099593A1 (en) | 2019-05-23 |
US20190156403A1 (en) | 2019-05-23 |
US11200617B2 (en) | 2021-12-14 |
CN111357029A (en) | 2020-06-30 |
US20190156377A1 (en) | 2019-05-23 |
WO2019099591A1 (en) | 2019-05-23 |
US10891685B2 (en) | 2021-01-12 |
US20210073901A1 (en) | 2021-03-11 |
US11556980B2 (en) | 2023-01-17 |
WO2019099585A1 (en) | 2019-05-23 |
US20190156393A1 (en) | 2019-05-23 |
EP3711011A1 (en) | 2020-09-23 |
WO2019099590A1 (en) | 2019-05-23 |
US20230109329A1 (en) | 2023-04-06 |
US20190156582A1 (en) | 2019-05-23 |
US20210319502A1 (en) | 2021-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190156410A1 (en) | Systems and methods for translating user signals into a virtual environment having a visually perceptible competitive landscape | |
US20200219302A1 (en) | Method for Sharing Emotions Through the Creation of Three-Dimensional Avatars and Their Interaction | |
US20190251603A1 (en) | Systems and methods for a machine learning based personalized virtual store within a video game using a game engine | |
US11610250B2 (en) | Generating a product recommendation based on a user reaction | |
US11514132B2 (en) | Automatic website data migration | |
US11468618B2 (en) | Animated expressive icon | |
US11024101B1 (en) | Messaging system with augmented reality variant generation | |
CN112567360A (en) | Content recommendation system | |
US11706167B2 (en) | Generating and accessing video content for products | |
US11934643B2 (en) | Analyzing augmented reality content item usage data | |
CN115668897A (en) | Context-based augmented reality communication | |
US20190180319A1 (en) | Methods and systems for using a gaming engine to optimize lifetime value of game players with advertising and in-app purchasing | |
US11347932B2 (en) | Decoupling website service from presentation layer | |
WO2021252501A1 (en) | Reply interface with stickers for messaging system | |
CN114503165A (en) | Automatic dance animation | |
KR20220154816A (en) | Location Mapping for Large Scale Augmented Reality | |
CN116034310A (en) | Smart glasses with outward facing display | |
US20230289560A1 (en) | Machine learning techniques to predict content actions | |
EP4272093A1 (en) | Engagement analysis based on labels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: EBAY INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YANKOVICH, STEVE;TIMONEN, JOSH;SIGNING DATES FROM 20181109 TO 20181112;REEL/FRAME:047490/0881
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION