WO2017212484A1 - Method and system for data gathering and user personalization techniques in augmented, mixed and virtual reality experiences - Google Patents

Method and system for data gathering and user personalization techniques in augmented, mixed and virtual reality experiences

Info

Publication number
WO2017212484A1
WO2017212484A1 (PCT/IL2017/050631)
Authority
WO
WIPO (PCT)
Prior art keywords
user
data
virtual
augmented
metadata
Prior art date
Application number
PCT/IL2017/050631
Other languages
French (fr)
Inventor
Adiel GUR
Alon Melchner
Original Assignee
Gur Adiel
Alon Melchner
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gur Adiel and Alon Melchner
Publication of WO2017212484A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/44222 Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N 21/44224 Monitoring of user activity on external systems, e.g. Internet browsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3438 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment; monitoring of user actions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866 Management of end-user data
    • H04N 21/25891 Management of end-user data being end-user preferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4784 Supplemental services, e.g. displaying phone caller identification, shopping application receiving rewards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/65 Transmission of management data between client and server
    • H04N 21/658 Transmission by the client directed to the server
    • H04N 21/6582 Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/812 Monomedia components thereof involving advertisement data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • In the system 200 of FIG. 2 (described further in the detailed description below), sensors 202 are oriented toward a region of interest 208 within the virtual and/or augmented environments, which includes at least a portion of a virtual element 210, in which an object of interest 212 (in this example, a hand) moves across and in contact with the virtual element 210 along the indicated path 214.
  • the sensors are positioned to receive a signal when the object of interest 212 interacts with virtual element 210, for capturing the signals propagating therethrough.
  • one or more of the sensors 202 are disposed opposite the motion to be detected, e.g., where the hand 212 is expected to move.
  • Processing system 206, which can be, e.g., a computer system, can control the operation of sensors 202 to capture the user's interaction with the region of interest 208 and the video, audio, location and/or haptic feedback signals propagating through the virtual element 210. Based on the captured signals, processing system 206 can determine the position, location and/or motion of object 212. In one implementation, processing system 206 stores a table of signal signatures, i.e. response characteristics, produced by a specific gesture and/or event (e.g., a hand move) performed on and/or towards various virtual objects.
  • the user can be instructed to perform this gesture when the system is first used on a particular virtual object, and the response characteristics are detected by processing system 206 (via sensors 202) and compared to find the best-matching signature.
  • Each signature is associated with a particular medium and, more importantly, with the speed of event detection therein. Accordingly, when the best-matching signature is located, the associated value is used.
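As an illustration of the signature-lookup step just described, the following minimal Python sketch stores a table of response characteristics and returns the best-matching signature. The `SignalSignature` fields, the Euclidean distance metric and all sample values are assumptions made for illustration, not the patent's specified implementation.

```python
# Hypothetical sketch of signature matching; fields and metric are assumed.
from dataclasses import dataclass
import math

@dataclass
class SignalSignature:
    gesture: str            # e.g. "hand_move"
    medium: str             # virtual object/material the gesture targets
    response: list[float]   # stored response characteristics
    detection_speed: float  # speed of event detection in this medium (ms)

def best_matching_signature(captured: list[float],
                            table: list[SignalSignature]) -> SignalSignature:
    """Compare a captured response against the stored table and return
    the closest signature (smallest Euclidean distance)."""
    return min(table, key=lambda s: math.dist(captured, s.response))

# Usage: the detection speed of the winning signature would then be used
# when timing subsequent events on that virtual object.
table = [
    SignalSignature("hand_move", "glass_panel", [0.9, 0.2, 0.4], 12.0),
    SignalSignature("hand_move", "wood_panel", [0.5, 0.7, 0.1], 20.0),
]
match = best_matching_signature([0.85, 0.25, 0.38], table)
print(match.medium, match.detection_speed)
```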
  • FIG. 3 graphically illustrates, according to another preferred embodiment of the present invention, an example of computerized system for implementing the invention 300.
  • the systems and methods described herein can be implemented in software or hardware or any combination thereof.
  • the systems and methods described herein can be implemented using one or more computing devices which may or may not be physically or logically separate from each other. Additionally, various aspects of the methods described herein may be combined or merged into other functions.
  • the illustrated system elements could be combined into a single hardware device or separated into multiple hardware devices. If multiple hardware devices are used, the hardware devices could be physically located proximate to or remotely from each other.
  • the methods can be implemented in a computer program product accessible from a computer-usable or computer-readable storage medium that provides program code for use by or in connection with a computer or any instruction execution system.
  • a computer-usable or computer-readable storage medium can be any apparatus that can contain or store the program for use by or in connection with the computer or instruction execution system, apparatus, or device.
  • a data processing system suitable for storing and/or executing the corresponding program code can include at least one processor coupled directly or indirectly to computerized data storage devices such as memory elements.
  • Input/output (I/O) devices can be coupled to the system.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
  • the features can be implemented on a computer with a display device, such as an LCD (liquid crystal display), virtual display, or another type of monitor for displaying information to the user, and a keyboard and an input device, such as a mouse or trackball by which the user can provide input to the computer.
  • a computer program can be a set of instructions that can be used, directly or indirectly, in a computer.
  • the systems and methods described herein can be implemented using programming languages such as Flash™, Java™, C++, C, C#, Visual Basic™, JavaScript™, PHP, XML, HTML, etc., or a combination of programming languages, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • the software can include, but is not limited to, firmware, resident software, microcode, etc. Protocols such as SOAP/HTTP may be used in implementing interfaces between programming modules.
  • the components and functionality described herein may be implemented on any desktop operating system executing in a virtualized or non-virtualized environment, using any programming language suitable for software development, including, but not limited to, different versions of Microsoft Windows™, Apple™ Mac™, iOS™, Android™, Unix™/X-Windows™, Windows Mobile™, Linux™, etc.
  • the system could be implemented using a web application framework, such as Ruby on Rails.
  • the processing system can be in communication with a computerized data storage system.
  • the data storage system can include a non-relational or relational data store, such as a MySQL™ or other relational database. Other physical and logical database types could be used.
  • the data store may be a database server, such as Microsoft SQL Server™, Oracle™, IBM DB2™, SQLite™, or any other database software, relational or otherwise.
  • the data store may store the information identifying syntactical tags and any information required to operate on syntactical tags.
  • the processing system may use object- oriented programming and may store data in objects.
  • the processing system may use an object-relational mapper (ORM) to store the data objects in a relational database.
  • an RDBMS can be used.
  • tables in the RDBMS can include columns that represent coordinates.
  • data representing user events, virtual elements, etc. can be stored in tables in the RDBMS.
  • the tables can have pre-defined relationships between them.
  • the tables can also have adjuncts associated with the coordinates.
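By way of a non-authoritative example of the RDBMS layout suggested above, the sketch below creates tables whose columns represent coordinates and stores user events against virtual elements. All table and column names are invented for illustration.

```python
# Illustrative schema only; names and fields are assumptions.
import sqlite3

conn = sqlite3.connect("user_events.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS virtual_element (
    element_id INTEGER PRIMARY KEY,
    name       TEXT,
    x REAL, y REAL, z REAL            -- element coordinates in the scene
);
CREATE TABLE IF NOT EXISTS user_event (
    event_id   INTEGER PRIMARY KEY,
    element_id INTEGER REFERENCES virtual_element(element_id),
    event_type TEXT,                  -- e.g. 'gaze', 'touch'
    ts         REAL,                  -- event timestamp
    x REAL, y REAL, z REAL            -- where the event occurred
);
""")
conn.execute("INSERT INTO virtual_element VALUES (1, 'banner', 0.0, 1.5, -2.0)")
conn.execute("INSERT INTO user_event VALUES (NULL, 1, 'gaze', 12.7, 0.1, 1.4, -1.9)")
conn.commit()
```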
  • Suitable processors for the execution of a program of instructions include, but are not limited to, general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer.
  • a processor may receive and store instructions and data from a computerized data storage device such as a read-only memory, a random access memory, both, or any combination of the data storage devices described herein.
  • a processor may include any processing circuitry or control circuitry operative to control the operations and performance of an electronic device.
  • the processor may also include, or be operatively coupled to communicate with, one or more data storage devices for storing data.
  • data storage devices can include, as non-limiting examples, magnetic disks (including internal hard disks and removable disks), magneto-optical disks, optical disks, read-only memory, random access memory, and/or flash storage.
  • Storage devices suitable for tangibly embodying computer program instructions and data can also include all forms of non-volatile memory, including, for example, semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
  • the systems, modules, and methods described herein can be implemented using any combination of software or hardware elements.
  • the systems, modules, and methods described herein can be implemented using one or more virtual machines operating alone or in combination with each other. Any applicable virtualization solution can be used for encapsulating a physical computing machine platform into a virtual machine that is executed under the control of virtualization software running on a hardware computing platform or host.
  • the virtual machine can have both virtual system hardware and guest operating system software.
  • the systems and methods described herein can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them.
  • the components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks that form the Internet.
  • One or more embodiments of the invention may be practiced with other computer system configurations, including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, etc.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a network.
  • the implemented constructing an agent-accessible user technology database can be executed using a computerized process according to the example method 400 illustrated in FIG. 4.
  • the method 400 can first provide a conclusion-data model 402 of the augmented, mixed and/or virtual reality environment based on said element data comprising the element metadata and user event metadata, said conclusion-data model being used by an intelligent agent to compute the user's event parameters, and wherein said conclusion-data model is displayable to a user of the data model; analyze and correlate element data 404 comprising the element metadata and user event metadata; construct a user event database 406 for the augmented and/or virtual reality experiences using said analysis of the element data and the element metadata in relation to the user event metadata; and enable the agent to lease a portion of the data 408, displayed in the user's augmented and/or virtual reality experiences, of the user technology database that is associated with the element data and element metadata in the user event database.
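A minimal, hypothetical sketch of the flow of method 400 follows, assuming simple dictionary-based metadata records. The function names, record fields and matching rule are invented solely to illustrate steps 404 through 408.

```python
# Hypothetical sketch of steps 404-408; names and fields are assumptions.
def build_user_event_database(element_metadata, user_event_metadata):
    """Correlate element metadata with user event metadata (step 404)
    and index the result per element (step 406)."""
    db = {}
    for event in user_event_metadata:
        element = element_metadata.get(event["element_id"])
        if element is not None:
            db.setdefault(event["element_id"], []).append(
                {"element": element, "event": event})
    return db

def lease_to_agent(db, element_ids):
    """Expose only the requested portion of the database to the
    intelligent agent (step 408)."""
    return {eid: db[eid] for eid in element_ids if eid in db}

# Usage with toy records:
edm = {7: {"name": "banner", "position": (0.0, 1.5, -2.0)}}
uem = [{"element_id": 7, "event_type": "gaze", "ts": 3.1}]
db = build_user_event_database(edm, uem)
print(lease_to_agent(db, [7]))
```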
  • Fig. 5 is a schematic illustration of an example of the tracking function using the system 200 according to one implementation of the technology disclosed.
  • the system records data in a perspective camera (sensor) projection area 520 for an object at the center of the user's attention 510, wherein each marked black point represents a bounding volume point 512 referenced to an object that is partially rendered by the camera 522, measured by the distance from the closest point in the bounding volume 518, and from the center of the object 514, to the center of the camera's projection area 516 in the augmented, mixed and/or virtual reality environments.
  • in augmented/mixed/virtual reality devices, in order to track the user's movement data, the system employs camera sensors (one for each eye) that move accordingly, based on the data acquired from one or more of the accelerometers, magnetometer and/or gyroscope.
  • the present invention discloses recording not only the camera rotation but also the proximity of all objects that are rendered each frame to the center of the camera, in order to generate the conclusion-data model of the user's interest and interaction within the augmented, mixed and/or virtual reality environments.
  • Drawn objects can be detected by calculating a bounding volume, based on the minimum and maximum vertices (3D points) of the 3D object.
  • the system employs transformation of the 3D points to the screen (camera) coordinates and tracks whether any and/or all of the marked black points are within the camera render range (viewport).
  • the system employs the method described herein to obtain the 2D points on the camera's projection area 520 from the 3D points in 3D space, and to calculate the distance between these 2D points and the center point of the camera's projection area 516, i.e. d = √((x2 - x1)² + (y2 - y1)²), where (x1, y1) represents the center point of the camera's projection area and (x2, y2) represents any of the points on the object's bounding volume or its center.
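The following sketch illustrates the FIG. 5 computation under common graphics conventions: an axis-aligned bounding volume is derived from the object's minimum and maximum vertices, its points are transformed to 2D projection-area coordinates, tested against the viewport, and measured against the center point. The view-projection matrix and all helper names are assumptions, not the patent's prescribed pipeline.

```python
# Hedged sketch of the FIG. 5 tracking computation; names are assumed.
import math
import numpy as np

def bounding_volume_points(vertices):
    """Corners of the axis-aligned bounding volume spanned by the object's
    minimum and maximum vertices (3D points)."""
    vmin, vmax = np.min(vertices, axis=0), np.max(vertices, axis=0)
    return [np.array([x, y, z]) for x in (vmin[0], vmax[0])
                                for y in (vmin[1], vmax[1])
                                for z in (vmin[2], vmax[2])]

def project_to_screen(point3d, view_proj, width, height):
    """Transform a 3D point to 2D coordinates on the camera's projection area."""
    p = view_proj @ np.append(point3d, 1.0)   # to clip space
    if p[3] == 0:
        return None
    ndc = p[:3] / p[3]                        # normalized device coordinates
    return ((ndc[0] + 1) * 0.5 * width, (1 - ndc[1]) * 0.5 * height)

def in_viewport(pt, width, height):
    """Is the projected point within the camera render range (viewport)?"""
    return pt is not None and 0 <= pt[0] <= width and 0 <= pt[1] <= height

def distance_to_center(pt, width, height):
    """d = sqrt((x2 - x1)^2 + (y2 - y1)^2) to the projection-area center."""
    return math.hypot(pt[0] - width / 2, pt[1] - height / 2)

# Toy usage with an identity matrix standing in for a real camera:
vp = np.eye(4)
corners = bounding_volume_points(np.array([[-0.2, -0.2, 0.4], [0.2, 0.2, 0.6]]))
for corner in corners:
    pt = project_to_screen(corner, vp, 1920, 1080)
    if in_viewport(pt, 1920, 1080):
        print(round(distance_to_center(pt, 1920, 1080), 1))
```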

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention extends to methods, systems, and devices for data capture for collecting user event data generated during user events in augmented and/or virtual reality experiences and recorded for the use of an intelligent agent, comprising: electronically collecting and storing augmented and/or virtual element display metadata, said element display metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments; electronically analyzing and correlating element display metadata predetermined to be associated with at least one of the virtual elements, wherein each of said plurality of virtual elements is associated with a ranking generated by the user's interaction with the experience content; electronically collecting and storing user event metadata, said user event metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing based on user interaction with them and the position of said elements in relation to one another and the user in the augmented and/or virtual environments; electronically generating a conclusion-data model of the augmented, mixed and/or virtual reality environment based on said element data comprising the element metadata and user event metadata, wherein said conclusion-data model is used by an intelligent agent to compute the user's event parameters, and wherein said conclusion-data model is displayable to a user of the data model; and electronically storing said conclusion-data model of the augmented and/or virtual environment on said computer storage medium.

Description

METHOD AND SYSTEM FOR DATA GATHERING AND USER
PERSONALIZATION TECHNIQUES IN AUGMENTED, MIXED AND VIRTUAL REALITY EXPERIENCES
FIELD OF THE INVENTION
[1] The present invention generally relates to data gathering and personalization techniques. In particular, it relates to techniques used to perceive a user's intent and preferences within augmented, mixed and virtual reality experiences and to tailor the data presented to the user accordingly, such as advertising services or reward content offered across augmented, mixed and virtual reality experiences.
BACKGROUND OF THE INVENTION
[2] The following description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[3] Virtual, augmented or mixed reality environments are generated by computers using, in part, data that describes the environment. This data may describe, for example, various objects that a user may sense and interact with. Examples of these objects include objects that are rendered and displayed for a user to see, audio that is played for a user to hear, and tactile (or haptic) feedback for a user to feel. Users may sense and interact with the virtual and augmented reality environments through a variety of visual, auditory and tactile means. Virtual, augmented or mixed realities generally refer to altering a view of reality. Artificial information about a real environment can be overlaid over a view of the real environment. The artificial information can be interactive or otherwise manipulable, providing the user of such information with an altered, and often enhanced, perception of reality. However, virtual and augmented reality environments are still a relatively new area of interest with limited present-day experiences.
[4] Data gathering techniques in different computerized environments, and the question of how the gathered information can be applied, have led information technology companies to develop intelligent agent systems. Intelligent agents are software entities that gather data about a user's preferences, habits, and interests in different environments, and can then apply that data to deliver personalized services and/or content to the user. The user benefits from such tailored information by suffering fewer irritating distractions, and by learning exactly the information that he or she wishes or needs to know. The provider of tailored information benefits too, because the user is more likely to buy something that is relevant to his or her needs and aspirations.
[5] At present, intelligent agent systems can only collect data about a user's events when that person is using a computing device through physical interaction. Notably, such use will often involve application computing (accessing different content experiences), browsing the Internet and interacting with web sites, for example when buying goods or services displayed there, or clicking through a banner advert on a web page.
[6] Intelligent agents in virtual and augmented reality experiences are known. U.S. Patent No.
8406682, Virtual laboratory smart agent, discloses a method for conducting a virtual training session with first and second training objectives having first and second validation paths, respectively. The method steps include configuring a virtual machine manipulated by a student to complete the first training objective, configuring a smart agent to execute on the virtual machine including a validator specified by the first validation path, generating a result by performing a first validation check of the virtual training session by at least the validator of the smart agent to identify an event of the virtual machine, identifying, using the smart agent, completion of the first training objective based on the result, and advancing the virtual training session from the first training objective to the second training objective in response to identifying completion of the first training objective.
[7] U.S. Application 20030207237, Agent for guiding children in a virtual learning environment discloses a method for guiding a young child, "user", in a controlled virtual environment. The controlled virtual environment is constructed by software when executed in a computer. A guardian establishes parameters and a user is thereafter presented with the controlled environment which is governed, in part, by the guardian-provided parameters. Data is accumulated concerning interactions and movements of the user's selector device within the controlled environment. The user is provided with guidance on the basis of the accumulated data, within the constraints of the parameters provided by the guardian. The guardian can be provided with reports concerning at least a portion of the accumulated data, for example, by electronic mail. The user can select a virtual environment to be displayed in the controlled environment, a visible "buddy" which can be used to provide the aforesaid guidance by communicating to the user, information processed by an intelligent agent software component; and engage in an event that satisfies constraints or goals provided by the guardian.
[8] However, none of the current technologies and prior art, taken alone or in combination, addresses or provides a truly integrated solution for data capture techniques that collect, in augmented, mixed and virtual reality experiences, user event information generated during user events and record it for the use of an intelligent agent, using the technology already present in virtual and augmented devices. The potential of employing such personalization techniques, when tailored properly, is tremendous: the data can be presented not only when the user chooses to access it, but seamlessly, deeply integrated with the user interface of augmented, mixed and virtual reality experiences, and thus fully synchronized with the user's constantly changing needs and circumstances, constructing a complete immersion.
[9] Therefore, there is a long felt and unmet need for a system and method that overcomes the problems associated with the prior art.
[10] As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[11] All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g. "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
[12] Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all Markush groups used in the appended claims.
SUMMARY OF THE INVENTION
[13] It is thus an object of the present invention to provide a data capture method for collecting user event data generated during user events in augmented and/or virtual reality experiences and recorded for the use of an intelligent agent, the method comprising: electronically collecting and storing augmented and/or virtual element display metadata, said element display metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments; electronically analyzing and correlating element display metadata predetermined to be associated with at least one of the virtual elements, wherein each of said plurality of virtual elements is associated with a ranking generated by the user's interaction with the experience content; electronically collecting and storing user event metadata, said user event metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing based on user interaction with them and the position of said elements in relation to one another and the user in the augmented and/or virtual environments; electronically generating a conclusion-data model of the augmented, mixed and/or virtual reality environment based on said element data comprising the element metadata and user event metadata, wherein said conclusion-data model is used by an intelligent agent to compute the user's event parameters, and wherein said conclusion-data model is displayable to a user of the data model; and electronically storing said conclusion-data model of the augmented and/or virtual environment on said computer storage medium.
[14] It is another object of the present invention to provide an interactive computer-implemented system for data capture for collecting user event data generated during user events in augmented and/or virtual reality experiences and recorded for the use of an intelligent agent, comprising: at least one processor; and at least one data storage device storing a plurality of instructions and data wherein, upon execution of said instructions by the at least one processor, said instructions cause the system to: electronically collect and store augmented and/or virtual element display metadata, said element display metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments; electronically analyze and correlate element display metadata predetermined to be associated with at least one of the virtual elements, wherein each of said plurality of virtual elements is associated with a ranking generated by the user's interaction with the experience content; electronically collect and store user event metadata, said user event metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing based on user interaction with them and the position of said elements in relation to one another and the user in the augmented and/or virtual environments; electronically generate a conclusion-data model of the augmented, mixed and/or virtual reality environment based on said element data comprising the element metadata and user event metadata, wherein said conclusion-data model is used by an intelligent agent to compute the user's event parameters, and wherein said conclusion-data model is displayable to a user of the data model; and electronically store said conclusion-data model of the augmented and/or virtual environment on said computer storage medium.
[15] It is another object of the present invention to provide a non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising: electronically collecting and storing augmented and/or virtual element display metadata, said element display metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments; electronically analyzing and correlating element display metadata predetermined to be associated with at least one of the virtual elements, wherein each of said plurality of virtual elements is associated with a ranking generated by the user's interaction with the experience content; electronically collecting and storing user event metadata, said user event metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing based on user interaction with them and the position of said elements in relation to one another and the user in the augmented and/or virtual environments; electronically generating a conclusion-data model of the augmented, mixed and/or virtual reality environment based on said element data comprising the element metadata and user event metadata, wherein said conclusion-data model is used by an intelligent agent to compute the user's event parameters, and wherein said conclusion-data model is displayable to a user of the data model; and electronically storing said conclusion-data model of the augmented and/or virtual environment on said computer storage medium.
[16] The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE PREFERRED EMBODIMENTS
[17] The novel features believed to be characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as the preferred mode of use and further objects and advantages thereof, will best be understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying drawings. In order to better understand the invention and its implementation in practice, a plurality of embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which
[18] FIG. 1 graphically illustrates, according to a preferred embodiment of the present invention, a flow chart of the present invention method for data gathering and user personalization techniques in augmented, mixed and virtual reality experiences;
[19] FIG. 2 graphically illustrates, according to another preferred embodiment of the present invention, an example of the system for data gathering and user personalization techniques in augmented, mixed and virtual reality experiences; and
[20] FIG. 3 graphically illustrates, according to another preferred embodiment of the present invention, an example of computerized environment for implementing the invention.
[21] FIG. 4 graphically illustrates, according to another preferred embodiment of the present invention, a flow chart of the present invention method for constructing an agent-accessible user technology database.
[22] FIG. 5 graphically illustrates, according to another preferred embodiment of the present invention, an example of the tracking function using the system according to one implementation of the technology disclosed.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[23] In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured.
[24] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[25] While the technology will be described in conjunction with various embodiment(s), it will be understood that they are not intended to limit the present technology to these embodiments. On the contrary, the present technology is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the various embodiments as defined by the appended claims.
[26] Furthermore, in the following description of embodiments, numerous specific details are set forth in order to provide a thorough understanding of the present technology. However, the present technology may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present embodiments.
[27] Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present description of embodiments, discussions utilizing terms such as "transmitting", "detecting," "calculating", "processing", "performing," "identifying," "configuring" or the like, refer to the actions and processes of a computer system, or similar electronic computing device. The computer system or similar electronic computing device manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices, including integrated circuits down to and including chip level firmware, assembler, and hardware based micro code.
[28] As will be explained in further detail below, the technology described herein relates to data gathering and personalization techniques in virtual and/or augmented reality experiences. The portable data capture device contemplated in preferred embodiments of the invention comprises a processor, a memory, and at least one environmental sensor able to detect user events and interactions in virtual and/or augmented reality. Such sensors can take many forms, but could for example include means responsive to temperature, light, humidity, movement, sound, haptic or RF signals. The data capture device can be carried on the user's body and so is preferably wearable, for example in the sense of being a headset attachable to the head. While the data capture device is being carried or, more preferably, worn, environmental data is recorded from the sensors either continuously or periodically, remotely or in real time. The record thus collected can be described as a plurality of time series and/or as a repository of specifically defined events triggered by the user's interaction within the experiences.
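As a sketch of how such a record might be organized, assuming simple dataclasses, the capture could combine sensor time series with triggered interaction events. All field names here are illustrative assumptions rather than structures defined by the invention.

```python
# Illustrative data layout only; fields are assumptions based on the
# described continuous/periodic recording plus triggered events.
import time
from dataclasses import dataclass, field

@dataclass
class SensorSample:
    sensor: str          # e.g. 'gyroscope', 'temperature', 'light'
    value: tuple         # raw reading from the environmental sensor
    ts: float            # capture timestamp

@dataclass
class TriggeredEvent:
    name: str            # e.g. 'element_touched'
    element_id: int      # virtual element the user interacted with
    ts: float

@dataclass
class CaptureRecord:
    """One wearer's record: time series of samples plus triggered events."""
    time_series: list = field(default_factory=list)
    events: list = field(default_factory=list)

record = CaptureRecord()
record.time_series.append(SensorSample("gyroscope", (0.01, 0.0, 0.02), time.time()))
record.events.append(TriggeredEvent("element_touched", 42, time.time()))
print(len(record.time_series), len(record.events))
```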
[29] The term "portable data capture device" interchangeably refers, but not limited to such as a mobile phone, laptop, tablet, wearable computing device, cellular communicating device, digital camera (still and/or video), PDA, computer server, video camera, television, electronic visual dictionary, communication device, personal computer, all employing virtual and/or augmented technology capabilities and etc. The present invention means and methods are performed in a standalone electronic device comprising at least one virtual and/or augmented reality application. Additionally or alternatively, at least a portion of such as processing, memory accessible, databases, includes a cloud-based application, and/or wired-based application. In some embodiments, the software components within virtual and/or augmented reality experiences and/or image databases provided, are stored in a local memory module and/or stored in a remote server. [30] The term "environment sensor" interchangeably refers, but not limited to a hardware and/or software unit that is capable of emitting and/or detecting a signal, which is involved in the process of the user tracking and sends information to the processing unit. Differences can be noticed in some systems, with the emitters being worn by the users and covered by sensors, which are attached to the environment. The signals emitted from emitters to different sensors can take various shapes, including electromagnetic signals, optical signals, mechanical signals and acoustic signals, such as electromagnetic tracking systems, employing calculation of magnetic fields generated by bypassing an electric current simultaneously through 3 coiled wires acoustic tracking systems and how its magnetic field constructs an impact on the other coils (for example); acoustic tracking systems, employing ultrasonic sound waves to identify the orientation and position of a target; optical tracking systems, employing light to calculate a target's orientation along with position; and mechanical tracking systems, dependent on a physical link between a fixed reference point and the target. One of the non-limiting examples of the environment sensors are gyroscope and/or accelerometer and/or magnetometer in virtual reality experiences and image tracking devices in augmented reality experiences.
[31] The term "user", used interchangeably in the present invention, refers hereinafter to any party that engages in active and/or passive interaction with virtual and/or augmented reality experiences.
[32] The term "user event parameters" refers interchangeably, but is not limited, to a predefined set of parameters performed by the user within virtual and/or augmented experiences and measured by the system, for example, the focus point at which the user looks within the virtual and/or augmented reality, or which virtual object the user virtually interacts with during a given time series.
[33] While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and the above detailed description. It should be understood, however, that it is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
[34] As a non-limiting example, the implemented data capture for collecting user event data generated during user events within augmented and/or virtual reality experiences and recorded for the use of an intelligent agent can be executed using a computerized process according to the example method 100 illustrated in FIG. 1. As illustrated in FIG. 1, the method 100 can first electronically collect and store augmented and/or virtual element display metadata 102, said element display metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments; electronically analyze and correlate element display metadata 104 predetermined to be associated with at least one of the virtual elements, wherein each of the plurality of virtual elements is associated with a ranking of the experience content generated by the user interaction; electronically collect and store user event metadata 106, said user event metadata enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing based on user interaction with them and the position of said elements in relation to one another and to the user in the augmented and/or virtual environments; electronically generate a conclusion-data model 108 of the augmented, mixed and/or virtual reality environment based on said element data comprising the element metadata and user event metadata, wherein said conclusion-data model is used to compute the user's event parameters by an intelligent agent, and wherein said conclusion-data model is displayable to a user of the data model; and electronically store said conclusion-data model 110 of the augmented and/or virtual environment on said computer storage medium. The conclusion-data model uses a predetermined event ranking scale to rank the correlation between the element display metadata (EDM) and the user event metadata (UEM), assigning ranking scores to how and when the user reacts to and interacts with the elements in the augmented, mixed and/or virtual reality environment. Upon detection of full correlation between the element display metadata (EDM) and the user event metadata (UEM), the ranking scores are extracted and stored in the database.
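By way of a further non-limiting illustration, the correlation and ranking step of method 100 could be sketched as follows. The record structures, field names, time window and count-based scale below are hypothetical conveniences for this sketch only; the disclosed method does not prescribe any particular data layout or scoring rule.

```python
from dataclasses import dataclass

@dataclass
class ElementDisplayMetadata:   # EDM record (hypothetical fields)
    element_id: str
    position: tuple             # (x, y, z) of the element in the environment
    timestamp: float

@dataclass
class UserEventMetadata:        # UEM record (hypothetical fields)
    element_id: str
    event_type: str             # e.g. "gaze", "touch", "grab"
    timestamp: float

def rank_correlations(edm_records, uem_records, window=0.5):
    """Correlate EDM and UEM records that refer to the same element within
    a time window, and accumulate a score on an illustrative ranking scale."""
    scores = {}
    for uem in uem_records:
        for edm in edm_records:
            if (uem.element_id == edm.element_id
                    and abs(uem.timestamp - edm.timestamp) <= window):
                scores[uem.element_id] = scores.get(uem.element_id, 0) + 1
    return scores

def store_full_correlations(scores, threshold, database):
    """Upon detection of full correlation (score meets the threshold),
    extract the ranking score and store it in the database."""
    for element_id, score in scores.items():
        if score >= threshold:
            database[element_id] = score
```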
[35] Reference is now made to Fig. 2, which is a schematic illustration of an example of the system 200 according to one implementation of the technology disclosed. System 200 includes one or more sensors 202 within virtual and/or augmented reality experiences coupled to an image, audio, location and/or haptic feedback processing system 206. Sensors can be any hardware and/or software unit that is capable of emitting and/or detecting a signal, is involved in the process of tracking the user, and sends information to the processing unit; more generally, the term "sensor" herein refers to any device (or combination of devices) capable of capturing a signal of a user's physical and/or virtual interaction within virtual and/or augmented computerized environments and representing that signal in the form of digital data. Haptic sensors can include accelerometer sensors coupled to the processing system 206. An accelerometer can be any type of sensor useful for obtaining movement signals from a user, e.g. for head/eye tracking using magnetometers, accelerometers and gyroscopes; more generally, the terms "accelerometer", "magnetometer" and "gyroscope" herein refer to any device (or combination of devices) capable of measuring motion and direction in space.
[36] In operation, sensors 202 are oriented toward a region of interest 208 within the virtual and/or augmented environment that includes at least a portion of a virtual element 210, in which an object of interest 212 (in this example, a hand) moves across and in contact with the virtual element 210 along the indicated path 214. The sensors are positioned to receive a signal when the object of interest 212 interacts with the virtual element 210, capturing the signals propagating therethrough. In some implementations, one or more of the sensors 202 are disposed opposite the motion to be detected, e.g., where the hand 212 is expected to move. Processing system 206, which can be, e.g., a computer system, can control the operation of sensors 202 to capture the user's interaction with the region of interest 208 and the video, audio, location and/or haptic feedback signals propagating through the virtual element 210. Based on the captured signals, processing system 206 can determine the position, location and/or motion of object 212. In one implementation, processing system 206 stores a table of signal signatures (i.e., response characteristics) produced by a specific gesture and/or event (e.g., a hand move) performed on and/or towards various virtual objects. The user can be instructed to perform this gesture when the system is first used on a particular virtual object; the response characteristics are detected by processing system 206 (via sensors 202) and compared to find the best-matching signature. Each signature is associated with a particular medium and, more importantly, the speed of event detection therein. Accordingly, when the best-matching signature is located, the associated value is used.
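The table lookup described above could, as one non-limiting sketch, be realized as a nearest-signature comparison. The signature names, feature-vector representation and Euclidean matching rule below are assumptions made for illustration; the disclosure does not mandate a specific matching metric.

```python
import math

# Hypothetical signature table: gesture name -> response characteristics
# captured when the user first performs the calibration gesture.
SIGNATURES = {
    "hand_swipe": [0.9, 0.2, 0.4],
    "hand_tap":   [0.1, 0.8, 0.3],
}

def best_matching_signature(response):
    """Compare a captured response vector against every stored signature
    and return the closest one (smallest Euclidean distance)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SIGNATURES, key=lambda name: distance(SIGNATURES[name], response))

print(best_matching_signature([0.85, 0.25, 0.35]))  # -> "hand_swipe"
```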
[37] Reference is made now to FIG. 3, which graphically illustrates, according to another preferred embodiment of the present invention, an example of a computerized system 300 for implementing the invention. The systems and methods described herein can be implemented in software or hardware or any combination thereof. The systems and methods described herein can be implemented using one or more computing devices which may or may not be physically or logically separate from each other. Additionally, various aspects of the methods described herein may be combined or merged into other functions.

[38] In some embodiments, the illustrated system elements could be combined into a single hardware device or separated into multiple hardware devices. If multiple hardware devices are used, the hardware devices could be physically located proximate to or remotely from each other.
[39] The methods can be implemented in a computer program product accessible from a computer-usable or computer-readable storage medium that provides program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer-readable storage medium can be any apparatus that can contain or store the program for use by or in connection with the computer or instruction execution system, apparatus, or device.
[40] A data processing system suitable for storing and/or executing the corresponding program code can include at least one processor coupled directly or indirectly to computerized data storage devices such as memory elements. Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. To provide for interaction with a user, the features can be implemented on a computer with a display device, such as an LCD (liquid crystal display), virtual display, or another type of monitor for displaying information to the user, and a keyboard and an input device, such as a mouse or trackball, by which the user can provide input to the computer.
[41] A computer program can be a set of instructions that can be used, directly or indirectly, in a computer. The systems and methods described herein can be implemented using programming languages such as Flash™, JAVA™, C++, C, C#, Visual Basic™, JavaScript™, PHP, XML, HTML, etc., or a combination of programming languages, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. The software can include, but is not limited to, firmware, resident software, microcode, etc. Protocols such as SOAP/HTTP may be used in implementing interfaces between programming modules. The components and functionality described herein may be implemented on any desktop operating system executing in a virtualized or non-virtualized environment, using any programming language suitable for software development, including, but not limited to, different versions of Microsoft Windows™, Apple™ Mac™, iOS™, Android™, Unix™/X-Windows™, Windows Mobile™, Linux™, etc. The system could be implemented using a web application framework, such as Ruby on Rails.
[42] The processing system can be in communication with a computerized data storage system.
The data storage system can include a non-relational or relational data store, such as MySQL™ or another relational database. Other physical and logical database types could be used. The data store may be a database server, such as Microsoft SQL Server™, Oracle™, IBM DB2™, SQLITE™, or any other database software, relational or otherwise. The data store may store the information identifying syntactical tags and any information required to operate on syntactical tags. In some embodiments, the processing system may use object-oriented programming and may store data in objects. In these embodiments, the processing system may use an object-relational mapper (ORM) to store the data objects in a relational database. The systems and methods described herein can be implemented using any number of physical data models. In one example embodiment, an RDBMS can be used. In those embodiments, tables in the RDBMS can include columns that represent coordinates. In the case of environment tracking systems, data representing user events, virtual elements, etc. can be stored in tables in the RDBMS. The tables can have pre-defined relationships between them. The tables can also have adjuncts associated with the coordinates.
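As one hedged illustration of such an RDBMS layout, user events and their coordinates could be stored as below using SQLite (one of the database options named above). The table name, column names and sample row are invented for this sketch and are not part of the disclosure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database for the example
conn.execute("""
    CREATE TABLE user_events (
        id         INTEGER PRIMARY KEY,
        element_id TEXT NOT NULL,   -- virtual element interacted with
        event_type TEXT NOT NULL,   -- e.g. gaze, touch
        x REAL, y REAL, z REAL,     -- coordinate columns for the event
        ts REAL                     -- timestamp
    )
""")
conn.execute(
    "INSERT INTO user_events (element_id, event_type, x, y, z, ts) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("cube_01", "gaze", 1.2, 0.4, -3.0, 12.75),
)
for row in conn.execute("SELECT element_id, event_type, x, y, z FROM user_events"):
    print(row)
```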
[43] Suitable processors for the execution of a program of instructions include, but are not limited to, general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. A processor may receive and store instructions and data from a computerized data storage device such as a read-only memory, a random access memory, both, or any combination of the data storage devices described herein. A processor may include any processing circuitry or control circuitry operative to control the operations and performance of an electronic device.
[44] The processor may also include, or be operatively coupled to communicate with, one or more data storage devices for storing data. Such data storage devices can include, as non-limiting examples, magnetic disks (including internal hard disks and removable disks), magneto-optical disks, optical disks, read-only memory, random access memory, and/or flash storage. Storage devices suitable for tangibly embodying computer program instructions and data can also include all forms of non-volatile memory, including, for example, semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).

[45] The systems, modules, and methods described herein can be implemented using any combination of software or hardware elements. The systems, modules, and methods described herein can be implemented using one or more virtual machines operating alone or in combination with each other. Any applicable virtualization solution can be used for encapsulating a physical computing machine platform into a virtual machine that is executed under the control of virtualization software running on a hardware computing platform or host. The virtual machine can have both virtual system hardware and guest operating system software.
[46] The systems and methods described herein can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks that form the Internet.
[47] One or more embodiments of the invention may be practiced with other computer system configurations, including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, etc. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a network.
[48] As a non-limiting example, the implemented construction of an agent-accessible user technology database can be executed using a computerized process according to the example method 400 illustrated in FIG. 4. As illustrated in FIG. 4, the method 400 can first provide a conclusion-data model 402 of the augmented, mixed and/or virtual reality environment based on said element data comprising the element metadata and user event metadata, said conclusion-data model being used to compute the user's event parameters by an intelligent agent, and said conclusion-data model being displayable to a user of the data model; analyze and correlate element data 404 comprising the element metadata and user event metadata; construct a user event database 406 for the augmented and/or virtual reality experiences using said analysis of the element data and the element metadata in relation to the user event metadata; and enable the agent to lease a portion of the data 408 displayed in augmented and/or virtual reality experiences of the user technology database which is associated with the element data and the element metadata in the user event database.

[49] Reference is now made to Fig. 5, which is a schematic illustration of an example of the tracking function using the system 200 according to one implementation of the technology disclosed. As illustrated in FIG. 5, the system records data in a perspective camera (sensor) projection area 520 of an object at the center of the user's attention 510, wherein each marked black point represents a bounding volume point 512 referenced to an object that is partially rendered by the camera 522, measured by the distance from the closest point in the bounding volume 518, and from the center of the object 514, to the center of the camera's projection area 516 in the augmented, mixed and/or virtual reality environments. In existing modern augmented/mixed/virtual reality devices, in order to track the user's movement data, e.g. head/eye movement, the system employs camera sensors (one for each eye) that move according to the data acquired from one or more of accelerometers, a magnetometer and/or a gyroscope. The present invention discloses recording not only the camera rotation but also the proximity, in each frame, of all rendered objects to the center of the camera, in order to generate the conclusion-data model of the user's interest and interaction within the augmented, mixed and/or virtual reality environments. Drawn objects can be detected by calculating a bounding volume based on the minimum and maximum vertices (3D points) of the 3D object. To detect whether the object is rendered, the system transforms the bounding volume's 3D points to screen (camera) coordinates and tracks whether any and/or all of the marked black points are within the camera render range (viewport). The system employs the method described herein to obtain the 2D points on the camera's projection area 520 from the 3D points in 3D space and to calculate the distance between these 2D points and the center point of the camera's projection area 516.
For example, the distance is computed as

$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$

wherein $(x_1, y_1)$ represents the center point of the camera's projection area and $(x_2, y_2)$ represents any of the points on the object's bounding volume and its center. By recording the distances, per frame or per given delta-time, the system acquires a set of distances from all drawn objects to the center of the projection window, and generates the conclusion-data model by giving each object a score based on the time during which its given points had high proximity to the center of the projection area. Additional sensors known in the art can be used to detect the user's interest more accurately, e.g. by detecting whether an object with high proximity has caused a stimulation of the user's brain, blood pressure, sweat or facial expression, and can contribute to the assignment of ranking scores of how and when the user reacts to and interacts with the elements in the augmented, mixed and/or virtual reality environment.
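The per-frame proximity scoring described above could be sketched as follows. The pinhole-projection model, viewport convention, sample geometry and inverse-distance scoring rule are simplifying assumptions for this sketch, not the only way to realize the disclosure.

```python
import math

def bounding_volume_points(vmin, vmax):
    """Eight corners of an axis-aligned bounding volume derived from the
    minimum and maximum vertices (3D points) of the 3D object."""
    (x0, y0, z0), (x1, y1, z1) = vmin, vmax
    return [(x, y, z) for x in (x0, x1) for y in (y0, y1) for z in (z0, z1)]

def project_to_screen(point, focal=1.0, width=2.0, height=2.0):
    """Project a camera-space 3D point onto the 2D projection area with a
    simple pinhole model; return None when the point falls outside the
    camera render range (viewport) or behind the camera."""
    x, y, z = point
    if z <= 0:
        return None
    u, v = focal * x / z, focal * y / z
    if abs(u) > width / 2 or abs(v) > height / 2:
        return None
    return (u, v)

def proximity_distance(vmin, vmax, center=(0.0, 0.0)):
    """Distance from the closest rendered bounding-volume point to the
    center of the projection area, d = sqrt((x2-x1)^2 + (y2-y1)^2);
    None means the object is not rendered this frame."""
    distances = []
    for p in bounding_volume_points(vmin, vmax):
        uv = project_to_screen(p)
        if uv is not None:
            distances.append(math.hypot(uv[0] - center[0], uv[1] - center[1]))
    return min(distances) if distances else None

# Accumulate per-frame scores: higher score for higher proximity to center.
scores = {}
for frame in range(60):  # e.g. one second of frames at 60 fps
    d = proximity_distance((-0.5, -0.5, 2.0), (0.5, 0.5, 3.0))
    if d is not None:
        scores["cube_01"] = scores.get("cube_01", 0.0) + 1.0 / (1.0 + d)
```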
[50] While one or more embodiments of the invention have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the invention.
[51] In the description of embodiments, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific embodiments of the claimed subject matter. It is to be understood that other embodiments may be used and that changes or alterations, such as structural changes, may be made. Such embodiments, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein may be presented in a certain order, in some cases the ordering may be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, the various computations described herein need not be performed in the order disclosed, and other embodiments using alternative orderings of the computations could be readily implemented. In addition to being reordered, the computations could also be decomposed into sub-computations with the same results.


[52] CLAIMS
1. A computer-implemented data capture method for collecting user event data generated during user events within augmented and/or virtual reality experiences and recorded for the use of an intelligent agent, the method comprising:
a. electronically collecting and storing augmented and/or virtual element display metadata (EDM), said element display metadata (EDM) enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments;
b. electronically analyzing and correlating element display metadata (EDM) predetermined to be associated with at least one of the virtual elements, wherein each of the plurality of virtual elements is associated with a ranking of the experience content generated by the user interaction;
c. electronically collecting and storing user event metadata (UEM), said user event metadata (UEM) enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing in real-time based on user interaction with them and the position of said elements in relation to one another and the user in the augmented and/or virtual environments;
d. electronically generating a conclusion-data model of the augmented, mixed and/or virtual reality environments based on said element data comprising the element display metadata (EDM) and the user event metadata (UEM), wherein said conclusion-data model is used to compute the user's event parameters by an intelligent agent, and wherein said conclusion-data model is displayable to a user of the data model; and
e. electronically storing said conclusion-data model of the augmented and/or virtual environment on said computer storage medium.
2. The method of claim 1, wherein the user event metadata (UEM) is selected from the group consisting of temperature, light, humidity, movement, sound, haptic and/or RF interaction within augmented and/or virtual reality experiences, or any combination thereof.
3. The method of claim 1, which includes updating the element display metadata (EDM) and the user event metadata (UEM) in real-time.
4. The method of claim 1, further comprising constructing a user event database for the augmented and/or virtual reality experiences using said analysis of the element display metadata (EDM) in relation to the user event metadata (UEM).
5. The method of claim 1, further comprising constructing an agent-accessible user technology database wherein data stored in the user technology database is predetermined to be associated with at least one of element display metadata (EDM) in the user event database.
6. The method of claim 1, further comprising enabling the agent to lease a portion of the data displayed in augmented and/or virtual reality experiences of the user technology database which is associated with element data and the element metadata in the user event database.
7. An interactive computer-implemented system for data capture for collecting user event data generated during user events in augmented and/or virtual reality experiences and recorded for the use of an intelligent agent comprising:
a. at least one processor; and
b. at least one data storage device storing a plurality of instructions and data wherein, upon execution of said instructions by the at least one processor, said instructions cause:
i. electronically collecting and storing augmented and/or virtual element display metadata (EDM), said element display metadata (EDM) enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments;
ii. electronically analyzing and correlating element display metadata (EDM) predetermined to be associated with at least one of the virtual elements, wherein each of the plurality of virtual elements is associated with a ranking of the experience content generated by the user interaction;
iii. electronically collecting and storing user event metadata (UEM), said user event metadata (UEM) enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing in real-time based on user interaction with them and the position of said elements in relation to one another and the user in the augmented and/or virtual environments;

iv. electronically generating a conclusion-data model of the augmented, mixed and/or virtual reality environments based on said element data comprising the element display metadata (EDM) and the user event metadata (UEM), wherein said conclusion-data model is used to compute the user's event parameters by an intelligent agent, and wherein said conclusion-data model is displayable to a user of the data model; and
v. electronically storing said conclusion-data model of the augmented and/or virtual environment on said computer storage medium.
8. The system of claim 7, wherein the user event metadata (UEM) is selected from the group consisting of temperature, light, humidity, movement, sound, haptic and/or RF interaction within augmented and/or virtual reality experiences, or any combination thereof.
9. The system of claim 7, wherein said instructions further cause updating the element display metadata (EDM) and the user event metadata (UEM) in real-time.
10. The system of claim 7, wherein said instructions further cause constructing a user event database for the augmented and/or virtual reality experiences using said analysis of the element display metadata (EDM) in relation to the user event metadata (UEM).
11. The system of claim 7, wherein said instructions further cause constructing an agent- accessible user technology database wherein data stored in the user technology database is predetermined to be associated with at least one of element display metadata (EDM) in the user event database.
12. The system of claim 7, wherein said instructions further cause enabling the agent to lease a portion of the data displayed in augmented and/or virtual reality experiences of the user technology database which is associated with element data and the element metadata in the user event database.
13. A non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising:
a. electronically collecting and storing augmented and/or virtual element display metadata (EDM), said element display metadata (EDM) enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements and the position of said elements in relation to one another in the augmented and/or virtual environments;

b. electronically analyzing and correlating element display metadata (EDM) predetermined to be associated with at least one of the virtual elements, wherein each of the plurality of virtual elements is associated with a ranking of the experience content generated by the user interaction;
c. electronically collecting and storing user event metadata (UEM), said user event metadata (UEM) enabling a processor to cause a display of a conclusion-data immersive representation of each of a plurality of virtual elements changing in real-time based on user interaction with them and the position of said elements in relation to one another and the user in the augmented and/or virtual environments;
d. electronically generating a conclusion-data model of the augmented, mixed and/or virtual reality environments based on said element data comprising the element display metadata (EDM) and the user event metadata (UEM), wherein said conclusion-data model is used to compute the user's event parameters by an intelligent agent, and wherein said conclusion-data model is displayable to a user of the data model; and
e. electronically storing said conclusion-data model of the augmented and/or virtual environment on said computer storage medium.
14. The computer-readable medium of claim 13, wherein the user event metadata (UEM) is selected from the group consisting of temperature, light, humidity, movement, sound, haptic and/or RF interaction within augmented and/or virtual reality experiences, or any combination thereof.
15. The computer-readable medium of claim 13, the operations further comprising updating the element display metadata (EDM) and the user event metadata (UEM) in real-time.
16. The computer-readable medium of claim 13, the operations further comprising constructing a user event database for the augmented and/or virtual reality experiences using said analysis of the element display metadata (EDM) in relation to the user event metadata (UEM).
17. The computer-readable medium of claim 13, the operations further comprising constructing an agent-accessible user technology database wherein data stored in the user technology database is predetermined to be associated with at least one of element data and the element metadata in the user event database.
18. The computer-readable medium of claim 13, the operations further comprising constructing an agent-accessible user technology database wherein data stored in the user technology database is predetermined to be associated with at least one of element display metadata (EDM) in the user event database.