US20180018708A1 - Holographic Technology Implemented Retail Solutions - Google Patents
Holographic Technology Implemented Retail Solutions
Info
- Publication number
- US20180018708A1 (application US 15/381,555)
- Authority
- US
- United States
- Prior art keywords
- social media
- session
- server system
- user
- physical objects
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
- G06F3/1446—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/117—Tagging; Marking up; Designating a block; Setting of attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/169—Annotation, e.g. comment data or footnotes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/043—Distributed expert systems; Blackboards
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0281—Customer communication at a business location, e.g. providing product or service information, consulting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/20—Individual registration on entry or exit involving the use of a pass
- G07C9/22—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
- G07C9/25—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
- G07C9/257—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition electronically
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/398—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0132—Head-up displays characterised by optical features comprising binocular systems
- G02B2027/0134—Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0141—Head-up displays characterised by optical features characterised by the informative content of the display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/004—Annotating, labelling
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19608—Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19654—Details concerning communication with a camera
- G08B13/19656—Network used to communicate with a camera, e.g. WAN, LAN, Internet
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- This description relates to operation of sensor networks such as those used with security, intrusion and alarm systems installed on commercial premises.
- sensors are deployed in commercial buildings. Typical sensor types include motion detectors, cameras, and proximity sensors (used to determine whether a door or window has been opened). Such sensors constantly collect data that is used to determine whether an alarm should be triggered, and they continue to collect data after an alarm is triggered.
- Retail establishments often rely on simple physical walk-throughs, with users carrying smart-phone and/or tablet based presentations, together with conventional retail analytics applications and verbal descriptions, as the tools of analysis used to investigate trends and potential explanations of observations suggested by data analytics.
- Augmented reality, virtual reality, and mixed reality technologies are known.
- virtual reality refers to technologies that replicate an environment, with a simulation of a user being immersed in the replicated environment.
- Augmented reality generally refers to technologies that present a view of a real-world environment augmented with computer generated data.
- Mixed reality, a relatively new term, generally refers to technologies that merge real world and virtual world environments, where real and virtual objects exist and interact.
- a system includes a server system including one or more processor devices, memory in communication with the one or more processor devices, and a storage device that stores a program of computing instructions for execution by the processor using the memory. The program comprises instructions configured to cause the processor to: receive a set of social media feeds; filter the social media feeds according to at least one criterion to derive data according to the at least one criterion; receive from a mixed reality device an image containing a view of a set of physical objects; execute one or more merchandizing algorithms that apply the data derived from the filtered social media feeds to provide as an output data describing one or more social media related values associated with one or more of the physical objects in the view; generate a set of virtual objects that include the data describing the one or more social media values regarding the set of physical objects; and send the set of virtual objects to the mixed reality device.
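The claimed pipeline (receive feeds, filter by a criterion, apply merchandizing algorithms, generate and send virtual objects) can be sketched as below. This is an illustrative sketch only: the application does not specify data formats or algorithms, and every name here (`filter_feeds`, the mention-count heuristic, the sample posts) is a hypothetical stand-in for the claimed steps.

```python
def filter_feeds(feeds, criterion):
    """Keep only posts matching a single criterion (here, a hashtag)."""
    return [post for post in feeds if criterion in post["tags"]]

def merchandizing_values(filtered, physical_objects):
    """Derive a per-object "social media value" (here, a simple mention count)."""
    return {obj: sum(1 for post in filtered if obj in post["text"])
            for obj in physical_objects}

def make_virtual_objects(values):
    """Wrap derived values as renderable virtual objects for the MR device."""
    return [{"anchor": obj, "label": f"{obj}: {n} mentions"}
            for obj, n in values.items()]

feeds = [
    {"tags": ["#shoes"], "text": "love the TrailRunner"},
    {"tags": ["#shoes"], "text": "TrailRunner fits great"},
    {"tags": ["#food"], "text": "lunch"},
]
virtual = make_virtual_objects(
    merchandizing_values(filter_feeds(feeds, "#shoes"), ["TrailRunner"]))
print(virtual)  # → [{'anchor': 'TrailRunner', 'label': 'TrailRunner: 2 mentions'}]
```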
- aspects also include computer program products and methods.
- the disclosed techniques use computer implemented techniques that obtain information from various electronic systems/devices in the physical world (exemplified here by security systems) and merge that information into a virtual world of policies and analytics, involving retail systems that generate analytical information regarding customers and their preferences and needs.
- the techniques also involve real-time processing of various feeds from social media systems.
- These techniques are adapted for various modes of operation that execute algorithms applying filtered social media feeds and merchandizing parameters to physical items. This improves upon simple physical walk-throughs blended with smart-phone and tablet based presentations, conventional retail analytics applications, and verbal descriptions. In many cases the main tools of such analysis are limited to emails and spreadsheets; with these conventional methods it is very time consuming and difficult, or even impossible, to investigate trends and potential explanations of observations suggested by data analytics.
- FIG. 1 is a schematic diagram of an exemplary networked security system.
- FIG. 2 is a block diagram of a generally conventional constrained device typically used in security systems.
- FIG. 3 is a block diagram depicting a sales promotion system integrated with a mixed reality system.
- FIG. 4 is a flow chart of an embodiment of a sales promotion application.
- FIG. 5 is a block diagram of an AR/VR session manager.
- FIG. 6 is a block diagram of an AI based cognitive agent.
- FIG. 7 is a block diagram of another embodiment that includes retail process, store, and item information databases, the AR/VR device, and the session manager.
- FIG. 8 is a flow chart of exemplary processing.
- an integrated platform 10 integrates, via a distributed network 11, mixed reality devices 13a-13c with security/intrusion/alarm/surveillance systems 15a-15c (typically including sensors 20, functional nodes 18, and a panel, not shown).
- Examples of mixed reality devices 13a-13c are those that incorporate a live, real-world presentation of elements of the physical world together with virtual elements that are calculated or produced from inputs and rendered on a display, so that a user perceives the calculated or produced elements as existing together with the physical real world in a common environment.
- Examples of such mixed reality devices 13a-13c include the Hololens® (Microsoft), a cordless smart-glasses Windows 10® (Microsoft) computer headset that includes various sensors, a high-definition stereoscopic 3D optical head-mounted display, and spatial sound to allow for augmented reality applications.
- Other mixed reality devices/augmented reality systems, such as Google Glass® (Google), could be used; there are many such systems on the market, of which these are two examples.
- the security systems 15a-15c typically include a panel (not shown), e.g., for an intrusion detection system, an intrusion detection panel wired or wirelessly connected to a variety of sensors deployed in a premises. Typically, such panels receive signals from one or more of these sensors indicating a current state or value, or that a particular condition being monitored has changed or become unsecure.
- the integrated platform 10 includes data collection systems that are coupled to wireless sensor networks and wireless devices, with remote server-based monitoring via servers 14 and report generation.
- wireless sensor networks generally use a combination of wired and wireless links between computing devices, with wireless links usually used for the lowest level connections (e.g., end-node device to hub/gateway 16 ).
- the edge (wirelessly-connected) tier of the network is comprised of resource-constrained devices 20 with specific functions. These devices 20 may have a small-to-moderate amount of processing power and memory, and may be battery powered, thus requiring that they conserve energy by spending much of their time in sleep mode.
- a typical model is one where the edge devices 20 generally form a single wireless network in which each end-node communicates directly with its parent node (e.g., 18 ) in a hub-and-spoke-style architecture.
- the parent node may be, e.g., an access point on a gateway or a sub-coordinator which is, in turn, connected to the access point or another sub-coordinator.
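The hub-and-spoke arrangement described above can be modeled minimally: each end-node knows only its parent, and an upstream message follows single-hop parent links through sub-coordinators and gateways until it reaches the server tier. All node names below are hypothetical, for illustration only.

```python
# child -> parent links in the hub-and-spoke topology (illustrative names):
# constrained end-nodes ("C") attach to a fully functional node ("F"),
# which attaches to a middle-tier gateway, which reaches the upper-tier server.
parents = {
    "C1": "F1", "C2": "F1",
    "F1": "G1",
    "G1": "server",
}

def route_to_server(node):
    """Return the sequence of single-hop links an upstream message follows."""
    path = [node]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

print(route_to_server("C2"))  # ['C2', 'F1', 'G1', 'server']
```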
- the distributed network 11 is logically divided into a set of tiers or hierarchical levels 12 a - 12 c .
- the mixed reality devices 13 a - 13 n are shown in communication with the top one or two tiers or hierarchical levels 12 a - 12 c .
- the lower level tier 12 c is illustrated divided into different premises 19 a - 19 c for ease in explaining details of the applications that will be discussed below.
- the premises 19 a - 19 c are each associated with one of the security systems 15 a - 15 c .
- the security systems can be independent, meaning that there are no connections (as shown) among fully functional nodes of different premises, or dependent, meaning that there are connections (not shown) among fully functional nodes of different premises.
- in the upper tier or hierarchical level 12a are servers and/or virtual servers 14 running a “cloud computing” paradigm, networked together using well-established networking technology such as Internet protocols, or on private networks that use none or part of the Internet.
- Applications that run on those servers 14 communicate using various protocols, such as, for Web Internet networks, XML/SOAP and RESTful web services, and other application-layer technologies such as HTTP and ATOM.
- the distributed network 11 has direct links between devices (nodes) as shown and discussed below.
- Servers 14 execute analytics (analysis programs of various sorts) that are managed in concert with a session manager system 80 ( FIG. 4 ).
- the servers 14 can access a database 23 .
- the second logically divided tier or hierarchical level 12 b involves gateways 16 located at central, convenient places inside individual buildings and structures, e.g., 13 a - 13 c . These gateways 16 communicate with servers 14 in the upper tier whether the servers are stand-alone dedicated servers and/or cloud based servers running cloud applications using web programming techniques.
- the middle tier gateways 16 are also shown with both local area network 17 a (e.g., Ethernet or 802.11) and cellular network interfaces 17 b .
- Each gateway is equipped with an access point (fully functional node or “F” node) that is physically attached to that gateway and that provides a wireless connection point to other nodes in the wireless network.
- the links (illustrated by lines not numbered) shown in FIG. 1 represent direct (single-hop MAC layer) connections between devices.
- a formal networking layer (that functions in each of the three tiers shown in FIG. 1) uses a series of these direct links together with routing devices to send messages (fragmented or non-fragmented) from one device to another over the network.
- the distributed network topology also includes a lower tier (edge layer) 12 c set of devices that involve fully-functional sensor nodes 18 (e.g., sensor nodes that include wireless devices, e.g., transceivers or at least transmitters, which in FIG. 1 are marked in with an “F”) as well as constrained wireless sensor nodes or sensor end-nodes 20 (marked in the FIG. 1 with “C”).
- wired sensors can be included in aspects of the distributed network 11 .
- the distributed network 11 implements a state machine approach to an application layer that runs on the lower tier devices 18 and 20 .
- States in the state machine are comprised of sets of functions that execute in coordination, and these functions can be individually deleted or substituted or added to in order to alter the states in the state machine of a particular lower tier device.
- the state function based application layer uses an edge device operating system that allows for loading and execution of individual functions (after the booting of the device) without rebooting the device (so-called “dynamic programming”).
- edge devices could use other operating systems provided such systems allow for loading and execution of individual functions (after the booting of the device) preferably without rebooting of the edge devices.
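As a rough sketch of this state-function application layer (not the application's implementation; all names are hypothetical), a state can be modeled as a named set of functions, any of which may be added, substituted, or deleted at runtime without rebooting the device:

```python
class StateMachine:
    def __init__(self):
        self.states = {}  # state name -> {function name: callable}

    def load_function(self, state, name, fn):
        # loading under an existing name substitutes the function
        # ("dynamic programming": no reboot required)
        self.states.setdefault(state, {})[name] = fn

    def delete_function(self, state, name):
        self.states.get(state, {}).pop(name, None)

    def run_state(self, state, reading):
        # execute the current set of functions for this state in coordination
        return {name: fn(reading)
                for name, fn in self.states.get(state, {}).items()}

sm = StateMachine()
sm.load_function("armed", "threshold", lambda r: r > 50)
sm.load_function("armed", "log", lambda r: f"reading={r}")
print(sm.run_state("armed", 72))  # {'threshold': True, 'log': 'reading=72'}
# substitute a stricter threshold function while the device keeps running
sm.load_function("armed", "threshold", lambda r: r > 90)
print(sm.run_state("armed", 72)["threshold"])  # False
```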
- a constrained device 20 that is part of the security/intrusion/alarm/surveillance systems (either integrated examples of such system or standalone examples) is shown.
- a constrained device 20 as used herein is a device having substantially less persistent and volatile memory than other computing devices, sensors, or systems in a particular networked detection/sensor/alarm system.
- Constrained device 20 includes a processor device 21a, e.g., a CPU and/or other type of controller device, that executes under an operating system, generally with 8-bit or 16-bit logic rather than the 32- and 64-bit logic used by high-end computers and microprocessors.
- the constrained device 20 has a relatively small flash/persistent storage 21b and volatile memory 21c in comparison with the other computing devices on the network.
- typically, the persistent storage 21b is about a megabyte of storage or less, and volatile memory 21c is about several kilobytes of RAM or less.
- the constrained device 20 has a network interface card 21 d that interfaces the constrained device 20 to the network 11 .
- a wireless interface card is used, but in some instances a wired interface could be used.
- a transceiver chip driven by a wireless network protocol stack e.g., 802.15.4/6LoWPAN
- the constrained device 20 also includes a sensor 22 and a sensor interface 22 a that interfaces to the processor 21 a .
- Sensor 22 can be any type of sensor device. Typical types of sensors include temperature, simple motion, 1-, 2- or 3-axis acceleration force, humidity, pressure, selective chemical, sound/piezo-electric transduction, and/or numerous others, implemented singly or in combination to detect complex events.
- a constrained device 20 can follow the current constraints on flash/persistent storage memory and RAM memory, i.e., less than 10-20 kilobytes of RAM/volatile memory, but can have more depending on configuration and, in some instances, the operating system. These constrained devices 20 are configured in this manner generally due to cost/physical configuration considerations. These types of constrained devices 20 generally have a static software image (i.e., the logic programmed into the constrained device is always the same).
- Constrained devices 20 execute a real-time operating system (“RTOS”) that can use dynamic programming. The RTOS executes and otherwise manages a dynamic set of user-defined independent executable functions or tasks that are either built into a loaded image (software and RTOS that executes on the constrained device) or that are downloaded during normal operation of the constrained device 20, or a combination of the two, with the former (built into the image) using instances of the latter (downloaded during operation) as subroutines.
- Certain of the applications set forth below will cause systems to access these constrained devices 20 to upload data and otherwise control the devices 20 according to needs of the applications.
- a facility can be of any type but is typically, e.g., a commercial or industrial facility, with interior areas (buildings) and exterior areas that are subject to surveillance and other types of monitoring.
- the buildings can be of any configuration, from wide-open spaces such as a warehouse to compartmentalized facilities such as labs/offices.
- the retail establishment includes the plural sensors 22 ( FIG. 1 ).
- a portion of the sensors 22 are r.f., hot spots or the like through which Wi-Fi or other Internet access services are provided.
- Sensors 22 that are hot spots or the like capture information from the user's mixed reality device 13 a as the user moves about the retail establishment, as will be discussed in further detail below.
- a user of the mixed reality device 13 a may walk through a retail establishment, examine physical items on retail establishment shelves, and at the same time (via the processing 40 (discussed below) that integrates retail-based analytical processing with mixed reality system technology) observe visual representations of results of execution of the retail-based analytical processing. These results can be ubiquitous, meaning many or an abundant number of such execution results.
- Examples of such results can be so called “shrinkage levels” for the item or category of items over a selected period of time, “foot traffic,” “dwell time,” “conversion,” and other retail-related data in specific areas of the retail establishment (e.g., the aisle passing by a particular retail item display) as a function of sales promotions of the item.
- Other examples include the visual representation of the correlation of sales between the physical item in view and other items in the retail establishment or available online.
- Still other examples include a correlation of profit of the particular item to profit of other items, etc.
- the mixed reality device 13 a facilitates coordination of communication between two or more individuals discussing (in close proximity to each other in the retail establishment, or via remote communications) retail establishment processes, specific retail items, retail establishment layout issues, and so forth.
- Some implementations include a cognitive agent (artificial intelligence based assistant or “information retrieval and analytics” assistant) that when used in conjunction with the mixed reality device 13 a can produce a more powerful analysis tool.
- the user may look at an item on the retail establishment shelf while the AR/VR platform displays virtual objects (like pie charts, graphs, tables, etc.) giving sales, shrinkage, merchandising, and other retail information related to that item, and at the same time the user may (using natural spoken language) query the potentially large collection of backend information systems by asking the cognitive agent simple questions related to the real and virtual objects on display.
- the cognitive agent using a web service includes analysis engines to answer questions from the user.
- the combination of mixed reality device 13 a and AI agent gives the user a very powerful analysis tool stimulated by an initial visual input of objects in the physical world (i.e., natural inspection of items in view and conversations with others and/or questions to the platform).
- a sales promotion system 40 as shown includes plural databases.
- the system 40 is configured to execute a sales promotion application 70 (further described in FIG. 4 ).
- a first database, store inventory database 42 , in system 40 is shown as containing data on retail items, including item name, SKU number, retail price, wholesale price, location in the retail establishment (aisle no., shelf no., slot no., planogram reference no., etc.), number of items in stock, number of items on order, expected arrival date for ordered stock, inventory turnover for the item, and any other data associated with a given retail item.
- Store inventory database 42 is connected to the Internet 63 (or a private network), via store inventory web services 42 a.
- the system 40 also includes other databases that include retail establishment layout information (store layout database 44 ) including retail planograms, fixture locations, layout codes and/or layout version names for each retail establishment address, historical and future planned changes in layout, etc. (connected to the Internet 63 via store layout web service 46 a )
- the store layout database 44 could also include the layout of the same aisle or location for the same retailer's retail establishments that have the same configuration and demographics with the highest performance, as measured in different ways.
- the system 40 also includes an item information database 46 (connected to the Internet via item information web service 46 a ) and having photo images or icon representations of retail items, retail establishment shelf layouts, and other retail related objects.
- Retail establishment performance data, personnel information, and other retail operations and merchandise data can be stored in a merchandizing and promotions database 48 connected to the Internet 63 via merchandizing and promotions web service 48 a.
- the system 40 includes a mobile AR/VR (augmented reality/virtual reality) device, e.g., mixed reality device 13 a , an AR/VR session management system 80 , and a wireless (e.g., Wi-Fi) network 62 with wireless access points such as that shown above in FIG. 1 , within the retail establishment 60 .
- The organization of the databases in FIG. 3 is given as an example and is somewhat simplified relative to the design and implementation of actual enterprise-scale retail databases encountered in the commercial world. That is, no attempt is made in the figure to show how the databases are fragmented and deployed for data redundancy, scalability, fast data access, and so forth. Also, the segregation of various types of data into separate databases is simplified in FIG. 3, and it should be recognized that other database architectures can be imagined which are compatible with and included as additional embodiments.
- the mixed reality device 13 a, e.g., an "AR/VR device" (augmented reality/virtual reality), allows the user to see the real environment with data or "artificial images" imposed on a view of the real environment.
- Microsoft HoloLens and Google Glass are examples of commercial devices which allow this mixing of “real” and “virtual” realities as referred to herein also as mixed reality systems.
- the mixed reality device interacts with an outside network and the web (e.g., using a Wi-Fi connection) and also allows for input from the user (e.g., using hand gestures and/or voice commands).
- FIG. 3 shows the various databases and the AR/VR session management system 80 as remote applications (i.e., implemented in one or more servers outside of the retail establishment).
- each of these is accessible via web services (such as RESTful micro-web services) well known to those skilled in the art of distributed databases and mobile services.
- FIG. 3 does not suggest any ownership or management policy of the databases or the AR/VR session management system, and the description specifically includes embodiments where functionality of the system of FIG. 3 is divided in arbitrary ways so as to allow ownership and/or management by various parties which may or may not include the retailer as one of those parties.
- A sales promotion application 70 that integrates retail-based analytical processing with mixed reality system technology is shown. Described below is a specific implementation of this processing 70; others may be implemented. As a user of the mixed reality device 13 a walks through the retail establishment, the location of the user and associated mixed reality device 13 a inside the retail establishment is determined and tracked 72 as the user moves around the retail establishment with the mixed reality device 13 a.
- Tracking 72 is accomplished through a number of techniques including wireless triangulation of the device, various “internal GPS” technologies (BLE, RFID, NFC, etc.) or dead-reckoning based accelerometer data integration.
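- As a rough, illustrative sketch of the dead-reckoning option (the single-axis simplification, trapezoidal integration, and sampling rate are assumptions for illustration, not details from this description), accelerometer samples can be double-integrated into an estimated displacement:

```python
# Hypothetical dead-reckoning sketch: double-integrate 1-axis acceleration
# (m/s^2), sampled every dt seconds, into velocity and displacement using
# the trapezoidal rule. Real deployments fuse this with other signals
# (e.g., wireless triangulation) because integration error grows quickly.

def dead_reckon(accel_samples, dt, v0=0.0, x0=0.0):
    v, x = v0, x0
    prev_a = accel_samples[0]
    for a in accel_samples[1:]:
        v_next = v + 0.5 * (prev_a + a) * dt
        x += 0.5 * (v + v_next) * dt
        v, prev_a = v_next, a
    return x, v

# Constant 1 m/s^2 for 1 s from rest: displacement ~ 0.5*a*t^2 = 0.5 m.
```

In practice such estimates drift, which is why the description lists dead reckoning alongside wireless triangulation and "internal GPS" technologies rather than as a standalone method.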
- the tracked physical location can be that of either the mixed reality device 13 a or some other device on the person of the user, e.g., a smartphone.
- other technology components such as cameras, beacons, and other access points may be used. These components have been omitted from FIG. 3.
- the tracked device makes its location (and by inference the location of the user and the mixed reality device 13 a ) known by sending location data over the in-retail establishment wireless network to the AR/VR session manager 80 .
- the location of the user and mixed reality device 13 a are determined without any location determination functionality on the mixed reality device 13 a , and without any second device (i.e., smart phone) if some other outside system (e.g., a video surveillance system with image analytics capabilities able to determine location) is available and is used to track the user's location during the AR/VR session.
- the user may also specify where in the retail establishment they are by some other technique such as selecting a location on a map of the retail establishment.
- the AR/VR system may determine its own location by capturing the image or images of items in its surroundings which have been previously mapped by some means to the current location. Using such a location-to-image map, the mixed reality device can determine its own location.
- the “image” in such a case might be an actual image recorded in some convenient file format, or it might be an index or set of indices derived from the image in a manner which makes them unique to that image (i.e., an image index or hash).
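- A minimal sketch of the image-index idea (this uses a cryptographic digest, which only matches byte-identical images; a production system would more likely use a perceptual hash or feature index so that similar views of the same location match):

```python
import hashlib

def image_index(image_bytes: bytes) -> str:
    """Derive a compact, repeatable index from an image's bytes,
    usable as a key in a location-to-image map."""
    return hashlib.sha256(image_bytes).hexdigest()[:16]

# Hypothetical map from image index to retail-establishment location.
location_map = {image_index(b"aisle-7-endcap-photo"): "aisle 7, end-cap"}

def lookup_location(image_bytes: bytes) -> str:
    return location_map.get(image_index(image_bytes), "unknown")
```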
- the user views items and other points of interest in the retail establishment through the mixed reality device 13 a .
- the AR/VR session manager 80 chooses 74 virtual items and context-relevant information to show to the user on the display of the mixed reality device 13 a.
- the AR/VR session manager 80 sends 76 the chosen virtual items and context-relevant information to the mixed reality device 13 a .
- the user may view several items in the field of view of the mixed reality device display.
- the mixed reality device 13 a provides a user interface (not shown) that displays menu options that allow the user to highlight a specific item, or a group of items, and display information for a variable period of time which is also selected using the interface menu items. This information is sent 78 to the AR/VR session manager 80 .
- the AR/VR session manager 80 analyzes 80 the user highlight information to drill down to find corresponding content on the specific items highlighted in the display, which is sent to the mixed reality device 13 a.
- the user interface (not shown) can be used to enter 82 notes as the user reviews the real and virtual objects and information presented in the display of the mixed reality device 13 a . While engaged in such a session, the user may also use standard voice communications or voice-to-chat technology available on the mixed reality device to communicate 84 with a second (remote) user or group of users or compose emails or text messages, etc. These actions may be part of a retail establishment review process with extensive pre-planning or may be impromptu as the user goes through the retail establishment in pursuit of day-to-day managerial responsibilities.
- an AR/VR session manager 80 interacts with the mixed reality device 13 a over the Internet using a "session portal" 82, e.g., a web service (application programming interface, API) or, in another embodiment, a dedicated socket with SMTP or other transfer protocol.
- the session portal 82 is bi-directional meaning that each of the mixed reality devices (MRS) 13 a - 13 c can send data to the session manager 80 and receive data from the session manager 80 .
- the mixed reality devices (MRS) 13 a - 13 c send updates on their states to the session manager 80 .
- the states of the mixed reality devices 13 a - 13 c are represented virtually or “mirrored” in a device state representation 84 inside the session manager 80 .
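- The mirrored device state can be sketched as a simple keyed store that merges partial updates (the field names here are assumptions for illustration):

```python
# Device-state "mirror" inside the session manager: each update from a
# mixed reality device overwrites only the fields it reports.

device_states = {}  # device_id -> latest mirrored state

def handle_update(device_id, update):
    state = device_states.setdefault(device_id, {})
    state.update(update)
    return state

handle_update("MRS-13a", {"mode": "loss_prevention", "location": "aisle 7"})
handle_update("MRS-13a", {"location": "aisle 8"})  # mode is retained
```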
- Input from the mixed reality devices (MRS) 13 a - 13 c to the session manager 80 is used in analytic programs executed on the servers.
- the camera on the mixed reality device 13 a may send an image containing an area showing a retail item with its characteristic consumer brand packaging (by which it is easily recognized by consumers).
- This part of the image is identified by an input analyzer 86 , which relies on image libraries accessible via the web service of the item information database and potentially other databases exposed by the consumer product manufacturer, or other web-based image analytics services.
- the input analyzer 86 provides the analytics manager 88 with inputs to analytic programs (not shown) executing on the servers 14 .
- the analytics manager 88 uses a current mode and the inputs presented to it to decide what to present (virtually) to the user on the device viewer and what to request of the analytics executing on the server. Information presented is produced by the analytics manager using data received from the various analytical programs that execute various analytics, both conventional and to be developed.
- the session mode manager 90 monitors the mode selected by the user (as mirrored in the device state representation) and informs the analytics manager of the selection. Information presented is produced by the virtual content manager using data from the various databases accessible via web services attached to the various external retail databases shown, by way of example, in FIG. 3 .
- the session is logged by the input analyzer, including any notes or annotations provided by the user of the mixed reality device (spoken, typed, or sent via some other mode of communication) into session log/notes records 94 that are stored in a database as records.
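- One plausible shape for a session log/notes record (the field names are assumed for illustration), serialized for the external backup database:

```python
import json
import time

session_log = []  # in-session log/notes records

def log_note(device_id, mode, note):
    """Append a note to the session log and return its serialized form
    for transfer to the external long-term storage database."""
    record = {"device": device_id, "mode": mode, "note": note,
              "timestamp": time.time()}
    session_log.append(record)
    return json.dumps(record)

log_note("MRS-13a", "merchandising", "end-cap promotion missing")
```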
- This local log/record in the session manager 80 is backed up in an external database (not shown) for long-term storage, reporting, and further analysis.
- This local session and long-term storage may also include a full record or “recording” of part or all of the session, rather than just the user notes.
- the user may also view comparative information for this item relative to other items in the retail establishment, in its inventory category, department, or relative to all items in the retail establishment, or relative to this item or group of items in other retail establishments or groups of retail establishments (e.g., the retail establishment's regional division).
- the user (who may have some managerial responsibility in the retail establishment, but may also be a retail establishment analyst or even a shopper, or some other person interested in analyzing the retail establishment or its products and processes) begins use of the system in a "user session" with the mixed reality device 13 a, initiating the session by a switch on the device, a voice command, and/or a hand gesture.
- the mixed reality device 13 a includes a motion sensor that awakens the device 13 a (i.e., loads operating system components and prepares for input) when the device 13 a senses motion.
- the mixed reality device 13 a may require input of a user ID and password to enable further operation and interaction with the user.
- An AR/VR session is initiated by the user via a menu user interface (not shown) of the mixed reality device 13 a.
- the mixed reality device 13 a operates in various modes, and based on the mode of operation (and the location of the user and the orientation of the mixed reality device), the AR/VR session manager 80 chooses virtual items and context-relevant information to show to the user on the screen.
- One mode is a “loss prevention management” mode.
- the user may approach a particular retail shelf and inspect a retail item, at which time the AR/VR session manager 80 sends information about the item's inventory shrinkage (that is, difference between item sales and item inventory replenishment integrated over a period of time).
- the user may have several items in their field of view, and the device's user interface may display menu options that allow the user to highlight a specific item, or a group of items, and display information for a variable period of time which is also selected using the interface menu items.
- Inventory shrinkage analysis is accomplished when the AR/VR session manager highlights items above some shrinkage threshold (measured in some meaningful metric such as absolute dollars lost, or percent increase in shrinkage over some historical average or benchmark). For example, the user might walk down the aisle of the retail establishment and as they do so, various retail items on the retail establishment shelves come into their (physical) field of view.
- The mixed reality device 13 a's camera sends these images back to the session manager 80 via the web connection and the session manager identifies the items and compares sales to current inventory levels (as determined by item-level RFID scans or some other real-time or quasi-real-time inventory measurement technology).
- When the shrinkage or inventory loss level for a particular item in view is in excess of a pre-determined threshold (selected by the user at the beginning of the session, or in some pre-configuration file used during initiation of the session), the item is highlighted in red or outlined in some manner that emphasizes its location on the viewer of the mixed reality device 13 a . That is, the item "glows" in some manner in the view of the user.
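- The highlighting rule might be sketched as follows (the field names and the simple units-based shrinkage formula are assumptions; the description leaves the exact metric, e.g., absolute dollars lost or percent over a benchmark, to configuration):

```python
def shrinkage(replenished, sold, counted):
    """Units unaccounted for: stock added, minus stock sold, minus stock
    still present per an RFID or other real-time inventory count."""
    return replenished - sold - counted

def items_to_highlight(items, threshold):
    """Names of items whose shrinkage exceeds the session threshold."""
    return [it["name"] for it in items
            if shrinkage(it["replenished"], it["sold"], it["counted"]) > threshold]

catalog = [
    {"name": "razor blades", "replenished": 120, "sold": 90, "counted": 10},
    {"name": "paper towels", "replenished": 60, "sold": 55, "counted": 5},
]
# At threshold 5, only "razor blades" (20 missing units) would "glow".
```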
- the user may then select the item using voice or hand gestures and, as a result, see more detailed information such as shrinkage data, shrinkage calculation input data, historical shrinkage data, comparisons of this data to data from other retail establishments, retail categories, competing brands, and so forth.
- Another mode is a comparative mode, where the user may view comparative information for an item relative to other items in the retail establishment in its inventory category, department, or relative to all items in the retail establishment, or relative to this item or group of items in other retail establishments or groups of retail establishments (e.g., the retail establishment's regional division).
- Another mode allows the user to use the device interface to enter notes as the user reviews the real and virtual objects and information presented.
- Another mode allows the user to use the device while engaged in such an AR/VR session to use standard voice communications or voice-to-chat technology available on the mixed reality devices to communicate with a second (remote) user or group of users about what they are seeing and thinking, or to compose emails or text messages for immediate or future sending.
- the above mentioned application involves the session manager when in loss prevention management mode.
- Other modes might include “merchandising mode” or “planogram mode” in which the items in view of the user are studied with respect to their location on the shelf and in the retail establishment, and how that relates to quality of sales.
- the session manager could operate in generic or “mixed” mode in which any unusual information about an item is virtualized as some visual object and presented to their view.
- merchandising mode as an example, the user might consider an item in view, and highlight it with a hand gesture or voice command.
- the device's user interface might then give graphics showing excursion above or below anticipated sales for a selected period of time, sales comparisons with other directly competing brands, other items in the sales category, the same item in other retail establishments in the vicinity, region, or nation, or comparisons with other sales benchmarks.
- a very valuable comparison is the comparison of sales (turnover frequency) of that item in its current shelf location, compared with sales in other past locations in the same retail establishment.
- the user may not need to select specific items when doing a retail establishment analysis in merchandising mode.
- as items come into the field of view, the AR/VR session manager notes their exceptionality with respect to one of the metrics and benchmarks mentioned above and highlights the item in the field of view in the mixed reality device viewer.
- Another application involves the analysis of the effectiveness of sales promotions.
- the user can view an item and see projected by the AR/VR session manager onto the device screen information related to sales (turnover frequency) relative to changes in retail price over selectable periods of time (in this retail establishment or a collection of comparable retail establishments).
- a particular application of the AR/VR system related to retail establishment promotions is its correlation to foot traffic in each aisle.
- As the user stands in a given location in an aisle, he or she should be able to see (based on menu selections via the device's user interface) dwell time in front of each shelf location (of a selected area size, over a selected period of time) as a function of particular promotions. For instance, the spot on the floor might show various shades of green or red depending on whether foot traffic is a little or a lot above or below normal during the promotions period of the selected retail item.
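- A simple way to realize this shading (the exact color mapping is an assumption; the description only requires greener for above-normal traffic and redder for below-normal):

```python
def traffic_shade(dwell_seconds, baseline_seconds):
    """Map dwell time vs. a historical baseline to (color, intensity 0..1):
    green when traffic during the promotion is above normal, red when below."""
    if baseline_seconds <= 0:
        return ("gray", 0.0)
    ratio = dwell_seconds / baseline_seconds
    if ratio >= 1.0:
        return ("green", min(1.0, ratio - 1.0))  # deeper green = bigger lift
    return ("red", min(1.0, 1.0 - ratio))        # deeper red = bigger drop
```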
- the AR/VR system may also generate a highlight on an area rendered in the mixed reality device, where a retail establishment promotion is supposed to be in place at a given date and time, but is not in place. For example, a promotion is scheduled to be on an aisle end-cap at 9:00 am on Tuesday. If the promotion is not in place at the location and scheduled time, the AR/VR system can generate an alert that is rendered on the mixed reality device that could include a description and/or image of the missing promotion and a proposed email or text to the person in the retail establishment responsible. Using this function, and similar functionality related to planned advertising, price changes, warnings, and so forth, the user may simply walk through the retail establishment and see (and record for later viewing, logging, or reporting) visual representations of differences between the actual and intended state of the retail establishment.
- FIGS. 3 and 5 and the application ( FIG. 4 ) described above are enhanced using an (artificial intelligence based) cognitive assistant.
- the cognitive agent 120 is an example of one way of implementing such an agent. That is, those skilled in the art of expert systems, natural language processing (NLP) based cognitive agents, and similar AI based agents will be able to immediately envision various permutations of an agent which enables the applications described below.
- NLP natural language processing
- Input text 121 enters the cognitive agent via the NLP pre-processor 122 .
- voice data are converted to text by a voice to text sub-system 122 a using standard voice recognition technology.
- the NLP pre-processor 122 parses the sentence into “tagged” words and phrases (i.e., the NLP pre-processor 122 labels each word with its part of speech such as “noun”, “verb-transitive”, and so forth).
- the output of the pre-processor 122 is a first data structure that represents the input text with words and phrases connected to each other and annotated with indicators of meaning (or "sense" or "context") pointing to the words and phrases. These meanings are extracted or determined using statistics and pattern-based comparisons to a semantic network 124 , which, along with the lexicon 126 , stores the cognitive agent's knowledge base of language (domain knowledge ontologies 128 ).
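- The pre-processor's tagged output can be illustrated with a toy lexicon (this tag set and word list are stand-ins for illustration, not the lexicon 126 or semantic network 124):

```python
# Toy part-of-speech tagger sketching the *shape* of the NLP
# pre-processor's output: words paired with their labels.
TOY_LEXICON = {
    "has": "auxiliary", "shrinkage": "noun", "increased": "verb",
    "this": "determiner", "item": "noun", "for": "preposition",
}

def tag(sentence):
    words = sentence.lower().strip("?.!").split()
    return [(w, TOY_LEXICON.get(w, "unknown")) for w in words]
```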
- the representation of the input text (e.g., the first data structure with determined meanings) is passed to a response macro-planner 130 that determines information content of a response from the cognitive agent 120 .
- the macro-planner 130 can use non-monotonic planning with default logic and patterns or templates (from episodic memory 132 ) and also domain knowledge 128 to produce a second data structure that contains the response.
- This data structure does not contain "sentences" per se, but rather concepts and concept qualifiers (for time, quantity, type and degree of emphasis, etc.) that together comprise the logic behind the response from the cognitive agent 120 .
- This second data structure is passed to a micro-planner 134 that produces sentences from the second data structure for the response from cognitive agent 120 using the language knowledge contained in the lexicon 126 and semantic network 124 .
- the sentences can be “polished,” e.g., corrected, for grammar, (e.g., tense, singular/plural matching, etc.) in a finisher 136 that outputs text output 137 .
- Domain knowledge ontologies and episodic memory are enhanced over time as the cognitive agent is given new information, and as old faulty information is corrected. That is, these can be written to selectively by “actuator” functionality in an extension of the response macro-planner.
- the AR/VR session manager 80 may generate input text that is sent directly to the cognitive agent 120 in order to apprise the cognitive agent of the context of the conversation with the user of the mixed reality device 13 a . For instance, when the user asks the cognitive agent 120 : “Has shrinkage for this item increased more than expected this month?” the cognitive agent 120 may not know to what “this” refers, and will also not know the time period in question without further information. The agent could ask the user for time and item ID clarification, but it is more convenient for the user if the cognitive agent 120 is passed information directly from the session manager 80 about the item that is highlighted, and what time period is selected or configured.
- the session manager 80 makes the session state visible to the cognitive agent 120 , via a web-service that is not controlled by or visible to the user.
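- That shared state can be sketched as a small context dictionary the agent consults to resolve deictic references such as "this item" (the key names are assumptions for illustration):

```python
# Session state mirrored by the session manager and exposed (via a
# web service, per the description) to the cognitive agent.
session_state = {"highlighted_item": "SKU-10442", "selected_period": "2016-06"}

def resolve_question(question, state):
    """Substitute session context for 'this item' and 'this month'."""
    q = question.replace("this item", state["highlighted_item"])
    return q.replace("this month", state["selected_period"])
```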
- the knowledge base of the cognitive agent 120 is specifically trained for the particular retailer's markets, key customer base and associated demographics, item types and uses, and any other knowledge categorization which makes it easier for the agent to narrow questions and better interpret the intention behind the user's questions. There are many questions that might be asked by the user of the mixed reality device 13 a of the cognitive agent 120 .
- the sales promotion application 70 can be enhanced as the cognitive agent 120 allows the user to ask questions directly through the mixed reality device 13 a , which are sent as input to the AR/VR session manager system 80 (see FIG. 7 ).
- the user can consider an item and the accompanying information on sales vs. promotional price and can drill down to ask the cognitive agent 120 (via the voice-to-chat interface) various questions.
- the cognitive agent 120 is made sufficiently flexible in its ability to process and interpret natural language, and it is sufficiently broad in its retail practices knowledge base that it can answer many and any such questions without requiring an enumeration and “hard coding” of specific questions.
- the cognitive agent 120 answers these questions by use of an Artificial Intelligence platform that takes the output of the cognitive agent 120 and parses words in the output. Such capability has been demonstrated to a limited extent by the Apple Siri platform, and to a much greater extent by IBM's Watson AI engine and IPSoft's Amelia cognitive agent.
- the cognitive agent 120 can answer an actual question such as "where has shrinkage increased over the prior month" by the session manager system 80 having a subsystem, "input analyzer 86 a ," that processes input from the cognitive agent 120 , recognizes that shrinkage is being asked about, accesses current and historical data, calculates the shrinkage (if any), and delivers the calculated analytics back to the cognitive agent 120 to place that value into a sentence that is rendered back to the user.
- Another embodiment involves the visual correlation of item sales. For example, if the user of the mixed reality device 13 a highlights a particular item on a shelf or display, the device 13 a may show all items in the vicinity whose sales correlate most closely with the selected item.
- Another embodiment specifically includes an application of the described system in which the sales, shrinkage, or associated foot traffic correlations for a particular retail item or group of items is shown to the user, via the mixed reality device 13 a , for various alternative retail establishment layouts including but not limited to the current (physical) retail establishment layout.
- the user or users engage in a collaborative session to compare retail establishment performance impacts for those layouts, and investigate (particularly when the cognitive agent 120 is used) deep implications of retail data analytics applied to retail establishments with varying retail establishment layouts that are shown visually on the mixed reality device 13 a.
- a previously recorded session for a particular retail establishment is replayed to simulate the retail establishment visit for the original user or a new user.
- One such embodiment includes the case where images are captured (for example, while the user walks through the retail establishment) and data/information virtualization and projection is applied later to produce the augmented walk-through session.
- Another embodiment includes a session (produced by any of the techniques described above) involving a series of real and virtual objects presented in the session, with subsequent application of the cognitive agent 120 to answer user questions and record user thoughts and expertise as the user watches the previously recorded session.
- the system 40 of FIG. 3 can be enhanced as a system 140 adapted for other modes.
- the enhanced system in addition to including the features of FIG. 3 , also includes feeds from social media systems 144 a - 144 n .
- This system can be adapted for various modes of operation that execute algorithms that apply user filtered social media feeds and merchandizing parameters with respect to physical items.
- One such mode is a merchandising/training mode.
- the user walks around the retail store and views (real/tangible) items for sale and other tangible items and things of interest (such as advertising signage, models and displays).
- a suitable mode of operation e.g., merchandising mode, general use mode, sales analysis mode, etc.
- the user views a set of such tangible items (visible through the display of the mixed reality device 13 a ), selects one item or a small group of related items from the view in the mixed reality device 13 a , and sees a set of virtual objects projected by the display of the mixed reality device 13 a , as generally discussed above.
- the enhanced system 140 includes a processing system 142 that accesses data from the databases 42 - 48 to construct these virtual objects that represent and correspond to data trends related to the tangible item or items, but with the processing to construct these virtual objects modified to include external data trend information.
- external data trend data are data related to social media.
- the processing system 142 receives feeds from social media systems 144 a - 144 n via the Internet and executes social media sales promotion application 150 that accesses 152 feeds from social media systems 144 a - 144 n .
- the processing system 142 accesses these various types of social media feeds, filters 154 those feeds and extracts 156 relevant data for use by the processing system 142 in construction of virtual objects.
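The filter-and-extract step described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the `FeedItem` structure and the source names are assumptions for illustration only.

```python
# Illustrative sketch only (not the patent's implementation): filter social
# media feed items down to enabled sources that mention a retail item.
from dataclasses import dataclass

@dataclass
class FeedItem:
    source: str  # e.g., "twitter", "facebook", "instagram" (hypothetical)
    text: str

def filter_feeds(items, enabled_sources, keyword):
    """Keep items from enabled sources whose text mentions the keyword."""
    kw = keyword.lower()
    return [i for i in items
            if i.source in enabled_sources and kw in i.text.lower()]

feeds = [
    FeedItem("twitter", "Love the new Acme blender!"),
    FeedItem("facebook", "My Acme blender broke after a week"),
    FeedItem("instagram", "unrelated vacation photo"),
]
relevant = filter_feeds(feeds, {"twitter", "facebook"}, "acme blender")
```

The same per-source on/off filtering corresponds to the source check boxes exposed later through the configuration interface.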
- one of the virtual objects that may be constructed is a virtual object that represents a pie chart that shows interest levels associated with the real-world object when rendered on the mixed reality device 13 a .
- These objects in the form of the pie chart can depict percent positive and percent negative comments, e.g., from mining of contents of messages from various social media platforms, such as Twitter, Facebook, Instagram, etc.
- the processing system 142 obtains content from posted references to the item or current promotion or news about the item, along with the top (e.g., top three, top five, etc.) positive and negative words and short phrases in the positive and negative references.
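A minimal sketch of deriving the pie-chart data described above (percent positive/negative mentions plus the top positive and negative words) from a set of mentions; the tiny word lists below are placeholders for a real sentiment classifier and are purely illustrative.

```python
# Hedged sketch: compute pie-chart data (percent positive/negative mentions)
# and the top-N positive/negative words from raw mention texts.
from collections import Counter

POSITIVE = {"love", "great", "best"}   # stand-in for a real classifier
NEGATIVE = {"broke", "bad", "worst"}

def mention_stats(texts, top_n=3):
    pos = neg = 0
    pos_words, neg_words = Counter(), Counter()
    for t in texts:
        words = [w.strip(".,!?") for w in t.lower().split()]
        p = sum(w in POSITIVE for w in words)
        n = sum(w in NEGATIVE for w in words)
        if p > 0 and p >= n:
            pos += 1
            pos_words.update(w for w in words if w in POSITIVE)
        elif n > 0:
            neg += 1
            neg_words.update(w for w in words if w in NEGATIVE)
    total = (pos + neg) or 1  # avoid divide-by-zero when nothing matched
    return {
        "pct_positive": 100.0 * pos / total,
        "pct_negative": 100.0 * neg / total,
        "top_positive": [w for w, _ in pos_words.most_common(top_n)],
        "top_negative": [w for w, _ in neg_words.most_common(top_n)],
    }

stats = mention_stats([
    "love this blender, best purchase",
    "great value",
    "it broke, worst blender",
])
```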
- the virtual objects are constructed in a similar manner as discussed above and include a set of icons (e.g., several small pie charts) rendered in, e.g., a corner of the display of the mixed reality device 13 a, and when one of these is selected by the user, a fuller and richer set of related data are shown to the user on the display.
- Such data can include trending history of the positive/negative response ratio, occurrence trends and history of selected words and phrases in the social media, and so forth.
- the processing system 142 executes various algorithms that present such data to the user, according to merchandizing principles used or to be developed in the industry. The algorithms also apply one or more merchandizing parameters associated with the set of physical objects.
- processing 150 for construction of such virtual objects receives a set of social media feeds, filters the feeds according to criterion or criteria 152 .
- the processing 150 generates 154 a set of virtual objects that will be sent to the mixed reality device 13 a to be rendered in juxtaposition with physical views of the objects.
- the processing 150 can also invoke 158 the cognitive agent that receives user inputs regarding the physical items, analyzes these inputs and detects ambiguities, 160 .
- the processing 150 resolves 162 ambiguities by accessing data (e.g., highlighted object in mixed reality device 13 a display), directed from the mixed reality device 13 a.
- By virtual object enhancement is meant that the view of the tangible item on the screen of the AR system is highlighted or augmented (e.g., by color outline, color hue and brightness change, and so forth).
- By exceptional in some way is meant that the item in view is discussed in social media with above-average frequency or intensity, with respect to judging criteria that are empirically set.
- the selection of highlighted items is filtered with respect to other analytical results such as inferred demographics of persons posting feeds to social media platforms.
- the user walks around the store and sees highlighted all items that are trending upward in social media, relative to some threshold (e.g., 10% more social media daily references in the past seven days relative to daily references in the preceding seven days). Similarly, the user sees other items trending downward.
- the time frame for comparison may be selected to correspond to promotions, and may depend upon the particular item.
- the comparison to determine whether or not the item is highlighted (augmented) in the AR system view is based on five days compared to baseline for item A and nine days compared to baseline for item C.
- the baseline could be any convenient number of days in the recent past that provides a good benchmark for the normal (non-promoted) daily frequency of mention in social media for the item.
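The trending comparison described above might be sketched as follows, assuming per-day mention counts with the most recent day last; the window lengths and 10% threshold are parameters, matching the per-item windows in the item A / item C example.

```python
# Sketch of the trending test: highlight an item when its average daily
# mentions in a recent window exceed the preceding baseline window's
# average by a threshold (e.g., 10%). Window lengths may vary per item.
def is_trending_up(daily_counts, window=7, baseline=7, threshold=0.10):
    """daily_counts: mentions per day, oldest first, most recent last."""
    recent = daily_counts[-window:]
    base = daily_counts[-(window + baseline):-window]
    if len(base) < baseline or sum(base) == 0:
        return False  # not enough history for a fair comparison
    recent_avg = sum(recent) / len(recent)
    base_avg = sum(base) / len(base)
    return recent_avg >= base_avg * (1 + threshold)

counts = [10, 11, 9, 10, 10, 11, 9,    # baseline week (avg 10.0)
          12, 13, 12, 14, 12, 13, 12]  # most recent week (avg ~12.6)
```

A downward-trending test would mirror this with the inequality reversed.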
- an outside retail analytics service could be used to obtain a list of discussed items (e.g., the hottest 100 items in the retail store).
- the session mode manager uses this data along with query results from another service that obtains data on one or more merchandizing parameters associated with the set of physical objects (e.g., an inventory stock level and location service) to produce a (virtual object based) map of the store showing the location of each hot item.
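The join of the hot-item list with stock and location data might look like the following sketch; the record layout returned by the inventory/location service is a hypothetical stand-in, not something the text specifies.

```python
# Illustrative join (field names hypothetical): combine a "hot items" list
# from an outside retail analytics service with stock-level and location
# data to build the store-map entries for the virtual-object map.
def hot_item_map(hot_items, inventory):
    """inventory maps item name -> {"aisle": ..., "stock": ...}."""
    entries = []
    for rank, item in enumerate(hot_items, start=1):
        info = inventory.get(item)
        if info is None:
            continue  # item not carried in this store
        entries.append({"rank": rank, "item": item,
                        "aisle": info["aisle"], "stock": info["stock"]})
    return entries

hot = ["blender", "air fryer", "toaster"]
inv = {"blender": {"aisle": "A3", "stock": 14},
       "toaster": {"aisle": "B1", "stock": 0}}
map_entries = hot_item_map(hot, inv)
```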
- the user may physically walk through the store to each item or take a virtual tour with the mixed reality device 13 a showing the most recent images of the display area for each item in turn.
- Execution of the one or more merchandizing algorithms, using the data derived from the filtered social media feeds and, in some implementations, also using one or more merchandizing parameters associated with the set of physical objects, provides as output data that describe one or more social media derived characteristics ascribed to the products, and that can also include, in some implementations, one or more merchandizing values associated with one or more of the physical objects in the view of the set of physical objects.
- the user may select items (those items on the “hot list” or those other items in the vicinity of hot items) and see detailed results of social media analytics, including but not limited to:
- the session mode manager provides via a configuration interface, a list of well-known social media sources, each with a check box adjacent to it, such that the user may turn on or off each source, one by one, and thus allowing fine control in how the social media sources are filtered for use as inputs to the applications described herein.
- This configuration should also allow the user to input new sources not available in pre-configured or pre-populated versions of the list.
- the virtual objects showing social media reference intensity details will include a list of those social media sources which have contributed most heavily (relative to some threshold criterion) to the displayed results.
- the augmented reality system and applications described above are enhanced with annotation functionality and/or a comments log in which a store manager or other system user filling some investigatory role is able to record (as voice recordings or voice-to-text notes, hand-written notes captured on a portable device via stylus, or typed notes) thoughts, comments, reactions, action items, and/or future priorities and plans that occur to them as they take a real or virtual tour of the retail store (or a virtual tour of a list of items related to a real or imagined hypothetical store).
- annotations are stored in a log accompanying and synchronized with the AR session.
- the log of annotations is produced through multiple tours by one or more users and thus, in the most general case, will be a composite log of the idea and insight contributions of multiple users.
- the system specifically includes functionality in the “session mode manager” and “session log and notes” modules (see FIG. 5 ), which allows a user to find recordings, via queries, of previously recorded real or virtual tour sessions matching desired criteria and use those to communicate deep insights about the retail operation among retail enterprise employees, partner representatives, vendors, and retail customers.
- Enabling functionality in these modules is implemented as sub-modules or replicated instances of modules or sub-modules; these modules and sub-modules may be local-server-based, AR-device-based, cloud-based, or a combination of those deployment options.
- the number of session log files will grow large and these can be stored economically, e.g., in a cloud-based web-service-accessible database.
- the regional manager may use search menu functionality exposed in the session mode manager interface to request all recorded sessions made in stores that have sales performance in the top 3% of all stores (to use a specific number for purposes of illustration) for a particular product or product category over (say, for example) the past year.
- the session mode manager queries an external database web-service or analytics engine service to find the store IDs corresponding to the top sales for that item over the desired time interval. Once the session mode manager is in possession of these store IDs, it uses them to search the session log and notes module to find the list of all recorded sessions for those stores over the time period of interest.
- the session manager can sort the recorded sessions in the list according to appropriateness as judged by the product or product category relevance (i.e., push to the top of the search list all of those session recordings which contain the largest numbers of references to the product of interest).
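The two-step search and relevance sort described above can be sketched as follows; the session record fields (`store_id`, `annotations`) are assumptions for illustration.

```python
# Sketch: keep sessions recorded in the store IDs returned by the
# analytics query, then sort by how often the product of interest is
# referenced in the session's annotations (most references first).
def find_sessions(sessions, store_ids, product):
    matches = [s for s in sessions if s["store_id"] in store_ids]
    return sorted(matches,
                  key=lambda s: s["annotations"].lower().count(product.lower()),
                  reverse=True)

sessions = [
    {"id": 1, "store_id": "S1", "annotations": "blender demo; blender endcap"},
    {"id": 2, "store_id": "S2", "annotations": "toaster aisle reset"},
    {"id": 3, "store_id": "S1", "annotations": "blender promo"},
]
ranked = find_sessions(sessions, {"S1"}, "blender")
```

Session 1 (two references to "blender") sorts ahead of session 3 (one reference); session 2 is excluded because its store ID is not in the query result.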
- a user such as the regional manager mentioned above, may use search methods similar to those described above to find session recordings matching a single criterion or multiple combined search criteria.
- the user may add new annotations and notes (as text, audio recording, or both) that describe how the sessions might be used for a specific purpose (e.g., training or other business initiatives such as presentations to partners or trade groups, public press, standards organizations, vendor representatives, etc.).
- the user here may be the same regional manager or other individuals, for example, an individual responsible for organizing training activities for an upcoming new store manager orientation event.
- annotation refers to a single key word or phrase, or a sentence or sentences that contain a set or subset of key words or phrases.
- the annotation may also be associated with a set of key words or phrases through synonyms or phrases of equivalent meaning as mapped through some appropriate lexicon or semantic mapping.
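A minimal sketch of matching a query term against annotation text through a synonym lexicon, as described above; the lexicon here is a toy mapping standing in for an appropriate lexicon or semantic mapping.

```python
# Toy synonym lexicon (illustrative only): each query term maps to a set
# of words treated as equivalent when matching annotations.
LEXICON = {
    "signage": {"signage", "sign", "banner"},
    "promotion": {"promotion", "promo", "sale"},
}

def annotation_matches(annotation, query_terms):
    words = set(annotation.lower().split())
    for term in query_terms:
        synonyms = LEXICON.get(term, {term})  # fall back to the term itself
        if words & synonyms:
            return True
    return False

hit = annotation_matches("new banner near the endcap", ["signage"])
```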
- The organization of modules in FIG. 5 is for purposes of illustration, and it is feasible to combine modules into one module or to split a module into multiple modules of more specific purposes.
- sessions are searched for and annotated or tagged, and subsequently used to demonstrate to various users differences in or contrasting methods related to retail item merchandising aspects including but not limited to store layout options, promotions practices details, product placement, product grouping, signage, and so forth.
- a user interested in investigating retail best practices could search for sessions recorded in high performance stores, or in any store as reviewed and annotated by expert users skilled in noting good and bad retail practices. The user could then view those sessions to hear (or read) the annotations of the expert users in order to gain insights into their expert skills.
- a user views a previously recorded session and asks questions or makes comments about their thoughts and concerns, which are then recorded as new annotations in the session.
- the newly annotated session (copy) is viewed by an expert or “teacher” user who records answers to the questions and concerns, and this copy with answer annotations is subsequently reviewed by the first user in order to obtain those answers.
- the system can be configured to allow recorded sessions to be used to communicate ideas about retail practices between retail experts and representatives of consumer product manufacturers.
- For example, a merchandising department of a particular consumer product manufacturer (i.e., brand owner) may be given access to such a session.
- the consumer product manufacturer may add their own annotations about promotions options and this can be subsequently viewed in the recorded session by merchandising people in the retail organization.
- This may be facilitated by having shared access areas of storage in the session log and notes module or by providing specific functionality in the session mode manager that allows the original producer of a session to specifically authorize specific users or user organizations access to the session.
- the session mode manager further allows subsequent users (reviewers adding new annotations) to modify (more specifically to narrow) access control to the session.
- the original producer or annotator of a previously recorded session controls access by recording secret keywords or phrases in the session, preferably but not necessarily tagged for “access control”, such that any user who knows the keyword or phrase may gain access to the session, or to a particular annotated version of the session.
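The keyword-based access control described above might be sketched as follows; storing a hash of the access phrase (rather than the phrase itself) is an implementation choice for the sketch, not something the text specifies.

```python
# Hedged sketch of keyword-based session access control: a session stores
# a hash of its access phrase; any user presenting the phrase gains access
# to the session or to a particular annotated version of it.
import hashlib

def tag_for_access(session, phrase):
    session["access_hash"] = hashlib.sha256(phrase.encode()).hexdigest()

def may_access(session, phrase):
    expected = session.get("access_hash")
    return (expected is not None
            and expected == hashlib.sha256(phrase.encode()).hexdigest())

session = {"id": 42}
tag_for_access(session, "open sesame")
```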
- the session producer or a subsequent annotator uses annotations to mark a session for copy to one or more areas or lists in the log set aside for specific other users inside or outside of the organization or company of the first user.
- the system includes functionality to allow the superposition of shelf layout plan views (i.e., “plan-o-grams”) by one user onto the image or scene of a particular part of the store for view and consideration by a second user.
- a merchandising department of a brand manufacturer may produce a promotions plan for a product, with a particular shelf layout and with specific shelf edge advertising, and/or floor decal advertising (for placement on the floor in front of the display), and/or other promotions details.
- This promotions plan is captured by producing an AR session in a real store or a demo store convenient for the merchandising department employees, and then cutting out and saving the images from that session.
- These saved images are placed in a library or some other suitable repository where they can be accessed automatically or manually and superimposed into a new session (recorded previously by the retailer employees) that shows a different store layout.
- the retailer produces a session of a new store layout and conveys this to the brand manufacturer representatives so that they can superimpose signage or other images and thus fabricate a virtual scene of one or more new promotions options for a particular retail product.
- the session with annotations is tagged for a specific user in the store (i.e., a subordinate referenced by name or function), and a text or email message is sent to that user instructing them to view the session using the AR device or on a computer terminal, tablet, or smart phone application emulating the screen of the AR device.
- a user may view two related sessions which are synchronized so as to facilitate comparison. For example, if two sessions (A and B) are recorded at two different times (for example, several months apart) in the same store, these are viewed subsequently by a user, and the user may physically or virtually walk through the store, requesting the presentation of views A and B (toggling between the two) to see differences between the two at any given location in the store. This might be done in order to understand and appreciate differences between two stores or versions of the same store corresponding to two differing performance situations (i.e., the change in the store has led to significant increase or decrease in store performance, and the A-to-B session comparison is made to help the user see and appreciate how the stores differ in the eyes of shoppers).
- a session is recorded at a previously agreed upon time in a selected store or stores by a store representative, and the session is made available (e.g., in a shared storage area or via an access-controlled web service) to other employees in the retail organization, and/or to employees of a consumer brand manufacturer so that these reviewers may verify that promotions of items, placement and stocking of items, shelf allocation, and other details of display and stocking, as prescribed by contractual agreement, are being complied with by the retailer.
- the actual producer of the session may be an employee of the retailer, a representative of the brand manufacturer, a paid mystery shopper, or any other appropriate third party whose responsibility it is to visit the store and produce the session record.
- the system of FIG. 7 (as with that of FIG. 5 ) and the applications described above are enhanced using an (artificial intelligence based) cognitive assistant 120 , as generally described for FIG. 6 .
- This cognitive assistant would be an expert system that executes natural language processing (NLP).
- These cognitive agents 120 and similar AI based agents can assist with obtaining information in response to queries input by the user as text, especially as the questions pertain to activity regarding social media.
- These questions enter the cognitive agent 120 via the NLP pre-processor 122 generally as discussed above for FIG. 6 and are processed as in FIG. 4 .
- the cognitive agent 120 may improve the performance of the system as follows.
- search criteria and search constraints may be complex enough that a simple and traditional menu-based search is not feasible or practical.
- the user may ask the cognitive agent 120 to find the best comparison store with similar demographics, same store sales, and weather history, or to find the item having the highest social media activity.
- the cognitive agent 120 may search its knowledge base and find that it needs to clarify the meaning of some of these constraints before it can conduct an effective search.
- the agent may ask if “weather” in this case means average temperature, number of severe storm days, or some other qualifier.
- the user may then think for a moment and then decide that the key issue is average high temperature, which is communicated to the cognitive agent.
- the cognitive agent 120 may then conduct a search and find a list of five stores which match (roughly) demographics and same store sales.
- the agent may ask if “highest” in this case means highest positive or highest negative or merely just an aggregation of activity.
- the user may then think for a moment and then decide that the key issue is highest aggregated activity which is communicated to the cognitive agent.
- the cognitive agent 120 may then conduct a search and find a list of item(s) that have the highest activity.
- the cognitive agent 120 may summarize the findings and then ask the user which is of most interest. The user specifies one, and then the conversation goes on from there as the agent and user interact with each other until the user has a session that they wish to view.
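The clarification step in the exchange above can be sketched as follows: before searching, the agent scans the query for constraint words its knowledge base marks as ambiguous and emits clarifying questions. The table of ambiguous terms is illustrative only, echoing the "weather" and "highest" examples above.

```python
# Sketch of the agent's clarification step (illustrative, not the
# patent's NLP implementation): detect ambiguous constraint words in a
# query and generate one clarifying question per detected word.
AMBIGUOUS = {
    "weather": ["average temperature", "number of severe storm days"],
    "highest": ["highest positive", "highest negative",
                "aggregated activity"],
}

def clarifying_questions(query):
    q = query.lower()
    return [f'By "{term}", do you mean {" or ".join(options)}?'
            for term, options in AMBIGUOUS.items() if term in q]

questions = clarifying_questions(
    "find stores with similar weather and the highest social media activity")
```

The user's answers would then be folded back into the search constraints before the agent queries its knowledge base.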
- the cognitive agent 120 can produce annotation or log structures of a session review as discussed above, since the cognitive agent 120 specializes in capturing the context and semantic-specific history of conversation in its episodic memory. These structures would include a session ID and cross references to other similar related sessions, tags to content regarding the session as well as any queries and/or results.
- the cognitive agent 120 notes (via an interface with the session mode manager) the session ID and cross-references this to the specific episodes or times of conversation in its own memory. When a user reviews a session and makes specific notes and comments, these can therefore be cross-referenced with similar notes (semantically speaking) made by other users of the current session.
- the cognitive agent 120 alerts the various users (via outside messaging such as email or text message) of their apparently mutual interest. Via its semantic network and episodic memory the cognitive agent 120 makes an ideal medium through which to associate related issues in different stores with a common store item, problem type, and so forth. For instance, a user may ask the cognitive agent 120 to show a list of all sessions or session clips associated with mis-implemented promotions for a particular product from a particular manufacturer (and perhaps narrowed with other criteria).
- the cognitive agent 120 records a conversation or soliloquy made by one or two employees with respect to the correct way to produce a store display, or perform some store task, and then the agent is used to associate this recording to all logged sessions that are relevant (i.e., that involve review of the product, or discuss the process).
- the above-described applications include specific features such as reading the annotation record of a session into the NLP based knowledge base to expand the capability of the agent.
- the agent is “taught” about a session using its ID, keywords, annotation text, and any other readable or hearable information about the session.
- the agent may retrieve the session ID or information rooted in the session's annotations to respond (in whole or in part) to queries of its own memory that it might make while in conversation with a user about retail practices and issues.
- Servers can be any of a variety of computing devices capable of receiving information, such as a server, a distributed computing system, a rack-mounted server, and so forth. A server may be a single server or a group of servers that are at a same location or at different locations. Servers can receive information from a client user device via interfaces. Interfaces can be any type of interface capable of receiving information over a network, such as an Ethernet interface, a wireless networking interface, a fiber-optic networking interface, a modem, and so forth. A server also includes a processor and memory, and a bus system including, for example, an information bus and a motherboard, can be used to establish and to control information communication between the components of the server.
- Processor may include one or more microprocessors.
- processor may include any appropriate processor and/or logic that is capable of receiving and storing information, and of communicating over a network (not shown).
- Memory can include a hard drive and a random access memory storage device, such as a dynamic random access memory, computer readable hardware storage devices and media, and other types of non-transitory storage devices.
- Embodiments can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof.
- Computer programs can be implemented in a high-level procedural or object oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language.
- Suitable processors include, by way of example, both general and special purpose microprocessors.
- a processor will receive instructions and information from a read-only memory and/or a random access memory.
- a computer will include one or more mass storage devices for storing information files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks.
- Storage devices suitable for tangibly embodying computer program instructions and information include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
Abstract
Description
- This application claims priority under 35 U.S.C. § 119(e) to provisional U.S. Patent Application 62/361,053, filed on Jul. 12, 2016, entitled: "Holographic Technology Implemented Security and Retail Solutions," the entire contents of which are incorporated herein by reference, and provisional U.S. Patent Application 62/361,669, filed on Jul. 13, 2016, entitled: "Holographic Technology Implemented Security and Retail Solutions," the entire contents of which are incorporated herein by reference.
- This description relates to operation of sensor networks such as those used with security, intrusion and alarm systems installed on commercial premises.
- It is common for businesses to have systems for detecting conditions at their premises and signaling the conditions to a monitoring station or to authorized users of the security system. For example, such buildings employ systems in the areas of fire detection, smoke detection, intrusion detection, access control, video surveillance, etc. Many different types of security sensors are deployed in commercial buildings. Types of sensors typically include motion detectors, cameras, and proximity sensors (used to determine whether a door or window has been opened). Such sensors constantly collect data that is used to determine whether an alarm should be triggered, and continue to collect data after an alarm is triggered.
- Retail establishments often use simple physical walk-throughs with users having smart-phone and/or tablet based presentations, and use conventional retail analytics applications, and verbal descriptions as tools used for analysis to investigate trends and potential explanations of observations suggested by data analytics.
- Augmented reality, virtual reality and mixed reality technologies are known. Generally, virtual reality refers to technologies that replicate an environment with a simulation of a user being immersed in the replicated environment. Augmented reality generally refers to technologies that present a view of a real-world environment augmented with computer generated data. Mixed reality, a relatively new term, generally involves technologies that merge real world and virtual world environments, where real and virtual objects exist and interact.
- According to an aspect, a system includes a server system including one or more processor devices, memory in communication with the one or more processor devices and a storage device that stores a program of computing instructions for execution by the processor using the memory, the program comprising instructions configured to cause the processor to receive a set of social media feeds, filter the social media feeds according to at least one criterion to derive data according to the at least one criterion; receive from a mixed reality device, an image containing a view of a set of physical objects, execute one or more merchandizing algorithms that apply the data derived from the filtered social media feeds to provide as an output data describing one or more social media related values associated with one or more of the physical objects in the view of the set of physical objects, generate a set of virtual objects that include the data describing the one or more social media values regarding the set of physical objects, and send the set of virtual objects to the mixed reality device.
- Aspects also include computer program products and methods.
- Disclosed are techniques that use mixed reality and/or augmented reality and virtual reality technologies to improve the analysis of retail processes and activity in retail establishments. The disclosed techniques use computer implemented techniques that obtain information from various electronic systems/devices in the physical world, which devices are exemplified by security systems, and merge that information into a virtual world of policies and analytics that involve retail systems that generate analytical information regarding customers and their preferences and needs. The techniques also involve processing in real time of various feeds from social media systems. These techniques are adapted for various modes of operation that execute algorithms that apply filtered social media feeds and merchandizing parameters with respect to physical items. This improves upon simple physical walk-throughs blended with smart-phone and tablet based presentations, conventional retail analytics apps, and verbal descriptions. In many cases the main tools of such analysis are limited to emails and spreadsheets. Using these conventional methods it is very time consuming and difficult, or even impossible, to investigate trends and potential explanations of observations suggested by data analytics.
- The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention are apparent from the description and drawings, and from the claims.
-
FIG. 1 is a schematic diagram of an exemplary networked security system. -
FIG. 2 is a block diagram of a generally conventional constrained device typically used in security systems. -
FIG. 3 is a block diagram depicting a sales promotion system integrated with a mixed reality system. -
FIG. 4 is a flow chart of an embodiment of a sales promotion application. -
FIG. 5 is a block diagram of an AR/VR session manager. -
FIG. 6 is a block diagram of an AI based cognitive agent. -
FIG. 7 is a block diagram of another embodiment of a system including retail process, store, and item information databases, the AR/VR device and session manager. -
FIG. 8 is a flow chart of exemplary processing. - As shown in
FIG. 1 , described herein are examples of an integrated platform 10 that integrates via a distributed network 11, mixed reality devices 13 a-13 c with security/intrusion/alarm/surveillance systems 15 a-15 c (typically including sensors 20, functional nodes 18 and typically including a panel not shown). - Examples of mixed reality devices 13 a-13 c are those in which the mixed reality devices incorporate a live, real world presentation of elements of the physical real-world with virtual elements that are calculated or produced from inputs and which are rendered on a display so that to a user these calculated or produced elements are perceived to exist together with the physical real world in a common environment. Examples of such mixed reality devices 13 a-13 c include Hololens® (Microsoft), a smart-glasses, cordless, Windows 10® (Microsoft) computer headset that includes various sensors, a high-definition stereoscopic 3D optical head-mounted display, and spatial sound to allow for augmented reality applications. Other mixed reality devices/augmented reality systems such as Google Glass® (Google) could be used. There are many such systems on the market of which these are two examples.
- The security systems 15 a-15 c typically include a panel (not shown), such as for an intrusion detection system, an intrusion detection panel wired or wirelessly connected to a variety of sensors deployed in a premises. Typically, such panels receive signals from one or more of these sensors to indicate a current state or value or that a particular condition being monitored has changed or become unsecure.
- The integrated
platform 10 includes data collection systems that are coupled to wireless sensor networks and wireless devices, with remote server-based monitoring via servers 14 and report generation. As described in more detail below, wireless sensor networks generally use a combination of wired and wireless links between computing devices, with wireless links usually used for the lowest level connections (e.g., end-node device to hub/gateway 16). In an example network, the edge (wirelessly-connected) tier of the network is comprised of resource-constrained devices 20 with specific functions. These devices 20 may have a small-to-moderate amount of processing power and memory, and may be battery powered, thus requiring that they conserve energy by spending much of their time in sleep mode. A typical model is one where the edge devices 20 generally form a single wireless network in which each end-node communicates directly with its parent node (e.g., 18) in a hub-and-spoke-style architecture. The parent node may be, e.g., an access point on a gateway or a sub-coordinator which is, in turn, connected to the access point or another sub-coordinator. - In
FIG. 1, the distributed network 11 is logically divided into a set of tiers or hierarchical levels 12 a-12 c. The mixed reality devices 13 a-13 c are shown in communication with the top one or two tiers or hierarchical levels 12 a-12 c. In FIG. 1, the lower level tier 12 c is illustrated divided into different premises 19 a-19 c for ease in explaining details of the applications that will be discussed below. The premises 19 a-19 c are each associated with one of the security systems 15 a-15 c. The security systems can be independent, meaning that there are no connections (as shown) among fully functional nodes of different premises, or dependent, meaning that there are connections (not shown) among fully functional nodes of different premises. - In the upper tier or
hierarchical level 12 a of the network are disposed servers and/or virtual servers 14 running a "cloud computing" paradigm that are networked together using well-established networking technology, such as Internet protocols, or which can be private networks that use none or part of the Internet. Applications that run on those servers 14 communicate using various protocols, such as, for Web Internet networks, XML/SOAP, RESTful web services, and other application layer technologies such as HTTP and ATOM. The distributed network 11 has direct links between devices (nodes), as shown and discussed below. Servers 14 execute analytics (analysis programs of various sorts) that are managed in concert with a session manager system 80 (FIG. 4). The servers 14 can access a database 23. - The second logically divided tier or
hierarchical level 12 b, referred to here as a middle tier, involves gateways 16 located at central, convenient places inside individual buildings and structures, e.g., 13 a-13 c. These gateways 16 communicate with servers 14 in the upper tier, whether the servers are stand-alone dedicated servers and/or cloud-based servers running cloud applications using web programming techniques. The middle tier gateways 16 are also shown with both local area network 17 a (e.g., Ethernet or 802.11) and cellular network interfaces 17 b. Each gateway is equipped with an access point (fully functional node or "F" node) that is physically attached to that gateway and that provides a wireless connection point to other nodes in the wireless network. The links (illustrated by lines not numbered) shown in FIG. 1 represent direct (single-hop MAC layer) connections between devices. A formal networking layer (that functions in each of the three tiers shown in FIG. 1) uses a series of these direct links together with routing devices to send messages (fragmented or non-fragmented) from one device to another over the network. - The distributed network topology also includes a lower tier (edge layer) 12 c set of devices that involve fully-functional sensor nodes 18 (e.g., sensor nodes that include wireless devices, e.g., transceivers or at least transmitters, which in
FIG. 1 are marked with an "F") as well as constrained wireless sensor nodes or sensor end-nodes 20 (marked in FIG. 1 with "C"). In some embodiments, wired sensors (not shown) can be included in aspects of the distributed network 11. - The distributed network 11 implements a state machine approach to an application layer that runs on the
lower tier devices. - Referring to
FIG. 2, a generic constrained computing device 20 that is part of the security/intrusion/alarm/surveillance systems (either integrated examples of such systems or standalone examples) is shown. A constrained device 20, as used herein, is a device having substantially less persistent and volatile memory than other computing devices, sensors, or systems in a particular networked detection/sensor/alarm system. Constrained device 20 includes a processor device 21 a, e.g., a CPU and/or other type of controller device, that executes under an operating system, generally with 8-bit or 16-bit logic rather than the 32- and 64-bit logic used by high-end computers and microprocessors. The constrained device 20 has a relatively small flash/persistent storage 21 b and volatile memory 21 c in comparison with the other computing devices on the network. Generally the persistent storage 21 b is about a megabyte of storage or less, and volatile memory 21 c is about several kilobytes of RAM memory or less. - The constrained
device 20 has a network interface card 21 d that interfaces the constrained device 20 to the network 11. Typically a wireless interface card is used, but in some instances a wired interface could be used. Alternatively, a transceiver chip driven by a wireless network protocol stack (e.g., 802.15.4/6LoWPAN) can be used as the (wireless) network interface. These components are coupled together via a bus structure. The constrained device 20 also includes a sensor 22 and a sensor interface 22 a that interfaces to the processor 21 a. Sensor 22 can be any type of sensor device. Typical types of sensors include temperature, simple motion, 1-, 2- or 3-axis acceleration force, humidity, pressure, selective chemical, sound/piezo-electric transduction, and/or numerous others, implemented singly or in combination to detect complex events. - The disclosed implementations of a constrained
device 20 can follow the current constraints on flash/persistent storage memory and RAM memory described above, i.e., less than 10-20 kilobytes of RAM/volatile memory, but can have more depending on configuration and, in some instances, the operating system. These constrained devices 20 are configured in this manner generally due to cost/physical configuration considerations. These types of constrained devices 20 generally have a static software image (i.e., the logic programmed into the constrained device is always the same). -
Constrained devices 20 execute a real-time operating system that can use dynamic programming support. The real-time operating system ("RTOS") executes and otherwise manages a dynamic set of user-defined, independent, executable functions or tasks that are either built into a loaded image (software and RTOS that executes on the constrained device), or that are downloaded during normal operation of the constrained device 20, or a combination of the two, with the former (built into the image) using as subroutines instances of the latter (downloaded during operation). Certain of the applications set forth below will cause systems to access these constrained devices 20 to upload data and otherwise control the devices 20 according to the needs of the applications. - In the examples below, a facility can be of any type but is typically, e.g., a commercial or industrial facility, with interior areas (buildings) and exterior areas that are subject to surveillance and other types of monitoring. The buildings can be of any configuration, from wide open spaces such as a warehouse to compartmentalized facilities such as labs/offices.
- The retail establishment includes the plural sensors 22 (
FIG. 1). In one implementation, a portion of the sensors 22 are r.f. hot spots or the like through which Wi-Fi or other Internet access services are provided. Sensors 22 that are hot spots or the like capture information, as a user moves about the retail establishment, from the user's possession of the mixed reality device 13 a, as will be discussed in further detail below. - Described now are techniques that allow the user of the
mixed reality device 13 a (an augmented reality/virtual reality (AR/VR) device 13 a) to interact with the physical environment of the retail establishment, such as the retail items on the retail establishment's shelves, aisles between shelves, and spaces in and around the retail establishment, together with "virtual" items, such as data objects that describe merchandising, promotions, inventory shrinkage, and other retail process concepts, in a unified and simplified way. - A user of the
mixed reality device 13 a may walk through a retail establishment, examine physical items on retail establishment shelves, and at the same time (via the processing 40 (discussed below) that integrates retail-based analytical processing with mixed reality system technology) observe visual representations of results of execution of the retail-based analytical processing. These results can be ubiquitous, meaning many or an abundant number of such execution results. - Examples of such results can be so-called "shrinkage levels" for the item or category of items over a selected period of time, "foot traffic," "dwell time," "conversion," and other retail-related data in specific areas of the retail establishment (e.g., the aisle passing by a particular retail item display) as a function of sales promotions of the item. Other examples include the visual representation of the correlation of sales between the physical item in view and other items in the retail establishment or available online. Still other examples include a correlation of profit of the particular item to profit of other items, etc.
- The
mixed reality device 13 a facilitates coordination of communication between two or more individuals discussing (in close proximity to each other in the retail establishment, or via remote communications) retail establishment processes, specific retail items, retail establishment layout issues, and so forth. - Some implementations include a cognitive agent (artificial intelligence based assistant or “information retrieval and analytics” assistant) that when used in conjunction with the
mixed reality device 13 a can produce a more powerful analysis tool. For example, the user may look at an item on the retail establishment shelf while the AR/VR platform displays virtual objects (like pie charts, graphs, tables, etc.) giving sales, shrinkage, merchandising, and other retail information related to that item, and at the same time the user may (using natural spoken language) query the potentially large collection of backend information systems by asking the cognitive agent simple questions related to the real and virtual objects on display. The cognitive agent, using a web service (or other forms of database access methods), its own internal structures (like representations of episodic memory, domain knowledge bases, and lexicons), and also externally accessible services, includes analysis engines to answer questions from the user. The combination of mixed reality device 13 a and AI agent gives the user a very powerful analysis tool stimulated by an initial visual input of objects in the physical world (i.e., natural inspection of items in view and conversations with others and/or questions to the platform). - Referring to
FIG. 3, a sales promotion system 40 as shown includes plural databases. The system 40 is configured to execute a sales promotion application 70 (further described in FIG. 4). A first database, store inventory database 42, in system 40 is shown as containing data on retail items, including item name, SKU number, retail price, wholesale price, location in the retail establishment (aisle no., shelf no., slot no., planogram reference no., etc.), number of items in stock, number of items on order, expected arrival date for ordered stock, inventory turnover for the item, and any other data associated with a given retail item. Store inventory database 42 is connected to the Internet 63 (or a private network) via store inventory web services 42 a. - The
system 40 also includes other databases that include retail establishment layout information (store layout database 44), including retail planograms, fixture locations, layout codes and/or layout version names for each retail establishment address, historical and future planned changes in layout, etc. (connected to the Internet 63 via store layout web service 44 a). The store layout database 44 could also include the layout of the same aisle or location for the same retailer's retail establishments that have the same configuration and demographics with the highest performance, as measured in different ways. - The
system 40 also includes an item information database 46 (connected to the Internet via item information web service 46 a) having photo images or icon representations of retail items, retail establishment shelf layouts, and other retail-related objects. Retail establishment performance data, personnel information, and other retail operations and merchandise data can be stored in a merchandizing and promotions database 48 connected to the Internet 63 via merchandizing and promotions web service 48 a. - In addition, the
system 40 includes a mobile AR/VR (augmented reality/virtual reality) device, e.g., mixed reality device 13 a, an AR/VR session management system 80, and a wireless (e.g., Wi-Fi) network 62 with wireless access points, such as that shown above in FIG. 1, within the retail establishment 60. Other implementations of such networks could be used. - The organization of the databases in
FIG. 3 is given as an example and is somewhat simplified relative to the design and implementation of actual enterprise-scale retail databases encountered in the commercial world. That is, no attempt is made in the figure to show how the databases are fragmented and deployed for data redundancy, scalability, fast data access, and so forth. Also, the segregation of various types of data into separate databases is simplified in FIG. 3, and it should be recognized that other database architectures can be imagined which are compatible with, and included as, additional embodiments. - The
mixed reality device 13 a, e.g., an "AR/VR device" (augmented reality/virtual reality), allows the user to see the real environment with data or "artificial images" imposed on a view of the real environment. Microsoft HoloLens and Google Glass are examples of commercial devices that allow this mixing of "real" and "virtual" realities, referred to herein also as mixed reality systems. The mixed reality device interacts with an outside network and the web (e.g., using a Wi-Fi connection) and also allows for input from the user (e.g., using hand gestures and/or voice commands). -
FIG. 3 shows the various databases and the AR/VR session management system 80 as remote applications (i.e., implemented in one or more servers outside of the retail establishment). In one embodiment, each of these is accessible via web services (such as RESTful micro-web services) well known to those skilled in the art of distributed databases and mobile services. - In other embodiments, some or all of the data could be located on servers in the retail establishment.
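As one illustration of the web-service access pattern just mentioned, a client-side item lookup might be sketched as follows. The record fields and the in-memory stand-in for the store inventory web service 42 a are assumptions for illustration; a real deployment would issue HTTP requests against the actual services.

```python
import json

def inventory_service(sku):
    """Stand-in for the store inventory web service 42a: returns a JSON
    document for a SKU (a real system would serve this over HTTPS)."""
    records = {
        "012345": {"name": "Cereal 16oz", "aisle": 7, "in_stock": 48},
    }
    rec = records.get(sku)
    return json.dumps(rec) if rec else None

def fetch_item(sku):
    """Client-side lookup: parse the service response into a dict,
    or return None for an unknown SKU."""
    body = inventory_service(sku)
    return json.loads(body) if body else None
```

Exchanging JSON documents keyed by SKU in this way matches the RESTful micro-web-service style the description assumes, without committing to any particular database layout.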
FIG. 3 does not suggest any ownership or management policy of the databases or the AR/VR session management system, and the description specifically includes embodiments where functionality of the system of FIG. 3 is divided in arbitrary ways so as to allow ownership and/or management by various parties, which may or may not include the retailer as one of those parties. - Referring to
FIG. 4, sales promotion application 70, which integrates retail-based analytical processing with mixed reality system technology, is shown. Described below is a specific implementation of this processing 70; others may be implemented. As a user of the mixed reality device 13 a walks through the retail establishment, the location of the user and associated mixed reality device 13 a inside the retail establishment is determined and tracked 72 as the user moves around the retail establishment with the mixed reality device 13 a. - Tracking 72 is accomplished through a number of techniques, including wireless triangulation of the device, various "internal GPS" technologies (BLE, RFID, NFC, etc.), or dead-reckoning-based accelerometer data integration. For the purposes of discussion, it is only necessary to note that the physical location of either the
mixed reality device 13 a (or some other device on the person of the user, e.g., a smartphone) is estimated to within a few feet of the user's actual location in the retail establishment using technologies well known to those skilled in the art. Depending on the technology used to track the location of the mixed reality device 13 a (or the user), other technology components, such as cameras, beacons, and other access points, may be used. These components have been omitted from FIG. 3, and are not specifically referred to in FIG. 4, for simplicity. - In the case where the actual device being tracked is not the mixed reality device, but rather some other device (such as a smart phone in the pocket of the user), the tracked device makes its location (and, by inference, the location of the user and the
mixed reality device 13 a) known by sending location data over the in-retail-establishment wireless network to the AR/VR session manager 80. It should also be noted that the location of the user and mixed reality device 13 a can be determined without any location determination functionality on the mixed reality device 13 a, and without any second device (i.e., smart phone), if some other outside system (e.g., a video surveillance system with image analytics capabilities able to determine location) is available and is used to track the user's location during the AR/VR session. - The user may also specify where in the retail establishment they are by some other technique, such as selecting a location on a map of the retail establishment. In another embodiment, the AR/VR system may determine its own location by capturing the image or images of items in its surroundings which have been previously mapped to the current location. Using such a location-to-image map, the mixed reality device can determine its own location. The "image" in such a case might be an actual image recorded in some convenient file format, or it might be an index or set of indices derived from the image in a manner which makes them unique to that image (i.e., an image index or hash).
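The two location approaches described above (estimating position from radio distances, and looking a location up from an image index) can be sketched as follows. The anchor coordinates, toy hash scheme, and map contents are illustrative assumptions, not details from this disclosure.

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Estimate an (x, y) position from distances to three known anchor
    points (e.g., wireless access points); a toy stand-in for the
    wireless triangulation mentioned above."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

def average_hash(pixels):
    """Tiny image index: threshold each grayscale value against the
    image mean (a stand-in for a real perceptual hash)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

# Hypothetical location-to-image map built in advance for the store.
LOCATION_INDEX = {
    average_hash([[10, 200], [220, 30]]): ("aisle 7", "bay 3"),
}

def locate_from_image(pixels):
    """Look up the device's location from a captured image's index."""
    return LOCATION_INDEX.get(average_hash(pixels))
```

Either estimate would then be reported to the AR/VR session manager 80 in place of (or alongside) data from a tracked second device.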
- During a session, the user views items and other points of interest in the retail establishment through the
mixed reality device 13 a. Based on a selected mode of operation into which the session has been placed, the location of the user, and the determined orientation of the mixed reality device (i.e., what the device is facing and what items in the physical environment the user is viewing through the device), the AR/VR session manager 80 chooses 74 virtual items and context-relevant information to show to the user on the display of the mixed reality device 13 a. - The AR/
VR session manager 80 sends 76 the chosen virtual items and context-relevant information to the mixed reality device 13 a. The user may view several items in the field of view of the mixed reality device display. The mixed reality device 13 a provides a user interface (not shown) that displays menu options that allow the user to highlight a specific item, or a group of items, and display information for a variable period of time, which is also selected using the interface menu items. This information is sent 78 to the AR/VR session manager 80. The AR/VR session manager 80 analyzes 80 the user highlight information to drill down and find corresponding content on the specific items highlighted in the display, which is sent to the mixed reality device 13 a. - The user interface (not shown) can be used to enter 82 notes as the user reviews the real and virtual objects and information presented in the display of the
mixed reality device 13 a. While engaged in such a session, the user may also use standard voice communications or voice-to-chat technology available on the mixed reality device to communicate 84 with a second (remote) user or group of users, or compose emails or text messages, etc. These actions may be part of a retail establishment review process with extensive pre-planning, or may be impromptu as the user goes through the retail establishment in pursuit of day-to-day managerial responsibilities. - Referring now to
FIG. 5, an AR/VR session manager 80 is shown. The session manager 80 interacts with the mixed reality device 13 a over the Internet using a "session portal" 82, e.g., a web service (application programming interface (API)) or, in another embodiment, a dedicated socket with SMTP or other transfer protocol. The session portal 82 is bi-directional, meaning that each of the mixed reality devices (MRS) 13 a-13 c can send data to the session manager 80 and receive data from the session manager 80. The mixed reality devices (MRS) 13 a-13 c send updates on their states to the session manager 80. The states of the mixed reality devices 13 a-13 c are represented virtually or "mirrored" in a device state representation 84 inside the session manager 80. - Input from the mixed reality devices (MRS) 13 a-13 c to the
session manager 80 is used in analytic programs executed on the servers. For example, the camera on the mixed reality device 13 a may send an image containing an area showing a retail item with its characteristic consumer brand packaging (by which it is easily recognized by consumers). This part of the image is identified by an input analyzer 86, which relies on image libraries accessible via the web service of the item information database, and potentially other databases exposed by the consumer product manufacturer, or other web browsers' image analytics services. The input analyzer 86 informs the analytics manager 88 with inputs to analytic programs (not shown) executing on the servers 14. The analytics manager 88 uses a current mode and the inputs presented to it in order to decide what to present (virtually) to the user on the device viewer and what to request of the analytics executing on the server. Information presented is produced by the analytics manager using data received from the various analytical programs that execute various analytics, both conventional as well as to be developed. The session mode manager 90 monitors the mode selected by the user (as mirrored in the device state representation) and informs the analytics manager of the selection. Information presented is produced by the virtual content manager using data from the various databases accessible via web services attached to the various external retail databases shown, by way of example, in FIG. 3. - In
FIG. 5, the session is logged by the input analyzer, including any notes or annotations provided by the user of the mixed reality device (spoken, typed, or sent via some other mode of communication), into session log/notes records 94 that are stored in a database as records. This local log/record in the session manager 80 is backed up in an external database (not shown) for long-term storage, reporting, and further analysis. This local session and long-term storage may also include a full record or "recording" of part or all of the session, rather than just the user notes. - The user may also view comparative information for this item relative to other items in the retail establishment, in its inventory category, department, or relative to all items in the retail establishment, or relative to this item or group of items in other retail establishments or groups of retail establishments (e.g., the retail establishment's regional division).
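A minimal sketch of the mirrored device state representation 84 and the session log/notes records 94, assuming simple dictionary-shaped update messages (the field names are illustrative, not drawn from this disclosure):

```python
class DeviceStateMirror:
    """Server-side 'mirror' of a mixed reality device's state, updated
    from messages arriving over the bi-directional session portal 82."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.state = {"mode": None, "location": None, "orientation": None}
        self.session_log = []  # stand-in for session log/notes records 94

    def apply_update(self, message):
        # Accept only known state fields from the device's update message.
        for key in self.state:
            if key in message:
                self.state[key] = message[key]
        return self.state

    def log_note(self, note):
        """Record a spoken or typed user note for later backup/reporting."""
        self.session_log.append({"device": self.device_id, "note": note})
```

Keeping a filtered mirror like this lets the session mode manager and analytics manager read the device's mode and location without round-tripping to the device itself.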
- In this embodiment, the user (who may have some managerial responsibility in the retail establishment, but may also be a retail establishment analyst, or even a shopper, or some other person interested in analyzing the retail establishment or its products and processes) begins use of the system in a "user session" with the
mixed reality device 13 a, initiating the session by a switch on the device, a voice command, and/or a hand gesture. The mixed reality device 13 a includes a motion sensor that awakens the device 13 a (i.e., loads operating system components and prepares for input) when the device 13 a senses motion. The mixed reality device 13 a may require input of a user ID and password to enable further operation and interaction with the user. An AR/VR session is initiated by the user via a menu user interface (not shown) of the mixed reality device 13 a. - As discussed, the
mixed reality device 13 a operates in various modes, and based on the mode of operation (and the location of the user and the orientation of the mixed reality device), the AR/VR session manager 80 chooses virtual items and context-relevant information to show to the user on the screen. - One mode is a "loss prevention management" mode. In such a mode, the user may approach a particular retail shelf and inspect a retail item, at which time the AR/
VR session manager 80 sends information about the item's inventory shrinkage (that is, the difference between item sales and item inventory replenishment integrated over a period of time). The user may have several items in their field of view, and the device's user interface may display menu options that allow the user to highlight a specific item, or a group of items, and display information for a variable period of time, which is also selected using the interface menu items. - Inventory shrinkage analysis is accomplished when the AR/VR session manager highlights items above some shrinkage threshold (measured in some meaningful metric, such as absolute dollars lost or percent increase in shrinkage over some historical average or benchmark). For example, the user might walk down the aisle of the retail establishment and, as they do so, various retail items on the retail establishment shelves come into their (physical) field of view. The
mixed reality device 13 a's camera sends these images back to the session manager 80 via the web connection, and the session manager identifies the items and compares sales to current inventory levels (as determined by item-level RFID scans or some other real-time or quasi-real-time inventory measurement technology). If the shrinkage or inventory loss level for a particular item in view is in excess of a pre-determined threshold (selected by the user at the beginning of the session, or in some pre-configuration file used during initiation of the session), the item is highlighted in red or outlined in some manner that emphasizes its location on the viewer of the mixed reality device 13 a. That is, the item "glows" in some manner in the view of the user. The user may then select the item using voice or hand gestures and, as a result, see more detailed information such as shrinkage data, shrinkage calculation input data, historical shrinkage data, and comparisons of this data to data from other retail establishments, retail categories, competing brands, and so forth. - Another mode is a comparative mode, where the user may view comparative information for an item relative to other items in the retail establishment in its inventory category, department, or relative to all items in the retail establishment, or relative to this item or group of items in other retail establishments or groups of retail establishments (e.g., the retail establishment's regional division).
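The thresholding step described above can be sketched as follows. The shrinkage formula shown is one common formulation (the description only specifies sales versus replenishment integrated over time), and the record shapes are assumptions for illustration:

```python
def shrinkage_units(replenished, sold, start_count, end_count):
    """Units unaccounted for over a period: stock added, minus stock
    sold, minus the observed change in counted inventory (e.g., from
    item-level RFID scans)."""
    return replenished - sold - (end_count - start_count)

def items_to_highlight(items, threshold):
    """Return SKUs whose shrinkage exceeds the session's threshold,
    i.e., the items the viewer should outline so they 'glow'."""
    return [item["sku"] for item in items if item["shrinkage"] > threshold]
```

The threshold itself would come from the user's selection at session start, or from the pre-configuration file mentioned above.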
- Another mode, as mentioned above, allows the user to use the device interface to enter notes as the user reviews the real and virtual objects and information presented. - Another mode, as mentioned above, allows the user, while engaged in such an AR/VR session, to use standard voice communications or voice-to-chat technology available on the mixed reality devices to communicate with a second (remote) user or group of users about what they are seeing and thinking, or to compose emails or text messages for immediate or future sending.
- The above-mentioned application involves the session manager when in loss prevention management mode. Other modes might include "merchandising mode" or "planogram mode," in which the items in view of the user are studied with respect to their location on the shelf and in the retail establishment, and how that relates to quality of sales. Optionally, the session manager could operate in a generic or "mixed" mode, in which any unusual information about an item is virtualized as some visual object and presented to the user's view. Specifically, and using merchandising mode as an example, the user might consider an item in view and highlight it with a hand gesture or voice command. The device's user interface might then give graphics showing excursion above or below anticipated sales for a selected period of time, sales comparisons with other directly competing brands, other items in the sales category, the same item in other retail establishments in the vicinity, region, or nation, or comparisons with other sales benchmarks. A very valuable comparison is the comparison of sales (turnover frequency) of that item in its current shelf location with sales in other past locations in the same retail establishment. As in the case of the inventory shrinkage mode, the user may not need to select specific items when doing a retail establishment analysis in merchandising mode. That is, the user may simply walk up and down the retail establishment aisles and, as exceptional items come into the field of view, the AR/VR session manager notes their exceptionality with respect to one of the metrics and benchmarks mentioned above and highlights the item in the field of view in the mixed reality device viewer.
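The mode-dependent selection of virtual content can be sketched as a simple dispatch. The mode names follow the text above, while the metric keys and overlay names are illustrative assumptions:

```python
def choose_virtual_content(mode, item, metrics):
    """Pick the overlay to render for an item in view, by session mode."""
    if mode == "loss_prevention":
        return {"item": item, "overlay": "shrinkage", "value": metrics["shrinkage"]}
    if mode == "merchandising":
        # Excursion above/below anticipated sales for the selected period.
        return {"item": item, "overlay": "sales_vs_plan", "value": metrics["sales_delta"]}
    # Generic/"mixed" mode: surface any unusual information as a summary.
    return {"item": item, "overlay": "summary", "value": None}
```

In the session manager of FIG. 5, this choice would be driven by the mode mirrored in the device state representation and the analytics results for the items currently in view.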
- Another application involves the analysis of the effectiveness of sales promotions. When in the merchandising mode, or perhaps a special promotions analysis mode, the user can view an item and see, projected by the AR/VR session manager onto the device screen, information related to sales (turnover frequency) relative to changes in retail price over selectable periods of time (in this retail establishment or a collection of comparable retail establishments).
- A particular application of the AR/VR system related to retail establishment promotions is its correlation to foot traffic in each aisle. As the user stands in a given location in an aisle, he or she should be able to see (based on menu selections via the device's user interface) dwell time in front of each shelf location (of a selected area size, over a selected period of time) as a function of particular promotions. For instance, the spot on the floor might show various shades of green or red depending on whether foot traffic is a little or a lot above or below normal during the promotions period of the selected retail item.
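One way to map dwell time against a baseline onto the green/red floor shading described above; the exact thresholds and shade names are assumptions, since the description leaves the mapping open:

```python
def traffic_shade(dwell_time, baseline):
    """Map dwell time relative to normal onto a floor-overlay shade:
    greens for above-normal foot traffic, reds for below-normal."""
    ratio = dwell_time / baseline
    if ratio >= 1.5:
        return "dark_green"
    if ratio >= 1.1:
        return "light_green"
    if ratio > 0.9:
        return "neutral"
    if ratio > 0.5:
        return "light_red"
    return "dark_red"
```

The baseline would be the normal dwell time for that shelf area over the selected period, with the shade rendered on the floor spot in the device viewer.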
- The AR/VR system may also generate a highlight on an area rendered in the mixed reality device where a retail establishment promotion is supposed to be in place at a given date and time but is not in place. For example, a promotion is scheduled to be on an aisle end-cap at 9:00 am on Tuesday. If the promotion is not in place at the location and scheduled time, the AR/VR system can generate an alert that is rendered on the mixed reality device, and that could include a description and/or image of the missing promotion and a proposed email or text to the responsible person in the retail establishment. Using this function, and similar functionality related to planned advertising, price changes, warnings, and so forth, the user may simply walk through the retail establishment and see (and record for later viewing, logging, or reporting) visual representations of differences between the actual and intended state of the retail establishment.
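The compliance check just described (a promotion scheduled but not observed in place) can be sketched as follows; the schedule and observation shapes are assumed for illustration:

```python
def missing_promotions(schedule, observed_ids, now):
    """Return alert records for promotions that should be in place at
    time `now` but were not observed in the rendered scene."""
    alerts = []
    for promo in schedule:
        if promo["start"] <= now <= promo["end"] and promo["id"] not in observed_ids:
            alerts.append({
                "promo": promo["id"],
                "location": promo["location"],
                "message": f"Promotion {promo['id']} missing at {promo['location']}",
            })
    return alerts
```

Each alert could then be rendered as a highlight in the device viewer, together with the proposed email or text to the responsible person.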
- In another embodiment, the system of
FIGS. 3 and 5 and the application (FIG. 4) described above are enhanced using an (artificial intelligence based) cognitive assistant. - Referring now to
FIG. 6 , a cognitive agent 120 is shown. The cognitive agent 120 is an example of one way of implementing such an agent. That is, those skilled in the art of expert systems, natural language processing (NLP) based cognitive agents, and similar AI based agents will be able to immediately envision various permutations of an agent which enables the applications described below. - Input text 121 (such as a question: “Has shrinkage for this item increased more than expected this month?”) enters the cognitive agent via the
NLP pre-processor 122. In voice-based input embodiments, voice data are converted to text by a voice-to-text sub-system 122 a using standard voice recognition technology. The NLP pre-processor 122 parses the sentence into "tagged" words and phrases (i.e., the NLP pre-processor 122 labels each word with its part of speech, such as "noun," "verb-transitive," and so forth). The output of the pre-processor 122 is a first data structure that represents the input text, with words and phrases connected to each other and annotated with indicators of meaning (or "sense" or "context") pointing to the words and phrases. These meanings are extracted or determined using statistics and pattern-based comparisons to a semantic network 124, which, along with the lexicon 126, stores the cognitive agent's knowledge base of language (domain knowledge ontologies 128).
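The tagging stage just described, together with the macro-planner, micro-planner, and finisher stages described next, can be caricatured in a few lines. The toy lexicon and planning rules are stand-ins for the statistical models, episodic memory, and domain ontologies the agent actually uses:

```python
def preprocess(text):
    """Stage 1 (NLP pre-processor 122): tag words with a rough part
    of speech from a toy lexicon."""
    lexicon = {"has": "verb", "shrinkage": "noun", "increased": "verb",
               "this": "det", "item": "noun", "month": "noun"}
    words = [w.strip("?,.").lower() for w in text.split()]
    return [(w, lexicon.get(w, "unk")) for w in words]

def macro_plan(tagged):
    """Stage 2 (macro-planner 130): choose response content as a
    concept plus qualifiers, not sentences."""
    nouns = [w for w, tag in tagged if tag == "noun"]
    return {"concept": nouns[0] if nouns else None, "qualifier": "period:month"}

def micro_plan(plan):
    """Stage 3 (micro-planner 134): realize the concept structure as a
    draft sentence."""
    period = plan["qualifier"].split(":")[1]
    return f"{plan['concept']} data for the selected {period}"

def finish(sentence):
    """Stage 4 (finisher 136): polish capitalization and punctuation."""
    return sentence[0].upper() + sentence[1:] + "."
```

The value of the staged design is that each data structure (tagged words, concept plan, draft sentence) can be inspected and improved independently.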
response macro-planner 130 that determines the information content of a response from the cognitive agent 120. The macro-planner 130 can use non-monotonic planning with default logic and patterns or templates (from episodic memory 132) and also domain knowledge 128 to produce a second data structure that contains the response. This data structure does not contain "sentences" per se, but rather concepts and concept qualifiers (for time, quantity, type and degree of emphasis, etc.) that together comprise the logic behind the response from the cognitive agent 120. This second data structure is passed to a micro-planner 134 that produces sentences from the second data structure for the response from the cognitive agent 120, using the language knowledge contained in the lexicon 126 and semantic network 124. The sentences can be "polished," e.g., corrected for grammar (e.g., tense, singular/plural matching, etc.), in a finisher 136 that outputs text output 137. Domain knowledge ontologies and episodic memory are enhanced over time as the cognitive agent is given new information, and as old, faulty information is corrected. That is, these can be written to selectively by "actuator" functionality in an extension of the response macro-planner. - The AR/
VR session manager 80 may generate input text that is sent directly to the cognitive agent 120 in order to apprise the cognitive agent of the context of the conversation with the user of the mixed reality device 13 a. For instance, when the user asks the cognitive agent 120: "Has shrinkage for this item increased more than expected this month?" the cognitive agent 120 may not know to what "this" refers, and will also not know the time period in question without further information. The agent could ask the user for time and item ID clarification, but it is more convenient for the user if the cognitive agent 120 is passed information directly from the session manager 80 about the item that is highlighted and what time period is selected or configured. The session manager 80 makes the session state visible to the cognitive agent 120 via a web-service that is not controlled by or visible to the user. - The knowledge base of the cognitive agent 120 is specifically trained for the particular retailer's markets, key customer base and associated demographics, item types and uses, and any other knowledge categorization which makes it easier for the agent to narrow questions and better interpret the intention behind the user's questions. There are many questions that might be asked by the user of the
mixed reality device 13 a of the cognitive agent 120. - Using the cognitive agent 120, the sales promotion application 70 can be enhanced as the cognitive agent 120 allows the user to ask questions directly through the
mixed reality device 13 a, which are sent as input to the AR/VR session manager system 80 (see FIG. 7 ). - The user can consider an item and the accompanying information on sales vs. promotional price and can drill down to ask the cognitive agent 120 (via the voice-to-chat interface) various questions such as:
- (1) What was overall retail establishment performance like in this retail establishment over the same period of time?
- (2) What was the weather like?
- (3) What were overall retail sales (in grocery, or apparel, or whatever category) like over the same period of time, relative to the previous year?
- (4) What was the foot traffic in this particular aisle of the retail establishment, during the top six shopping hours of each Saturday, for the 10 Saturdays previous to the beginning of the promotion, and for each Saturday during the current promotion, and is there a statistically meaningful correlation between that traffic change and the promotion?
- These are merely examples of questions that could be asked, as there are many other questions that might be asked of the cognitive agent 120. The cognitive agent 120 is made sufficiently flexible in its ability to process and interpret natural language, and sufficiently broad in its retail practices knowledge base, that it can answer any of many such questions without requiring an enumeration and "hard coding" of specific questions.
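The FIG. 6 flow described above (pre-processor tagging, macro-planner concept structure, micro-planner sentence realization) can be sketched minimally as follows. The toy lexicon, concept field names, and sentence template are illustrative assumptions; the disclosed system uses a statistical tagger with the semantic network 124 and lexicon 126.

```python
# Minimal sketch of the FIG. 6 pipeline with assumed data shapes.

TOY_LEXICON = {"has": "verb-auxiliary", "shrinkage": "noun",
               "for": "preposition", "this": "determiner", "item": "noun",
               "increased": "verb-intransitive", "month": "noun"}

def preprocess(text):
    """NLP pre-processor 122: tag each word (the 'first data structure')."""
    tokens = text.lower().rstrip("?!.").split()
    return [{"word": w, "tag": TOY_LEXICON.get(w, "unknown")} for w in tokens]

def macro_plan(metric, item, period, direction, degree):
    """Macro-planner 130: concepts and qualifiers, not sentences."""
    return {"concept": metric, "subject": item,
            "qualifiers": {"time": period, "direction": direction,
                           "degree": degree}}

def micro_plan(plan):
    """Micro-planner 134: realize a sentence from the concept structure."""
    q = plan["qualifiers"]
    return (f"{plan['concept'].capitalize()} for {plan['subject']} "
            f"{q['direction']} by {q['degree']} {q['time']}.")
```

For example, `preprocess("Has shrinkage for this item increased?")` tags "shrinkage" as a noun, and `micro_plan(macro_plan("shrinkage", "item 4711", "this month", "increased", "2.3%"))` realizes a complete answer sentence.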
- The cognitive agent 120 answers these questions by use of an Artificial Intelligence platform that takes the output of the cognitive agent 120 and parses words in the output. Such capability has been demonstrated to a limited extent by the Apple Siri platform, and to a much greater extent by IBM's Watson AI engine and IPSoft's Amelia cognitive agent. The cognitive agent 120 can answer an actual question, "where has shrinkage increased over the prior month," by the session manager system 80 having a sub-system, "input analyzer 86 a," that processes input from the cognitive agent 120, recognizes that shrinkage is being asked about, accesses current and historical data, calculates the shrinkage, if any, and delivers the calculated analytics back to the cognitive agent 120 to place that value into a sentence that is rendered back to the user. - Another embodiment involves the visual correlation of item sales. For example, if the user of the
mixed reality device 13 a highlights a particular item on a shelf or display, the device 13 a may show all items in the vicinity whose sales correlate most closely with the selected item. Another embodiment specifically includes an application of the described system in which the sales, shrinkage, or associated foot traffic correlations for a particular retail item or group of items are shown to the user, via the mixed reality device 13 a, for various alternative retail establishment layouts, including but not limited to the current (physical) retail establishment layout. The user or users engage in a collaborative session to compare retail establishment performance impacts for those layouts, and investigate (particularly when the cognitive agent 120 is used) deep implications of retail data analytics applied to retail establishments with varying layouts that are shown visually on the mixed reality device 13 a. - There are also embodiments in which a previously recorded session for a particular retail establishment is replayed to simulate the retail establishment visit for the original user or a new user. One such embodiment includes the case where images are captured (for example, while the user walks through the retail establishment) and data/information virtualization and projection is applied later to produce the augmented walk-through session. Another embodiment includes a session (produced by any of the techniques described above) involving a series of real and virtual objects presented in the session, with subsequent application of the cognitive agent 120 to answer user questions and record user thoughts and expertise as the user watches the previously recorded session.
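The "input analyzer 86 a" flow for the shrinkage question described above might be sketched as follows; the data shape (expected vs. counted units per period) and the percentage formula are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of the input analyzer computing shrinkage from current and
# historical inventory data, for delivery back to the cognitive agent.

def shrinkage_pct(expected_units, counted_units):
    """Shrinkage as the percent of expected inventory that is missing."""
    if expected_units <= 0:
        return 0.0
    return 100.0 * (expected_units - counted_units) / expected_units

def shrinkage_increased(history):
    """history: list of (expected, counted) tuples per period, oldest first.
    Return indices of periods where shrinkage rose versus the prior period."""
    rates = [shrinkage_pct(e, c) for e, c in history]
    return [i for i in range(1, len(rates)) if rates[i] > rates[i - 1]]
```

Given two monthly counts, the analyzer can both compute the current rate and answer whether it "increased more than expected" relative to history.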
- Referring now to
FIG. 7 , the system 40 of FIG. 3 can be an enhanced system 140 adapted for other modes. The enhanced system, in addition to including the features of FIG. 3 , also includes feeds from social media systems 144 a-144 n. This system can be adapted for various modes of operation that execute algorithms that apply user-filtered social media feeds and merchandizing parameters with respect to physical items. - One such mode is a merchandising/training mode. In this adaptation of the system of
FIG. 3 , as discussed above, during a session the user walks around the retail store and views (real/tangible) items for sale and other tangible items and things of interest (such as advertising signage, models and displays). While in a suitable mode of operation (e.g., merchandising mode, general use mode, sales analysis mode, etc.) the user views a set of such tangible items (visible through the display of the mixed reality device 13 a), selects one item or a small group of related items from the view in the mixed reality device 13 a, and sees a set of virtual objects projected by the display of the mixed reality device 13 a, as generally discussed above. - The
enhanced system 140, like system 40 in FIG. 3 , includes a processing system 142 that accesses data from the databases 42-48 to construct these virtual objects, which represent and correspond to data trends related to the tangible item or items, but with the processing to construct these virtual objects modified to include external data trend information. One such example of external trend data is data related to social media. The processing system 142 receives feeds from social media systems 144 a-144 n via the Internet and executes a social media sales promotion application 150 that accesses 152 feeds from the social media systems 144 a-144 n. The processing system 142 accesses these various types of social media feeds, filters 154 those feeds, and extracts 156 relevant data for use by the processing system 142 in the construction of virtual objects. - For example, one of the virtual objects that may be constructed is a virtual object that represents a pie chart showing interest levels associated with the real-world object when rendered on the
mixed reality device 13 a. These objects in the form of the pie chart can depict percent positive and percent negative comments, e.g., from mining of the contents of messages from various social media platforms, such as Twitter, Facebook, Instagram, etc. In each of these instances, the processing system 142 obtains content from posted references to the item, current promotion, or news about the item, along with the top (e.g., three or five) positive and negative words and short phrases in the positive and negative references.
mixed reality device 13 a, and when one of these is selected by the user, a fuller and richer set of related data is shown to the user on the display. Such data can include the trending history of the positive/negative response ratio, occurrence trends and history of selected words and phrases in the social media, and so forth. The processing system 142 executes various algorithms that present such data to the user, according to merchandizing principles used or to be developed in the industry. The algorithms also apply one or more merchandizing parameters associated with the set of physical objects. - Referring now to
FIG. 8 , processing 150 for construction of such virtual objects is shown. The process receives a set of social media feeds and filters the feeds according to a criterion or criteria 152. The processing 150 generates 154 a set of virtual objects that will be sent to the mixed reality device 13 a to be rendered in juxtaposition with physical views of the objects. The processing 150 can also invoke 158 the cognitive agent, which receives user inputs regarding the physical items, analyzes these inputs, and detects ambiguities 160. The processing 150 resolves 162 ambiguities by accessing data (e.g., the highlighted object in the mixed reality device 13 a display) directed from the mixed reality device 13 a. - In a related embodiment, the user, walking through the retail establishment and studying tangible items and shelving and other display fixtures, sees highlighted by "virtual object enhancement" an item that is exceptional in some way with respect to treatment in social media, e.g., Twitter, Facebook, Instagram, consumer blogs, etc. By "virtual object enhancement" is meant that the view of the tangible item on the screen of the AR system is highlighted or augmented (e.g., by color outline, color hue and brightness change, and so forth). By "exceptional in some way" is meant that the item in view is discussed in social media with above-average frequency or intensity, with respect to judging criteria that are empirically set.
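The data behind the pie-chart virtual object (percent positive/negative mentions, plus top positive and negative words) might be derived from the filtered feeds roughly as follows. The tiny word lists and plain-text message format are assumptions standing in for real mining of social media content.

```python
from collections import Counter

# Sketch of building the pie-chart data from mined messages, under the
# assumption of simple keyword-based sentiment word lists.

POSITIVE = {"love", "great", "fresh", "bargain"}
NEGATIVE = {"stale", "expensive", "broken", "awful"}

def sentiment_summary(messages, top_n=3):
    pos_words, neg_words = Counter(), Counter()
    pos_msgs = neg_msgs = 0
    for msg in messages:
        words = set(msg.lower().split())
        p, n = words & POSITIVE, words & NEGATIVE
        if p:
            pos_msgs += 1
            pos_words.update(p)
        if n:
            neg_msgs += 1
            neg_words.update(n)
    total = pos_msgs + neg_msgs or 1  # avoid division by zero
    return {
        "pct_positive": 100.0 * pos_msgs / total,
        "pct_negative": 100.0 * neg_msgs / total,
        "top_positive": [w for w, _ in pos_words.most_common(top_n)],
        "top_negative": [w for w, _ in neg_words.most_common(top_n)],
    }
```

The returned dictionary maps directly onto the rendered virtual object: the two percentages drive the pie slices, and the word lists supply the accompanying top positive/negative terms.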
- In a related embodiment, the selection of highlighted items is filtered with respect to other analytical results, such as inferred demographics of persons posting feeds to social media platforms.
- In another embodiment the user walks around the store and sees highlighted all items that are trending upward in social media, relative to some threshold (e.g., 10% more social media daily references in the past seven days relative to daily references in the preceding seven days). Similarly, the user sees other items trending downward. The time frame for comparison may be selected to correspond to promotions, and may depend upon the particular item. For example, in one embodiment, if items A, B, and C are in the field of view, and if item A is undergoing a special promotion that started in the local advertising media five days ago, and if item C is in the midst of a promotion that started nine days ago, the comparison to determine whether or not the item is highlighted (augmented) in the AR system view is based on five days compared to baseline for item A and nine days compared to baseline for item C. Here "baseline" could be any convenient number of days in the recent past that provides a good benchmark for the normal (non-promoted) daily frequency of mention in social media for the item.
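The per-item trending test just described can be sketched as follows: compare the mean daily mention count over the item's comparison window (its promotion length, where one applies) against a baseline mean, and highlight the item when the increase exceeds a threshold. The window and baseline lengths, data shape, and default threshold are assumptions chosen to match the 10% example above.

```python
# Sketch of the trending-item highlight decision under assumed data shapes.

def is_trending(daily_mentions, window_days, baseline_days=14, threshold=0.10):
    """daily_mentions: per-day counts, most recent day last. True when the
    mean over the last window_days exceeds the baseline mean by more than
    threshold (0.10 reproduces the 10% example above)."""
    if len(daily_mentions) < window_days + baseline_days:
        return False  # not enough history to judge
    window = daily_mentions[-window_days:]
    baseline = daily_mentions[-(window_days + baseline_days):-window_days]
    base_mean = sum(baseline) / len(baseline)
    if base_mean == 0:
        return sum(window) > 0
    return (sum(window) / len(window)) / base_mean - 1.0 > threshold

# Item A: promotion started five days ago, so its window is five days;
# item C, promoted nine days ago, would be checked with window_days=9.
```

Per-item windows follow directly: the caller passes `window_days=5` for item A and `window_days=9` for item C, each against that item's own baseline.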
- In a related embodiment an outside retail analytics service could be used to obtain a list of discussed items (e.g., the hottest 100 items in the retail store). The session mode manager (
FIG. 5 ) uses this data, along with query results from another service that obtains data on one or more merchandizing parameters associated with the set of physical objects (e.g., an inventory stock level and location service), to produce a (virtual object based) map of the store showing the location of each hot item. The user may physically walk through the store to each item or take a virtual tour with the mixed reality device 13 a showing the most recent images of the display area for each item in turn. - Execution of the one or more merchandizing algorithms, using the data derived from the filtered social media feeds and in some implementations also using one or more merchandizing parameters associated with the set of physical objects, provides as an output data that describe one or more social media derived characteristics ascribed to the products, and which can also include, in some implementations, one or more merchandizing values associated with one or more of the physical objects in the view of the set of physical objects.
- During this real or virtual walk, the user may select items (those items on the “hot list” or those other items in the vicinity of hot items) and see detailed results of social media analytics, including but not limited to:
- (1) key words and phrases;
- (2) trending up or down over time of those key words and phrases;
- (3) correlation over time between advertising schedules and social media discussion trends of the item;
- (4) correlation over time between sales or item turnover frequency and social media trends;
- (5) variability or “noise” level in social media discussion of the item (e.g., in the absence of any advertising or promotions);
- (6) variability in social media mention with respect to geographic location (e.g., state or country) of the commenters in social media;
- (7) correlation between sales or turnover frequency of the item and social media discussion intensity for related items (i.e., other items in the category such as competing items or items whose sales generally tend to move up and down with the item of interest).
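Several of these analytics, for example items (3), (4), and (7), reduce to correlating two time series over the same periods. A plain-Python Pearson correlation sketch is shown below; a production system would likely use a statistics library and add significance testing, as item (4) above contemplates.

```python
import math

# Sketch of a Pearson correlation between, e.g., a weekly sales series and
# a weekly social-media mention series for the same item.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0  # a constant series has no defined correlation
    return cov / (sx * sy)
```

A coefficient near +1 would indicate that sales and social discussion intensity move together over time; near 0, that discussion is mostly "noise" relative to sales, as in item (5).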
- These embodiments described above use configured filters for filtering the social media feeds. For example, the session mode manager provides, via a configuration interface, a list of well-known social media sources, each with a check box adjacent to it, such that the user may turn each source on or off, one by one, thus allowing fine control over how the social media sources are filtered for use as inputs to the applications described herein. This configuration should also allow the user to input new sources not available in pre-configured or pre-populated versions of the list.
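The check-box source configuration described above might look like the following sketch: a pre-populated map of known sources with on/off state, extensible with user-added sources. The source names and feed-item shape are illustrative assumptions.

```python
# Sketch of per-source feed filtering driven by check-box configuration.

def make_filter_config(extra_sources=()):
    """Pre-populated source list; user-added sources default to enabled."""
    config = {"twitter": True, "facebook": True, "instagram": True}
    for name in extra_sources:
        config[name] = True
    return config

def filter_feed_items(items, config):
    """Keep only feed items whose source is present and enabled."""
    return [it for it in items if config.get(it["source"], False)]
```

Un-checking a box corresponds to setting that source's flag to `False`, after which its items no longer reach the downstream applications.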
- In another embodiment, the virtual objects showing social media reference intensity details (i.e., virtual objects displayed when an item is selected by the user for close scrutiny) will include a list of those social media sources which have contributed most heavily (relative to some threshold criterion) to the displayed results.
- In another embodiment the augmented reality system and applications described above are enhanced with annotation functionality and/or a comments log, in which a store manager or other system user filling some investigatory role is able to record (as voice recordings or voice-to-text notes, hand-written notes captured on a portable device via stylus, or typed notes) thoughts, comments, reactions, action items, and/or future priorities and plans that occur to them as they take a real or virtual tour of the retail store (or a virtual tour of a list of items related to a real or imagined hypothetical store). These annotations are stored in a log accompanying and synchronized with the AR session. The log of annotations is produced through multiple tours by one or more users and thus, in the most general case, will be a composite log of the idea and insight contributions of multiple users.
- The system specifically includes functionality in the “session mode manager” and “session log and notes” modules (see
FIG. 5 ), which allows a user to find recordings, via queries, of previously recorded real or virtual tour sessions matching desired criteria and use those to communicate deep insights about the retail operation among retail enterprise employees, partner representatives, vendors, and retail customers. - Enabling functionality in these modules is implemented as sub-modules or replicated instances of modules or sub-modules, and these modules and sub-modules may be local-server-based, AR-device based, cloud-based, or a combination of those deployment options. In the general case the number of session log files will grow large, and these can be stored economically, e.g., in a cloud-based web-service-accessible database. These implementation option details are omitted from
FIG. 2 for the sake of simplicity. - As an example, consider a regional manager who wishes to train new store managers in promotional display placement. The regional manager may use search menu functionality exposed in the session mode manager interface to request all recorded sessions made in stores that have sales performance in the top 3% of all stores (to use a specific number for purposes of illustration) for a particular product or product category over (say, for example) the past year. The session mode manager queries an external database web-service or analytics engine service to find the store IDs corresponding to the top sales for that item over the desired time interval. After the session mode manager is in possession of these store IDs, it uses the store IDs to search the session log and notes module to find the list of all recorded sessions for those stores over the time period of interest. In the general case where such a list is large, the session manager can sort the recorded sessions in the list according to appropriateness as judged by the product or product category relevance (i.e., push to the top of the search list all of those session recordings which contain the largest numbers of references to the product of interest).
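The regional manager's two-step search above might be sketched as follows, with hypothetical record shapes standing in for the external analytics service and the session log and notes module.

```python
# Sketch: find sessions from top-performing stores, then sort them by the
# number of references to the product of interest (most relevant first).

def top_store_ids(store_sales, top_fraction=0.03):
    """IDs of stores in the top fraction by sales (at least one store),
    mirroring the 'top 3%' example above."""
    ranked = sorted(store_sales, key=store_sales.get, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:k])

def find_sessions(session_log, store_ids, product):
    """Sessions recorded in the given stores, most product references first."""
    hits = [s for s in session_log if s["store_id"] in store_ids]
    return sorted(hits,
                  key=lambda s: s["references"].get(product, 0),
                  reverse=True)
```

The first query stands in for the external analytics engine; the second query and sort stand in for the session log and notes module search.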
- In an additional embodiment, a user, such as the regional manager mentioned above, may use search methods similar to those described above to find session recordings matching a single criterion or multiple combined search criteria. While viewing or reviewing those sessions, the user may add new annotations and notes (as text, audio recording, or both) that describe how the sessions might be used for a specific purpose (e.g., training or other business initiatives such as presentations to partners or trade groups, public press, standards organizations, vendor representatives, etc.). Later, the same regional manager or other individuals (for example, an individual responsible for organizing training activities for an upcoming new store manager orientation event) may search for these specific annotations regarding the specific purpose just mentioned and thereby easily retrieve them. Through this functionality, session annotation made while viewing a previously recorded session through the mixed reality device 13 a, and subsequent retrieval of all appropriately annotated sessions, provides an effective tool for training and other types of teaching and communications on retail best practices. - In the various embodiments described here, "annotation" refers to a single key word or phrase, or a sentence or sentences that contain a set or subset of key words or phrases. The annotation may also be associated with a set of key words or phrases through synonyms or phrases of equivalent meaning as mapped through some appropriate lexicon or semantic mapping. It should also be noted that in the various embodiments, the organization of modules in
FIG. 5 is for purposes of illustration, and it is feasible to combine modules into one module or to split a module into multiple modules of more specific purposes. - In another embodiment sessions are searched for and annotated or tagged, and subsequently used to demonstrate to various users differences in or contrasting methods related to retail item merchandising aspects including but not limited to store layout options, promotions practices details, product placement, product grouping, signage, and so forth.
- In another embodiment a user interested in investigating retail best practices could search for sessions recorded in high performance stores, or in any store as reviewed and annotated by expert users skilled in noting good and bad retail practices. The user could then view those sessions to hear (or read) the annotations of the expert users in order to gain insights into their expert skills.
- In another embodiment a user views a previously recorded session and asks questions or makes comments about their thoughts and concerns, which are then recorded as new annotations in the session. The newly annotated session (copy) is viewed by an expert or "teacher" user who records answers to the questions and concerns, and this copy with answer annotations is subsequently reviewed by the first user in order to obtain those answers.
- The system can be configured to allow recorded sessions to be used to communicate ideas about retail practices between retail experts and representatives of consumer product manufacturers. For example, a merchandising department of a particular consumer product manufacturer (i.e., brand owner) may take a virtual or real tour of a retailer's store in order to understand the physical and environmental aspects of the retail operation while planning the new promotion of a particular product. The consumer product manufacturer may add their own annotations about promotions options and this can be subsequently viewed in the recorded session by merchandising people in the retail organization. This may be facilitated by having shared access areas of storage in the session log and notes module or by providing specific functionality in the session mode manager that allows the original producer of a session to specifically authorize specific users or user organizations access to the session. The session mode manager further allows subsequent users (reviewers adding new annotations) to modify (more specifically to narrow) access control to the session.
- In another embodiment, the original producer or annotator of a previously recorded session controls access by recording secret keywords or phrases in the session, preferably but not necessarily tagged for "access control", such that any user who knows the keyword or phrase may gain access to the session, or to a particular annotated version of the session. In another embodiment, the session producer or subsequent annotator uses annotations to mark a session for copy to one or more areas or lists in the log set aside for specific other users inside or outside of the organization or company of the first user.
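The keyword-gated access just described can be sketched simply: annotations tagged for access control hold the secret phrases, and a supplied phrase unlocks the session. The annotation shape and tag name are illustrative assumptions.

```python
# Sketch of keyword-gated session access via access-control annotations.

def access_keys(session):
    """Secret phrases recorded in the session's access-control annotations."""
    return {a["text"] for a in session["annotations"]
            if a.get("tag") == "access-control"}

def may_access(session, supplied_phrase):
    """True when the supplied phrase matches a recorded access keyword."""
    return supplied_phrase in access_keys(session)
```

Ordinary annotations (notes, comments) carry other tags and do not function as keys.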
- The system includes functionality to allow the superposition of shelf layout plan views (i.e., "plan-o-grams") by one user onto the image or scene of a particular part of the store for view and consideration by a second user. For example, a merchandising department of a brand manufacturer may produce a promotions plan for a product, with a particular shelf layout and with specific shelf edge advertising, and/or floor decal advertising (for placement on the floor in front of the display), and/or other promotions details. This promotions plan for a product is captured by producing an AR session in a real store or demo store convenient for the merchandising department employees, and then cutting out and saving the images from that session. These are subsequently conveyed into a library or some other suitable repository where they can be accessed automatically or manually and superimposed into a new session (recorded previously by the retailer employees) which shows a different store layout. In another embodiment, the retailer produces a session of a new store layout and conveys this to the brand manufacturer representatives so that they can superimpose signage or other images and thus fabricate a virtual scene of one or more new promotions options for a particular retail product.
- Apart from store merchandising and promotions, other uses include a user walking through a store or viewing a previously recorded session, noting badly placed items, misplaced items, out-of-stock items, dirty floors, clutter or litter, and any other issue needing attention, and producing a work list using annotation methods like those described above. The annotation record can then be detached as a separate file and sent to a particular text message recipient or email recipient using menu options on the AR device screen, or by voice command. Alternatively, the session with annotations is tagged for a specific user in the store (i.e., a subordinate referenced by name or function), and a text or email message is sent to that user instructing them to view the session using the AR device or on a computer terminal, tablet, or smart phone application emulating the screen of the AR device.
- In another embodiment, a user may view two related sessions which are synchronized so as to facilitate comparison. For example, if two sessions (A and B) are recorded at two different times (for example, several months apart) in the same store, these are viewed subsequently by a user, and the user may physically or virtually walk through the store, requesting the presentation of views A and B (toggling between the two) to see differences between the two at any given location in the store. This might be done in order to understand and appreciate differences between two stores or versions of the same store corresponding to two differing performance situations (i.e., the change in the store has led to significant increase or decrease in store performance, and the A-to-B session comparison is made to help the user see and appreciate how the stores differ in the eyes of shoppers).
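The synchronized A/B comparison above might be implemented by keying both recorded sessions to store position and, at the user's current location, toggling between the nearest recorded frame of each. The frame fields and distance rule are illustrative assumptions.

```python
# Sketch of location-synchronized A/B session comparison with toggling.

def nearest_frame(session, location):
    """Frame recorded closest to the requested (x, y) store position."""
    x, y = location
    return min(session["frames"],
               key=lambda f: (f["x"] - x) ** 2 + (f["y"] - y) ** 2)

def toggled_view(session_a, session_b, show_a, location):
    """Return the active session's view at this location; flipping show_a
    toggles between the A and B recordings at the same spot."""
    active = session_a if show_a else session_b
    return nearest_frame(active, location)
```

Because both lookups use the same location key, toggling `show_a` presents the "before" and "after" recordings of the same aisle position back to back.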
- In another embodiment, a session is recorded at a previously agreed upon time in a selected store or stores by a store representative, and the session is made available (e.g., in a shared storage area or via an access-controlled web service) to other employees in the retail organization, and/or to employees of a consumer brand manufacturer so that these reviewers may verify that promotions of items, placement and stocking of items, shelf allocation, and other details of display and stocking, as prescribed by contractual agreement, are being complied with by the retailer. The actual producer of the session may be an employee of the retailer, a representative of the brand manufacturer, a paid mystery shopper, or any other appropriate third party whose responsibility it is to visit the store and produce the session record.
- The system of
FIG. 7 (as with that of FIG. 5 ) and the applications described above are enhanced using an (artificial intelligence based) cognitive assistant 120, as generally described for FIG. 6 . This cognitive assistant would be an expert system that executes natural language processing (NLP). These cognitive agents 120 and similar AI based agents can assist with obtaining information regarding input queries generated by the user in the form of input text for questions, especially as the questions pertain to activity regarding social media. These questions enter the cognitive agent 120 via the NLP pre-processor 122, generally as discussed above for FIG. 6 , and are processed as in FIG. 4 . - Referencing some of the applications described above, the cognitive agent 120 may improve the performance of the system as follows. When searching for a session or sessions corresponding to item performance regarding social media, the search criteria and search constraints may be complex enough that a simple and traditional menu-based search is not feasible or is impractical.
- For example, the user may ask the cognitive agent 120 to find the best comparison store with similar demographics, same store sales, and weather history, or to find the best item having the highest social media activity. The cognitive agent 120 may search its knowledge base and find that it needs to clarify the meaning of some of these constraints before it can conduct an effective search. The agent may ask if "weather" in this case means average temperature, number of severe storm days, or some other qualifier. The user may then think for a moment and then decide that the key issue is average high temperature, which is communicated to the cognitive agent. The cognitive agent 120 may then conduct a search and find a list of five stores which match (roughly) demographics and same store sales. The agent may ask if "highest" in this case means highest positive, highest negative, or merely an aggregation of activity. The user may then think for a moment and then decide that the key issue is highest aggregated activity, which is communicated to the cognitive agent. The cognitive agent 120 may then conduct a search and find a list of item(s) that have the highest activity. The cognitive agent 120 may summarize the findings and then ask the user which is of most interest. The user specifies one, and then the conversation goes on from there as the agent and user interact with each other until the user has a session that they wish to view.
- The cognitive agent 120 can produce annotation or log structures of a session review as discussed above, since the cognitive agent 120 specializes in capturing the context and semantic-specific history of conversation in its episodic memory. These structures would include a session ID and cross references to other similar related sessions, tags to content regarding the session as well as any queries and/or results.
- In one embodiment, the cognitive agent 120 notes (via an interface with the session mode manager) the session ID and cross-references this to the specific episodes or times of conversation in its own memory. When a user reviews a session and makes specific notes and comments, these can therefore be cross-referenced with similar notes (semantically speaking) made by other users of the current session. The cognitive agent 120 alerts the various users (via outside messaging such as email or text message) of their apparently mutual interest. Via its semantic network and episodic memory, the cognitive agent 120 makes an ideal medium through which to associate related issues in different stores with a common store item, problem type, and so forth. For instance, a user may ask the cognitive agent 120 to show a list of all sessions or session clips associated with mis-implemented promotions for a particular product from a particular manufacturer (and perhaps narrowed with other criteria).
- In another embodiment, the cognitive agent 120 records a conversation or soliloquy made by one or two employees with respect to the correct way to produce a store display or perform some store task, and the agent is then used to associate this recording with all logged sessions that are relevant (i.e., that involve review of the product or discuss the process).
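One way this association could work is tag matching between the recording and the session log, sketched below. The dictionary keys and the tag-based relevance test are assumptions; the specification leaves the relevance criterion to the agent's semantic memory.

```python
# Hypothetical sketch: attach a recording ID to every logged session whose
# tags indicate relevance (shared product, task, or process). Field names
# and the tag-overlap criterion are illustrative.

def associate_recording(recording_id, recording_tags, session_log):
    """Attach recording_id to each session sharing at least one tag,
    returning the IDs of the sessions that were linked."""
    linked = []
    for session in session_log:
        if set(recording_tags) & set(session["tags"]):
            session.setdefault("recordings", []).append(recording_id)
            linked.append(session["session_id"])
    return linked

log = [
    {"session_id": "S-1", "tags": ["display", "product-X"]},
    {"session_id": "S-2", "tags": ["inventory"]},
]
linked = associate_recording("REC-7", ["display"], log)
```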
- The above-described applications include specific features such as reading the annotation record of a session into the NLP-based knowledge base to expand the agent's capabilities. In other words, the agent is “taught” about a session using its ID, keywords, annotation text, and any other readable or audible information about the session. Subsequently, the agent may retrieve the session ID, or information rooted in the session's annotations, to respond (in whole or in part) to queries of its own memory that it might make while in conversation with a user about retail practices and issues.
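The "teach, then retrieve" cycle above can be sketched with a simple inverted index standing in for the NLP-based knowledge base. The class, its methods, and the word-level indexing are all assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch of teaching the agent about a session: the session's
# ID, keywords, and annotation text are indexed into a word -> session-ID
# inverted index (a stand-in for the NLP knowledge base), so later queries
# can recover the session ID.
from collections import defaultdict

class KnowledgeBase:
    def __init__(self):
        self.index = defaultdict(set)  # word -> set of session IDs

    def teach(self, session_id, keywords, annotation_text):
        """Index the session under its keywords and annotation words."""
        for word in keywords + annotation_text.lower().split():
            self.index[word].add(session_id)

    def retrieve(self, query):
        """Return session IDs whose indexed terms appear in the query."""
        hits = set()
        for word in query.lower().split():
            hits |= self.index.get(word, set())
        return hits

kb = KnowledgeBase()
kb.teach("S-42", ["promotion"], "endcap built without approved signage")
found = kb.retrieve("which sessions discussed the promotion signage issue")
```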
- Servers can be any of a variety of computing devices capable of receiving information, such as a server, a distributed computing system 10, a rack-mounted server, and so forth. Server may be a single server or a group of servers that are at a same location or at different locations. Servers can receive information from a client/user device via interfaces. Interfaces can be any type of interface capable of receiving information over a network, such as an Ethernet interface, a wireless networking interface, a fiber-optic networking interface, a modem, and so forth. Server also includes a processor, memory, and a bus system including, for example, an information bus and a motherboard, which can be used to establish and to control information communication between the components of the server. - Processor may include one or more microprocessors. Generally, processor may include any appropriate processor and/or logic that is capable of receiving and storing information, and of communicating over a network (not shown). Memory can include a hard drive and a random access memory storage device, such as a dynamic random access memory, computer readable hardware storage devices and media, and other types of non-transitory storage devices.
- Embodiments can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof. Computer programs can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and information from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing information files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and information include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
- Other embodiments are within the scope and spirit of the description and claims. For example, due to the nature of software, functions described above can be implemented using software, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Other embodiments are within the scope of the following claims.
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/381,555 US20180018708A1 (en) | 2016-07-12 | 2016-12-16 | Holographic Technology Implemented Retail Solutions |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662361053P | 2016-07-12 | 2016-07-12 | |
US201662361669P | 2016-07-13 | 2016-07-13 | |
US15/381,555 US20180018708A1 (en) | 2016-07-12 | 2016-12-16 | Holographic Technology Implemented Retail Solutions |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180018708A1 true US20180018708A1 (en) | 2018-01-18 |
Family
ID=60940733
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/379,647 Active 2037-05-29 US10650593B2 (en) | 2016-07-12 | 2016-12-15 | Holographic technology implemented security solution |
US15/379,657 Active US10769854B2 (en) | 2016-07-12 | 2016-12-15 | Holographic technology implemented security solution |
US15/381,262 Abandoned US20180018681A1 (en) | 2016-07-12 | 2016-12-16 | Holographic Technology Implemented Retail Solutions |
US15/381,555 Abandoned US20180018708A1 (en) | 2016-07-12 | 2016-12-16 | Holographic Technology Implemented Retail Solutions |
US15/381,396 Active US10614627B2 (en) | 2016-07-12 | 2016-12-16 | Holographic technology implemented security solution |
US15/381,588 Active US10147238B2 (en) | 2016-07-12 | 2016-12-16 | Holographic technology implemented retail solutions |
US16/200,341 Active US10521968B2 (en) | 2016-07-12 | 2018-11-26 | Systems and methods for mixed reality with cognitive agents |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/379,647 Active 2037-05-29 US10650593B2 (en) | 2016-07-12 | 2016-12-15 | Holographic technology implemented security solution |
US15/379,657 Active US10769854B2 (en) | 2016-07-12 | 2016-12-15 | Holographic technology implemented security solution |
US15/381,262 Abandoned US20180018681A1 (en) | 2016-07-12 | 2016-12-16 | Holographic Technology Implemented Retail Solutions |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/381,396 Active US10614627B2 (en) | 2016-07-12 | 2016-12-16 | Holographic technology implemented security solution |
US15/381,588 Active US10147238B2 (en) | 2016-07-12 | 2016-12-16 | Holographic technology implemented retail solutions |
US16/200,341 Active US10521968B2 (en) | 2016-07-12 | 2018-11-26 | Systems and methods for mixed reality with cognitive agents |
Country Status (1)
Country | Link |
---|---|
US (7) | US10650593B2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180293596A1 (en) * | 2017-04-10 | 2018-10-11 | International Business Machines Corporation | Shelf image recognition analytics |
US10521968B2 (en) | 2016-07-12 | 2019-12-31 | Tyco Fire & Security Gmbh | Systems and methods for mixed reality with cognitive agents |
US10616419B1 (en) * | 2018-12-12 | 2020-04-07 | Mitel Networks Corporation | Devices, systems and methods for communications that include social media clients |
US10775776B2 (en) * | 2017-11-17 | 2020-09-15 | Accenture Global Solutions Limited | Digital manufacturing with product tracking and social media analysis of the product |
US20230230109A1 (en) * | 2022-01-19 | 2023-07-20 | Martin A. Alpert | Trend prediction |
US20230245152A1 (en) * | 2022-02-03 | 2023-08-03 | Capital One Services, Llc | Local trend and influencer identification using machine learning predictive models |
US11854046B2 (en) | 2020-02-14 | 2023-12-26 | Walmart Apollo, Llc | Systems and methods for presenting augmented reality promotion indicators |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10360572B2 (en) * | 2016-03-07 | 2019-07-23 | Ricoh Company, Ltd. | Image processing system, method and computer program product for evaluating level of interest based on direction of human action |
US10951643B2 (en) * | 2017-03-15 | 2021-03-16 | Refinitiv Us Organization Llc | Systems and methods for detecting and locating unsecured sensors in a network |
US11495118B2 (en) * | 2017-06-27 | 2022-11-08 | Oneevent Technologies, Inc. | Augmented reality of a building |
US10366599B1 (en) * | 2017-09-15 | 2019-07-30 | Global Tel*Link Corporation | Communication devices for guards of controlled environments |
US10403123B2 (en) * | 2017-10-31 | 2019-09-03 | Global Tel*Link Corporation | Augmented reality system for guards of controlled environment residents |
KR102619621B1 (en) * | 2018-02-07 | 2023-12-29 | 삼성전자주식회사 | Electronic device and method for communicating with chatbot |
US10846532B2 (en) * | 2018-02-27 | 2020-11-24 | Motorola Solutions, Inc. | Method and apparatus for identifying individuals using an augmented-reality application |
US10613505B2 (en) * | 2018-03-29 | 2020-04-07 | Saudi Arabian Oil Company | Intelligent distributed industrial facility safety system |
CN108650245A (en) * | 2018-04-24 | 2018-10-12 | 上海奥孛睿斯科技有限公司 | Internet of things system based on augmented reality and operation method |
TWI674814B (en) | 2018-04-30 | 2019-10-11 | 奇邑科技股份有限公司 | Communication method between gateways and wireless gateway system thereof |
US11417064B2 (en) * | 2018-07-10 | 2022-08-16 | Motorola Solutions, Inc. | Method, apparatus and system for mapping an incident type to data displayed at an incident scene |
US11850514B2 (en) | 2018-09-07 | 2023-12-26 | Vulcan Inc. | Physical games enhanced by augmented reality |
CN109144014B (en) * | 2018-10-10 | 2021-06-25 | 北京交通大学 | System and method for detecting operation condition of industrial equipment |
US11670080B2 (en) * | 2018-11-26 | 2023-06-06 | Vulcan, Inc. | Techniques for enhancing awareness of personnel |
WO2020131049A1 (en) * | 2018-12-19 | 2020-06-25 | Hewlett-Packard Development Company, L.P. | Security detection analytics |
CN109658573A (en) * | 2018-12-24 | 2019-04-19 | 上海爱观视觉科技有限公司 | A kind of intelligent door lock system |
US11030814B1 (en) * | 2019-01-15 | 2021-06-08 | Facebook, Inc. | Data sterilization for post-capture editing of artificial reality effects |
US11950577B2 (en) | 2019-02-08 | 2024-04-09 | Vale Group Llc | Devices to assist ecosystem development and preservation |
CN109920203A (en) * | 2019-02-12 | 2019-06-21 | 合肥极光科技股份有限公司 | A kind of campus security intelligent monitor system based on technology of Internet of things |
CN109903438A (en) * | 2019-02-21 | 2019-06-18 | 安徽鸿延传感信息有限公司 | A kind of phonetic warning system of enterprise security risk management and control inspection |
US11912382B2 (en) | 2019-03-22 | 2024-02-27 | Vulcan Inc. | Underwater positioning system |
WO2020219643A1 (en) * | 2019-04-23 | 2020-10-29 | Apple Inc. | Training a model with human-intuitive inputs |
CN110177252A (en) * | 2019-05-13 | 2019-08-27 | 安徽银点电子科技有限公司 | A kind of monitoring system of entering based on electronic equipment |
CN111327925A (en) * | 2019-06-04 | 2020-06-23 | 杭州海康威视系统技术有限公司 | Data processing method and device, electronic equipment and machine-readable storage medium |
FR3103955A1 (en) * | 2019-11-29 | 2021-06-04 | Orange | Device and method for environmental analysis, and device and voice assistance method implementing them |
CN111105218A (en) * | 2020-01-07 | 2020-05-05 | 国网福建省电力有限公司 | Power distribution network operation monitoring method based on holographic image technology |
CN111476171B (en) * | 2020-04-09 | 2021-03-26 | 腾讯科技(深圳)有限公司 | Distributed object recognition system and method and edge computing equipment |
US11945404B2 (en) * | 2020-04-23 | 2024-04-02 | Toyota Motor Engineering & Manufacturing North America, Inc. | Tracking and video information for detecting vehicle break-in |
CN111541876A (en) * | 2020-05-18 | 2020-08-14 | 上海未高科技有限公司 | Method for realizing high-altitude cloud anti-AR technology |
US11126405B1 (en) * | 2020-06-19 | 2021-09-21 | Accenture Global Solutions Limited | Utilizing augmented reality and artificial intelligence to automatically generate code for a robot |
CN111784986B (en) * | 2020-07-13 | 2021-02-09 | 和宇健康科技股份有限公司 | Intelligent security alarm method based on big data |
US11956324B2 (en) * | 2021-01-07 | 2024-04-09 | Stmicroelectronics S.R.L. | Sensor device, system and method |
US11893551B2 (en) | 2021-04-15 | 2024-02-06 | Bank Of America Corporation | Information security system and method for augmented reality check generation |
US11769066B2 (en) | 2021-11-17 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin triggers and actions |
US11934966B2 (en) | 2021-11-17 | 2024-03-19 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin inferences |
Family Cites Families (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5357148A (en) | 1992-12-29 | 1994-10-18 | Sgs-Thomson Microelectronics, Inc. | Device for biasing an RF device operating in quasi-linear modes with voltage compensation |
US7023913B1 (en) | 2000-06-14 | 2006-04-04 | Monroe David A | Digital security multimedia sensor |
US20030025599A1 (en) | 2001-05-11 | 2003-02-06 | Monroe David A. | Method and apparatus for collecting, sending, archiving and retrieving motion video and still images and notification of detected events |
US20040075738A1 (en) | 1999-05-12 | 2004-04-22 | Sean Burke | Spherical surveillance system architecture |
US8520068B2 (en) * | 1999-07-20 | 2013-08-27 | Comcast Cable Communications, Llc | Video security system |
WO2001064481A2 (en) | 2000-03-02 | 2001-09-07 | Donnelly Corporation | Video mirror systems incorporating an accessory module |
US20110058036A1 (en) | 2000-11-17 | 2011-03-10 | E-Watch, Inc. | Bandwidth management and control |
CA2327847C (en) * | 2000-12-07 | 2010-02-23 | Phasys Limited | System for transmitting and verifying alarm signals |
US7253732B2 (en) | 2001-09-10 | 2007-08-07 | Osann Jr Robert | Home intrusion confrontation avoidance system |
US6970083B2 (en) | 2001-10-09 | 2005-11-29 | Objectvideo, Inc. | Video tripwire |
US20030158771A1 (en) | 2002-01-16 | 2003-08-21 | Ncr Corporation | Retention modeling methodology for airlines |
US7321386B2 (en) | 2002-08-01 | 2008-01-22 | Siemens Corporate Research, Inc. | Robust stereo-driven video-based surveillance |
US20050010649A1 (en) | 2003-06-30 | 2005-01-13 | Ray Payne | Integrated security suite architecture and system software/hardware |
US7801833B2 (en) | 2003-12-22 | 2010-09-21 | Endicott Interconnect Technologies, Inc. | Item identification control method |
US7249064B1 (en) | 2004-01-16 | 2007-07-24 | Carmen Billy W | Method for consumer referral of products to retailers |
US8965460B1 (en) | 2004-01-30 | 2015-02-24 | Ip Holdings, Inc. | Image and augmented reality based networks using mobile devices and intelligent electronic glasses |
US8963713B2 (en) | 2005-03-16 | 2015-02-24 | Icontrol Networks, Inc. | Integrated security network with security alarm signaling system |
US20060136575A1 (en) | 2004-05-11 | 2006-06-22 | Ray Payne | Integrated security suite architecture and system software/hardware |
US20060179463A1 (en) | 2005-02-07 | 2006-08-10 | Chisholm Alpin C | Remote surveillance |
WO2006119323A2 (en) | 2005-05-03 | 2006-11-09 | Palomar Technology, Llc | Trusted monitoring system and method |
US7731588B2 (en) | 2005-09-28 | 2010-06-08 | The United States Of America As Represented By The Secretary Of The Navy | Remote vehicle control system |
EP2035854B1 (en) | 2006-06-27 | 2014-04-16 | Telefonaktiebolaget LM Ericsson (publ) | A radio frequency emitter detection and location method and system |
US20080071559A1 (en) | 2006-09-19 | 2008-03-20 | Juha Arrasvuori | Augmented reality assisted shopping |
US20080189169A1 (en) | 2007-02-01 | 2008-08-07 | Enliven Marketing Technologies Corporation | System and method for implementing advertising in an online social network |
JP5584474B2 (en) | 2007-03-05 | 2014-09-03 | インヴェンサス・コーポレイション | Chip with rear contact connected to front contact by through via |
WO2009012289A1 (en) | 2007-07-16 | 2009-01-22 | Cernium Corporation | Apparatus and methods for video alarm verification |
US8180396B2 (en) | 2007-10-18 | 2012-05-15 | Yahoo! Inc. | User augmented reality for camera-enabled mobile devices |
EP2166493A1 (en) | 2008-09-12 | 2010-03-24 | BRITISH TELECOMMUNICATIONS public limited company | Control of supply networks and verification of items |
US8937658B2 (en) | 2009-10-15 | 2015-01-20 | At&T Intellectual Property I, L.P. | Methods, systems, and products for security services |
US8310365B2 (en) | 2010-01-08 | 2012-11-13 | Utc Fire & Security Americas Corporation, Inc. | Control system, security system, and method of monitoring a location |
US20120249797A1 (en) | 2010-02-28 | 2012-10-04 | Osterhout Group, Inc. | Head-worn adaptive display |
US20120242698A1 (en) | 2010-02-28 | 2012-09-27 | Osterhout Group, Inc. | See-through near-eye display glasses with a multi-segment processor-controlled optical layer |
US20110213664A1 (en) | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Local advertising content on an interactive head-mounted eyepiece |
WO2011112941A1 (en) | 2010-03-12 | 2011-09-15 | Tagwhat, Inc. | Purchase and delivery of goods and services, and payment gateway in an augmented reality-enabled distribution network |
US20110254680A1 (en) | 2010-04-16 | 2011-10-20 | Infrasafe, Inc. | Security monitoring system |
KR101016556B1 (en) | 2010-05-06 | 2011-02-24 | 전성일 | Method, server and computer-readable recording medium for accessing information on person using augmented reality |
US10456209B2 (en) | 2010-10-13 | 2019-10-29 | Gholam A. Peyman | Remote laser treatment system with dynamic imaging |
US20140002236A1 (en) | 2010-12-02 | 2014-01-02 | Viscount Security Systems Inc. | Door Lock, System and Method for Remotely Controlled Access |
KR101292463B1 (en) | 2011-01-27 | 2013-07-31 | 주식회사 팬택 | Augmented reality system and method that share augmented reality service to remote |
US8886581B2 (en) | 2011-05-11 | 2014-11-11 | Ari M. Frank | Affective response predictor for a stream of stimuli |
US8223088B1 (en) * | 2011-06-09 | 2012-07-17 | Google Inc. | Multimode input field for a head-mounted display |
TWI461721B (en) | 2012-03-16 | 2014-11-21 | Quadlink Technology Inc | Object detection device and method thereof |
US9148341B2 (en) | 2012-03-26 | 2015-09-29 | Jds Uniphase Corporation | Upgrading a programmable logic gate array in an in-service pluggable transceiver |
US20140081858A1 (en) | 2012-09-14 | 2014-03-20 | Diebold Self-Service Systems Division Of Diebold, Incorporated | Banking system controlled responsive to data read from data bearing records |
US9607025B2 (en) | 2012-09-24 | 2017-03-28 | Andrew L. DiRienzo | Multi-component profiling systems and methods |
US10977701B2 (en) | 2012-12-04 | 2021-04-13 | Crutchfield Corporation | Techniques for providing retail customers a seamless, individualized discovery and shopping experience between online and brick and mortar retail locations |
US10110805B2 (en) | 2012-12-06 | 2018-10-23 | Sandisk Technologies Llc | Head mountable camera system |
US9852381B2 (en) | 2012-12-20 | 2017-12-26 | Nokia Technologies Oy | Method and apparatus for providing behavioral pattern generation for mixed reality objects |
US9721373B2 (en) | 2013-03-14 | 2017-08-01 | University Of Southern California | Generating instructions for nonverbal movements of a virtual character |
US10243786B2 (en) | 2013-05-20 | 2019-03-26 | Citrix Systems, Inc. | Proximity and context aware mobile workspaces in enterprise systems |
CN105359063B (en) | 2013-06-09 | 2018-08-17 | 索尼电脑娱乐公司 | Utilize the head-mounted display of tracking |
US20150020086A1 (en) | 2013-07-11 | 2015-01-15 | Samsung Electronics Co., Ltd. | Systems and methods for obtaining user feedback to media content |
WO2015088057A1 (en) | 2013-12-10 | 2015-06-18 | 엘지전자 주식회사 | 3d camera module |
US9384656B2 (en) * | 2014-03-10 | 2016-07-05 | Tyco Fire & Security Gmbh | False alarm avoidance in security systems filtering low in network |
US9715613B2 (en) | 2014-05-02 | 2017-07-25 | The Boeing Company | Systems and methods for use in authenticating an object |
US20150317418A1 (en) * | 2014-05-02 | 2015-11-05 | Honeywell International Inc. | Providing three-dimensional monitoring of a facility |
AU2015297035B2 (en) | 2014-05-09 | 2018-06-28 | Google Llc | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US20160070343A1 (en) * | 2014-09-09 | 2016-03-10 | Beijing Lenovo Software Ltd. | Information processing method and electronic device |
US9945928B2 (en) | 2014-10-30 | 2018-04-17 | Bastille Networks, Inc. | Computational signal processing architectures for electromagnetic signature analysis |
IL236752B (en) * | 2015-01-15 | 2019-10-31 | Eran Jedwab | An integrative security system and method |
US10300361B2 (en) | 2015-01-23 | 2019-05-28 | Playsight Interactive Ltd. | Ball game training |
CN107211195B (en) * | 2015-02-12 | 2020-04-24 | 日商可乐普拉股份有限公司 | Apparatus and system for viewing and listening to content using head mounted display |
KR102348812B1 (en) * | 2015-03-09 | 2022-01-07 | 삼성전자주식회사 | User information processing method and electronic device supporting the same |
AU2016228525B2 (en) * | 2015-03-12 | 2021-01-21 | Alarm.Com Incorporated | Virtual enhancement of security monitoring |
US10650593B2 (en) | 2016-07-12 | 2020-05-12 | Tyco Fire & Security Gmbh | Holographic technology implemented security solution |
US10540550B2 (en) | 2017-03-20 | 2020-01-21 | Mastercard International Incorporated | Augmented reality systems and methods for service providers |
US11270510B2 (en) | 2017-04-04 | 2022-03-08 | David Peter Warhol | System and method for creating an augmented reality interactive environment in theatrical structure |
-
2016
- 2016-12-15 US US15/379,647 patent/US10650593B2/en active Active
- 2016-12-15 US US15/379,657 patent/US10769854B2/en active Active
- 2016-12-16 US US15/381,262 patent/US20180018681A1/en not_active Abandoned
- 2016-12-16 US US15/381,555 patent/US20180018708A1/en not_active Abandoned
- 2016-12-16 US US15/381,396 patent/US10614627B2/en active Active
- 2016-12-16 US US15/381,588 patent/US10147238B2/en active Active
-
2018
- 2018-11-26 US US16/200,341 patent/US10521968B2/en active Active
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10521968B2 (en) | 2016-07-12 | 2019-12-31 | Tyco Fire & Security Gmbh | Systems and methods for mixed reality with cognitive agents |
US10614627B2 (en) * | 2016-07-12 | 2020-04-07 | Tyco Fire & Security Gmbh | Holographic technology implemented security solution |
US10650593B2 (en) | 2016-07-12 | 2020-05-12 | Tyco Fire & Security Gmbh | Holographic technology implemented security solution |
US10769854B2 (en) | 2016-07-12 | 2020-09-08 | Tyco Fire & Security Gmbh | Holographic technology implemented security solution |
US20180293596A1 (en) * | 2017-04-10 | 2018-10-11 | International Business Machines Corporation | Shelf image recognition analytics |
US10775776B2 (en) * | 2017-11-17 | 2020-09-15 | Accenture Global Solutions Limited | Digital manufacturing with product tracking and social media analysis of the product |
US10616419B1 (en) * | 2018-12-12 | 2020-04-07 | Mitel Networks Corporation | Devices, systems and methods for communications that include social media clients |
US10958793B2 (en) | 2018-12-12 | 2021-03-23 | Mitel Networks Corporation | Devices, systems and methods for communications that include social media clients |
US11854046B2 (en) | 2020-02-14 | 2023-12-26 | Walmart Apollo, Llc | Systems and methods for presenting augmented reality promotion indicators |
US20230230109A1 (en) * | 2022-01-19 | 2023-07-20 | Martin A. Alpert | Trend prediction |
US20230245152A1 (en) * | 2022-02-03 | 2023-08-03 | Capital One Services, Llc | Local trend and influencer identification using machine learning predictive models |
Also Published As
Publication number | Publication date |
---|---|
US20180018823A1 (en) | 2018-01-18 |
US20190139315A1 (en) | 2019-05-09 |
US10521968B2 (en) | 2019-12-31 |
US10147238B2 (en) | 2018-12-04 |
US20180018867A1 (en) | 2018-01-18 |
US10614627B2 (en) | 2020-04-07 |
US10650593B2 (en) | 2020-05-12 |
US20180018681A1 (en) | 2018-01-18 |
US20180018861A1 (en) | 2018-01-18 |
US10769854B2 (en) | 2020-09-08 |
US20180018824A1 (en) | 2018-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180018708A1 (en) | Holographic Technology Implemented Retail Solutions | |
US20230039354A1 (en) | System and method for providing unified workflows integrating multiple computer network resources | |
Yang et al. | Social media data analytics for business decision making system to competitive analysis | |
Lee et al. | The Internet of Things (IoT): Applications, investments, and challenges for enterprises | |
Andrejevic et al. | Defining the sensor society | |
US20190052701A1 (en) | System, method and platform for user content sharing with location-based external content integration | |
Xu | Managing digital enterprise: ten essential topics | |
US7962426B2 (en) | Role/persona based applications | |
US9503412B1 (en) | Systems and methods for IT services and social knowledge management using social objects and activity streams | |
WO2015036817A1 (en) | Structured updated status, requests, user data & programming based presenting & accessing of connections | |
US20180240158A1 (en) | Computer implemented system and method for customer profiling using micro-conversions via machine learning | |
US10909606B2 (en) | Real-time in-venue cognitive recommendations to user based on user behavior | |
Guo et al. | Crowd-ai camera sensing in the real world | |
Dinh et al. | A survey on context awareness in big data analytics for business applications | |
CN106068498A (en) | By local scene information radiography to calculating system based on cloud | |
Nguyen-Duc et al. | A multiple case study of artificial intelligent system development in industry | |
Knight | Innovations in unobtrusive methods | |
US20150262312A1 (en) | Management system and method | |
Ramirez-Asis et al. | A conceptual analysis on the impact of big data analytics toward on digital marketing transformation | |
JP2021503652A (en) | Automatically connect external data to business analysis processing | |
Stănciulescu et al. | Optimizing the IT structures of tourism SMEs using modern applications and resources (Cloud) | |
Tarka et al. | On the Unstructured Big Data Analytical Methods in Firms: Conceptual Model, Measurement, and Perception | |
Dlamini | The potential use of the Internet of Things (IoT) in South African retail businesses | |
Dostál et al. | Knowledge Management in Service Desk Environment: An Overview of Methods, Tools and Techniques. | |
Adamczewski | Digital transformation of business entities in competitive environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: TYCO FIRE & SECURITY GMBH, SWITZERLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RASBAND, PAUL B.;REEL/FRAME:044513/0929 Effective date: 20171006 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: TYCO FIRE & SECURITY GMBH, SWITZERLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LOCKE, ROBERT B.;REEL/FRAME:052273/0575 Effective date: 20200330 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |