WO2011100436A1 - System and method of determining an area of concentrated focus and controlling an image displayed in response - Google Patents

System and method of determining an area of concentrated focus and controlling an image displayed in response

Info

Publication number
WO2011100436A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
sensor
processor
concentrated focus
individual
Prior art date
Application number
PCT/US2011/024357
Other languages
French (fr)
Inventor
Isaac S. Daniel
Original Assignee
Lead Technology Capital Management, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lead Technology Capital Management, Llc filed Critical Lead Technology Capital Management, Llc
Publication of WO2011100436A1 publication Critical patent/WO2011100436A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 - Indexing scheme relating to G06F3/048
    • G06F2203/04806 - Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • the at least one means 114 (not shown) for electronically connecting a display device 106 to the at least one processor 108 of the system 100 may be any kind of means, such as a video connector, a coaxial cable, an HDMI cable, an s-video component connector, a WiFi video transceiver, a Bluetooth video transceiver, an internal video cable socket, a DVI connector, and the like.
  • Means 114 (not shown) for electronically connecting a display device 106 to the at least one processor 108 of the system 100 may include a cable, but it should be noted that such means 114 need not include a cable, e.g. where the connection is wireless.
  • the display device 106 may be any kind of display device 106, such as, but not limited to, a television, a computer monitor, a projector, or any other kind of screen and/or display device 106.
  • Processor 108 may be any type of processor, such as, but not limited to, a central processing unit (CPU), a microprocessor, a video processor, a front end processor, a coprocessor, a single-core processor, a multi-core processor, and the like.
  • Processor 108 and the at least one sensor 102 are electronically connected to each other forming functional components of a tracking system 100, wherein the tracking system 100 is configured for detecting an area of concentrated focus 104, i.e. the object of fixation on a display device 106.
  • the at least one sensor 102, and the processor 108 are functional components of an eye tracking system 100, wherein the eye tracking system 100 is configured to detect an individual's gaze point.
  • the at least one sensor 102 reflects a beam of infrared light upon an individual's eye, where the eye movements, gaze point(s) and reflection patterns are tracked and recorded.
  • Processor 108 uses the exact gaze point from the recorded reflection patterns to calculate the area of concentrated focus 104.
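The patent does not specify how recorded reflection patterns are turned into a screen coordinate, so the following is only a minimal sketch of one common approach: fitting a quadratic map from pupil-center/corneal-reflection (PCCR) vectors to display positions during a calibration pass. The polynomial form, the function names, and the use of NumPy are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch only: maps a pupil-center/corneal-reflection (PCCR)
# vector to display coordinates via a calibrated 2nd-order polynomial.
# Coefficients would come from a per-user calibration in which the
# individual looks at known on-screen targets.
import numpy as np

def fit_calibration(pccr_vectors, screen_points):
    """Least-squares fit of a quadratic mapping (vx, vy) -> (sx, sy)."""
    vx, vy = np.asarray(pccr_vectors, dtype=float).T
    # Design matrix: [1, vx, vy, vx*vy, vx^2, vy^2]
    A = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points, dtype=float),
                                 rcond=None)
    return coeffs  # shape (6, 2): one column each for sx and sy

def gaze_point(pccr_vector, coeffs):
    """Estimate the on-screen gaze point for one PCCR sample."""
    vx, vy = pccr_vector
    features = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return features @ coeffs  # (x, y) in display pixels
```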
  • the at least one sensor 102 may be configured to determine if an individual's gaze point is focused on a list or an image 110.
  • system 100 may further comprise computer executable instructions 116 executable by the at least one processor 108 and configured for performing any one or more of the various functions of the system 100 and methods disclosed herein.
  • the computer executable instructions 116 executable by the at least one processor 108 are configured for controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106; controlling the image 110 displayed based on the area of concentrated focus 104; and generating content 118 for the area of concentrated focus 104.
  • the computer executable instructions 116 may be loaded directly on the processor 108, or may be stored on storage means 120, such as, but not limited to, computer readable media, e.g. a hard drive, a solid state drive, flash memory, random access memory, CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-R, DVD-RW, and the like.
  • Computer executable instructions 116 may be any type of computer executable instructions 116, which may be in the form of a computer program.
  • The computer program may be composed in any suitable programming language or source code, such as C++, C, Java, JavaScript, HTML, XML, or other programming languages.
  • Controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106 may include using for example a fingerprint scanner to detect the location of the individual's fingerprint on the display device 106.
  • the computer executable instructions 116 may be programmed to control, for example, an eye tracker, an eye scanner, an iris scanner, a face scanner, a visual sensor, e.g. a camera, an audio sensor, e.g. a microphone, a tactile sensor, e.g. a vibration sensor, a thermal sensor, e.g. a heat sensor and/or infrared camera, a chemical sensor, e.g. an odor sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a camera, a thermal imaging camera, a thumbprint scanner, a fingerprint scanner, a microphone and the like, to detect, for example, an individual's eye movements, gaze point, body heat, scent, thumbprint, fingerprint, the direction of the individual's voice, and the like.
  • FIG. 1C is an illustrative diagram of the system 100 according to one embodiment.
  • the area of concentrated focus 104 on the display device 106 may be a particular location on a map 122.
  • processor 108 is configured to control the image 110 displayed on the display device 106 based on the detected area of concentrated focus 104.
  • Image 110 as used herein may comprise a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions and the like.
  • Illustratively, as shown in FIG. 1C, controlling the image 110 displayed on the display device 106 may include, but is not limited to, exploding the image 110 of the area of concentrated focus 104 displayed on the display device 106 by enlarging that portion of the image 110, i.e. the map 122, as shown; a minimal sketch of this zoom operation follows.
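As a concrete illustration of the "exploding" operation, here is a minimal sketch using the Pillow imaging library. The crop window size and zoom factor are invented parameters; the patent claims the behavior, not any particular implementation.

```python
# Sketch of "exploding" (enlarging) the map region around the detected
# area of concentrated focus. Window size and zoom factor are
# illustrative assumptions, not values taken from the patent.
from PIL import Image

def explode_region(image, focus_xy, window=200, zoom=2):
    """Return an enlarged crop centered on the gaze point."""
    x, y = focus_xy
    half = window // 2
    # Clamp the crop box to the image bounds.
    left = max(0, x - half)
    top = max(0, y - half)
    right = min(image.width, x + half)
    bottom = min(image.height, y + half)
    region = image.crop((left, top, right, bottom))
    return region.resize((region.width * zoom, region.height * zoom))

# Hypothetical usage:
# enlarged = explode_region(Image.open("mall_map.png"), (640, 360))
```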
  • FIG. ID is an illustrative diagram of the system 100 according to one embodiment.
  • processor 108 is configured to control the image 110 displayed on the display device 106 based on the detected area of concentrated focus 104.
  • controlling the image 110 displayed includes highlighting the area of concentrated focus 104.
  • Such highlighting may require processor 108 to activate display means 124 (not shown) in electrical communication with the processor 108, where display means 124 may include: a liquid crystal display ("LCD") screen, a light emitting diode ("LED") screen, or a monitor and the like.
  • display means 124 is electronically connected to processor 108.
  • display means 124 is wirelessly connected to processor 108.
  • display means 124 may include a control means, such as, but not limited to, a keyboard, a mouse, a touch screen, a stylus, and the like. In either event, processor 108 may activate display means 124 (not shown) such that highlighting the area of concentrated focus 104 will display the image 110 as a projected hologram or a lighted display.
  • FIG. IE is an illustrative diagram of the system 100 according to one embodiment.
  • processor 108 is configured to highlight an area on a map 122 on the display device 106 that corresponds to the area of concentrated focus 104 on a list segment 126.
  • Illustratively, an individual may be fixated on a list segment 126, e.g. a listing of stores, and in particular on the listing for "ABC Store."
  • processor 108 highlights the area on the map 122 that corresponds to the area of concentrated focus 104 on the list segment 126, i.e. that portion on the map 122 corresponding to the location of ABC Store.
  • processor 108 is configured to accomplish the reverse, i.e. once the area of concentrated focus 104 on a map 122 is detected, processor 108 is configured to highlight a list segment 126 on the display device 106 that corresponds to that area of concentrated focus 104.
  • Illustratively, an individual's area of concentrated focus 104 may be fixated on a particular location on a map 122, i.e. the location of "ABC Store."
  • Processor 108 detects the area of concentrated focus 104, i.e. that portion of the map 122, and highlights the list segment 126 corresponding to the area of concentrated focus 104 on the map 122; a sketch of this two-way lookup follows.
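A hypothetical sketch of the two-way correspondence described above: a single table mapping list entries to map rectangles supports both directions of highlighting. Store names and coordinates are invented for illustration.

```python
# Sketch of the two-way correspondence between list segments and map
# regions used for highlighting. Entries are illustrative only.
MAP_REGIONS = {
    "ABC Store": (120, 80, 180, 140),   # (left, top, right, bottom) on the map
    "XYZ Outlet": (300, 200, 360, 260),
}

def region_for_listing(store_name):
    """Gaze on a list entry -> map rectangle to highlight."""
    return MAP_REGIONS.get(store_name)

def listing_for_point(x, y):
    """Gaze on the map -> list entry to highlight (the reverse lookup)."""
    for name, (l, t, r, b) in MAP_REGIONS.items():
        if l <= x <= r and t <= y <= b:
            return name
    return None
```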
  • Kiosk 128, as shown in FIGs. 1A-1F, as used herein describes an open, electronic, computerized booth within which the at least one processor 108 is positioned, and wherein the kiosk 128 contains the at least one sensor 102 and at least one means 114 (not shown) for connecting the display device 106 to the at least one sensor 102.
  • Kiosk 128 is adapted to include the display device 106, as it may house a computer terminal with computer software and hardware to perform the varied functions of the systems 100 and methods disclosed herein, e.g. displaying images 110 thereon.
  • the electronic kiosk 128 may be interactive without allowing the individual to access the system 100 functions, while in other embodiments, the kiosk 128 is not interactive.
  • Kiosk 128 as used herein includes touch screens, trackballs, computer keyboards, and pushbuttons and the like, typically used as information booths and often located at malls and other large indoor or outdoor structures.
  • the area of concentrated focus 104 on the display device 106 may include one or more images 110, 110' of advertisements as displayed on the display device 106.
  • one, two or a plurality of images 110, 110' of advertisements may be displayed on the display device 106, for which statistical information regarding the area(s) of concentrated focus 104 for the images 110, 110' of advertisements may be gathered and analyzed.
  • the display device 106 may be configured to display the advertising images 110, 110' singly, a plurality, or in combination thereof, e.g. display device 106 may be configured to electronically rotate consecutive images 110 of advertisements such that a plurality of advertisements may be displayed within a predetermined period.
  • the at least one sensor 102 detects the area of concentrated focus 104 for the images 110, 110' of advertisements displayed, while the processor 108 collects statistical data regarding the areas of concentrated focus 104, 104'. Accordingly, the areas of concentrated focus 104 for at least one image 110 of an advertisement, e.g. specific merchandise, or the advertising image 110 that received the most areas of concentrated focus 104, 104', e.g. gaze points, are tracked and recorded; a sketch of such tallying follows.
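One plausible, but assumed, realization of the statistics gathering: tally gaze samples per advertisement region and convert them to dwell time. The region layout and sampling rate are illustrative, not drawn from the disclosure.

```python
# Sketch of gathering fixation statistics for advertisement images.
# Region layout and sample period are invented for illustration.
from collections import Counter

AD_REGIONS = {"ad_left": (0, 0, 640, 720), "ad_right": (640, 0, 1280, 720)}

def tally_fixations(gaze_samples, sample_period_s=0.02):
    """Count gaze samples inside each ad region; report dwell time."""
    hits = Counter()
    for x, y in gaze_samples:
        for ad, (l, t, r, b) in AD_REGIONS.items():
            if l <= x < r and t <= y < b:
                hits[ad] += 1
    # Convert raw sample counts to seconds of dwell per advertisement.
    return {ad: n * sample_period_s for ad, n in hits.items()}
```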
  • FIG. IF is an illustrative diagram of the system 100 according to one embodiment.
  • System 100 may also include at least one communication means 130 configured for transmitting generated content 118 to at least one multimedia device 112 based on the area of concentrated focus 104.
  • Content 118 as used herein may comprise a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions and the like.
  • Communication means 130 may be a wireless communication means 130', which employs a short range wireless protocol, such as, but not limited to, a radio frequency transceiver, a radio frequency receiver, and/or a radio frequency transmitter.
  • the radio frequency receiver may be any type of radio frequency receiver, including, but not limited to, a positioning system receiver, such as a global positioning system receiver and a local positioning system receiver, such as a Wi-Fi positioning system receiver.
  • communication means 130 may employ wireless protocols such as Bluetooth, ZigBee, the 802.11 series, or a wireless modem, such as, but not limited to, a global system for mobile communications (GSM) modem, or any other short range wireless protocol that is well known and used in the arts.
  • Multimedia device 112 may be any type of device configured with means for communicating wirelessly and/or wired, such as, but not limited to, portable computers, laptop computers, cellular phones, smart phones, PDAs, tablet personal computers, notebooks, iPads, portable screens, portable processing devices, and/or any other WLAN communication devices that are readily used in the arts to transmit and/or receive wireless communications.
  • Multimedia device 112 is also equipped with at least one communications means 130' positioned within the multimedia device 112 configured to receive generated content 118 for the area of concentrated focus 104.
  • processor 108 includes computer executable instructions 116 executable by the at least one processor 108 and configured for generating content 118 for the area of concentrated focus 104, e.g. map, navigational directions, charts, and the like.
  • processor 108 generates content 118, e.g. a graphical map 122 that may optionally be transmitted to the multimedia device 112, for the individual's use.
  • FIGs. 2A and 2B are illustrative diagrams of the system 200 according to another embodiment.
  • System 200 comprises: at least one communications means 130' positioned within the multimedia device 112 configured to receive generated content 118 from the display device 106; and at least one processor 108' positioned within the multimedia device 112 and in electrical communication with the multimedia device's at least one communications means 130', wherein the multimedia device's at least one processor 108' is configured to determine a location of the multimedia device 112 based on a positioning system signal.
  • Multimedia device 112 is equipped with location determining means 202 either electronically or mechanically connected to its processor 108'.
  • the electronic connections may be wired and/or wireless.
  • Location determining means 202 may comprise communications means 130', where the communications means 130' may be a wireless communications means 130' employing a short range wireless protocol as described above. Such short range wireless protocols may include, but are not limited to, a radio frequency transceiver, a radio frequency receiver, and/or a radio frequency transmitter.
  • In some embodiments, the wireless communication means 130' is a radio frequency receiver.
  • the radio frequency receiver may be any type of radio frequency receiver, including, but not limited to, at least one positioning system receiver, such as a global positioning system receiver and a local positioning system receiver, such as a Wi-Fi positioning system receiver.
  • communication means 130' may employ wireless protocols such as Bluetooth, ZigBee, the 802.11 series, or a wireless modem, such as, but not limited to, a global system for mobile communications (GSM) modem, or any other short range wireless protocol that is well known and used in the arts.
  • the multimedia device's at least one communication means 130' is configured to communicate wirelessly and to receive at least one signal from a positioning system 206 over a wireless area network.
  • Determining the geographical location of the multimedia device 112 may include triangulating the location of the multimedia device 112 based on at least one positioning system signal 204, e.g. from a local positioning system 206, received by the multimedia device's location determining means 202.
  • processor 108 may be wirelessly connected to location determining means 202 so that the location determination may be performed remotely.
  • determining geographical position of the multimedia device 112 includes determining the latitude and longitude coordinates of the current geographical position of the multimedia device 112, such as, the device's location determining means 202 receiving a signal 204, such as a location determination signal, from a positioning system 206, such as a global positioning system (GPS) 208, or local positioning system, such as a Wi-Fi positioning system, which may originate from a satellite, or a ground based antenna.
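The disclosure says "triangulating" without giving a procedure. A common concrete reading is trilateration from ranges to transmitters at known positions, e.g. Wi-Fi access points; the linearized least-squares sketch below is one such reading under that assumption, not the patent's own method.

```python
# Sketch of one concrete reading of "triangulating the location of the
# multimedia device": linearized 2-D trilateration from measured ranges
# to beacons at known positions. Coordinates and ranges are illustrative.
import numpy as np

def trilaterate(beacons, ranges):
    """Solve for (x, y) given >= 3 beacon positions and measured ranges."""
    (x1, y1), r1 = beacons[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        # Subtracting the first range equation linearizes the system:
        # 2(xi-x1)x + 2(yi-y1)y = r1^2 - ri^2 + xi^2 - x1^2 + yi^2 - y1^2
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(solution)

# Hypothetical usage (equal ranges place the device near (50, 50)):
# trilaterate([(0, 0), (100, 0), (0, 100)], [70.7, 70.7, 70.7])
```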
  • System 200 may also comprise at least one sensor 102 configured for detecting an area of concentrated focus 104 on a display device 106; and at least one processor 108 electronically connected to the at least one sensor 102 as functional components of a tracking system 200, wherein the at least one processor 108 is configured for generating content 118 for the area of concentrated focus 104.
  • system 200 includes at least one means 114 for electronically connecting the display device 106 to the at least one processor 108; and computer executable instructions 116 executable by the at least one processor 108 and configured for performing any one or more of the following: controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106; controlling the image 110 displayed based on the area of concentrated focus 104; generating content 118 for the area of concentrated focus 104; and determining the location of the multimedia device 112 by triangulating the location of the multimedia device 112 based on at least one positioning system signal.
  • the generated content 118 may be conveniently stored on the multimedia device's storage means 120' for personal use.
  • an individual may receive generated content 118 comprising navigational directions, which may be published in text, audio or graphical display and used to locate, e.g., merchandise or an object for purchase.
  • the received generated content 118 may be stored in the multimedia device's storage means 120'.
  • both the display device 106 and the multimedia device 112 may include at least one storage means 120, 120' either electronically or mechanically connected to their respective processors 108, 108'. In the case of electronic connections, the electronic connections may be wired and/or wireless connections.
  • Storage means 120, 120' may comprise a storage device and may include memory, such as, but not limited to: read-only memory, such as CD-ROMs, DVDs, floppy disks, and the like; read and write memory, such as a hard drive, floppy disc, CD-RW, DVD-RW, and the like; solid state memory, such as solid state hard drives, flash memory, and the like; and random access memory.
  • Storage means 120, 120' may be used to store the generated content 118, e.g. maps 122, 122', navigational directions, and the like. The information may be retrieved from the storage means 120, 120' using their respective processors 108, 108'.
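A minimal sketch of storing and retrieving generated content 118 on the storage means 120', assuming a simple JSON file; the path and schema are invented for illustration.

```python
# Sketch of persisting generated content (e.g. navigational directions)
# to local storage and retrieving it later. File name and record layout
# are illustrative assumptions.
import json
from pathlib import Path

STORE = Path("generated_content.json")

def save_content(content):
    """Write the generated content record to the storage means."""
    STORE.write_text(json.dumps(content))

def load_content():
    """Retrieve the stored record, or None if nothing has been saved."""
    return json.loads(STORE.read_text()) if STORE.exists() else None

# Hypothetical usage:
# save_content({"destination": "ABC Store", "directions": ["Turn left"]})
```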
  • FIG. 3 is a sample flowchart of an exemplary method 300 of controlling an image 110 responsive to an area of concentrated focus 104.
  • Method 300 comprises using at least one sensor 102 (step 302) configured for detecting an area of concentrated focus 104 on a display device 106 that may be housed in a kiosk 128.
  • Kiosk 128 may contain at least one means for connecting the display device 106 to the at least one sensor 102 as well as house the at least one processor 108 that is electronically connected to at least one sensor 102. Accordingly, kiosk 128 is adapted to include a display device 106.
  • the at least one sensor 102 comprises any kind of sensor that is used in the arts and may include any one or more of the following: an eye scanner, an iris scanner, a face scanner, a visual sensor, an audio sensor, a tactile sensor, a thermal sensor, a chemical sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a camera, a thermal imaging camera, a thumbprint scanner, a fingerprint scanner, a microphone and the like.
  • the area of concentrated focus 104 may include but is not limited to any one or more of the following locations: an individual's gaze point, direction of an individual's voice, individual's touch, individual's body heat, an individual's scent, an individual's thumbprint, an individual's fingerprint, eye movements and the like.
  • the at least one sensor 102 is configured for precisely detecting the object of the concentrated focus 104, e.g. an image 110 or a portion on a list segment 126.
  • Image 110 may comprise a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions and the like.
  • Method 300 also comprises using the at least one processor 108 configured for controlling an image 110 displayed (step 304), based on the area of concentrated focus 104.
  • the at least one processor 108 is electronically connected to the at least one sensor 102.
  • the electronic connections may be wired and/or wireless.
  • Controlling the image 110 displayed on the display device 106 includes but is not limited to: exploding (e.g. enlarging) the area of concentrated focus 104 displayed on the display device 106; highlighting the area of concentrated focus 104; highlighting an area on a map 122 that corresponds to the area of concentrated focus 104 on a list segment 126; and highlighting a list segment 126 that corresponds to the area of concentrated focus 104 on a map 122.
  • FIG. 4 is a sample flowchart of an exemplary method 400 of controlling the image 110 displayed on the display device 106 according to one embodiment.
  • Method 400 comprises detecting an area of concentrated focus 104 on a display device 106 (step 402) by using at least one sensor 102 electronically connected to at least one processor 108.
  • the at least one processor 108 may have stored thereon computer executable instructions 116, executable by the at least one processor 108, configured for controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106, as well as controlling the image 110 displayed (step 404) on the display device 106 that is electronically connected to the at least one processor 108, based on the area of concentrated focus 104.
  • FIG. 5 is a sample flowchart of an exemplary method 500 of controlling the image 110 displayed according to an alternate embodiment.
  • the at least one sensor 102 and the processor 108 are functional components of an eye tracking system 200.
  • the eye tracking system 200 is configured to track the individual's eye movements (step 502).
  • the eye tracking system 100 detects the individual's gaze point (step 504) and is able to determine if an individual's gaze point is focused on a list segment 126 or an image 110; a sketch of this decision follows.
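A sketch of how step 504's list-versus-image decision might be realized as a point-in-rectangle test. The screen layout, a list pane beside a map pane, is an assumption made for illustration only.

```python
# Sketch of classifying a gaze point as falling on the list segment or
# on the image. The pane geometry is invented, not from the patent.
LIST_PANE = (0, 0, 320, 720)       # (left, top, right, bottom)
IMAGE_PANE = (320, 0, 1280, 720)

def classify_gaze(x, y):
    """Return 'list', 'image', or None for an off-screen gaze point."""
    l, t, r, b = LIST_PANE
    if l <= x < r and t <= y < b:
        return "list"
    l, t, r, b = IMAGE_PANE
    if l <= x < r and t <= y < b:
        return "image"
    return None
```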
  • the at least one processor 108 of the eye tracking system 100 is configured for controlling the image 110 displayed (step 506), based on a location of the gaze point.
  • the display device 106 is electronically connected to the at least one processor 108 via at least one means 114.
  • processor 108 controls the image 110 displayed thereon based on the gaze point.
  • Controlling the image 110 displayed on the display device 106 includes but is not limited to: exploding (e.g. enlarging) the area of concentrated focus 104 displayed on the display device 106; highlighting the location of the gaze point; highlighting an area on a map 122 that corresponds to the gaze point on a list segment 126; and highlighting a list segment 126 that corresponds to the gaze point on a map 122 as described above.
  • FIG. 6 is a sample flowchart of an exemplary method 600 of transmitting generated content 118 for the area of concentrated focus 104 according to one embodiment.
  • Method 600 comprises using at least one sensor 102 configured to detect an area of concentrated focus 104 (step 602).
  • the at least one sensor 102 comprises any kind of sensor as described above, as well as any other sensors that are well known and used in the arts.
  • the step of detecting an area of concentrated focus may include detecting for example a fingerprint, a thumbprint, body heat and the like, from the display device 106.
  • the at least one sensor 102 senses the area of concentrated focus 104 and the processor controls the image 110 displayed (step 604) on the display device 106 in much the same manner as discussed in conjunction with step 404, 506 of FIGs. 4 & 5, respectively.
  • processor 108 may be programmed, using the computer executable instructions 116, to initiate an inquiry to solicit a response (step 606) from the individual, via a user interface, as to whether the individual wants the displayed image 110 to be transmitted to the individual's multimedia device 112. As such, processor 108 prompts the individual as to whether or not a copy of the image 110 is being requested (step 608).
  • If so, processor 108 generates content 118 (step 610) for the area of concentrated focus 104 and transmits the generated content 118 (step 612), based on the area of concentrated focus 104, to the individual's at least one multimedia device 112 via its at least one communications means 130 by communicating wirelessly; a sketch of this cycle follows.
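Steps 602 through 612 can be summarized as a short control flow. In the sketch below, the sensor, display, transmitter, and prompt objects are illustrative stand-ins for whatever hardware and user interface the kiosk 128 actually provides.

```python
# Sketch of the method 600 flow: detect focus, control the display,
# prompt the individual, then generate and transmit content. All
# collaborating objects are hypothetical stand-ins.
def run_kiosk_cycle(sensor, display, transmitter, prompt):
    focus = sensor.detect_focus()                      # step 602
    display.control_image(focus)                       # step 604
    wants_copy = prompt("Send this to your device?")   # steps 606-608
    if wants_copy:
        content = generate_content(focus)              # step 610
        transmitter.send(content)                      # step 612

def generate_content(focus):
    # Placeholder: real content could be a map, directions, charts, etc.
    return {"focus": focus, "type": "navigational_directions"}
```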
  • content 118 includes for example a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisement, merchandise, and navigational directions and the like.
  • Communication means 130 may be a wireless communication means 130', which employs a short range wireless protocol, such as, but not limited to, a radio frequency transceiver, and the like, as described above in accordance with the systems 100, 200 of the invention.
  • the at least one communication means 130 is configured to communicate wirelessly with the multimedia device 112 over a wireless area network.
  • FIG. 7 is a sample flowchart of an exemplary method 700 of determining a location of a multimedia device 112 based on the received generated content 118 according to one embodiment.
  • Multimedia device 112 is equipped with at least one communications means 130 which may be a wireless communication means 130', employing short range wireless protocol to transmit and receive communications from a global positioning system receiver and/or a local positioning system receiver, such as a Wi-Fi positioning system receiver.
  • the communication means 130' may employ wireless protocols such as Bluetooth, ZigBee, the 802.11 series, or a wireless modem, such as, but not limited to, a global system for mobile communications (GSM) modem, or any other short range wireless protocol that is well known and used in the arts.
  • Method 700 includes the multimedia device 112 receiving generated content 118 (step 702) from the display device 106 from which directions may be obtained for navigation in outdoor and indoor spaces.
  • Multimedia device 112 also includes location determining means 202, which may comprise the communications means 130' as described above.
  • Communications means 130' are either electronically or mechanically connected to processor 108' and are configured to determine a location of the multimedia device 112 (step 704) based on the positioning system signal.
  • the at least one communications means 130' may include a radio frequency transceiver configured to communicate wirelessly that is configured to receive the at least one signal from a positioning system over a wireless area network.
  • the multimedia device 112 comprises at least one positioning system receiver.
  • the multimedia device's at least one processor 108' is configured for determining the proximity of the multimedia device 112 to the subject of the generated content 118, obtained by providing at least one sensor 102 configured for detecting an area of concentrated focus 104 on a display device 106, and providing at least one processor 108 electronically connected to the at least one sensor 102 as functional components of a tracking system 100.
  • the at least one processor 108 is configured for generating content 118 for the area of concentrated focus 104, and includes computer executable instructions 116 configured for performing any one or more of the following: controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106; controlling the image 110 displayed based on the area of concentrated focus 104; generating content 118 for the area of concentrated focus 104; and determining the location of the multimedia device 112 by triangulating the location of the multimedia device 112 based on at least one positioning system signal.
  • Determining the location may include the location determining means 202 receiving at least one signal 204 from a positioning system 206 to determine the geographical position of the multimedia device 112. Determining the geographical position of the multimedia device 112 may include determining the latitude and longitude coordinates of the geographical position of the multimedia device 112. Such determinations may include triangulating the location of the multimedia device 112 based on at least one positioning system signal 204 received by the location determining means 202 as described above.
  • determining the geographical position of the multimedia device 112 includes the device's location determining means 202 receiving a signal 204, such as a location determination signal, from a positioning system 206, such as a global positioning system (GPS) 208, or a local positioning system, such as a Wi-Fi positioning system, which may originate from a satellite or a ground based antenna.
  • the location determining means 202 dynamically receives the latitude and longitude coordinates of the geographical position of the subject of the generated content 118, where the coordinates are stored on the storage means 120' and thereafter accessed by the processor 108'.
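Given the device's coordinates and the dynamically received coordinates of the content's subject, proximity reduces to a great-circle distance. The haversine sketch below is a standard formula included for illustration; the patent does not prescribe it.

```python
# Sketch of computing the multimedia device's proximity to the subject
# of the generated content from two latitude/longitude pairs, using the
# haversine great-circle distance. Purely illustrative.
from math import radians, sin, cos, asin, sqrt

def proximity_m(device_latlon, subject_latlon):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    lat1, lon1 = map(radians, device_latlon)
    lat2, lon2 = map(radians, subject_latlon)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius in meters
```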
  • a software program may be launched from a computer readable medium in a computer- based system 100 to execute the functions defined in the software program.
  • Various programming languages may be employed to create software programs designed to implement and perform the methods disclosed herein.
  • the programs may be structured in an object-oriented format using an object-oriented language such as Java or C++.
  • the programs may be structured in a procedure-oriented format using a procedural language, such as assembly or C.
  • the software components may communicate using a number of mechanisms, such as application program interfaces, or inter-process communication techniques, including remote procedure calls.
  • the teachings of various embodiments are not limited to any particular programming language or environment. Thus, other embodiments may be realized, as discussed regarding Fig. 8 below.
  • FIG. 8 is a block diagram representing an article 800 according to various embodiments.
  • Such embodiments may comprise a computer, a memory system, a magnetic or optical disk, some other storage device, or any type of electronic device or system 100.
  • the article 800 may include one or more processor(s) 108 coupled to a machine-accessible medium such as a storage means 120 used for storing data in memory (e.g., a memory including electrical, optical, or electromagnetic elements).
  • the medium may contain associated information 804 (e.g., computer program instructions, data, or both) which, when accessed, results in a machine (e.g., the processor(s) 108) performing the activities previously described herein.

Abstract

The present disclosure relates generally to electronic systems, and more particularly, to systems, methods, and various other disclosures related to determining an area of concentrated focus and controlling an image display in response.

Description

TITLE
SYSTEM AND METHOD OF DETERMINING AN AREA OF CONCENTRATED FOCUS AND CONTROLLING AN IMAGE DISPLAYED IN RESPONSE
PRIORITY CLAIM
This patent application is a continuation in part of, and claims priority to: United States
Non-Provisional Patent Application Serial Number 12/703,376 titled: System and Method of Determining an Area of Concentrated Focus and Controlling an Image Displayed in Response filed February 10, 2010 and United States Non-Provisional Patent Application Serial Number 12/716,328 titled: System and Method of Determining an Object Of Concentrated Focus For an Advertising Display filed March 3, 2010. The entire disclosures of the aforementioned patent applications are incorporated by reference as if fully stated herein.
FIELD OF THE INVENTION
The present disclosure relates generally to electronic systems, and more particularly, to systems, methods, and various other disclosures related to determining an area of concentrated focus and controlling an image displayed in response.
BACKGROUND OF THE INVENTION
Shopping malls are ideally designed with anchor stores positioned at strategic locations to direct shoppers to visit an area of the mall that they may not otherwise visit, but for the presence of the anchor store. The expectation is that shoppers will make additional purchases in stores located en route to and from the anchor store. In this business model, a non-anchor store is therefore heavily dependent on the sales traffic to the proximal anchor store. However, in large malls this business model fails, as too often the non-anchor stores cannot be readily located, and tired, frustrated shoppers choose to forego searching for a particular store or item.
Shopping malls attempt to mitigate the issue by placing location maps at various locations throughout the malls. However, many shoppers are directionally challenged and/or have difficulty transposing the visual location information from the location maps to their immediate surroundings. Others are able to successfully locate an intended destination on a display listing but cannot locate the corresponding area on the map, or vice versa. As such, many shoppers remain lost and/or unable to locate their intended destinations. Thus, there is a need for a simple, easy-to-use system and method of locating indoor or outdoor structures whose location information is readily understood by all.
The prior art teaches a navigational system for navigating in open spaces, where a user's location is determined based on a location determining device in direct contact with a satellite network system. However, once the location determining device loses its sky view, e.g. on entering a tunnel or an enclosed structure, the signal is compromised and the navigational system becomes non-functional. Thus, there is a need for a system and method that remains operable in enclosed spaces, e.g. malls, or other areas that lack a sky view, and that does not depend on a satellite signal for determining location.
Accordingly, the various embodiments and disclosures described herein satisfy these long-felt needs and solve the limitations of the prior art in a new and novel manner.
SUMMARY
An objective of the invention is to provide a system and method of determining an area of concentrated focus and controlling an image display in response.
Another objective of the invention is to provide a system and method of tracking an area of concentrated focus and highlighting an area on a map or a list segment. Another objective of the invention is to provide a system and method of tracking an area of concentrated focus and highlighting an area on a map that corresponds to the area of concentrated focus on a list segment.
Another objective of the invention is to provide a system and method of tracking an area of concentrated focus and highlighting a list segment that corresponds to the area of concentrated focus on a map.
Another objective of the invention is to provide a system and method of tracking an area of concentrated focus on advertisement(s) for gathering statistical information regarding the areas of fixation based on the area of concentrated focus.
The systems and methods described herein disclose, in particular, a system and method comprising: at least one sensor configured for detecting an area of concentrated focus on a display device; and at least one processor electronically connected to the at least one sensor as functional components of a tracking system, wherein the at least one processor is configured for controlling the image displayed on the display device electronically connected to the at least one processor, based on the area of concentrated focus. The image displayed may include but is not limited to any one or more of the following: a map, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions, and the like.
The system and method further comprise: at least one means for electronically connecting a display device to the at least one processor; and computer executable instructions, executable by the at least one processor, configured for performing any one or more of the following: controlling the at least one sensor to detect the area of concentrated focus on the display device; controlling the image displayed on the display device, electronically connected to the at least one processor, based on the area of concentrated focus; and generating content for the area of concentrated focus.
The at least one sensor includes but is not limited to any one or more of the following: an eye scanner, an iris scanner, a face scanner, a visual sensor, an audio sensor, a tactile sensor, a thermal sensor, a chemical sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a camera, a thermal imaging camera, a thumbprint scanner, fingerprint scanner and a microphone. The area of concentrated focus includes any one or more of the following locations: an individual's gaze point, an individual's voice, individual's touch, individual's body heat, an individual's scent, an individual's thumbprint, an individual's fingerprint, eye movements and the like. Content as used herein may include but is not limited to any one or more of the following: a map, a graphical display, text, audio, charts, photographs, pictures, advertisement, merchandise, navigational directions, and the like.
In some embodiments, the at least one sensor and the processor are functional components of an eye tracking system, wherein the eye tracking system is configured to detect an individual's gaze point. In that embodiment, the at least one sensor is configured to determine if an individual's gaze point is focused on a list or an image, e.g. a map. Controlling the image displayed includes but is not limited to: exploding the image of the area of concentrated focus displayed on the display device; highlighting the area of concentrated focus; highlighting an area on a map that corresponds to the area of concentrated focus on a list segment; and highlighting a list segment that corresponds to the area of concentrated focus on a map.
In some embodiments, the system and method also include providing a kiosk containing the at least one processor, wherein the kiosk contains the at least one sensor and at least one means for connecting the display device with the at least one sensor. The kiosk is adapted to include a display device.
In some embodiments, the system and method comprise at least one communications means in electrical communication with the display device and configured for transmitting the generated content for the area of concentrated focus to at least one multimedia device based on the area of concentrated focus. The at least one communications means is configured to communicate wirelessly. Multimedia device as described herein includes but is not limited to portable computers, laptop computers, cellular phones, smart phones, personal digital assistants ("PDAs"), tablet personal computers, notebooks, iPads, portable screens, portable processing devices, and other like portable wireless communication devices used in the arts. Positioned within the multimedia device is at least one communications means configured to receive generated content for an area of concentrated focus on a display device.
In some embodiments, the system and method comprise: at least one communications means positioned within a multimedia device configured to receive generated content for an area of concentrated focus on a display device; and at least one processor positioned within the multimedia device in electrical communication with the multimedia device's at least one communications means, wherein the at least one processor is configured to determine a location of the multimedia device based on a positioning system signal. The at least one communications means is configured to receive at least one signal from a positioning system.
The system and method may further comprise: at least one sensor configured for detecting an area of concentrated focus on a display device; at least one processor electronically connected to the at least one sensor as functional components of a tracking system, wherein the at least one processor is configured for generating content for the area of concentrated focus; at least one means for electronically connecting a display device to the at least one processor; and computer executable instructions executable by the at least one processor configured for performing any one or more of the following: (i) controlling the at least one sensor to detect the area of concentrated focus on the display device; (ii) controlling the image displayed based on the area of concentrated focus; (iii) generating content for the area of concentrated focus; and (iv) determining the location of the multimedia device by triangulating the location of the multimedia device based on at least one positioning system signal.
The multimedia device's at least one processor is configured for determining the location of the multimedia device by triangulating the location of the multimedia device based on at least one positioning system signal. The at least one communications means may comprise a radio frequency transceiver, and is configured to communicate wirelessly. The multimedia device's at least one communications means is configured to receive the at least one signal from a positioning system over a wireless area network. The multimedia device comprises at least one positioning system receiver, wherein the positioning system receiver includes, but is not limited to, a global positioning system receiver and a local positioning system receiver, such as a Wi-Fi positioning system receiver.
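Read as software, the summarized system is essentially a sensor-to-display control loop with an optional transmit step. The skeleton below is a structural sketch only; every class and method name is invented for illustration, as the patent claims functions rather than code.

```python
# Minimal structural sketch of the summarized system: sensor, display,
# and communications means wired together by a processor loop. All
# names are hypothetical stand-ins for the claimed components.
class TrackingSystem:
    def __init__(self, sensor, display, comms):
        self.sensor, self.display, self.comms = sensor, display, comms

    def step(self):
        focus = self.sensor.detect_focus()      # detect area of concentrated focus
        self.display.control_image(focus)       # explode / highlight in response
        content = self.generate_content(focus)  # content for the focused area
        self.comms.transmit(content)            # send to a multimedia device

    def generate_content(self, focus):
        # Placeholder for a map, navigational directions, charts, etc.
        return {"focus": focus, "content": "map or directions"}
```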
Accordingly, the various embodiments and disclosures described herein solve the limitations of the prior art in a new and novel manner.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is an illustrative diagram of an exemplary embodiment of the system.
FIG. 1B is an illustrative diagram of an exemplary embodiment of the system.
FIG. 1C is an illustrative diagram of the system according to one embodiment.
FIG. 1D is an illustrative diagram of the system according to one embodiment.
FIG. 1E is an illustrative diagram of the system according to one embodiment.
FIG. 1F is an illustrative diagram of the system according to one embodiment.
FIG. 2A is an illustrative diagram of the system according to another embodiment.
FIG. 2B is an illustrative diagram of the system according to another embodiment.
FIG. 3 is a sample flowchart of an exemplary method of controlling an image responsive to an area of concentrated focus.
FIG. 4 is a sample flowchart of an exemplary method of controlling the image displayed on the display device according to one embodiment.
FIG. 5 is a sample flowchart of an exemplary method of controlling the image displayed according to an alternate embodiment.
FIG. 6 is a sample flowchart of an exemplary method of transmitting generated content for the area of concentrated focus according to one embodiment.
FIG. 7 is a sample flowchart of an exemplary method of determining a location of a multimedia device based on the received generated content according to one embodiment.
FIG. 8 is a block diagram representing an article according to various embodiments.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
System Level Overview
The following discussion describes in detail an embodiment of the system and methods for determining an area of concentrated focus and controlling an image displayed on a display device in response to the concentrated focus. However, this discussion should not be construed as limiting the invention to those particular embodiments, as practitioners skilled in the art will appreciate that a system may vary as to configuration and as to details of the parts, and that a method may vary as to the specific steps and sequence, without departing from the basic concepts as disclosed herein. Similarly, the elements described herein may be implemented separately, or in various combinations, without departing from the teachings of the present invention. Turning now descriptively to the drawings, similar reference characters denote similar elements throughout the several views.
FIGS. 1A and 1B are illustrative diagrams of exemplary embodiments of the system 100. System 100 comprises at least one sensor 102 configured for detecting an area of concentrated focus 104 on a display device 106; and at least one computer processor 108 electronically connected to the at least one sensor 102 as functional components of a tracking system 100, wherein the at least one computer processor 108 is configured for controlling an image 110 displayed on the display device 106, which is electronically connected to the at least one processor 108 or a multimedia device 112, based on the area of concentrated focus 104. Controlling the image 110 displayed includes, but is not limited to, any one or more of the following: exploding the image of the area of concentrated focus 104 displayed on the display device 106; highlighting the area of concentrated focus 104; highlighting an area on a map that corresponds to the area of concentrated focus 104 on a list segment; and highlighting a list segment that corresponds to the area of concentrated focus 104 on a map.
The at least one sensor 102 may comprise any kind of sensor, including but not limited to any one or more of the following: an eye scanner, an iris scanner, a face scanner, a visual sensor, e.g. a camera, an audio sensor, e.g. a microphone, a tactile sensor, e.g. a vibration sensor, a thermal sensor, e.g. a heat sensor and/or infrared camera, a chemical sensor, e.g. an odor sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a thermal imaging camera, a thumbprint scanner, a fingerprint scanner, and any other sensors that are known and used in the arts. The at least one sensor 102 may be positioned in the same enclosure as the at least one processor 108. An area of concentrated focus 104 may include, but is not limited to, any one or more of the following: an individual's gaze point, the direction of an individual's voice, an individual's touch, an individual's body heat, an individual's scent, an individual's thumbprint, an individual's fingerprint, eye movements, and the like. In this manner, the at least one sensor 102 is able to detect precisely the object of the concentrated focus 104, e.g. an image 110 or a list.
In some embodiments, the at least one means 114 (not shown) for electronically connecting a display device 106 to the at least one processor 108 of the system 100 may be any kind of means, such as a video connector, a coaxial cable, an HDMI cable, an s-video component connector, a WiFi video transceiver, a Bluetooth video transceiver, an internal video cable socket, a DVI connector, and the like. Means 114 for electronically connecting a display device 106 to the at least one processor 108 of the system 100 may include a cable, but it should be noted that such means 114 need not include a cable, e.g. where the connection is wireless. The display device 106 may be any kind of display device 106, such as, but not limited to, a television, a computer monitor, a projector, or any other kind of screen and/or display device 106.
Processor 108 may be any type of processor, such as, but not limited to, a central processing unit (CPU), a microprocessor, a video processor, a front end processor, a coprocessor, a single-core processor, a multi-core processor, and the like. Processor 108 and the at least one sensor 102 are electronically connected to each other, forming functional components of a tracking system 100, wherein the tracking system 100 is configured for detecting an area of concentrated focus 104, i.e. the object of fixation on a display device 106. In some embodiments, the at least one sensor 102 and the processor 108 are functional components of an eye tracking system 100, wherein the eye tracking system 100 is configured to detect an individual's gaze point. For example, using an eye or iris scanner, the at least one sensor 102 directs a beam of infrared light at an individual's eye, and the eye movements, gaze point(s), and reflection patterns are tracked and recorded. Processor 108 uses the exact gaze point from the recorded reflection patterns to calculate the area of concentrated focus 104. Accordingly, the at least one sensor 102 may be configured to determine whether an individual's gaze point is focused on a list or an image 110.
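By way of illustration only, the following sketch shows one way the gaze-to-focus computation described above might be implemented in Python. The GazeSample and Region structures, the half-second dwell threshold, and the region names are assumptions introduced for the example and are not part of the disclosure.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class GazeSample:
        x: float          # gaze point in display pixels
        y: float
        timestamp: float  # seconds

    @dataclass
    class Region:
        name: str                           # e.g. "map" or "list_segment"
        bounds: Tuple[int, int, int, int]   # left, top, right, bottom

        def contains(self, x: float, y: float) -> bool:
            left, top, right, bottom = self.bounds
            return left <= x <= right and top <= y <= bottom

    def area_of_concentrated_focus(samples: List[GazeSample],
                                   regions: List[Region],
                                   dwell_threshold: float = 0.5) -> Optional[Region]:
        """Return the region on which the gaze dwelled longest, provided the
        accumulated dwell time exceeds the threshold; otherwise None."""
        if not regions:
            return None
        dwell = {region.name: 0.0 for region in regions}
        # Accumulate the time between consecutive samples into whichever
        # region contains the current gaze point.
        for prev, cur in zip(samples, samples[1:]):
            for region in regions:
                if region.contains(cur.x, cur.y):
                    dwell[region.name] += cur.timestamp - prev.timestamp
        best = max(regions, key=lambda r: dwell[r.name])
        return best if dwell[best.name] >= dwell_threshold else None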
In some embodiments, system 100 may further comprise computer executable instructions 116 executable by the at least one processor 108 and configured for performing any one or more of the various functions of the system 100 and methods disclosed herein. The computer executable instructions 116 executable by the at least one processor 108 are configured for controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106; controlling the image 110 displayed based on the area of concentrated focus 104; and generating content 118 for the area of concentrated focus 104. The computer executable instructions 116 may be loaded directly on the processor 108, or may be stored on storage means 120, such as, but not limited to, computer readable media, such as, but not limited to, a hard drive, a solid state drive, a flash memory, random access memory, CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-R, DVD-RW, and the like. Computer executable instructions 116 may be any type of computer executable instructions 116, and may be in the form of a computer program composed in any suitable programming language or source code, such as C++, C, JAVA, JavaScript, HTML, XML, and other programming languages. Controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106 may include using, for example, a fingerprint scanner to detect the location of the individual's fingerprint on the display device 106. In other embodiments, the computer executable instructions 116 may be programmed to control, for example, an eye tracker, an eye scanner, an iris scanner, a face scanner, a visual sensor, an audio sensor, a tactile sensor, a thermal sensor, a chemical sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a camera, a thermal imaging camera, a thumbprint scanner, a fingerprint scanner, a microphone, and the like, to detect, for example, an individual's eye movements, gaze point, body heat, scent, thumbprint, fingerprint, the direction of the individual's voice, and the like.
FIG. 1C is an illustrative diagram of the system 100 according to one embodiment. The area of concentrated focus 104 on the display device 106 may be a particular location on a map 122. Once the area of concentrated focus 104 is detected, processor 108 is configured to control the image 110 displayed on the display device 106 based on the detected area of concentrated focus 104. Image 110 as used herein may comprise a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions, and the like. Illustratively, as shown in FIG. 1C, controlling the image 110 displayed on the display device 106 may include, but is not limited to, exploding the image 110 of the area of concentrated focus 104 displayed on the display device 106 by enlarging that portion of the image 110, i.e. the map 122, as shown.
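As a non-limiting illustration of the "exploding" operation, the sketch below crops and enlarges the portion of an image around a detected focus point using the Pillow library; the window size and zoom factor are arbitrary assumptions for the example.

    from PIL import Image

    def explode_region(image: Image.Image, focus_x: int, focus_y: int,
                       window: int = 200, zoom: float = 2.0) -> Image.Image:
        """Crop a window centred on the focus point and scale it up."""
        half = window // 2
        left = max(focus_x - half, 0)
        top = max(focus_y - half, 0)
        right = min(focus_x + half, image.width)
        bottom = min(focus_y + half, image.height)
        region = image.crop((left, top, right, bottom))
        new_size = (int(region.width * zoom), int(region.height * zoom))
        return region.resize(new_size, Image.LANCZOS)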
FIG. 1D is an illustrative diagram of the system 100 according to one embodiment. In this embodiment, once the area of concentrated focus 104 is detected, e.g. the gaze point on the display device 106, processor 108 is configured to control the image 110 displayed on the display device 106 based on the detected area of concentrated focus 104. Illustratively, controlling the image 110 displayed includes highlighting the area of concentrated focus 104. Such highlighting may require processor 108 to activate display means 124 (not shown) in electrical communication with the processor 108, where display means 124 may include a liquid crystal display ("LCD") screen, a light emitting diode ("LED") screen, a monitor, and the like. In some embodiments, display means 124 is electronically connected to processor 108. In other embodiments, display means 124 is wirelessly connected to processor 108. In yet further embodiments, display means 124 may include a control means, such as, but not limited to, a keyboard, a mouse, a touch screen, a stylus, and the like. In either event, processor 108 may activate display means 124 such that highlighting the area of concentrated focus 104 will display the image 110 as a projected hologram or a lighted display.
FIG. 1E is an illustrative diagram of the system 100 according to one embodiment. In this embodiment, once the area of concentrated focus 104 is detected, processor 108 is configured to highlight an area on a map 122 on the display device 106 that corresponds to the area of concentrated focus 104 on a list segment 126. For example, an individual may be focused on a list segment 126, e.g. a listing of stores, and in particular fixated on the listing for "ABC Store." Accordingly, once the area of concentrated focus 104 is detected, processor 108 highlights the area on the map 122 that corresponds to the area of concentrated focus 104 on the list segment 126, i.e. that portion on the map 122 corresponding to the location of ABC Store.
In some embodiments, processor 108 is configured to accomplish the reverse: once the area of concentrated focus 104 is detected, processor 108 highlights a list segment 126 on the display device 106 that corresponds to the area of concentrated focus 104 on a map 122. Using the prior example, an individual's area of concentrated focus 104 may be fixated on a particular location on a map 122, i.e. the location of "ABC Store." Processor 108 then detects the area of concentrated focus 104, i.e. that portion of the map 122, and highlights the list segment 126 corresponding to the area of concentrated focus 104 on the map 122, as sketched below.
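The two highlighting directions just described amount to one bidirectional lookup between list entries and map areas. The following sketch illustrates this under the assumption of a simple dictionary of map bounds; the store names and coordinates are hypothetical.

    # Hypothetical association between list segment entries and map areas.
    MAP_AREAS = {
        "ABC Store": (120, 80, 180, 140),   # left, top, right, bottom on the map
        "XYZ Cafe": (220, 40, 280, 100),
    }

    def highlight_map_for_list_focus(entry):
        """Focus detected on a list entry -> highlight the matching map area."""
        bounds = MAP_AREAS.get(entry)
        if bounds is not None:
            print(f"highlight map area {bounds} for '{entry}'")
        return bounds

    def highlight_list_for_map_focus(x, y):
        """Focus detected on the map -> highlight the matching list entry."""
        for entry, (left, top, right, bottom) in MAP_AREAS.items():
            if left <= x <= right and top <= y <= bottom:
                print(f"highlight list entry '{entry}'")
                return entry
        return None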
Kiosk 128, as shown in FIGS. 1A-1F, as used herein describes an open, electronic, computerized booth within which the at least one processor 108 is positioned, and wherein the kiosk 128 contains the at least one sensor 102 and at least one means 114 (not shown) for connecting the display device 106 to the at least one sensor 102. Kiosk 128 is adapted to include the display device 106, as it may house a computer terminal with computer software and hardware to perform the varied functions of the systems 100 and methods disclosed herein, e.g. displaying images 110 thereon. In some embodiments, the electronic kiosk 128 may be interactive without allowing the individual to access the system 100 functions, while in other embodiments, the kiosk 128 is not interactive. Kiosk 128 as used herein may include touch screens, trackballs, computer keyboards, pushbuttons, and the like, and is typically used as an information booth, often located at malls and other large indoor or outdoor structures.
In some embodiments, the area of concentrated focus 104 on the display device 106 may include one or more images 110, 110' of advertisements as displayed on the display device 106. Depending on the size of the kiosk 128, one, two, or a plurality of images 110, 110' of advertisements may be displayed on the display device 106, for which statistical information regarding the area(s) of concentrated focus 104 for the images 110, 110' of advertisements may be gathered and analyzed. The display device 106 may be configured to display the advertising images 110, 110' singly, in plurality, or in combination thereof, e.g. display device 106 may be configured to electronically rotate consecutive images 110 of advertisements such that a plurality of advertisements may be displayed within a predetermined period. In either event, the at least one sensor 102 detects the area of concentrated focus 104 for the images 110, 110' of advertisements displayed, while the processor 108 collects statistical data regarding the areas of concentrated focus 104, 104'. Accordingly, the areas of concentrated focus 104 for at least one image 110 of advertisement, e.g. specific merchandise, or the advertising image 110 that received the most areas of concentrated focus 104, 104', e.g. gaze points, are tracked and recorded.
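The statistical gathering described above reduces to simple per-advertisement bookkeeping. A minimal sketch follows; the class and field names are illustrative assumptions only.

    from collections import defaultdict

    class AdFocusStats:
        """Count fixations and accumulate dwell time per advertisement image."""

        def __init__(self):
            self.fixation_count = defaultdict(int)
            self.total_dwell = defaultdict(float)   # seconds

        def record(self, ad_id, dwell_seconds):
            self.fixation_count[ad_id] += 1
            self.total_dwell[ad_id] += dwell_seconds

        def most_viewed(self):
            """Advertisement that attracted the most areas of concentrated focus."""
            return max(self.fixation_count, key=self.fixation_count.get, default=None)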
FIG. 1F is an illustrative diagram of the system 100 according to one embodiment. System 100 may also include at least one communication means 130 configured for transmitting generated content 118 to at least one multimedia device 112 based on the area of concentrated focus 104. Content 118 as used herein may comprise a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions, and the like. Communication means 130 may be a wireless communication means 130', which employs a short-range wireless protocol, such as, but not limited to, a radio frequency transceiver, a radio frequency receiver, and/or a radio frequency transmitter. In embodiments where the wireless communication means 130 is a radio frequency receiver, the radio frequency receiver may be any type of radio frequency receiver, including, but not limited to, a positioning system receiver, such as a global positioning system receiver and a local positioning system receiver, such as a Wi-Fi positioning system receiver. In other embodiments, communication means 130 may employ wireless protocols like Bluetooth, ZigBee, the 802.11 series, or a wireless modem, such as, but not limited to, a global system for mobile communications (GSM) modem, or any other short-range wireless protocol that is well known and used in the arts. Multimedia device 112 may be any type of device configured with means for communicating wirelessly and/or over a wire, such as, but not limited to, portable computers, laptop computers, cellular phones, smart phones, PDAs, tablet personal computers, notebooks, iPads, portable screens, portable processing devices, and/or any other WLAN communication devices that are readily used in the arts to transmit and/or receive wireless communications. Multimedia device 112 is also equipped with at least one communications means 130' positioned within the multimedia device 112 and configured to receive generated content 118 for the area of concentrated focus 104.
In this embodiment, processor 108 includes computer executable instructions 116 executable by the at least one processor 108 and configured for generating content 118 for the area of concentrated focus 104, e.g. a map, navigational directions, charts, and the like. Once the individual's area of concentrated focus, e.g. gaze point, is detected, processor 108 generates content 118, e.g. a graphical map 122, which may optionally be transmitted to the multimedia device 112 for the individual's use.
FIGS. 2A and 2B are illustrative diagrams of the system 200 according to another embodiment. System 200 comprises: at least one communications means 130' positioned within the multimedia device 112 and configured to receive generated content 118 from the display device 106; and at least one processor 108' positioned within the multimedia device 112 and in electrical communication with the multimedia device's at least one communications means 130', wherein the multimedia device's at least one processor 108' is configured to determine a location of the multimedia device 112 based on a positioning system signal.
Multimedia device 112 is equipped with location determining means 202 either electronically or mechanically connected to its processor 108'. In the case of electronic connections, the electronic connections may be wired and/or wireless. Location determining means 202 may comprise communications means 130', where the communications means 130' may be a wireless communications means 130' employing a short-range wireless protocol as described above. Such short-range wireless protocol may include, but is not limited to, a radio frequency transceiver, a radio frequency receiver, and/or a radio frequency transmitter. In embodiments where the wireless communication means 130' is a radio frequency receiver, the radio frequency receiver may be any type of radio frequency receiver, including, but not limited to, at least one positioning system receiver, such as a global positioning system receiver and a local positioning system receiver, such as a Wi-Fi positioning system receiver. In other embodiments, communication means 130' may employ wireless protocols like Bluetooth, ZigBee, the 802.11 series, or a wireless modem, such as, but not limited to, a global system for mobile communications (GSM) modem, or any other short-range wireless protocol that is well known and used in the arts. The multimedia device's at least one communication means 130' is configured to communicate wirelessly and is configured to receive at least one signal from a positioning system 206 over a wireless area network.
Determining the geographical location of the multimedia device 112 may include triangulating the location of the multimedia device 112 based on at least one positioning system signal 204, e.g. from a local positioning system, received by the multimedia device's location determining means 202. In another embodiment, processor 108 may be wirelessly connected to location determining means 202 so that the location determination may be performed remotely. In some embodiments, determining the geographical position of the multimedia device 112 includes determining the latitude and longitude coordinates of the current geographical position of the multimedia device 112, such as by the device's location determining means 202 receiving a signal 204, such as a location determination signal, from a positioning system 206, such as a global positioning system (GPS) 208, or a local positioning system, such as a Wi-Fi positioning system, which may originate from a satellite or a ground-based antenna.
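For illustration, the sketch below shows a simplified two-dimensional trilateration of the kind the location determining means 202 might perform from three signal anchors (e.g. Wi-Fi access points) with known coordinates and estimated ranges. Production GPS solvers work in three dimensions with receiver clock-bias terms, so this is an assumption-laden simplification, not the disclosed method itself.

    def trilaterate(anchors, distances):
        """anchors: [(x1, y1), (x2, y2), (x3, y3)]; distances: [d1, d2, d3].
        Returns the (x, y) position consistent with the three range circles."""
        (x1, y1), (x2, y2), (x3, y3) = anchors
        d1, d2, d3 = distances
        # Subtracting circle 2's equation from circle 1's (and circle 3's
        # from circle 2's) cancels the quadratic terms, leaving two linear
        # equations a*x + b*y = c and d*x + e*y = f.
        a = 2 * (x2 - x1); b = 2 * (y2 - y1)
        c = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
        d = 2 * (x3 - x2); e = 2 * (y3 - y2)
        f = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
        det = a * e - b * d
        if det == 0:
            raise ValueError("anchors are collinear; position is ambiguous")
        return (c * e - b * f) / det, (a * f - c * d) / det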
System 200 may also comprise at least one sensor 102 configured for detecting an area of concentrated focus 104 on a display device 106; and at least one processor 108 electronically connected to the at least one sensor 102 as functional components of a tracking system 200, wherein the at least one processor 108 is configured for generating content 118 for the area of concentrated focus 104.
In some embodiments, system 200 includes at least one means 114 for electronically connecting the display device 106 to the at least one processor 108; and computer executable instructions 116 executable by the at least one processor 108 and configured for performing any one or more of the following: controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106; controlling the image 110 displayed based on the area of concentrated focus 104; generating content 118 for the area of concentrated focus 104; and determining the location of the multimedia device 112 by triangulating the location of the multimedia device 112 based on at least one positioning system signal.
As such, once an individual receives the generated content 118, the generated content 118 may be conveniently stored on the multimedia device's storage means 120' for personal use. For example, an individual may receive generated content 118 comprising navigational directions, which may be published as text, audio, or a graphical display and used to locate, e.g., merchandise or an object for purchase. The received generated content 118 may be stored in the multimedia device's storage means 120'. In some embodiments of the systems 100, 200 disclosed herein, both the display device 106 and the multimedia device 112 may include at least one storage means 120, 120' either electronically or mechanically connected to their respective processors 108, 108'. In the case of electronic connections, the electronic connections may be wired and/or wireless connections. Storage means 120, 120' may comprise a storage device and may include memory, such as, but not limited to, read-only memory, such as CD-ROMs, DVDs, floppy disks, and the like; read and write memory, such as a hard drive, floppy disc, CD-RW, DVD-RW, solid state memory, such as solid state hard drives, flash memory, and the like; and random access memory. Storage means 120, 120' may be used to store the generated content 118, e.g. maps 122, 122', navigational directions, and the like. The information may be retrieved from the storage means 120, 120' using their respective processors 108, 108'.
Methods
FIG. 3 is a sample flowchart of an exemplary method 300 of controlling an image 110 responsive to an area of concentrated focus 104. Method 300 comprises using at least one sensor 102 (step 302) configured for detecting an area of concentrated focus 104 on a display device 106 that may be housed in a kiosk 128. Kiosk 128 may contain at least one means for connecting the display device 106 to the at least one sensor 102, as well as house the at least one processor 108 that is electronically connected to the at least one sensor 102. Accordingly, kiosk 128 is adapted to include a display device 106.
The at least one sensor 102 may comprise any kind of sensor used in the arts and may include any one or more of the following: an eye scanner, an iris scanner, a face scanner, a visual sensor, an audio sensor, a tactile sensor, a thermal sensor, a chemical sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a camera, a thermal imaging camera, a thumbprint scanner, a fingerprint scanner, a microphone, and the like. The area of concentrated focus 104 may include, but is not limited to, any one or more of the following: an individual's gaze point, the direction of an individual's voice, an individual's touch, an individual's body heat, an individual's scent, an individual's thumbprint, an individual's fingerprint, eye movements, and the like. Accordingly, the at least one sensor 102 is configured for precisely detecting the object of the concentrated focus 104, e.g. an image 110 or a portion of a list segment 126. Image 110 may comprise a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions, and the like.
Method 300 also comprises using the at least one processor 108 configured for controlling an image 110 displayed (step 304), based on the area of concentrated focus 104. The at least one processor 108 is electronically connected to the at least one sensor 102. In the case of electronic connections, the electronic connections may be wired and/or wireless. Controlling the image 110 displayed on the display device 106 includes, but is not limited to: exploding (e.g. enlarging) the image 110 of the area of concentrated focus 104 displayed on the display device 106; highlighting the area of concentrated focus 104; highlighting an area on a map 122 that corresponds to the area of concentrated focus 104 on a list segment 126; and highlighting a list segment 126 that corresponds to the area of concentrated focus 104 on a map 122, as described in greater detail above.
FIG. 4 is a sample flowchart of an exemplary method 400 of controlling the image 110 displayed on the display device 106 according to one embodiment. Method 400 comprises detecting an area of concentrated focus 104 on a display device 106 (step 402) by using at least one sensor 102 electronically connected to at least one processor 108. The at least one processor 108 may have stored thereon computer executable instructions 116, executable by the at least one processor 108, configured for controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106, as well as for controlling the image 110 displayed (step 404) on the display device 106 that is electronically connected to the at least one processor 108, based on the area of concentrated focus 104.
FIG. 5 is a sample flowchart of an exemplary method 500 of controlling the image 110 displayed according to an alternate embodiment. In some embodiments, the at least one sensor 102 and the processor 108 are functional components of an eye tracking system 100. The eye tracking system 100 is configured to track the individual's eye movements (step 502). As the individual scans the display device 106, the eye tracking system 100 detects the individual's gaze point (step 504) and is able to determine whether the individual's gaze point is focused on a list segment 126 or an image 110. The at least one processor 108 of the eye tracking system 100 is configured for controlling the image 110 displayed (step 506), based on a location of the gaze point. The display device 106 is electronically connected to the at least one processor 108 via at least one means 114. As such, processor 108 controls the image 110 displayed thereon based on the gaze point. Controlling the image 110 displayed on the display device 106 includes, but is not limited to: exploding (e.g. enlarging) the area of concentrated focus 104 displayed on the display device 106; highlighting the location of the gaze point; highlighting an area on a map 122 that corresponds to the gaze point on a list segment 126; and highlighting a list segment 126 that corresponds to the gaze point on a map 122, as described above.
FIG. 6 is a sample flowchart of an exemplary method 600 of transmitting generated content 118 for the area of concentrated focus 104 according to one embodiment. Method 600 comprises using at least one sensor 102 configured to detect an area of concentrated focus 104 (step 602). The at least one sensor 102 may comprise any of the sensors described above, as well as any other sensors that are well known and used in the arts. Accordingly, the step of detecting an area of concentrated focus (step 602) may include detecting, for example, a fingerprint, a thumbprint, body heat, and the like, from the display device 106. The at least one sensor 102 senses the area of concentrated focus 104 and the processor controls the image 110 displayed (step 604) on the display device 106 in much the same manner as discussed in conjunction with steps 404 and 506 of FIGS. 4 and 5, respectively.
However, an individual may desire to have the resulting image 110 displayed on the display device 106 also displayed on his or her own portable multimedia device 112, e.g. a cell phone, thereby allowing the individual to obtain, for example, a portable downloadable copy of the image 110. As such, processor 108 may be programmed using the computer executable instructions 116 to initiate an inquiry, via a user interface, to solicit a response (step 606) from the individual as to whether the individual wants the displayed image 110 transmitted to the individual's multimedia device 112. Processor 108 thus prompts the individual as to whether or not a copy of the image 110 is being requested (step 608). If so, processor 108 generates content 118 (step 610) for the area of concentrated focus 104 and transmits the generated content 118 (step 612), based on the area of concentrated focus 104, to the individual's at least one multimedia device 112 via its at least one communications means 130 by communicating wirelessly. As previously described, content 118 includes, for example, a map 122, a graphical display, text, audio, charts, photographs, pictures, advertisements, merchandise, navigational directions, and the like.
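A minimal sketch of the solicit-and-transmit flow of steps 606 through 612 follows; the console prompt stands in for the kiosk's user interface, and the stubbed functions stand in for content generation and the wireless communications means 130. All names are hypothetical.

    def generate_content(focus_area):
        # Stand-in for step 610: produce e.g. a map tile or directions.
        return f"directions for {focus_area}".encode("utf-8")

    def transmit_wirelessly(device_id, content):
        # Stand-in for step 612: e.g. a Bluetooth or Wi-Fi transfer.
        print(f"sending {len(content)} bytes to device {device_id}")

    def offer_copy(focus_area, device_id):
        answer = input("Send a copy to your device? [y/n] ")   # steps 606-608
        if answer.strip().lower().startswith("y"):
            transmit_wirelessly(device_id, generate_content(focus_area))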
Communication means 130 may be a wireless communication means 130', which employs a short-range wireless protocol, such as, but not limited to, a radio frequency transceiver, and the like, as described above in accordance with the systems 100, 200 of the invention. The at least one communication means 130 is configured to communicate wirelessly with the multimedia device 112 over a wireless area network.
FIG. 7 is a sample flowchart of an exemplary method 700 of determining a location of a multimedia device 112 based on the received generated content 118 according to one embodiment. Multimedia device 112 is equipped with at least one communications means 130, which may be a wireless communication means 130' employing a short-range wireless protocol to transmit and receive communications from a global positioning system receiver and/or a local positioning system receiver, such as a Wi-Fi positioning system receiver. In other embodiments, the communication means 130' may employ wireless protocols like Bluetooth, ZigBee, the 802.11 series, or a wireless modem, such as, but not limited to, a global system for mobile communications (GSM) modem, or any other short-range wireless protocol that is well known and used in the arts.
Method 700 includes the multimedia device 112 receiving generated content 118 (step 702) from the display device 106, from which directions may be obtained for navigation in outdoor and indoor spaces. Multimedia device 112 also includes location determining means 202, which is the same as the communications means 130' described above. Communications means 130' are either electronically or mechanically connected to processor 108' and are configured to determine a location of the multimedia device 112 (step 704) based on the positioning system signal. The at least one communications means 130' may include a radio frequency transceiver configured to communicate wirelessly and to receive the at least one signal from a positioning system over a wireless area network. In some embodiments, the multimedia device 112 comprises at least one positioning system receiver. In some embodiments, the multimedia device's at least one processor 108' is configured for determining the proximity of the multimedia device 112 to the subject of the generated content 118. This may be accomplished by providing at least one sensor 102 configured for detecting an area of concentrated focus 104 on a display device 106, and providing at least one processor 108 electronically connected to the at least one sensor 102 as functional components of a tracking system 100. The at least one processor 108 is configured for generating content 118 for the area of concentrated focus 104, and includes computer executable instructions 116 configured for performing any one or more of the following: controlling the at least one sensor 102 to detect the area of concentrated focus 104 on the display device 106; controlling the image 110 displayed based on the area of concentrated focus 104; generating content 118 for the area of concentrated focus 104; and determining the location of the multimedia device 112 by triangulating the location of the multimedia device 112 based on at least one positioning system signal.
Determining the location may include the location determining means 202 receiving at least one signal 204 from a positioning system 206 to determine the geographical position of the multimedia device 112. Determining the geographical position of the multimedia device 112 may include determining the latitude and longitude coordinates of the geographical position of the multimedia device 112. Such determinations may include triangulating the location of the multimedia device 112 based on at least one positioning system signal 204 received by the location determining means 202, as described above.
In some embodiments, determining the geographical position of the multimedia device 112 includes the device's location determining means 202 receiving a signal 204, such as a location determination signal, from a positioning system 206, such as a global positioning system (GPS), or a local positioning system, such as a Wi-Fi positioning system, which may originate from a satellite or a ground-based antenna. In some embodiments, the latitude and longitude coordinates of the geographical position of the subject of the generated content 118 are preloaded and stored in the multimedia device's storage means 120' as part of a user interface or application. In some embodiments, as the multimedia device 112 approaches a predetermined radius around the subject of the generated content 118, the location determining means 202 dynamically receives the latitude and longitude coordinates of the geographical position of the subject of the generated content 118, where the coordinates are stored on the storage means 120' and thereafter accessed by the processor 108'.
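The predetermined-radius behaviour described above can be sketched with the standard haversine great-circle formula; the 100-metre radius is an illustrative assumption, not a value taken from the disclosure.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two latitude/longitude points."""
        r = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def within_radius(device_pos, subject_pos, radius_m=100.0):
        """True once the device is inside the predetermined radius of the subject."""
        return haversine_m(*device_pos, *subject_pos) <= radius_m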
Hardware and Operating Environment
This section provides an overview of example hardware and the operating environments in conjunction with which embodiments of the inventive subject matter can be implemented.
A software program may be launched from a computer readable medium in a computer-based system 100 to execute the functions defined in the software program. Various programming languages may be employed to create software programs designed to implement and perform the methods disclosed herein. The programs may be structured in an object-oriented format using an object-oriented language such as Java or C++. Alternatively, the programs may be structured in a procedure-oriented format using a procedural language, such as assembly or C. The software components may communicate using a number of mechanisms, such as application program interfaces, or inter-process communication techniques, including remote procedure calls. The teachings of various embodiments are not limited to any particular programming language or environment. Thus, other embodiments may be realized, as discussed regarding FIG. 8 below.

FIG. 8 is a block diagram representing an article 800 according to various embodiments. Such embodiments may comprise a computer, a memory system, a magnetic or optical disk, some other storage device, or any type of electronic device or system 100. The article 800 may include one or more processor(s) 108 coupled to a machine-accessible medium such as a storage means 120 used for storing data in memory (e.g., a memory including electrical, optical, or electromagnetic elements). The medium may contain associated information 804 (e.g., computer program instructions, data, or both) which, when accessed, results in a machine (e.g., the processor(s) 108) performing the activities previously described herein.
While certain novel features of this invention have been shown and described and are pointed out in the annexed claims, it is not intended to be limited to the details above, since it will be understood that various omissions, modifications, substitutions and changes in the forms, method, steps and system illustrated and in its operation can be made by those skilled in the art without departing in any way from the spirit of the present invention.

Claims

What is claimed is:
1. A system comprising:
a. at least one sensor configured for detecting an area of concentrated focus on a display device; and
b. at least one processor electronically connected to the at least one sensor, wherein the at least one processor is configured for controlling an image displayed, based on the area of concentrated focus.
2. The system of claim 1, further comprising:
a. at least one means for electronically connecting a display device to the at least one processor; and
b. computer executable instructions executable by the at least one processor and configured for performing any one or more of the following:
(i) controlling the at least one sensor to detect the area of concentrated focus on the display device;
(ii) controlling the image displayed based on the area of concentrated focus; and
(iii) generating content for the area of concentrated focus.
3. The system of claim 1, wherein the at least one sensor includes, but is not limited to, any one or more of the following: an eye scanner, an iris scanner, a face scanner, a visual sensor, an audio sensor, a tactile sensor, a thermal sensor, a chemical sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a camera, a thermal imaging camera, a thumbprint scanner, a fingerprint scanner, and a microphone.
4. The system of claim 1, wherein the area of concentrated focus includes any one or more of the following locations: an individual's gaze point, an individual's voice, an individual's touch, an individual's body heat, an individual's scent, an individual's thumbprint, an individual's fingerprint, and eye movements.
5. The system of claim 2, wherein content may include but is not limited to any one or more of the following: a map, a graphical display, text, audio, charts, photographs, pictures, advertisement, merchandise, and navigational directions.
6. The system of claim 1, wherein the at least one sensor and the processor are functional components of an eye tracking system.
7. The system of claim 6, wherein the eye tracking system is configured to detect an individual's gaze point.
8. The system of claim 1, wherein the at least one sensor is configured to determine if an individual's gaze point is focused on a list or an image.
9. The system of claim 1, wherein the image may comprise any one or more of the following: a map, a graphical display, text, audio, charts, photographs, pictures, advertisement, merchandise, and navigational directions.
10. The system of claim 1, wherein controlling the image displayed includes but is not limited to any one or more of the following: exploding the image of the area of concentrated focus displayed on the display device; highlighting the area of concentrated focus; highlighting an area on a map that corresponds to the area of concentrated focus on a list segment; and highlighting a list segment that corresponds to the area of concentrated focus on a map.
11. The system of claim 1, further comprising a kiosk containing the at least one processor.
12. The system of claim 11, wherein the kiosk contains the at least one sensor and at least one means for connecting a display device to the at least one sensor.
13. The system of claim 11, wherein the kiosk is adapted to include a display device.
14. The system of claim 1, comprising at least one communications means configured for transmitting generated content to at least one multimedia device based on the area of concentrated focus.
15. The system of claim 14, wherein the at least one communications means is configured to communicate wirelessly.
16. The system of claim 14, wherein the multimedia device includes, but is not limited to: portable computers, laptop computers, cellular phones, smart phones, personal digital assistants, tablet personal computers, notebooks, iPads, portable screens, and a portable processing device.
17. The system of claim 16, further comprising at least one communications means positioned within the multimedia device configured to receive generated content for the area of concentrated focus.
18. A system comprising:
a. at least one communications means positioned within a multimedia device configured to receive generated content; and
b. at least one processor positioned within the multimedia device and in electrical communication with the multimedia device's at least one communications means, wherein the at least one processor is configured to determine a location of the multimedia device based on a positioning system signal.
19. The system of claim 18, wherein the at least one communications means is configured to receive at least one signal from a positioning system.
20. The system of claim 18, further comprising:
(a) at least one sensor configured for detecting an area of concentrated focus on a display device; and
(b) at least one processor electronically connected to the at least one sensor as functional components of a tracking system, wherein the at least one processor is configured for generating content for the area of concentrated focus.
21. The system of claim 18, further comprising:
a. at least one means for electronically connecting a display device to the at least one processor; and
b. computer executable instructions executable by the at least one processor and configured for performing any one or more of the following:
(a) controlling the at least one sensor to detect the area of concentrated focus on the display device;
(b) controlling the image displayed based on the area of concentrated focus;
(c) generating content for the area of concentrated focus; and
(d) determining the location of the multimedia device by triangulating the location of the multimedia device based on at least one positioning system signal.
22. The system of claim 18, wherein the multimedia device's at least one processor is configured for determining the location of the multimedia device by triangulating the location of the multimedia device based on at least one positioning system signal.
23. The system of claim 18, wherein the at least one communications means is a radio frequency transceiver.
24. The system of claim 18, wherein the at least one communications means is configured to communicate wirelessly.
25. The system of claim 18, wherein the multimedia device comprises at least one positioning system receiver.
26. The system of claim 25, wherein the positioning system receiver includes, but is not limited to, a global positioning system receiver and a local positioning system receiver, such as a Wi-Fi positioning system receiver.
27. A method comprising:
a. using at least one sensor configured for detecting an area of concentrated focus on a display device; and
b. using at least one processor electronically connected to the at least one sensor, wherein the at least one processor is configured for controlling an image displayed, based on the area of concentrated focus.
28. The method of claim 27, further comprising providing at least one means for electronically connecting a display device to the at least one processor.
29. The method of claim 27, further comprising:
(a) detecting the area of concentrated focus on the display device; and
(b) controlling the image displayed on the display device electronically connected to the at least one processor, based on the area of concentrated focus.
30. The method of claim 27, wherein the at least one sensor includes, but is not limited to, any one or more of the following: an eye scanner, an iris scanner, a face scanner, a visual sensor, an audio sensor, a tactile sensor, a thermal sensor, a chemical sensor, an electrical sensor, a capacitive sensor, a resistive sensor, a camera, a thermal imaging camera, a thumbprint scanner, a fingerprint scanner, and a microphone.
31. The method of claim 27, wherein the area of concentrated focus includes any one or more of the following locations: an individual's gaze point, an individual's voice, an individual's touch, an individual's body heat, an individual's scent, an individual's thumbprint, an individual's fingerprint, and eye movements.
32. The method of claim 27, further comprising generating content for the area of concentrated focus.
33. The method of claim 32, wherein content may include but is not limited to any one or more of the following: a map, a graphical display, text, audio, charts, photographs, pictures, advertisement, merchandise, and navigational directions.
34. The method of claim 27, wherein the at least one sensor and the processor are functional components of an eye tracking system.
35. The method of claim 34, wherein the eye tracking system is configured to detect an individual's gaze point.
36. The method of claim 27, wherein the at least one sensor is configured to determine if an individual's gaze point is focused on a list segment or an image.
37. The method of claim 27, wherein the image may comprise any one or more of the following: a map, a graphical display, text, audio, charts, photographs, pictures, advertisement, merchandise, and navigational directions.
38. The method of claim 27, wherein controlling the image displayed includes but is not limited to: exploding the image of the area of concentrated focus displayed on the display device; highlighting the area of concentrated focus; highlighting an area on a map that corresponds to the area of concentrated focus on a list segment; and highlighting a list segment that corresponds to the area of concentrated focus on a map.
39. The method of claim 27, further comprising providing a kiosk containing the at least one processor.
40. The method of claim 39, further comprising providing the kiosk with the at least one sensor and at least one means for connecting the display device to the at least one sensor.
41. The method of claim 39, wherein the kiosk is adapted to include a display device.
42. The method of claim 27, comprising transmitting generated content to at least one multimedia device based on the area of concentrated focus via at least one communication means.
43. The method of claim 42, further comprising communicating wirelessly to a multimedia device.
44. A method comprising:
a. receiving generated content; and
b. determining a location of a multimedia device based on a positioning system signal.
45. The method of claim 44, comprising receiving at least one signal from a positioning system.
46. The method of claim 44, further comprising:
(a) providing at least one sensor configured for detecting an area of concentrated focus on a display device; and
(b) providing at least one processor electronically connected to the at least one sensor as functional components of a tracking system, wherein the at least one processor is configured for generating content for the area of concentrated focus.
47. The method of claim 44, further comprising performing any one or more of the following:
(a) controlling the at least one sensor to detect the area of concentrated focus on the display device;
(b) controlling the image displayed based on the area of concentrated focus;
(c) generating content for the area of concentrated focus; and
(d) determining the location of the multimedia device by triangulating the location of the multimedia device based on at least one positioning system signal.
48. The method of claim 44, wherein the at least one communications means is a radio frequency transceiver.
49. The method of claim 44, wherein the at least one communications means is configured to communicate wirelessly.
50. The method of claim 44, wherein the at least one communications means is configured to receive the at least one signal from a positioning system over a wireless area network.
51. The method of claim 44, wherein the multimedia device comprises at least one positioning system receiver.
52. The method of claim 51, wherein the positioning system receiver includes, but is not limited to, a global positioning system receiver and a local positioning system receiver, such as a Wi-Fi positioning system receiver.
PCT/US2011/024357 2010-02-10 2011-02-10 System and method of determining an area of concentrated focus and controlling an image displayed in response WO2011100436A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US70337610A 2010-02-10 2010-02-10
US12/703,376 2010-02-10
US71632810A 2010-03-03 2010-03-03
US12/716,328 2010-03-03

Publications (1)

Publication Number Publication Date
WO2011100436A1 true WO2011100436A1 (en) 2011-08-18

Family

ID=44114292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/024357 WO2011100436A1 (en) 2010-02-10 2011-02-10 System and method of determining an area of concentrated focus and controlling an image displayed in response

Country Status (1)

Country Link
WO (1) WO2011100436A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183749A1 (en) * 2003-03-21 2004-09-23 Roel Vertegaal Method and apparatus for communication between humans and devices
US7561143B1 (en) * 2004-03-19 2009-07-14 The University of the Arts Using gaze actions to interact with a display
JP2006047890A (en) * 2004-08-09 2006-02-16 Seiko Epson Corp Image display device
WO2007125285A1 (en) * 2006-04-21 2007-11-08 David Cumming System and method for targeting information
US20070279591A1 (en) * 2006-05-31 2007-12-06 Sony Ericsson Mobile Communications Ab Display based on eye information

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013033842A1 (en) * 2011-09-07 2013-03-14 Tandemlaunch Technologies Inc. System and method for using eye gaze information to enhance interactions
US9691125B2 (en) 2011-12-20 2017-06-27 Hewlett-Packard Development Company L.P. Transformation of image data based on user position
EP2795572A4 (en) * 2011-12-20 2015-12-30 Hewlett Packard Development Co Transformation of image data based on user position
WO2013110846A1 (en) * 2012-01-26 2013-08-01 Nokia Corporation Capacitive eye tracking sensor
US9414746B2 (en) 2012-01-26 2016-08-16 Nokia Technologies Oy Eye tracking
EP2847648A4 (en) * 2012-05-09 2016-03-02 Intel Corp Eye tracking based selective accentuation of portions of a display
WO2013169237A1 (en) * 2012-05-09 2013-11-14 Intel Corporation Eye tracking based selective accentuation of portions of a display
CN104395857A (en) * 2012-05-09 2015-03-04 英特尔公司 Eye tracking based selective accentuation of portions of a display
WO2014039449A1 (en) * 2012-09-05 2014-03-13 Apple Inc. Delay of display event based on user gaze
US10162478B2 (en) 2012-09-05 2018-12-25 Apple Inc. Delay of display event based on user gaze
US9189064B2 (en) 2012-09-05 2015-11-17 Apple Inc. Delay of display event based on user gaze
CN103869946A (en) * 2012-12-14 2014-06-18 联想(北京)有限公司 Display control method and electronic device
US9361854B2 (en) 2013-05-13 2016-06-07 Sony Corporation Method for stabilization and a system thereto
WO2014184730A1 (en) * 2013-05-13 2014-11-20 Sony Corporation A method for stabilization and a system thereto
EP2804093A1 (en) * 2013-05-13 2014-11-19 Sony Corporation A method for stabilization and a system thereto
WO2015187294A1 (en) * 2014-06-06 2015-12-10 Intel Corporation Technologies for viewer attention area estimation

Similar Documents

Publication Publication Date Title
US10509477B2 (en) Data services based on gesture and location information of device
WO2011100436A1 (en) System and method of determining an area of concentrated focus and controlling an image displayed in response
US9104293B1 (en) User interface points of interest approaches for mapping applications
RU2417437C2 (en) Displaying network objects on mobile devices based on geolocation
US8615257B2 (en) Data synchronization for devices supporting direction-based services
JP6208654B2 (en) Method and system for pushing point of interest information
US9020537B2 (en) Systems and methods for associating virtual content relative to real-world locales
JP2013545154A (en) RF fingerprint for content location
CN102300205A (en) Publicity and transfer of mobile application program based on position and context
CN103328930A (en) Non-map-based mobile interface
KR20180109229A (en) Method and apparatus for providing augmented reality function in electornic device
CN104094183A (en) System and method for wirelessly sharing data amongst user devices
AU2017219142A1 (en) Location Based Augmented Reality Property Listing Method, Software and System
US20170115749A1 (en) Systems And Methods For Presenting Map And Other Information Based On Pointing Direction
US11074292B2 (en) Voice tagging of video while recording
US10410303B1 (en) Method and system for a mobile computerized multiple function real estate users assistant
KR20160016579A (en) Mobile device and method for executing an application based on a specific zone
WO2016005799A1 (en) Social networking system and method
KR102332524B1 (en) Method for providing service by bending mobile device and mobile device
ES2736408T3 (en) Procedure and apparatus for displaying an object related to content reproduced by a second device
US9730008B2 (en) Method for guiding location, machine-readable saving medium, and mobile communication terminal
US9538319B1 (en) Synchronization for mapping applications
US20120059582A1 (en) System and method of locating a structure in large spaces
RU2744626C2 (en) Device for location-based services
US20150358782A1 (en) Catch the screen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11712360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11712360

Country of ref document: EP

Kind code of ref document: A1