EP3667456A2 - Priority-based adjustment to display content - Google Patents
- Publication number
- EP3667456A2 (application EP19204793.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- observer
- display device
- display
- relative
- content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- method (claims, abstract, description: 25)
- correction (claims, description: 21)
- change (claims, description: 11)
- transition (claims, description: 11)
- facial (claims, description: 8)
- locomotion (description: 7)
- processing (description: 7)
- computer program (description: 5)
- diagram (description: 4)
- interaction (description: 4)
- measurement (description: 4)
- approach (description: 3)
- detection method (description: 3)
- imaging method (description: 3)
- process (description: 3)
- biological transmission (description: 2)
- engineering process (description: 2)
- head (description: 2)
- optical effect (description: 2)
- visual effect (description: 2)
- communication (description: 1)
- echolocation (description: 1)
- effects (description: 1)
- electrical wiring (description: 1)
- electromagnetic radiation (description: 1)
- emotional (description: 1)
- fiber (description: 1)
- forehead (description: 1)
- localization (description: 1)
- mapping (description: 1)
- material (description: 1)
- modification (description: 1)
- monitoring process (description: 1)
- phase shift (description: 1)
- spectrum (description: 1)
- vectors (description: 1)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3206—Monitoring of events, devices or parameters that trigger a change in power modality
- G06F1/3231—Monitoring the presence, absence or movement of users
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B3/00—Applications of devices for indicating or signalling operating conditions of elevators
- B66B3/002—Indicators
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B3/00—Applications of devices for indicating or signalling operating conditions of elevators
- B66B3/002—Indicators
- B66B3/006—Indicators for guiding passengers to their assigned elevator car
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1415—Digital output to display device ; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/003—Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/068—Adjustment of display parameters for control of viewing angle adjustment
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the subject matter disclosed herein generally relates to the field of displays, and more particularly to priority-based adjustments to display content.
- Display content can include text and graphics, for instance, to give wayfinding guidance to a person, where it is assumed that the person is directly facing a display device rather than viewing from an oblique angle.
- users often do not stop in front of a stationary monitor but prefer to continue walking to avoid disrupting the flow of people following or to reduce the time wasted in stopping and re-starting.
- another user may attempt to access the display device while content associated with the previous user is still being displayed.
- the later user may have to wait for a timeout period to elapse or manually request a return to an input screen, which results in an extended interaction period.
- a method of priority-based adjustment to display content includes determining a position of a first observer relative to a display device and determining a position of a second observer relative to the display device.
- the method also includes displaying, on the display device, a first content screen associated with the first observer based on determining that the first observer has a greater display priority than the second observer based at least in part on the position of the first observer and the position of the second observer.
- the method further includes displaying, on the display device, a second content screen associated with the second observer based on determining that the second observer has a greater display priority than the first observer based at least in part on the position of the first observer and the position of the second observer.
- further embodiments may include where the first content screen includes a first elevator car assignment for the first observer and the second content screen includes a second elevator car assignment for the second observer.
- further embodiments may include switching display content of the display device to a user input screen based on detecting a transition of the greater display priority from the first observer to the second observer.
- further embodiments may include switching display content of the display device to a user input screen based on reaching a timeout period after displaying the first content screen or the second content screen.
- further embodiments may include switching the display device to a screen saver mode based on detecting no observers within a threshold distance of the display device for a screen saver timeout period.
- further embodiments may include tracking a position of an observer who most recently interacted with the display device as a most recent observer, determining a distortion correction to apply based on the position of the most recent observer relative to the display device, and outputting the distortion correction to the display device.
- further embodiments may include where the greater display priority is determined based on one or more of: a distance relative to the display device, an orientation relative to the display device, a change in position relative to the display device, and a gaze relative to the display device.
- further embodiments may include adjusting the distortion correction as the most recent observer continues to change position relative to the display device, where the distortion correction includes at least a rotation and rescaling operation.
- further embodiments may include determining the position of the most recent observer relative to the display device based on one or more angular differences between the most recent observer and the display device.
- further embodiments may include where the position is determined based on one or more of: a depth sensor, a radar sensor, an ultrasonic sensor, a multi-camera system, an infrared sensor, and one or more floor pressure sensors.
- further embodiments may include distinguishing the first observer from the second observer based on one or more of: a mobile credential, an observed physical characteristic, and facial recognition.
- a system includes a display device, one or more sensors operable to detect a position of an observer, and a display adjustment system operably coupled to the display device and the one or more sensors.
- the display adjustment system configured to perform a plurality of operations including determining a position of a first observer relative to the display device, and determining a position of a second observer relative to the display device.
- the display adjustment system configured to display, on the display device, a first content screen associated with the first observer based on determining that the first observer has a greater display priority than the second observer based at least in part on the position of the first observer and the position of the second observer.
- the display adjustment system is also configured to display, on the display device, a second content screen associated with the second observer based on determining that the second observer has a greater display priority than the first observer based at least in part on the position of the first observer and the position of the second observer.
- embodiments of the present disclosure include applying priority-based adjustments to display content on a display device to account for transitioning between observers of the display content.
- embodiments adjust display content, such as text and graphics, with respect to an observer as the observer changes positions relative to a display device.
- a display adjustment system can track the position of multiple observers relative to the display device to determine which of the observers has a greater display priority. For example, once a first observer interacts with the display device, a first content screen associated with the first observer can be displayed, such as an elevator car assignment. The first observer may also be the closest observer with the shortest distance to the display device. The first content screen can remain displayed on the display device for a timeout period or until a second observer approaches the display device and has a shorter distance to the display device than the first observer, as one example of a greater display priority.
- display priority may be based on an orientation relative to the display device, a change in position relative to the display device, and/or a gaze relative to the display device to determine which observers are actively approaching and/or viewing the display device.
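The priority cues described above (distance, orientation, change in position, and gaze relative to the display device) can be combined into a single score. The following sketch is illustrative only; the weights, function names, and parameters are assumptions, not part of the disclosure:

```python
import math

def display_priority(distance, facing_angle_deg, approach_speed, gazing,
                     behind_display=False):
    """Illustrative priority score: nearer, better-facing, approaching,
    and gazing observers score higher. Weights are assumed values."""
    if behind_display:
        return 0.0  # an observer behind the display receives zero priority
    proximity = 1.0 / (1.0 + distance)             # closer -> larger
    facing = max(0.0, math.cos(math.radians(facing_angle_deg)))
    approaching = max(0.0, approach_speed)         # m/s toward the display
    gaze_bonus = 1.0 if gazing else 0.0
    return 2.0 * proximity + facing + approaching + gaze_bonus

# The observer with the greatest score is granted the display content.
observers = {"A": display_priority(1.5, 10.0, 0.8, True),
             "B": display_priority(4.0, 60.0, -0.5, False)}
highest = max(observers, key=observers.get)
```

Under these assumed weights, the closer, approaching, gazing observer "A" wins the display over the distant, retreating observer "B".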
- the display device can switch display content to a user input screen based on detecting a transition of the greater display priority from the first observer to the second observer.
- the second observer can then make a selection from the user input screen, resulting in displaying a second content screen associated with the second observer.
- the second observer does not have to wait for a full timeout period to elapse or to physically press any buttons to switch the display device from displaying the first content screen to displaying the user input screen on the display device.
- the display device can switch directly between observer-specific content without requiring direct user input.
- Embodiments use one or more sensors to track the position (e.g., location and/or orientation) and trajectory of an observer of the display device, which may include tracking a portion of the observer, such as the head or gaze of the observer.
- a distortion correction can be applied to display content to correct the display content with respect to the observer while having the greater/greatest display priority.
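As noted earlier, the distortion correction can include at least a rotation and rescaling operation. One plausible realization (an assumed model, not the patented formula) pre-stretches the content by 1/cos(θ) along the observer's viewing azimuth φ, so the content appears undistorted from the observer's oblique viewpoint:

```python
import math

def distortion_correction(theta_deg, phi_deg):
    """Return an illustrative 2x2 content transform for an observer whose
    line of sight is tilted by theta from the screen normal, at azimuth phi.
    Implemented as: rotate into the viewing azimuth, stretch by 1/cos(theta),
    rotate back. This is an assumed correction model for the sketch."""
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    stretch = 1.0 / max(math.cos(t), 1e-6)   # rescaling factor (foreshortening)
    c, s = math.cos(p), math.sin(p)
    return [[c * c * stretch + s * s, c * s * (stretch - 1.0)],
            [c * s * (stretch - 1.0), s * s * stretch + c * c]]
```

For a head-on observer (θ = 0) the transform reduces to the identity, so content is displayed unmodified; at θ = 60° along φ = 0 the horizontal axis is stretched by a factor of two.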
- an observer 102A can view a display device 104 (e.g., a computer monitor) while transitioning between a location 106 proximate to the display device 104 and a desired end-point location 108.
- the display device 104 can be a kiosk 110 in a substantially fixed position that is viewable by a first observer 102A while transitioning across the location 106.
- display content to be displayed on the display device 104 includes directions in text and/or graphic form to assist the first observer 102A in navigating through a building.
- One or more sensors 112 can be used to determine a position of the first observer 102A, where the position includes a location and/or orientation of the first observer 102A relative to the display device 104.
- the one or more sensors 112 can also determine a position of one or more other observers, such as a second observer 102B.
- the one or more sensors 112 can include one or more of: a depth sensor, a radar sensor, an ultrasonic sensor, a multi-camera system (e.g., a stereoscopic camera system), an infrared sensor, floor pressure sensors, and the like, configured to capture at least one feature of the first observer 102A and/or second observer 102B, such as a head shot and/or body shot of the first observer 102A and/or second observer 102B.
- the environment of FIG. 1 is an elevator lobby 120 that includes multiple elevator access points 122A, 122B, 122C, 122D (generally, 122). Each of the elevator access points 122A, 122B, 122C, 122D can be associated with a set of elevator doors 124A, 124B, 124C, 124D. There may be various impediments for the observers 102 to travel between the location 106 proximate to the display device 104 and the desired end-point location 108, such as objects 128A, 128B.
- the objects 128A, 128B are examples of path movement constraints (e.g., walls, furniture, support columns, various structures, etc.) that limit the likely future position of the observers 102.
- the objects 128A, 128B may limit the movement options of the observers 102, for instance, such that the observers 102 are most likely to pass between the objects 128A, 128B.
- the one or more sensors 112 can be used to determine a position and a trajectory 130 of the observers 102 by monitoring movement over a period of time.
- sensors 112 can include one or more of a 2D red, green, blue (RGB) video camera and/or a depth sensor providing three-dimensional (3D) information that includes the distance and angle between the object and the depth sensor.
- 3D depth sensing technologies and devices that can be used include, but are not limited to, a structured light measurement, phase shift measurement, time of flight measurement, stereo triangulation device, sheet of light triangulation device, light field cameras, coded aperture cameras, computational imaging techniques, simultaneous localization and mapping (SLAM), imaging radar, imaging sonar, echolocation, laser radar, scanning light detection and ranging (LIDAR), flash LIDAR or a combination including at least one of the foregoing.
- a floor pressure mat can detect the observer's position on the floor which can be used to, for example, compute the distance and orientation of the observer relative to the location of the display device 104.
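The distance and orientation implied by a floor-pressure position reading can be computed with basic plane geometry. In this sketch the display location defaults to the origin; the names and layout are assumptions for illustration:

```python
import math

def distance_and_bearing(observer_xy, display_xy=(0.0, 0.0)):
    """From a floor-pressure reading of the observer's (x, y) floor position,
    compute the distance to the display and the bearing of the observer as
    seen from the display (degrees, counterclockwise from the x-axis)."""
    dx = observer_xy[0] - display_xy[0]
    dy = observer_xy[1] - display_xy[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```

For instance, an observer standing at (3, 4) metres from a display at the origin is 5 metres away at a bearing of roughly 53 degrees.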
- a 3D depth sensor may be operable to produce 3D information from defocus, a focal stack of images or structure from motion.
- a plurality of 2D depth sensors can be used to provide two-dimensional information that includes the distance between the object and the depth sensor.
- a display adjustment system 114 can be incorporated into or operably coupled to the display device 104 and/or the one or more sensors 112 in a local, networked, or distributed configuration.
- the display adjustment system 114 can include a processing system 115, a memory system 116, and a device interface 118.
- the processing system 115 may be but is not limited to a single-processor or multiprocessor system of any of a wide array of possible architectures, including field programmable gate array (FPGA), central processing unit (CPU), application specific integrated circuits (ASIC), digital signal processor (DSP) or graphics processing unit (GPU) hardware arranged homogenously or heterogeneously.
- the memory system 116 may be a storage device such as, for example, a random access memory (RAM), read only memory (ROM), or other electronic, optical, magnetic or any other computer readable storage medium.
- the memory system 116 can include computer-executable instructions that, when executed by the processing system 115, cause the processing system 115 to perform operations as further described herein.
- the device interface 118 can include wired, wireless, and/or optical communication links to the display device 104 and the one or more sensors 112. Only a single instance of the display adjustment system 114 is depicted in FIG. 1.
- the display adjustment system 114 is operable to capture data from the one or more sensors 112 and perform facial and/or body orientation detection.
- the facial and/or body orientation detection may use known image processing techniques to identify features of the observers 102. For instance, facial recognition performed by the display adjustment system 114 can track relative geometry of facial features such as eyes, mouth, nose, forehead, chin, and other features to both estimate the position (including orientation and/or gaze) of the observers 102 and also distinguish between multiple observers.
- the display adjustment system 114 is also operable to apply a distortion correction to display content based on the position and trajectory of the observers 102 relative to display device 104 to correct the display content with respect to the observers 102. For example, as the first observer 102A approaches and passes location 106, the angle of a line-of-sight of the first observer 102A changes relative to a display surface of the display device 104.
- the first observer 102A can be at a position with a distance D1 from the display device 104 while the second observer 102B can be at a position with a distance D2 from the display device 104. While distance D1 is less than distance D2, a first content screen associated with the first observer 102A can be displayed, such as directions to navigate to an assigned elevator access point 122, as one example of having a greater display priority.
- the display device 104 can switch from displaying a first content screen associated with the first observer 102A to accept input from the second observer 102B and display a second content screen associated with the second observer 102B. Similar examples of transitions can apply for changes in position, orientation, and/or gaze to determine whether the first observer 102A or the second observer 102B has the greater display priority. As another example, visibility of the display device 104 can impact priority.
- an observer 102 who is behind the display device 104 may receive zero priority, even if the observer 102 is very close to the display device 104 and gazing at the back of the display device 104.
- FIG. 2 depicts a configuration of system 200 as a variation of the system 100 of FIG. 1 for priority-based adjustment to display content.
- the display adjustment system 114 interfaces to a display device 204 that is an embodiment of the display device 104 mounted in a corridor 220.
- Sensors 212A and 212B are embodiments of the one or more sensors 112 of FIG. 1 mounted at different positions relative to the display device 204.
- the sensors 212A, 212B can be positioned such that a field-of-view 213A of sensor 212A overlaps with a field-of-view 213B of sensor 212B.
- the overlapping fields-of-view 213A, 213B can be used to better track the position and trajectory of observers 202 (A, B, C, D) and assist in confirming which observers 202 are most likely viewing the display device 204.
- observers 202A, 202B may both be in close enough proximity to the display device 204 to interact with the display device 204 (within an interaction distance threshold 205), but observers 202C, 202D may transition to positions beyond a threshold distance 207 and be too far from the display device 204 to read the display content.
- the thresholds 205, 207 may provide a hysteresis band, where interactions can be initiated and display content may be maintained as observers 202 transition away from the display device 204.
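The hysteresis band formed by the inner interaction threshold 205 and the larger outer threshold 207 can be sketched as follows. The function signature and the threshold values (in metres) are assumptions for illustration:

```python
def update_interaction(is_interacting, distance,
                       interaction_threshold=1.5, release_threshold=4.0):
    """Hysteresis sketch: an interaction starts only inside the inner
    threshold, but once started it is maintained until the observer moves
    past the larger outer threshold, so small movements near a single
    boundary do not cause the display content to flicker."""
    if not is_interacting:
        return distance <= interaction_threshold   # must come close to start
    return distance <= release_threshold           # may drift farther before release
```

An observer at 3 metres keeps an existing interaction alive but cannot start a new one, which is exactly the band between the two thresholds.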
- observers 202A, 202B are observed as having trajectories moving toward the display device 204 and may have variations in both position and trajectory, while observers 202C, 202D are moving further away from the display device 204.
- Transitioning from an observer 202A to one or more additional observers 202B facing the display device 204 can be performed, for instance, based on one or more of: detecting a change in a number of observers 202A, 202B, and/or detecting a change in observer proximity/trajectory/orientation/gaze relative to the display device 204.
- the display device 204 may output an informational notice to indicate which of the observers 202A, 202B are currently being tracked for distortion correction.
- the display adjustment system 114 may interface with a mobile device 203 of one or more of the observers 202A-D to assist in determining position and/or trajectory.
- an application loaded on the mobile devices 203 that is compatible with the display adjustment system 114 can assist the observers 202A-D with wayfinding through the display device 204, for instance, through a Wi-Fi or Bluetooth link between one or more mobile devices 203 and the display adjustment system 114.
- position and movement data captured by the mobile devices 203 (e.g., through accelerometers, global positioning system data, and/or other sensors incorporated in the mobile devices 203) can be used to inform the display adjustment system 114 of observers' 202A-D trajectories.
- the observers 202A-D need not actively view the mobile devices 203 for display content; rather, the display content can be output on the display device 204, for instance, to assist in heads-up navigation through the corridor 220 with further instructions/directions provided on the display device 204 as a next turn or set of turns for the observers 202A, 202B to make.
- the position and/or movement data provided by the mobile devices 203 can assist the display adjustment system 114 in anticipating the approach of observers 202A-D prior to the observers 202A-D entering either or both of the fields-of-view 213A, 213B.
- FIG. 3 depicts a display system 300 transitioning between a user input screen 302, such as an elevator destination floor selection interface 304, and observer-specific display content 306, which can include text 308 and trajectory graphics 310.
- the user input screen 302 and observer-specific display content 306 are examples of content that can be displayed on the display device 104 of FIG. 1 .
- the display system 300 can transition from the user input screen 302 to the observer-specific display content 306 based on an observer selection 312 (e.g., a user presses a button).
- the display system 300 can transition from the observer-specific display content 306 to the user input screen 302 based on a timeout 314 or determining that a different one of a plurality of observers 102 has a greater display priority.
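The screen transitions of FIG. 3 (observer selection, timeout, and a change of display priority) can be sketched as a small state machine. The state names, event names, and the timeout value are assumptions of the sketch:

```python
# Illustrative screen-state transitions for a display system like FIG. 3.
INPUT, CONTENT, SCREEN_SAVER = "input", "content", "screen_saver"

def next_screen(state, event, elapsed, timeout=30.0):
    """Return the next screen given the current state, an event, and the
    seconds elapsed on the current screen. Events are assumed names."""
    if state == INPUT and event == "selection":
        return CONTENT                      # observer made a selection
    if state == CONTENT and (event == "priority_change" or elapsed >= timeout):
        return INPUT                        # new highest-priority observer, or timeout
    if event == "no_observers":
        return SCREEN_SAVER                 # nobody within the threshold distance
    if state == SCREEN_SAVER and event == "observer_detected":
        return INPUT
    return state
```

Note that the "priority_change" path lets a later observer reach the input screen without waiting out the full timeout, matching the behavior described earlier.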
- FIGS. 4 and 5 illustrate an example of an observer 202 relative to the display device 204 establishing a first observation angle (θ) and a second observation angle (φ).
- object positions may be initially calculated using a sensor-based coordinate system (u,v,d), where (u,v) is the horizontal and vertical position according to the sensor and d is the distance from the sensor.
- the (u,v,d) coordinates can be transformed to "world coordinates" based on a more useful frame of reference.
- a frame of reference can be defined with respect to the display device 204.
- a world coordinate system can include (x,y) coordinates as a horizontal and vertical position on the display device 204 and z as the direction facing away from the display device 204. The origin is located at the center of the display device 204.
- vectors or any other desired coordinate system can be used.
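A minimal sketch of the (u, v, d)-to-world transform described above, assuming a pinhole sensor model and a sensor mounted at a known offset from the display centre with its optical axis parallel to the display normal. All calibration values here (intrinsics and offset) are illustrative assumptions:

```python
import math

def sensor_to_world(u, v, d, fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                    sensor_offset=(0.5, 0.3, 0.0)):
    """Map sensor coordinates (u, v, d) to display-centred world coordinates
    (x, y, z), where d is the distance from the sensor to the object and
    z faces away from the display. Assumed pinhole intrinsics fx, fy, cx, cy."""
    # back-project the pixel to a viewing ray, then scale it so the point
    # lies at the measured distance d from the sensor
    ray = ((u - cx) / fx, (v - cy) / fy, 1.0)
    norm = math.sqrt(ray[0] ** 2 + ray[1] ** 2 + 1.0)
    scale = d / norm
    xs, ys, zs = (r * scale for r in ray)
    ox, oy, oz = sensor_offset
    return (xs + ox, ys + oy, zs + oz)
```

A point on the optical axis at distance 2 m maps to the sensor offset shifted 2 m out along z, as expected for this assumed mounting geometry.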
- an example of a method for calculating distortion correction is as follows.
- φ depends only on the (x, y) coordinates and not on z.
- θ = tan⁻¹(√(x² + y²) / z)
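Given the display-centred world coordinates defined above, the tilt angle θ = tan⁻¹(√(x² + y²)/z) and the azimuth φ can be computed as follows. Using atan2 instead of a direct tangent quotient is an implementation choice for numerical robustness (it handles z = 0 gracefully); the function name is illustrative:

```python
import math

def observation_angles(x, y, z):
    """Compute observation angles from display-centred world coordinates:
    theta is the tilt from the display normal, theta = arctan(sqrt(x^2 + y^2) / z),
    and phi = atan2(y, x) is the azimuth, which depends only on (x, y) and
    not on z. Both are returned in degrees."""
    theta = math.atan2(math.sqrt(x * x + y * y), z)
    phi = math.atan2(y, x)
    return math.degrees(theta), math.degrees(phi)
```

An observer directly in front of the display (x = y = 0) has θ = 0 regardless of distance, while an observer at x = z has θ = 45°.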
- FIG. 6 depicts a flow chart of a method 400 of priority-based adjustment to display content in accordance with an embodiment of the disclosure.
- the method 400 can be performed, for example, by the display adjustment system 114 of FIGS. 1 and 2 .
- the display adjustment system 114 can determine a position of a first observer 102A relative to a display device 104.
- the display adjustment system 114 can determine a position of a second observer 102B relative to the display device 104.
- the first content screen can include a first elevator car assignment for the first observer 102A
- the display adjustment system 114 can display on the display device 104 a first content screen associated with the first observer 102A based on determining that the first observer 102A has a greater display priority than the second observer 102B based at least in part on the position of the first observer 102A and the position of the second observer 102B.
- the display adjustment system 114 can display on the display device 104 a second content screen associated with the second observer 102B based on determining that the second observer 102B has a greater display priority than the first observer 102A based at least in part on the position of the first observer 102A and the position of the second observer 102B.
- displaying the second content screen associated with the second observer 102B based on determining that the second observer 102B has a greater display priority than the first observer 102A can include confirming that the first observer 102A is outside of an interaction distance threshold 205 such that it is less likely that the first observer 102A is still gazing at the display device 104.
- the greater display priority can be determined, for example, based on one or more of: a distance relative to the display device 104, an orientation relative to the display device 104, a change in position relative to the display device 104, and a gaze relative to the display device 104.
- the display adjustment system 114 can switch display content of the display device 104 to a user input screen 302 based on detecting a transition of the greater display priority from the first observer 102A to the second observer 102B.
- the display adjustment system 114 can switch display content of the display device 104 to a user input screen 302 based on reaching a timeout period after displaying the first content screen or the second content screen.
- the display adjustment system 114 can switch the display device 104 to a screen saver mode based on detecting no observers 102 within a threshold distance 207 of the display device 104 for a screen saver timeout period.
- Embodiments may distinguish the first observer 102A from the second observer 102B based on one or more of: a mobile credential, an observed physical characteristic, and facial recognition.
- the display device 104 may avoid transitioning to the user input screen 302, as an existing floor preference of an identified observer 102 may be known.
- an association of displayed content can be based on detecting some aspect of the observer 102, such as a mobile credential (e.g., mobile device 203 emitting a unique address, a radio frequency identifier tag reading a badge, etc.), by recognizing the face of the observer 102 or a physical characteristic (e.g., age, gender, emotional state, etc.) of the observer 102.
- a mobile credential e.g., mobile device 203 emitting a unique address, a radio frequency identifier tag reading a badge, etc.
- a physical characteristic e.g., age, gender, emotional state, etc.
- the display device 104 may display customized information to the observer 102 with the highest priority absent direct user input into the display device 104. For example, as people stream into a building, observers 102 can walk past the display device 104 that shows an elevator assignment, one at a time, without prompting the observers 102 to input a destination floor if the destination floor can be detected in another way (e.g., based on a default destination floor associated with a mobile credential or associated through other recognition).
- the display device 104 can provide information customized for the highest-priority observer 102 (e.g., a closest observer facing a display surface of the display device 104). As the first observer 102A moves past the display device 104 and a second observer 102B emerges as the highest priority, the display device 104 switches to show information customized for the second observer 102B.
- the highest-priority observer 102 e.g., a closest observer facing a display surface of the display device 104.
- the display adjustment system 114 may also track a position of an observer who most recently interacted with the display device 104 as a most recent observer, determine a distortion correction to apply based on the position of the most recent observer relative to the display device 104, and output the distortion correction to the display device 104.
- the display adjustment system 114 can adjust the distortion correction as the most recent observer continues to change position relative to the display device 104, where the distortion correction includes at least a rotation and rescaling operation.
- the display adjustment system 114 can determine the position of the most recent observer relative to the display device 104 based on one or more angular differences between the most recent observer and the display device 104.
- embodiments can be in the form of processor-implemented processes and devices for practicing those processes, such as a processor.
- Embodiments can also be in the form of computer program code containing instructions embodied in tangible media, such as network cloud storage, SD cards, flash drives, floppy diskettes, CD ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes a device for practicing the embodiments.
- Embodiments can also be in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes a device for practicing the embodiments.
- When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
Abstract
Description
- The subject matter disclosed herein generally relates to the field of displays, and more particularly to priority-based adjustments to display content.
- Display content can include text and graphics, for instance, to give wayfinding guidance to a person, where it is assumed that the person is directly facing a display device rather than viewing it from an oblique angle. However, in the context of a shared public display device, users often do not stop in front of a stationary monitor; they prefer to keep walking to avoid disrupting the flow of people behind them or to reduce the time lost in stopping and restarting. As one user transitions away from the display device, another user may attempt to access the display device while content associated with the previous user is still being displayed. Thus, the later user may have to wait for a timeout period to elapse, or manually request a return to an input screen, which results in an extended interaction period.
- According to one embodiment, a method of priority-based adjustment to display content is provided. The method includes determining a position of a first observer relative to a display device and determining a position of a second observer relative to the display device. The method also includes displaying, on the display device, a first content screen associated with the first observer based on determining that the first observer has a greater display priority than the second observer based at least in part on the position of the first observer and the position of the second observer. The method further includes displaying, on the display device, a second content screen associated with the second observer based on determining that the second observer has a greater display priority than the first observer based at least in part on the position of the first observer and the position of the second observer.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include where the first content screen includes a first elevator car assignment for the first observer and the second content screen includes a second elevator car assignment for the second observer.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include switching display content of the display device to a user input screen based on detecting a transition between the first observer having the greater display priority than the second observer.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include switching display content of the display device to a user input screen based on reaching a timeout period after displaying the first content screen or the second content screen.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include switching the display device to a screen saver mode based on detecting no observers within a threshold distance of the display device for a screen saver timeout period.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include tracking a position of an observer who most recently interacted with the display device as a most recent observer, determining a distortion correction to apply based on the position of the most recent observer relative to the display device, and outputting the distortion correction to the display device.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include where the greater display priority is determined based on one or more of: a distance relative to the display device, an orientation relative to the display device, a change in position relative to the display device, and a gaze relative to the display device.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include adjusting the distortion correction as the most recent observer continues to change position relative to the display device, where the distortion correction includes at least a rotation and rescaling operation.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include determining the position of the most recent observer relative to the display device based on one or more angular differences between the most recent observer and the display device.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include where the position is determined based on one or more of: a depth sensor, a radar sensor, an ultrasonic sensor, a multi-camera system, an infrared sensor, and one or more floor pressure sensors.
- In addition to one or more of the features described above or below, or as an alternative, further embodiments may include distinguishing the first observer from the second observer based on one or more of: a mobile credential, an observed physical characteristic, and facial recognition.
- According to another embodiment, a system is provided. The system includes a display device, one or more sensors operable to detect a position of an observer, and a display adjustment system operably coupled to the display device and the one or more sensors. The display adjustment system is configured to perform a plurality of operations including determining a position of a first observer relative to the display device, and determining a position of a second observer relative to the display device. The display adjustment system is configured to display, on the display device, a first content screen associated with the first observer based on determining that the first observer has a greater display priority than the second observer based at least in part on the position of the first observer and the position of the second observer. The display adjustment system is also configured to display, on the display device, a second content screen associated with the second observer based on determining that the second observer has a greater display priority than the first observer based at least in part on the position of the first observer and the position of the second observer.
- Technical effects of embodiments of the present disclosure include applying priority-based adjustments to display content on a display device to account for transitioning between observers of the display content.
- The foregoing features and elements may be combined in various combinations without exclusivity, unless expressly indicated otherwise. These features and elements as well as the operation thereof will become more apparent in light of the following description and the accompanying drawings. It should be understood, however, that the following description and drawings are intended to be illustrative and explanatory in nature and non-limiting.
- The following descriptions should not be considered limiting in any way. With reference to the accompanying drawings, like elements are numbered alike:
- FIG. 1 illustrates a general schematic system diagram of a display system, in accordance with an embodiment of the disclosure;
- FIG. 2 illustrates a general schematic system diagram of a display system, in accordance with an embodiment of the disclosure;
- FIG. 3 illustrates a general schematic system diagram of a display system transitioning between states, in accordance with an embodiment of the disclosure;
- FIG. 4 illustrates an observer relative to a display device establishing a first observation angle, in accordance with an embodiment of the disclosure;
- FIG. 5 illustrates an observer relative to a display device establishing a second observation angle, in accordance with an embodiment of the disclosure; and
- FIG. 6 is a flow diagram illustrating a method, according to an embodiment of the present disclosure.
- A detailed description of one or more embodiments of the disclosed apparatus and method is presented herein by way of exemplification and not limitation with reference to the Figures.
- As will be described below, embodiments adjust display content, such as text and graphics, with respect to an observer as the observer changes positions relative to a display device. A display adjustment system can track the position of multiple observers relative to the display device to determine which of the observers has a greater display priority. For example, once a first observer interacts with the display device, a first content screen associated with the first observer can be displayed, such as an elevator car assignment. The first observer may also be the closest observer with the shortest distance to the display device. The first content screen can remain displayed on the display device for a timeout period or until a second observer approaches the display device and has a shorter distance to the display device than the first observer, as one example of a greater display priority. In some embodiments, display priority may be based on an orientation relative to the display device, a change in position relative to the display device, and/or a gaze relative to the display device to determine which observers are actively approaching and/or viewing the display device. The display device can switch display content to a user input screen based on detecting a transition between the first observer having the greater display priority than the second observer. The second observer can then make a selection from the user input screen, resulting in displaying a second content screen associated with the second observer. Thus, the second observer does not have to wait for a full timeout period to elapse or to physically press any buttons to switch the display device from displaying the first content screen to displaying the user input screen on the display device. Alternatively, where the observers are identifiable and preferences of the observers are known, the display device can switch directly between observer-specific content without requiring direct user input.
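The transition behavior described above (content shown per observer, reverting to a user input screen on a priority handover or timeout, and a screen saver when no observer is present) can be sketched as a small state machine. The state names, timeout values, and method signatures below are illustrative assumptions only, not an implementation prescribed by the disclosure:

```python
import time

class DisplayStateMachine:
    """Sketch of the screen transitions described above (not a prescribed
    implementation; states and timeouts are illustrative assumptions)."""

    CONTENT_TIMEOUT = 10.0      # seconds before content reverts to input screen
    SCREENSAVER_TIMEOUT = 60.0  # seconds with no observers present

    def __init__(self, now=time.monotonic):
        self._now = now                        # injectable clock for testing
        self.screen = "user_input"
        self._content_since = 0.0
        self._last_observer_seen = self._now()

    def on_priority_observer(self, observer_id):
        """Called whenever the highest-priority observer is re-evaluated;
        observer_id is None when no observer is within the threshold distance."""
        t = self._now()
        if observer_id is None:
            if t - self._last_observer_seen >= self.SCREENSAVER_TIMEOUT:
                self.screen = "screen_saver"
            return
        self._last_observer_seen = t
        if self.screen == "screen_saver":
            self.screen = "user_input"
        elif self.screen.startswith("content:"):
            if self.screen != f"content:{observer_id}":
                # Priority handover: offer the input screen to the new observer
                # (content could switch directly if preferences were known).
                self.screen = "user_input"
            elif t - self._content_since >= self.CONTENT_TIMEOUT:
                self.screen = "user_input"  # timeout after showing content

    def on_selection(self, observer_id):
        """The current observer made a selection on the input screen."""
        self.screen = f"content:{observer_id}"
        self._content_since = self._now()
```

With this sketch, a handover from one observer to another returns the display to the input screen without waiting for the full content timeout, which is the behavior motivated in the paragraph above.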
- Further, in the context of wayfinding guidance displays that may display directional arrows and navigation information, observers often do not intend to stop in front of a display device but continue looking at the display device while passing by the display device. As an observer moves relative to a display device in a fixed position, the observer may have a difficult time understanding the content, as information or directional arrows can appear distorted due to the changing perspective of the observer. Embodiments use one or more sensors to track the position (e.g., location and/or orientation) and trajectory of an observer of the display device, which may include tracking a portion of the observer, such as the head or gaze of the observer. By determining a position of an observer relative to the display device and a trajectory of the observer relative to the display device, a distortion correction can be applied to display content to correct the display content with respect to the observer while having the greater/greatest display priority.
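As a rough illustration of such a distortion correction, the sketch below computes the two observation angles discussed later in this disclosure and applies a rotation-plus-rescaling to content coordinates. The exact formulas are not reproduced in this text, so the formulation (and the function names) below is an assumption consistent with the stated coordinate conventions:

```python
import math

def observation_angles(x: float, y: float, z: float) -> tuple[float, float]:
    """Angles for an observer at world coordinates (x, y, z), where the
    origin is the display center and z points out of the display surface.

    Returns (theta, phi): theta is the angular position of the observer in
    the display plane, and phi is the obliqueness to the display surface
    (phi = 0 when directly in front). Assumed formulation, in radians.
    """
    theta = math.atan2(y, x)
    phi = math.acos(z / math.sqrt(x * x + y * y + z * z))
    return theta, phi

def correct_point(px: float, py: float, theta: float, phi: float) -> tuple[float, float]:
    """Map a content point (px, py) to screen coordinates: rotate to counter
    the observer's angular offset, then stretch the foreshortened axis."""
    rx = px * math.cos(theta) + py * math.sin(theta)    # rotation by -theta
    ry = -px * math.sin(theta) + py * math.cos(theta)
    return rx / math.cos(phi), ry                       # rescale by 1/cos(phi)

# An observer directly in front of the display (x = y = 0) needs no correction.
theta, phi = observation_angles(0.0, 0.0, 2.0)
```

Applying `correct_point` to each vertex of the displayed text or arrows pre-distorts the content so that it appears undistorted from the tracked observer's perspective, updating as the observer's position changes.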
- With reference to
FIG. 1, a system 100 for priority-based adjustment to display content is illustrated, in accordance with an embodiment of the present disclosure. As seen in FIG. 1, an observer 102A can view a display device 104 (e.g., a computer monitor) while transitioning between a location 106 proximate to the display device 104 and a desired end-point location 108. The display device 104 can be a kiosk 110 in a substantially fixed position that is viewable by a first observer 102A while transitioning across the location 106. In some embodiments, display content to be displayed on the display device 104 includes directions in text and/or graphic form to assist the first observer 102A in navigating through a building. One or more sensors 112 can be used to determine a position of the first observer 102A, where the position includes a location and/or orientation of the first observer 102A relative to the display device 104. The one or more sensors 112 can also determine a position of one or more other observers, such as a second observer 102B. For instance, the one or more sensors 112 can include one or more of: a depth sensor, a radar sensor, an ultrasonic sensor, a multi-camera system (e.g., a stereoscopic camera system), an infrared sensor, floor pressure sensors, and the like, configured to capture at least one feature of the first observer 102A and/or second observer 102B, such as a head shot and/or body shot of the first observer 102A and/or second observer 102B. - The example of
FIG. 1 is an elevator lobby 120 that includes multiple elevator access points 122 with elevator doors for the observers 102 to travel between the location 106 proximate to the display device 104 and the desired end-point location 108. Objects in the elevator lobby 120 can constrain the paths of the observers 102. Given that the desired end-point location 108 is one of the elevator access points 122 in FIG. 1, the objects constrain the likely paths of the observers 102, for instance, such that the observers 102 are most likely to pass between the objects. The one or more sensors 112 can be used to determine a position and a trajectory 130 of the observers 102 by monitoring movement over a period of time. - Further examples of the
sensors 112 can include one or more of a 2D red, green, blue (RGB) video camera and/or a depth sensor providing three-dimensional (3D) information that includes the distance and angle between the object and the depth sensor. Various 3D depth sensing technologies and devices that can be used include, but are not limited to, a structured light measurement, phase shift measurement, time of flight measurement, stereo triangulation device, sheet of light triangulation device, light field cameras, coded aperture cameras, computational imaging techniques, simultaneous localization and mapping (SLAM), imaging radar, imaging sonar, echolocation, laser radar, scanning light detection and ranging (LIDAR), flash LIDAR, or a combination including at least one of the foregoing. Different technologies can include active (transmitting and receiving a signal) or passive (only receiving a signal) sensing and may operate in a band of the electromagnetic or acoustic spectrum such as visual, infrared, ultrasonic, etc. In another embodiment, a floor pressure mat can detect the observer's position on the floor, which can be used to, for example, compute the distance and orientation of the observer relative to the location of the display device 104. In various embodiments, a 3D depth sensor may be operable to produce 3D information from defocus, a focal stack of images, or structure from motion. Similarly, a plurality of 2D depth sensors can be used to provide two-dimensional information that includes the distance between the object and the depth sensor. - A
display adjustment system 114 can be incorporated into or operably coupled to the display device 104 and/or the one or more sensors 112 in a local, networked, or distributed configuration. The display adjustment system 114 can include a processing system 115, a memory system 116, and a device interface 118. The processing system 115 may be but is not limited to a single-processor or multiprocessor system of any of a wide array of possible architectures, including field programmable gate array (FPGA), central processing unit (CPU), application specific integrated circuit (ASIC), digital signal processor (DSP), or graphics processing unit (GPU) hardware arranged homogenously or heterogeneously. The memory system 116 may be a storage device such as, for example, a random access memory (RAM), read only memory (ROM), or other electronic, optical, magnetic, or any other computer readable storage medium. The memory system 116 can include computer-executable instructions that, when executed by the processing system 115, cause the processing system 115 to perform operations as further described herein. The device interface 118 can include wired, wireless, and/or optical communication links to the display device 104 and the one or more sensors 112. Although only a single instance of the display adjustment system 114 is depicted in FIG. 1, it will be understood that there can be multiple instances of the display adjustment system 114 to support multiple instances of the display device 104 and the one or more sensors 112, or a single instance of the display adjustment system 114 can support multiple instances of the display device 104 and the one or more sensors 112. - The
display adjustment system 114 is operable to capture data from the one or more sensors 112 and perform facial and/or body orientation detection. The facial and/or body orientation detection may use known image processing techniques to identify features of the observers 102. For instance, facial recognition performed by the display adjustment system 114 can track the relative geometry of facial features such as eyes, mouth, nose, forehead, chin, and other features to both estimate the position (including orientation and/or gaze) of the observers 102 and also distinguish between multiple observers. The display adjustment system 114 is also operable to apply a distortion correction to display content based on the position and trajectory of the observers 102 relative to the display device 104 to correct the display content with respect to the observers 102. For example, as the first observer 102A approaches and passes location 106, the angle of a line-of-sight of the first observer 102A changes relative to a display surface of the display device 104. - As depicted in the example of
FIG. 1, at a point in time, the first observer 102A can be at a position with a distance D1 from the display device 104 while the second observer 102B can be at a position with a distance D2 from the display device 104. While distance D1 is less than distance D2, a first content screen associated with the first observer 102A can be displayed, such as directions to navigate to an assigned elevator access point 122, as one example of having a greater display priority. As the first observer 102A moves away from the display device 104 and the second observer 102B moves closer to the display device 104 such that distance D2 becomes less than distance D1, the display device 104 can switch from displaying the first content screen associated with the first observer 102A to accepting input from the second observer 102B and displaying a second content screen associated with the second observer 102B. Similar examples of transitions can apply for changes in position, orientation, and/or gaze to determine whether the first observer 102A or the second observer 102B has the greater display priority. As another example, visibility of the display device 104 can impact priority. For example, if a display surface of the display device 104 is vertically oriented, an observer 102 who is behind the display device 104 may receive zero priority, even if the observer 102 is very close to the display device 104 and gazing at the back of the display device 104. -
FIG. 2 depicts a configuration of system 200 as a variation of the system 100 of FIG. 1 for priority-based adjustment to display content. In the example of FIG. 2, the display adjustment system 114 interfaces to a display device 204 that is an embodiment of the display device 104 mounted in a corridor 220. Sensors 212A, 212B can be embodiments of the one or more sensors 112 of FIG. 1 mounted at different positions relative to the display device 204. For example, the sensors 212A, 212B can be positioned such that a field-of-view 213A of sensor 212A overlaps with a field-of-view 213B of sensor 212B. The overlapping of the fields-of-view 213A, 213B can assist in determining where observers 202 are most likely viewing the display device 204. For example, observers 202A, 202B may be close enough to the display device 204 to interact with the display device 204 (within an interaction distance threshold 205), but observers 202C, 202D may transition to positions beyond a threshold distance 207 and be too far from the display device 204 to read the display content. The thresholds 205, 207 can be used to detect when observers 202 transition away from the display device 204. Alternatively, only a single threshold 205 may be used. In the example of FIG. 2, observers 202A, 202B are closer to the display device 204 and may have variations in both position and trajectory, while observers 202C, 202D are moving further away from the display device 204. - Transitioning from an
observer 202A to at least one or more additional observers 202B facing the display device 204 can be performed, for instance, based on detecting a change in a number of observers 202 proximate to the display device 204. The display device 204 may output an informational notice to indicate which of the observers 202 has the greater display priority. - In some embodiments, the
display adjustment system 114 may interface with a mobile device 203 of one or more of the observers 202A-D to assist in determining position and/or trajectory. For example, an application loaded on the mobile devices 203 that is compatible with the display adjustment system 114 can assist the observers 202A-D with wayfinding through the display device 204, for instance, through a Wi-Fi or Bluetooth link between one or more mobile devices 203 and the display adjustment system 114. For instance, position and movement data captured by the mobile devices 203, e.g., through accelerometers, global positioning system data, and/or other sensors incorporated in the mobile devices 203, can be used to inform the display adjustment system 114 of the observers' 202A-D trajectories. The observers 202A-D need not actively view the mobile devices 203 for display content; rather, the display content can be output on the display device 204, for instance, to assist in heads-up navigation through the corridor 220, with further instructions/directions provided on the display device 204 as a next turn or set of turns for the observers 202A-D. The mobile devices 203 can assist the display adjustment system 114 in anticipating the approach of observers 202A-D prior to the observers 202A-D entering either or both of the fields-of-view 213A, 213B. -
FIG. 3 depicts a display system 300 transitioning between a user input screen 302, such as an elevator destination floor selection interface 304, and observer-specific display content 306, which can include text 308 and trajectory graphics 310. The user input screen 302 and observer-specific display content 306 are examples of content that can be displayed on the display device 104 of FIG. 1. The display system 300 can transition from the user input screen 302 to the observer-specific display content 306 based on an observer selection 312 (e.g., a user presses a button). The display system 300 can transition from the observer-specific display content 306 to the user input screen 302 based on a timeout 314 or determining that a different one of a plurality of observers 102 has a greater display priority. -
FIGS. 4 and 5 illustrate an example of an observer 202 relative to the display device 204 establishing a first observation angle (θ) and a second observation angle (φ). In embodiments using depth sensors, object positions may be initially calculated using a sensor-based coordinate system (u,v,d), where (u,v) is the horizontal and vertical position according to the sensor and d is the distance from the sensor. The (u,v,d) coordinates can be transformed to "world coordinates" based on a more useful frame of reference. For instance, a frame of reference can be defined with respect to the display device 204. Accordingly, a world coordinate system can include (x,y) coordinates as a horizontal and vertical position on the display device 204 and z as the direction facing away from the display device 204. The origin is located at the center of the display device 204. Alternatively, vectors or any other desired coordinate system can be used. - Any
observer 202 that can see the front of the display device 204 is at a position where z > 0. Furthermore, if z > 0 and x = 0 and y = 0, then the observer 202 is directly in front of the display device 204 and no correction is needed. In the examples of FIGS. 4 and 5, z > 0, x < 0 (to the left side of the display device 204), and y < 0 (positioned closer to the bottom of the display device 204 than the top of the display device 204). - Accordingly, an example of a method for calculating distortion correction is as follows. There are two primary angles of interest: the first observation angle θ, which is the angle of rotation with respect to the (x,y) orientation of the
display device 204, and the second observation angle φ, which is the obliqueness to the surface of the display device 204. Note that in the example of FIG. 4, θ depends only on the (x,y) coordinates and not on z. The computation of θ can be as follows:
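The equations themselves are not reproduced in this text. A formulation consistent with the surrounding definitions (a sketch under the stated coordinate conventions, not necessarily the original equations) would be:

```latex
\theta = \operatorname{atan2}(y,\, x), \qquad
\varphi = \arccos\!\left(\frac{z}{\sqrt{x^{2} + y^{2} + z^{2}}}\right)
% Screen-coordinate transform referenced later (rotation by -\theta,
% then rescaling by 1/\cos\varphi along the foreshortened axis):
p = \frac{x\cos\theta + y\sin\theta}{\cos\varphi}, \qquad
q = -x\sin\theta + y\cos\theta
```

With these definitions, an observer directly in front of the display gives φ = 0 (and θ = 0 at x = y = 0), so the (p,q) transform reduces to the identity, matching the statements in the surrounding text.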
- Once again for z > 0, when φ = 0, the
observer 202 is directly in front of thedisplay device 204; when φ is closer to 90°, the angle is quite oblique. - Given θ and φ, visual content can be distorted to correct for the perspective of the
observer 202 so that the display content appears undistorted to theobserver 202. Consider that the position of the displayed content is represented in the (x,y) coordinates. For example, the letter "E" may include 4 line segments with the (x,y) coordinates. To transform the content defined by (x,y) into screen coordinates (p,q), an example transform is as follows: - Referring now to
FIG. 6 with continued reference toFIGS. 1-5 ,FIG. 6 depicts a flow chart of amethod 400 of priority-based adjustment to display content in accordance with an embodiment of the disclosure. Themethod 400 can be performed, for example, by thedisplay adjustment system 114 ofFIGS. 1 and2 . - At
block 402, the display adjustment system 114 can determine a position of a first observer 102A relative to a display device 104. At block 404, the display adjustment system 114 can determine a position of a second observer 102B relative to the display device 104. The first content screen can include a first elevator car assignment for the first observer 102A, and the second content screen can include a second elevator car assignment for the second observer 102B. Determining the position of the observers 102 relative to the display device 104 can include determining an angular difference between the observers 102 and a display surface of the display device 104. - At
block 406, the display adjustment system 114 can display on the display device 104 a first content screen associated with the first observer 102A based on determining that the first observer 102A has a greater display priority than the second observer 102B based at least in part on the position of the first observer 102A and the position of the second observer 102B. At block 408, the display adjustment system 114 can display on the display device 104 a second content screen associated with the second observer 102B based on determining that the second observer 102B has a greater display priority than the first observer 102A based at least in part on the position of the first observer 102A and the position of the second observer 102B. To limit the impact of interruptions and movement of multiple people in a group, displaying the second content screen associated with the second observer 102B based on determining that the second observer 102B has a greater display priority than the first observer 102A can include confirming that the first observer 102A is outside of an interaction distance threshold 205 such that it is less likely that the first observer 102A is still gazing at the display device 104. The greater display priority can be determined, for example, based on one or more of: a distance relative to the display device 104, an orientation relative to the display device 104, a change in position relative to the display device 104, and a gaze relative to the display device 104. - The
display adjustment system 114 can switch display content of thedisplay device 104 to auser input screen 302 based on detecting a transition between thefirst observer 102A having the greater display priority than thesecond observer 102B. Thedisplay adjustment system 114 can switch display content of thedisplay device 104 to auser input screen 302 based on reaching a timeout period after displaying the first content screen or the second content screen. Thedisplay adjustment system 114 can switch thedisplay device 104 to a screen saver mode based on detecting noobservers 102 within athreshold distance 207 ofdisplay device 104 for a screen saver timeout period. - Embodiments may distinguish the
first observer 102A from thesecond observer 102B based on one or more of: a mobile credential, an observed physical characteristic, and facial recognition. Where theobservers 102 are specifically identified, thedisplay device 104 may avoid transitioning to theuser input screen 302, as an existing floor preference of an identifiedobserver 102 may be known. For instance, an association of displayed content can be based on detecting some aspect of theobserver 102, such as a mobile credential (e.g.,mobile device 203 emitting a unique address, a radio frequency identifier tag reading a badge, etc.), by recognizing the face of theobserver 102 or a physical characteristic (e.g., age, gender, emotional state, etc.) of theobserver 102. In such a case, there is only the second screen (e.g., the elevator assignment or wayfinding info) thedisplay device 104 may display customized information to theobserver 102 with the highest priority absent direct user input into thedisplay device 104. For example, as people stream into a building,observers 102 can walk past thedisplay device 104 that shows an elevator assignment, one at a time, without prompting theobservers 102 to input a destination floor if the destination floor can be detected in another way (e.g., based on a default destination floor associated with a mobile credential or associated through other recognition). Therefore, as people walk past thedisplay device 104, thedisplay device 104 can provide information customized for the highest-priority observer 102 (e.g., a closest observer facing a display surface of the display device 104). As thefirst observer 102A moves past thedisplay device 104 and asecond observer 102B emerges as the highest priority, thedisplay device 104 switches to show information customized for thesecond observer 102B. - The
display adjustment system 114 may also track a position of an observer who most recently interacted with thedisplay device 104 as a most recent observer, determine a distortion correction to apply based on the position of the most recent observer relative to thedisplay device 104, and output the distortion correction to thedisplay device 104. Thedisplay adjustment system 114 can adjust the distortion correction as the most recent observer continues to change position relative to thedisplay device 104, where the distortion correction includes at least a rotation and rescaling operation. Thedisplay adjustment system 114 can determine the position of the most recent observer relative to thedisplay device 104 based on one or more angular differences between the most recent observer and thedisplay device 104. - While the above description has described the flow process of
FIG. 6 in a particular order, it should be appreciated that unless otherwise specifically required in the attached claims that the ordering of the steps may be varied. - As described above, embodiments can be in the form of processor-implemented processes and devices for practicing those processes, such as a processor. Embodiments can also be in the form of computer program code containing instructions embodied in tangible media, such as network cloud storage, SD cards, flash drives, floppy diskettes, CD ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes a device for practicing the embodiments. Embodiments can also be in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into an executed by a computer, the computer becomes an device for practicing the embodiments. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
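By way of illustration only, the angular-difference determination described for blocks 402 and 404 can be sketched as follows. This is a minimal sketch, not part of the claimed subject matter; it assumes observer and display positions in a 2D floor plane and a display surface normal given as a compass-style angle, all of which are assumptions for the example.

```python
import math

def angular_difference(observer_xy, display_xy, display_normal_deg):
    """Angle (degrees) between the display surface normal and the line
    from the display to the observer, in a 2D floor plane."""
    dx = observer_xy[0] - display_xy[0]
    dy = observer_xy[1] - display_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))  # direction display -> observer
    # Wrap the difference into [-180, 180) before taking the magnitude.
    diff = (bearing - display_normal_deg + 180.0) % 360.0 - 180.0
    return abs(diff)

# An observer standing squarely in front of a display whose normal points
# along +x has zero angular difference; one standing beside it has ~90.
assert angular_difference((3.0, 0.0), (0.0, 0.0), 0.0) < 1e-9
assert abs(angular_difference((0.0, 2.0), (0.0, 0.0), 0.0) - 90.0) < 1e-9
```

A real deployment would derive the observer position from the sensors 106 (e.g., a depth sensor or multi-camera system) rather than taking coordinates as arguments.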
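The greater-display-priority determination of blocks 406 and 408 combines distance, orientation, change in position, and gaze. One illustrative way to combine these factors is a weighted score; the weights, field names, and scoring function below are assumptions for the sketch, not values taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Observer:
    distance_m: float        # distance relative to the display
    facing_angle_deg: float  # orientation: 0 = squarely facing the display surface
    approaching: bool        # change in position: moving toward the display
    gazing: bool             # gaze detected on the display

def display_priority(obs: Observer) -> float:
    """Higher score = greater display priority. Weights are illustrative."""
    score = max(0.0, 10.0 - obs.distance_m)                    # closer is better
    score += max(0.0, (90.0 - obs.facing_angle_deg) / 90.0) * 5.0  # facing is better
    score += 2.0 if obs.approaching else 0.0
    score += 4.0 if obs.gazing else 0.0
    return score

first = Observer(distance_m=1.5, facing_angle_deg=10.0, approaching=True, gazing=True)
second = Observer(distance_m=6.0, facing_angle_deg=60.0, approaching=False, gazing=False)
assert display_priority(first) > display_priority(second)
```

The observer with the highest score would then have the associated content screen displayed, consistent with blocks 406 and 408.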
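The screen-switching behavior (user input screen on a priority transition or timeout, screen saver when no observers remain within the threshold distance) can be viewed as a small state machine. The sketch below is illustrative only: the timeout values, threshold distance, and the closest-observer-wins priority rule are assumptions, and an injectable clock is used so the logic is testable.

```python
import time

USER_INPUT, CONTENT, SCREEN_SAVER = "user_input", "content", "screen_saver"

class DisplayStateMachine:
    """Illustrative screen-switching logic; thresholds and timeouts are
    assumptions, not values from this disclosure."""

    def __init__(self, content_timeout=10.0, saver_timeout=30.0,
                 threshold_distance=5.0, now=time.monotonic):
        self.now = now
        self.content_timeout = content_timeout
        self.saver_timeout = saver_timeout
        self.threshold_distance = threshold_distance
        self.state = USER_INPUT
        self.current_observer = None
        self.content_since = None
        self.last_seen = now()

    def update(self, observers):
        """observers: dict of observer id -> distance; for this sketch the
        closest observer within the threshold has the greatest priority."""
        t = self.now()
        near = {k: d for k, d in observers.items() if d <= self.threshold_distance}
        if near:
            self.last_seen = t
            top = min(near, key=near.get)
            if top != self.current_observer:
                # Transition in display priority: show the new observer's content.
                self.current_observer = top
                self.state = CONTENT
                self.content_since = t
            elif self.state == CONTENT and t - self.content_since >= self.content_timeout:
                self.state = USER_INPUT  # timeout after showing a content screen
        elif t - self.last_seen >= self.saver_timeout:
            self.state = SCREEN_SAVER  # no observers within the threshold distance
        return self.state
```

For example, driving the machine with a fake clock shows a content screen on arrival, a fallback to the user input screen after the content timeout, and the screen saver after the area stays empty past the saver timeout.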
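Where an observer is identified (e.g., by a mobile credential), the user input screen can be skipped in favor of a customized content screen. A minimal sketch of that lookup follows; the credential identifiers, the `DEFAULT_FLOORS` table, and the `assign_car` callback are all hypothetical names introduced for illustration.

```python
# Hypothetical credential-to-destination lookup (illustrative data only).
DEFAULT_FLOORS = {
    "badge-1001": 7,       # e.g., read from an RFID badge
    "phone-aa:bb:cc": 12,  # e.g., a mobile device emitting a unique address
}

def content_for(observer_id: str, assign_car) -> str:
    """Pick a content screen for an identified observer, or fall back to
    the user input screen when no default floor is known."""
    floor = DEFAULT_FLOORS.get(observer_id)
    if floor is None:
        return "user input screen"  # unknown observer: prompt for a destination
    car = assign_car(floor)         # elevator dispatching is out of scope here
    return f"Car {car} to floor {floor}"

assert content_for("badge-1001", assign_car=lambda floor: "B") == "Car B to floor 7"
assert content_for("visitor-999", assign_car=lambda floor: "B") == "user input screen"
```

This mirrors the described flow in which people walk past the display one at a time and each sees an elevator assignment without entering a destination floor.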
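The distortion correction for the most recent observer includes at least a rotation and rescaling operation driven by angular differences. A simplified planar sketch is shown below: it counters horizontal foreshortening with a rescale of 1/cos(view angle) and counters head roll with a rotation. The clamping limit and the single-axis model are assumptions; a real system would typically compute a full homography from the observer's 3D position.

```python
import math

def distortion_correction(view_angle_deg: float, roll_deg: float = 0.0):
    """Return a 2x2 transform combining a rescale (to counter foreshortening
    along the horizontal axis) with a rotation (to counter head roll)."""
    clamped = max(-85.0, min(85.0, view_angle_deg))  # avoid grazing angles
    stretch = 1.0 / math.cos(math.radians(clamped))  # rescaling factor
    r = math.radians(-roll_deg)                      # counter-rotation
    cos_r, sin_r = math.cos(r), math.sin(r)
    # M = R(-roll) @ S(stretch, 1): scale first, then rotate.
    return [[cos_r * stretch, -sin_r],
            [sin_r * stretch,  cos_r]]
```

For a head-on viewer the correction reduces to the identity transform, and at a 60-degree viewing angle the horizontal axis is stretched by a factor of about 2, consistent with adjusting the correction as the observer changes position.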
- The term "about" is intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, "about" can include a range of ± 8% or 5%, or 2% of a given value.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, element components, and/or groups thereof.
- While the present disclosure has been described with reference to an exemplary embodiment or embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this present disclosure, but that the present disclosure will include all embodiments falling within the scope of the claims.
Claims (15)
- A method of priority-based adjustment to display content, the method comprising:
determining a position of a first observer relative to a display device;
determining a position of a second observer relative to the display device;
displaying, on the display device, a first content screen associated with the first observer based on determining that the first observer has a greater display priority than the second observer based at least in part on the position of the first observer and the position of the second observer; and
displaying, on the display device, a second content screen associated with the second observer based on determining that the second observer has a greater display priority than the first observer based at least in part on the position of the first observer and the position of the second observer.
- The method of claim 1, wherein the first content screen comprises a first elevator car assignment for the first observer and the second content screen comprises a second elevator car assignment for the second observer.
- The method of claim 1 or 2, further comprising:
switching display content of the display device to a user input screen based on detecting a transition between the first observer having the greater display priority than the second observer.
- The method of any of claims 1 to 3, further comprising:
switching display content of the display device to a user input screen based on reaching a timeout period after displaying the first content screen or the second content screen.
- The method of any preceding claim, further comprising:
switching the display device to a screen saver mode based on detecting no observers within a threshold distance of the display device for a screen saver timeout period.
- The method of any preceding claim, further comprising:
tracking a position of an observer who most recently interacted with the display device as a most recent observer;
determining a distortion correction to apply based on the position of the most recent observer relative to the display device; and
outputting the distortion correction to the display device.
- The method of claim 6, further comprising:
determining the position of the most recent observer relative to the display device based on one or more angular differences between the most recent observer and the display device; and
adjusting the distortion correction as the most recent observer continues to change position relative to the display device, wherein the distortion correction comprises at least a rotation and rescaling operation,
wherein the position is preferably determined based on one or more of: a depth sensor, a radar sensor, an ultrasonic sensor, a multi-camera system, an infrared sensor, and one or more floor pressure sensors.
- The method of any preceding claim, further comprising:
distinguishing the first observer from the second observer based on one or more of: a mobile credential, an observed physical characteristic, and facial recognition.
- The method of any preceding claim, wherein the greater display priority is determined based on one or more of: a distance relative to the display device, an orientation relative to the display device, a change in position relative to the display device, and a gaze relative to the display device.
- A system, comprising:
a display device;
one or more sensors operable to detect a position of an observer; and
a display adjustment system operably coupled to the display device and the one or more sensors, the display adjustment system configured to perform a plurality of operations comprising:
determining a position of a first observer relative to the display device;
determining a position of a second observer relative to the display device;
displaying, on the display device, a first content screen associated with the first observer based on determining that the first observer has a greater display priority than the second observer based at least in part on the position of the first observer and the position of the second observer; and
displaying, on the display device, a second content screen associated with the second observer based on determining that the second observer has a greater display priority than the first observer based at least in part on the position of the first observer and the position of the second observer.
- The system of claim 10, wherein the first content screen comprises a first elevator car assignment for the first observer and the second content screen comprises a second elevator car assignment for the second observer.
- The system of claim 10 or 11, wherein the system is further configured to perform one or more of the following operations:
switching display content of the display device to a user input screen based on detecting a transition between the first observer having the greater display priority than the second observer,
switching display content of the display device to a user input screen based on reaching a timeout period after displaying the first content screen or the second content screen, and
switching the display device to a screen saver mode based on detecting no observers within a threshold distance of the display device for a screen saver timeout period.
- The system of any of claims 10 to 12, wherein the system is further configured to perform the operations comprising:
tracking a position of an observer who most recently interacted with the display device as a most recent observer;
determining a distortion correction to apply based on the position of the most recent observer relative to the display device; and
outputting the distortion correction to the display device.
- The system of claim 13, wherein the system is further configured to perform the operations comprising:
determining the position of the most recent observer relative to the display device based on one or more angular differences between the most recent observer and the display device;
adjusting the distortion correction as the most recent observer continues to change position relative to the display device, wherein the distortion correction comprises at least a rotation and rescaling operation,
wherein the position is preferably determined based on one or more of: a depth sensor, a radar sensor, an ultrasonic sensor, a multi-camera system, an infrared sensor, and one or more floor pressure sensors.
- The system of any of claims 10 to 14, wherein the system is further configured to distinguish the first observer from the second observer based on one or more of: a mobile credential, an observed physical characteristic, and facial recognition, and/or,
wherein the greater display priority is determined based on one or more of: a distance relative to the display device, an orientation relative to the display device, a change in position relative to the display device, and a gaze relative to the display device.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/171,396 US20200133385A1 (en) | 2018-10-26 | 2018-10-26 | Priority-based adjustment to display content |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3667456A2 true EP3667456A2 (en) | 2020-06-17 |
EP3667456A3 EP3667456A3 (en) | 2020-09-09 |
Family
ID=68387138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19204793.4A Withdrawn EP3667456A3 (en) | 2018-10-26 | 2019-10-23 | Priority-based adjustment to display content |
Country Status (3)
Country | Link |
---|---|
US (1) | US20200133385A1 (en) |
EP (1) | EP3667456A3 (en) |
CN (1) | CN111103972A (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11249637B1 (en) * | 2020-03-11 | 2022-02-15 | Meta Platforms, Inc. | User interface information enhancement based on user distance |
JP2023180633A (en) * | 2022-06-10 | 2023-12-21 | 東芝エレベータ株式会社 | signage monitor device |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020046100A1 (en) * | 2000-04-18 | 2002-04-18 | Naoto Kinjo | Image display method |
CN102163081A (en) * | 2011-05-13 | 2011-08-24 | 北京新岸线软件科技有限公司 | Method, electronic equipment and device for screen interaction |
US8723796B2 (en) * | 2012-02-02 | 2014-05-13 | Kodak Alaris Inc. | Multi-user interactive display system |
EP2677458B1 (en) * | 2012-06-22 | 2014-10-15 | Sick Ag | Optoelectronic device and method for adjusting brightness |
KR102158208B1 (en) * | 2013-07-26 | 2020-10-23 | 엘지전자 주식회사 | Electronic device and control method for the same |
CN104469256B (en) * | 2013-09-22 | 2019-04-23 | 思科技术公司 | Immersion and interactive video conference room environment |
WO2015069503A2 (en) * | 2013-11-08 | 2015-05-14 | Siemens Healthcare Diagnostics Inc. | Proximity aware content switching user interface |
CN106339189B (en) * | 2015-11-16 | 2020-07-10 | 北京智谷睿拓技术服务有限公司 | Content loading method and device based on curved display screen and user equipment |
US10558264B1 (en) * | 2016-12-21 | 2020-02-11 | X Development Llc | Multi-view display with viewer detection |
US10514546B2 (en) * | 2017-03-27 | 2019-12-24 | Avegant Corp. | Steerable high-resolution display |
- 2018
  - 2018-10-26 US US16/171,396 patent/US20200133385A1/en not_active Abandoned
- 2019
  - 2019-10-23 EP EP19204793.4A patent/EP3667456A3/en not_active Withdrawn
  - 2019-10-25 CN CN201911022297.0A patent/CN111103972A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN111103972A (en) | 2020-05-05 |
US20200133385A1 (en) | 2020-04-30 |
EP3667456A3 (en) | 2020-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20160117864A1 (en) | Recalibration of a flexible mixed reality device | |
US8730164B2 (en) | Gesture recognition apparatus and method of gesture recognition | |
KR102580476B1 (en) | Method and device for calculating the occluded area within the vehicle's surrounding environment | |
US10124729B2 (en) | Method and apparatus for providing driving information | |
EP2660686A1 (en) | Gesture operation input system and gesture operation input method | |
EP2058762A2 (en) | Method and apparatus for generating bird's-eye image | |
US20120026088A1 (en) | Handheld device with projected user interface and interactive image | |
EP3435346B1 (en) | Spectacle-type wearable terminal, and control method and control program for same | |
CN110431378B (en) | Position signaling relative to autonomous vehicles and passengers | |
US10634918B2 (en) | Internal edge verification | |
EP3667456A2 (en) | Priority-based adjustment to display content | |
JP2013164737A (en) | Obstacle alarm display system, information processor, obstacle alarm display method, and obstacle alarm display program | |
EP3822923A1 (en) | Maintenance assistance system, maintenance assistance method, program, method for generating processed image, and processed image | |
US9031772B2 (en) | Assistance system and generation method of dynamic driving information | |
US20180268225A1 (en) | Processing apparatus and processing system | |
US11164390B2 (en) | Method and apparatus for displaying virtual reality image | |
JP6609588B2 (en) | Autonomous mobility system and autonomous mobility control method | |
JP2016092693A (en) | Imaging apparatus, imaging apparatus control method, and program | |
US11423545B2 (en) | Image processing apparatus and mobile robot including same | |
JP2012022646A (en) | Visual line direction detector, visual line direction detection method and safe driving evaluation system | |
CN110502170B (en) | Position-based adjustment of display content | |
US11766779B2 (en) | Mobile robot for recognizing queue and operating method of mobile robot | |
US20180122145A1 (en) | Display apparatus, display system, and control method for display apparatus | |
CN113467731A (en) | Display system, information processing apparatus, and display control method for display system | |
JP2010205030A (en) | Method, system and program for specifying related information display position |
Legal Events
Code | Title | Description |
---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
AX | Request for extension of the european patent | Extension state: BA ME |
PUAL | Search report despatched | Free format text: ORIGINAL CODE: 0009013 |
AK | Designated contracting states | Kind code of ref document: A3; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
AX | Request for extension of the european patent | Extension state: BA ME |
RIC1 | Information provided on ipc code assigned before grant | Ipc: G06F 1/3231 20190101AFI20200805BHEP; Ipc: B66B 3/00 20060101ALI20200805BHEP; Ipc: G06F 3/01 20060101ALI20200805BHEP |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
18D | Application deemed to be withdrawn | Effective date: 20210310 |