FI20185600A1 - Public display device management - Google Patents

Public display device management

Info

Publication number
FI20185600A1
Authority
FI
Finland
Prior art keywords
user
display device
basis
public display
position information
Prior art date
Application number
FI20185600A
Other languages
Finnish (fi)
Swedish (sv)
Inventor
Timo Rantanen
Sampo Pihl
Original Assignee
Genera Oy
Priority date
Filing date
Publication date
Application filed by Genera Oy filed Critical Genera Oy
Priority to FI20185600A priority Critical patent/FI20185600A1/en
Priority to PCT/FI2019/050489 priority patent/WO2020002767A1/en
Publication of FI20185600A1 publication Critical patent/FI20185600A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0255Targeted advertisements based on user history
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0261Targeted advertisements based on user location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41415Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/418External card to be used in combination with the client device, e.g. for conditional access
    • H04N21/4182External card to be used in combination with the client device, e.g. for conditional access for identification purposes, e.g. storing user identification data, preferences, personal settings or data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/023Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds

Abstract

According to an example aspect of the present invention, there is provided a method, comprising specifying, by applying a location-based user knowledge model, a user interface for the first public display device on the basis of received user identification information and at least one of user position information and display position information; receiving user reaction input indicative of user attitude to the user interface and collected by one or more sensors associated with the first public display device; processing the user reaction input, and updating the location-based user knowledge model on the basis of processing of the user reaction input.

Description

PUBLIC DISPLAY DEVICE MANAGEMENT
20185600 prh 29 -06- 2018
FIELD

[001] The present invention relates to management of public display devices.
BACKGROUND

[002] As the use of digital public display devices, such as digital flat screens and digital billboards, in public areas for providing information and advertisements is constantly increasing, it is of interest to monitor the audience in the vicinity of such public display installations, so as to better adapt the advertising spots displayed on said installations according to the results of said monitoring.
[003] EP2055101 discloses automated media content adaptation and delivery. The method comprises obtaining information including audience information indicative of characteristics of the audience present at the display station; offering at least a portion of the information to a plurality of potential content providers, thereby to assist each potential content provider in estimating audience receptiveness to the respective potential content provider's display content; accepting auction bids for display time from the potential content providers; and authorizing a highest bidder's content display. Data has value of its own; however, the value is relative. There is a need for improved systems capable of learning and handling increasing amounts of data, and for various options to control or manage inputs related to the data displayed. The larger the potential audience is, the more potential there is for generating surplus by such systems.
SUMMARY

[004] According to some aspects, there is provided the subject matter of the independent claims. Some embodiments are defined in the dependent claims.
[005] According to a first aspect, there is provided an apparatus, comprising: at least one processor; and at least one memory including computer program code, wherein
the at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to: receive user position information based on a signal from at least one user device, receive display position information of at least a first public display device, confirm proximity of the user to the first public display device on the basis of at least the user position information and the display position information, specify, by a user interface adaptation module applying a location-based user knowledge model, a user interface for the first public display device on the basis of received user identification information and at least one of the user position information and the display position information; receive user reaction input indicative of user attitude to the user interface and collected by one or more sensors associated with the first public display device; process the user reaction input by a user reaction analysis module, and update the location-based user knowledge model on the basis of the processed user reaction input.
[006] According to a second aspect, there is provided a method, comprising: receiving user position information based on a signal from at least one user device, receiving display position information of at least a first public display device, confirming proximity of the user to the first public display device on the basis of at least the user position information and the display position information, specifying, by applying a location-based user knowledge model, a user interface for the first public display device on the basis of received user identification information and at least one of user position information and display position information; receiving user reaction input indicative of user attitude to the user interface and collected by one or more sensors associated with the first public display device; processing the user reaction input, and updating the location-based user knowledge model on the basis of processing of the user reaction input.
[007] According to a third aspect, there is provided a method, comprising: receiving a request from a user device to a resource locator identifying a network resource configured to control content displayed in the first public display device, establishing a session with the user device to the network resource on the basis of the received request, sending a remote control user interface from the network resource to the user device over the session, receiving a control command from the user device, the control command being indicative of a user input to the remote control user interface, and controlling change of information contents in the first public display device in response to the control command.
[008] According to a fourth aspect, there is provided a method, comprising: receiving by a user device a resource locator from a public display device over a wireless connection, wherein the resource locator identifies a network resource controlling content displayed in at least the display device; establishing a session from the user device to the network resource on the basis of the received resource locator, receiving a remote control user interface from the network resource, detecting a user input to the remote control user interface, and in response to the user input, sending a control command from the user device for causing the network resource to control change of information contents in at least the display device.
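The session flow of the third and fourth aspects can be sketched as follows. This is an illustrative Python sketch only; the class and method names (DisplayControlResource, establish_session, and so on) are assumptions made for the example and do not appear in the disclosure.

```python
import uuid


class Display:
    """Stand-in for a public display device whose content is controlled remotely."""

    def __init__(self):
        self.content = "default"

    def change_content(self, command):
        self.content = command


class DisplayControlResource:
    """Illustrative network resource, identified by a resource locator,
    that lets a user device remotely control a public display device."""

    def __init__(self, display):
        self.display = display
        self.sessions = {}  # session id -> user device id

    def establish_session(self, user_device_id):
        """Establish a session on the basis of a request from a user device
        and return the remote control user interface definition."""
        session_id = str(uuid.uuid4())
        self.sessions[session_id] = user_device_id
        remote_control_ui = {"ui": "remote_control", "actions": ["next", "previous"]}
        return session_id, remote_control_ui

    def handle_control_command(self, session_id, command):
        """A control command is indicative of a user input to the remote
        control user interface; it triggers a content change on the display."""
        if session_id not in self.sessions:
            raise KeyError("unknown session")
        self.display.change_content(command)
        return True
```

In this sketch the user device first obtains the resource locator (e.g. over a wireless connection from the display), then calls `establish_session`, receives the remote control user interface, and finally sends control commands that the resource forwards to the display.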
[009] According to an aspect, there is provided an apparatus, comprising means for causing the apparatus at least to perform the method according to any of the aspects or an embodiment of the method. According to another aspect, there is provided an apparatus, comprising at least one processor; and at least one memory including computer program code, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method according to any of the aspects or an embodiment of the method. The apparatus may be configured to receive control inputs, learn from the internal data flows in the management system, further develop inputs based on sensors linked to the apparatus and control tools such as user devices, and implement changes into the visual data shown in each display and in the portfolio of the displays.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Some example embodiments will now be described with reference to the accompanying drawings.
[0011] FIGURE 1 illustrates a system in accordance with at least some embodiments;
[0012] FIGURE 2 illustrates a method in accordance with at least some embodiments;
[0013] FIGURE 3 illustrates a control apparatus in accordance with at least some embodiments;
[0014] FIGURE 4 illustrates inputs and outputs of a visual information system in accordance with at least some embodiments; and

[0015] FIGURE 5 illustrates an example apparatus capable of supporting at least some embodiments.
EMBODIMENTS

[0016] There is now provided improved display-user interaction and adaptation of public display control on the basis of detected user reactions.
[0017] FIGURE 1 illustrates an example system in accordance with at least some embodiments. One or more sets of public display devices 10 may be provided. The term public display device refers to display devices mainly used in public places, and such devices may be referred to as digital signs or signage, for example. The set of display devices 10 may form a visual information system (VIS) 18. The display devices 10 in a set may be connectable directly or via one or more intermediary devices, such as a VIS control unit or device (not shown). The different sets 18, 80 of display devices may also be connected to each other via a network 50.
[0018] In addition to a display portion 12, the display device 10 may comprise a communications unit 14 for communicating with the SCP 30 and at least one, but typically a plurality of, sensors 16 for obtaining information on the current environment of the display device, such as by monitoring users and/or their user devices 20 in proximity of the display device 10. Such sensors may comprise one or more of a positioning device (e.g. a GPS device), an orientation detection sensor, a motion sensor, a proximity detection sensor, a microphone, a still and/or video camera, a short range radio receiver configured to detect signals from proximate devices, a light sensor, a sensor for air pressure, temperature, rainfall, humidity, wind, or other weather conditions, a sensor connectable to CAN or another vehicle network (e.g. a sensor capable of OBD2 protocol communication), an ambient light sensor (ALS), an accelerometer, a gyroscope, an ultrasonic sensor, a microwave radar, a laser sensor, an acoustic sensor, a liquid/gas sensor, etc. The display device may also comprise a communications unit for communicating with the user device 20.
[0019] The system comprises a smart connection platform (SCP) 30, which may be referred to, e.g., as a delivery and management system or framework. The SCP 30 may be centralized to at least one server, or implemented by a plurality of connected computing devices. The SCP 30 is configured to manage at least some aspects of one or more VIS 18, 80 and their associated information display devices 10 via one or more networks 50.
[0020] The SCP 30 may comprise a content delivery network (CDN) or content management system (CMS) unit 32 configured to manage content provision and user interface adaptation, a device management system (DMS) unit 34 configured to manage one or more VIS 18, 80, and a recognition and authentication system (RAS) unit 36 configured to identify and authenticate entities, such as the display devices 10 and the user devices 20. However, it is to be appreciated that these are just examples of applicable operational units and some or all of the presently claimed embodiments for the SCP 30 may be implemented by one or more devices and/or operational units.
[0021] A user device 20 comprises at least one wireless communications unit 22, which may be applied for communicating with the display device(s) and/or the signal of which may be applied for generating audience attribute information, position information, and/or movement information. In some embodiments, information from the VIS 18, 80 and/or the SCP 30 may be provided to the user device 20 and displayed on a display of the user device 20. In some embodiments, information may be sent from the user device 20 to the VIS 18, 80 and/or the SCP 30. The user device 20 may be configured to provide identification and authentication data to the SCP 30, send control signals to control information displayed in a display device 10 in proximity to the user device, and/or control information of the user stored in the SCP 30 or database 40.
[0022] The SCP 30 may be connected by secure API(s) to other databases 40, 70 and/or systems 60, such as other SCPs or interfaces of other information systems, such as interfaces for emergency systems and/or interfaces for open governmental, municipal, or business data.
[0023] The SCP 30 may comprise or be connected to a database 40 comprising, and updated with, user information on the basis of which a user interface for a user device in proximity to a display device 10 is specified. The user information is to be understood broadly to refer to information specific for a single user, associated by a user (or user device) identifier, or information specific for a user group, associated by a user group, class or category identifier. Thus, a user group may be formed on the basis of certain qualifier(s), classifier(s) or descriptor(s) shared by a set of users. Users in proximity to the display device 10 may be referred to as the audience, and such users may be classified into one or more user or audience groups on the basis of the predetermined group classifiers and information received from the user device 20 and/or information otherwise obtained regarding the user, e.g. by image processing. The user groups and associated classifier and other parameter data may be dynamically modified.
[0024] In some embodiments, the user information comprises a location-based user knowledge model 42, which may be a model specific for a single user or a group of users.
"Location-based" refers herein to the fact that at least some information in the model is associated with a given location, such as a given location of the display device 10. In another embodiment, there may be specific user knowledge models for different locations. At least some of the entries of the knowledge model 42 may be associated with detected locations of the user device(s) and one or more public display devices.
[0025] According to some embodiments, the user knowledge model 42 indicates incurred relationships between the user device and one or more content category or type identifiers associated with a location. The user knowledge model may comprise information (or preferences) controlled by the user and/or an operator of the SCP, and information learned on the basis of user monitoring, such as user reactions and the number and times of visits. The information stored in the model may comprise parameters defined on the basis of processing of the user information. For example, the SCP 30 may be configured to generate parameters defining user interface specification on the basis of processing received user monitoring data. However, it will be appreciated that these represent only some example information and implementation options for providing the user knowledge model. It is also to be appreciated that the user knowledge model may be provided in a distributed data storage, and records stored in different data storages may together form the model.
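One possible shape for such a location-based user knowledge model is sketched below. All class and field names are illustrative assumptions, and the exponential moving average used to blend reaction scores into a stored preference is one arbitrary choice of learning rule, not something prescribed by the disclosure.

```python
from dataclasses import dataclass


@dataclass
class KnowledgeEntry:
    """One learned relationship between a user (or user group) and a
    content category at a given display location; field names are
    illustrative assumptions."""
    user_id: str
    location: tuple          # e.g. (latitude, longitude) of a display device
    content_category: str
    preference_score: float = 0.0   # learned from user reaction input
    visit_count: int = 0


class LocationBasedUserKnowledgeModel:
    """Keyed store of entries; could equally live in a distributed data storage."""

    def __init__(self):
        self.entries = {}    # (user_id, location, category) -> KnowledgeEntry

    def get(self, user_id, location, category):
        key = (user_id, location, category)
        if key not in self.entries:
            self.entries[key] = KnowledgeEntry(user_id, location, category)
        return self.entries[key]

    def update(self, user_id, location, category, reaction_score):
        """Blend a new reaction score into the stored preference (EMA)
        and count the visit; this stands in for model training."""
        entry = self.get(user_id, location, category)
        entry.preference_score = 0.8 * entry.preference_score + 0.2 * reaction_score
        entry.visit_count += 1
```

A user interface adaptation step would then read `preference_score` for the categories available at the display's location when specifying the user interface.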
[0026] FIGURE 2 is a flow graph of a method in accordance with at least some embodiments of the present invention. The phases of the illustrated method may be for an information management system comprising a plurality of public display devices and performed in a public display control apparatus or device, such as the SCP 30 or one or more modules thereof.
[0027] The method comprises receiving 200 user position information based on a signal from at least one user device. For example, the user position information may be received as a short-range radio signal comprising a user device identifier signal. Display position information of at least a first public display device is received 210. The display position information may be received from a memory or a separate database, or, in the case of a mobile display device, from the VIS 18 or the display device 10.
[0028] Proximity of the user device to the first public display device is confirmed 220 on the basis of at least the user position information and the display position information. The confirmation may be obtained merely on the basis of the display device detecting the user device, or a more precise detection of the user device position in relation to the display device may be carried out.
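A more precise confirmation from coordinate fixes could, for instance, be a great-circle distance check; the 10-metre threshold and the function name in this sketch are assumptions for the example, not values taken from the disclosure.

```python
import math


def confirm_proximity(user_pos, display_pos, threshold_m=10.0):
    """Confirm proximity of a user device to a display device from two
    position fixes given as (latitude, longitude) in degrees.
    The 10 m default threshold is an illustrative assumption."""
    lat1, lon1 = map(math.radians, user_pos)
    lat2, lon2 = map(math.radians, display_pos)
    # Haversine great-circle distance on a spherical Earth (radius 6371 km)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_m = 2 * 6371000 * math.asin(math.sqrt(a))
    return distance_m <= threshold_m
```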
[0029] By applying a location-based user knowledge model, a user interface is specified 230 for the first public display device on the basis of received user identification information and at least one of the user position information and the display position information. The specification of the user interface may involve controlling a plurality of display control parameters, such as definition of one or more of content categories, content sources, activation, partitioning, timing, size, and positioning of the user content. The user interface specification may also involve controlling audio output in a speaker of the display device and available input and control options for the display device.
[0030] The specified user interface may then be caused to be provided for the user(s) by the first display device and user reactions are monitored. A user reaction input is received 240, the user reaction input being indicative of user attitude to the user interface and collected by one or more sensors associated with the first public display device. The user reaction input is processed 250 and the location-based user knowledge model is updated 260 on the basis of processing the user reaction input.
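The overall sequence of steps 200-260 can be sketched as follows, with the helper callables standing in for the system components described above. The function name, the dictionary-based model, and the toy reaction processing are all assumptions for illustration.

```python
def manage_display(user_pos, display_pos, user_id, model,
                   confirm_proximity, specify_ui, collect_reaction):
    """Sketch of the FIGURE 2 flow: confirm proximity (220), specify a
    user interface (230), receive a reaction input (240), process it
    (250), and update the knowledge model (260)."""
    if not confirm_proximity(user_pos, display_pos):        # step 220
        return None
    ui = specify_ui(model, user_id, user_pos, display_pos)  # step 230
    reaction = collect_reaction(ui)                         # step 240
    # Toy processing: positive score if the user attended to the UI.
    score = 1.0 if reaction.get("attended") else -1.0       # step 250
    model.setdefault(user_id, 0.0)
    model[user_id] += score                                 # step 260
    return ui
```

The helpers would in practice be implemented by the proximity detection, user interface adaptation, and user reaction analysis components described below with reference to FIGURE 3.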
[0031] It is to be appreciated that FIGURE 2 illustrates general features associated with applying and training the user knowledge model and there are a large number of various embodiments and implementation options available, some of which are further illustrated below. For example, in some embodiments, there may be one or more steps of updating the user interface in the first display device on the basis of processing of the user reaction input (based on processing 250 or separate processing for the user interface updating).
[0032] FIGURE 3 illustrates an apparatus or functional unit 300 for an information management system comprising a plurality of public display devices. The apparatus 300 may be part of the SCP 30 or configured to implement at least some of the SCP 30, such as the CMS unit 32 in FIGURE 1, comprising functional modules and associated information flows in accordance with at least some embodiments. The apparatus 300 may be configured to perform at least some of the presently disclosed display device control related features, as illustrated in FIGURE 2 and related further embodiments.
[0033] A proximity detection module (PDM) 302 is configured to receive information 320 on at least the user position information and the display position information and confirm proximity of the user to the first public display device on the basis of the received information 320.
[0034] A user interface adaptation module (UIAM) 304 accesses 324 a location-based user knowledge model 42 on the basis of the user identification information. The UIAM applies the model 42 to specify a user interface for the first public display device on the basis of the user position information and/or the display position information. The UIAM may request the model in response to receiving an input from the PDM 302 indicative of the user being detected in proximity to the first display device. The PDM 302 may also submit the user position information to the UIAM. The UIAM 304 is configured to provide or control 322 the specified user interface for display in the first display device.
[0035] A user reaction analysis module (URM) 306 is configured to receive user reaction input indicative of user attitude collected by one or more sensors associated with the first public display device. The URM 306 is configured to process the user reaction input and update 328 the location-based user knowledge model 42 on the basis of processing the user reaction input. Hence, the URM gathers user reaction feedback for the user interface specified for the user by the UIAM 304 and provides information for training the location-based user knowledge model 42. The model 42 may be updated 328 on the basis of the user reaction processing output information from the URM 306 by a specific model update module (MUM) 308. In another embodiment, the URM 306 or a further module, which may be outside the apparatus 300, is configured to update and train the model 42 on the basis of the user reaction processing output information.
[0036] For example, the URM 306 may be configured to generate display location-based control parameters for user interface specification on the basis of processing the user reaction input information, and these parameters are stored in the model 42. In another example, location-based user preference and behaviour information learned on the basis of the user reaction input is generated and stored in the model, and during subsequent user interface generation for the user, the (updated) preference and behaviour information is processed to specify the user interface.
[0037] Furthermore, it is to be noted that in some embodiments the user device position is continuously monitored to obtain movement information of the user device 20 and to detect when the user device is no longer in proximity to the display device 10. In response to such detection, the user interface on the display device may be updated. Also, the user knowledge model 42 may be updated on the basis of the time the user device has been detected in proximity to the display device 10 or the velocity at which the user device is passing the device 10.
[0038] It is to be noted that in some embodiments, some elements and some or all of their associated functions may be implemented outside the apparatus 300 and the SCP 30.
For example, some or all of the PDM 302 functions 200, 210, 220 may be implemented in the VIS 18, 80 and further in the display device 10.
[0039] According to some embodiments, the user reaction input is obtained on the basis of monitoring movement of the user device on the basis of a signal from the user device. The user reaction input may be defined on the basis of the velocity of the user device 20 within predefined proximity to the first public display device. The user reaction input may also be defined on the basis of the detected time of the presence of the user device within predefined proximity to the first public display device.
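Deriving a reaction input from velocity and presence time could, for instance, look like the following sketch; the sample format, thresholds, and scoring rule are assumptions for the example.

```python
def dwell_and_speed(samples):
    """Derive movement features from timestamped distances to the display.
    `samples` is a list of (t_seconds, distance_m) pairs recorded while
    the user device was within predefined proximity; the feature
    definitions are illustrative assumptions."""
    if len(samples) < 2:
        return 0.0, 0.0
    t0, d0 = samples[0]
    t1, d1 = samples[-1]
    dwell = t1 - t0                                  # presence time in proximity
    speed = abs(d1 - d0) / dwell if dwell > 0 else 0.0
    return dwell, speed


def reaction_score(dwell, speed, slow_mps=1.0, long_dwell_s=5.0):
    """Toy rule: a long dwell at low speed is read as positive attention,
    a quick pass-by as negative. Thresholds are assumptions."""
    return 1.0 if (dwell >= long_dwell_s and speed <= slow_mps) else -1.0
```

Such a score could then feed the model update step (260) of the method described above.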
[0040] According to some embodiments, interaction between the at least one user device and the specified user interface of the first display is analysed. The user interface of the first display and/or at least one second public display device is controlled on the basis of analysis of the interaction between the at least one user device and the specified user interface of the first display.
[0041] There are a number of available options to receive and analyse the user reaction and interaction input, some of which are further illustrated below.
[0042] According to some embodiments, the system, such as the URM 306, comprises an image analysis module, which may be configured to analyze an image signal of
at least one camera associated with the first display. The user reaction input may be generated by the image analysis module. The user reaction input may be indicative of at least one of: facial expressions, gestures, attention time, face direction, number of glances, or other behaviouristic indication.
[0043] According to some embodiments, the system, such as the URM 306, comprises a speech analysis module, which may be configured to analyze a signal from at least one microphone associated with the display device 10. The user reaction input may be generated by the speech analysis module and the user reaction input may be indicative of output of analysis of words and/or tone detected in voice input captured by the microphone(s).
[0044] According to some embodiments, at least some of the public display devices 10 of the VIS 18, 80 are mobile. The UIAM 304 may be configured to specify the user interface additionally on the basis of current position of the first public display device. Thus, the user interface may be defined on the basis of the location-specific user knowledge model and potential available location-specific content.
[0045] In some embodiments, the display device 10 is configurable in a vehicle, such as a bus, a car (taxi), a tram, metro, boat, or train carriage, etc. The display device 10 may comprise an interface for connecting to an interface of a communication system of the vehicle, such as a CAN based control system. Device-specific data may be received by the display device 10 and the VIS via the interface and included, in some embodiments by the UIAM 304, in the user interface specified for the user. Such data may comprise device sensor data, device location data, location-specific device data, etc. For example, in a bus this information could be vehicle speed and direction or estimated time of arrival (ETA).
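Including such device-specific vehicle data in the specified user interface could be sketched as follows; the field names (e.g. speed_kmh, eta_min) and the dictionary-based specification format are illustrative assumptions, not a real CAN schema or the actual UIAM 304 data model.

```python
def merge_vehicle_data(ui_spec, vehicle_data):
    """Merge device-specific vehicle data (e.g. read over a CAN-based
    vehicle interface) into a user interface specification as an extra
    location-to-consumer (L2C) partition.

    ui_spec: dict describing the specified user interface partitions.
    vehicle_data: dict of fields such as speed or ETA; None values are
    treated as unavailable signals and skipped.
    """
    spec = dict(ui_spec)                       # do not mutate the input spec
    partitions = list(spec.get("partitions", []))
    fields = {k: v for k, v in vehicle_data.items() if v is not None}
    if fields:
        partitions.append({"type": "L2C", "fields": fields})
    spec["partitions"] = partitions
    return spec
```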
[0046] In some embodiments, the display device 10 is configurable in an elevator.
The display device may be configured as an information display of the elevator and provide elevator information in the user interface specified for the user. The display device may be configured to display location specific information, such as information on the next or target (ordered) floor, to the user. Such information may be referred to as location to consumer (L2C) content (type).
[0047] According to some embodiments, the system 30, such as the CDN 32 and/or DMS 34, is configured to establish a session with the user device 20 for controlling at least some information on the display device 10 in proximity to the user device.
[0048] A method for arranging a remote control of the display device may comprise:
- receiving a request from a user device (e.g. device 20) to a resource locator identifying a network resource (e.g. the SCP 30 or a unit or module thereof, such as the CMS 32 or the unit 300 and further the UIAM 304 thereof) configured to control content displayed in a first public display device (e.g. device 10);
- establishing a session with the user device to the network resource on the basis of the received request;
- sending a remote control user interface from the network resource to the user device over the session;
- receiving a control command from the user device, the control command being indicative of a user input to the remote control user interface;
- controlling change of information contents in the first public display device in response to the control command.
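The network-resource side of the five steps above can be sketched in Python as follows; the class name, the session representation and the control vocabulary are illustrative assumptions, since the document leaves the concrete protocol open.

```python
import uuid


class NetworkResource:
    """Minimal sketch of the network resource (e.g. a unit of the SCP)
    implementing the remote control method above. All names are
    illustrative, not the actual SCP 30 interfaces."""

    def __init__(self):
        self.sessions = {}   # session_id -> (user_device_id, display_id)
        self.displays = {}   # display_id -> currently shown content

    def handle_request(self, user_device_id, display_id):
        # Steps 1-3: accept a request addressed to the resource locator,
        # establish a session, and send back a remote control user
        # interface description for the user device to render.
        session_id = str(uuid.uuid4())
        self.sessions[session_id] = (user_device_id, display_id)
        return session_id, {"controls": ["next", "previous", "select"]}

    def handle_control_command(self, session_id, command):
        # Steps 4-5: apply a control command from the user device to the
        # public display device the session is bound to.
        if session_id not in self.sessions:
            raise KeyError("unknown session")
        _, display_id = self.sessions[session_id]
        self.displays[display_id] = command
        return self.displays[display_id]
```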
[0049] A method, which may be implemented in the user device 20, for arranging remote control of the display device 10, may comprise:
- receiving by a user device a resource locator from a public display device over a wireless connection, wherein the resource locator identifies a network resource controlling content displayed in at least the display device;
- establishing a session from the user device to the network resource on the basis of the received resource locator, receiving a remote control user interface from the network resource,
- detecting a user input to the remote control user interface, and
- in response to the user input, sending a control command from the user device for causing the network resource to control change of information contents in at least the display device.
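The user-device side of the method above can be sketched analogously; the wireless transport (receiving the resource locator, opening the session, sending commands) is abstracted behind injected callables, since the document does not fix a particular protocol, and all names are illustrative.

```python
class UserDeviceClient:
    """Sketch of the user-device side of the remote control method.

    open_session: callable taking a resource locator and returning
    (session_id, remote_control_ui); stands in for the real transport.
    """

    def __init__(self, open_session):
        self.open_session = open_session
        self.session_id = None
        self.ui = None

    def on_resource_locator(self, locator):
        # Steps 1-3: the locator is received from the public display
        # device over a wireless connection; establish a session and
        # receive the remote control user interface.
        self.session_id, self.ui = self.open_session(locator)

    def on_user_input(self, send_command, control):
        # Steps 4-5: on a user input to the remote control user
        # interface, send a control command over the session.
        if self.ui and control in self.ui["controls"]:
            return send_command(self.session_id, control)
        return None  # input did not match an offered control
```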
[0050] By applying the remote control interface, the user may further control at least a partition of the display contents. In a further embodiment, the user knowledge model 42 may be updated on the basis of control commands from the user device.
[0051] FIGURE 4 illustrates example input categories to the SCP 30.
The SCP 30 may be configured to gather information from multiple different sources, 18,
80, 10, 40, 60, and/or 70. It may be configured to intelligently apply and/or combine input data of different categories on the basis of the user knowledge model 42 and/or possible further parameters and preferences.
[0052] As indicated above, some or all of the following input categories may be provided to the SCP as inputs:
- Positional data, regarding the current positions of the user devices 20 and the information displays 10. The positional data may further indicate the movement of the user device 20 and/or the information display 10, for example.
- Contextual data, which may also be referred to as Big data, regarding context or environment of the information display(s) 10, such as events, traffic, and/or weather information relevant for the (current) position of the information display 10.
- Device specific data, such as data received by the display device 10 from a vehicle or elevator at which the display device 10 is mounted.
- User specific data, such as data received or detected by the display device 10 from the user device 20.
[0053] The SCP 30 may be configured to manage the dynamically changing VIS portfolio 18, referring to a selection of display devices 10. The VIS portfolio may be managed on the basis of input data the SCP 30 receives, such as sensor data, the user knowledge model 42 and/or possible further parameters and preferences. The centralized SCP 30 may receive control inputs, learn from the internal data flows in the management system, further develop inputs based on sensors linked to the apparatus and on control tools such as user devices, and implement changes into the visual data shown in each display and in the portfolio of the displays. The SCP may thus provide an automated self-learning system capable of handling increasing amounts of input data and various options to control or manage inputs related to the data displayed. The larger the potential audience and the number of inputs, the greater the surplus enabled by the present system.
[0054] The content types may be selected and allocated to the VIS portfolio and the dynamically adapting user interfaces thereof by the SCP on the basis of the inputs to the SCP 30. Thus, the SCP 30 may be configured to dynamically define, on the basis of the inputs, such as the user knowledge model, which role each content type has in each portfolio and/or display device 10. The decision making is further affected by control parameters stored in the SCP 30, such as (smart) contracts. For example, the user interface may thus be specified 230 on the basis of learned and explicitly set user preferences, learned user behavior, VIS display position, dynamic contextual data and device specific data.
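As a rough illustration, deciding which role each content type has could be realized as a relevance-scoring step over candidate content items. The following Python sketch assumes a simple weighted-sum rule with per-category weights taken from the user knowledge model; the document leaves the actual decision logic open, so both the rule and the data shapes are assumptions.

```python
def score_content(item, weights):
    """Score a candidate content item by weighting its per-category
    relevance with weights learned in the user knowledge model.
    item["relevance"]: dict mapping content type (e.g. "C2C") to a
    relevance value for the current display and location (illustrative)."""
    return sum(weights.get(category, 0.0) * relevance
               for category, relevance in item["relevance"].items())


def rank_content(items, weights):
    """Order candidate content items by descending score, so the
    highest-scoring items can be allocated to the user interface."""
    return sorted(items, key=lambda it: score_content(it, weights),
                  reverse=True)
```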
[0055] There may be different content types that may be provided in the VIS 18, 80 and the display devices 10. In the example embodiment of FIGURE 4 there are 5 categories: consumer to consumer (C2C), i.e. content shared by users to other users, government to consumer (G2C), such as alerts, events or other information by or for (e.g. feedback) authorities, business to consumer (B2C), such as advertisements or other paid content, location to consumer (L2C), such as location based information for showing direction, speed and possible estimated time of arrival (ETA) calculations, and interface to consumer (I2C), referring generally to locally shared screens of user devices via the VIS. The user interface may comprise specific partitions for some or all of the above content types. The SCP 30 may be configured to dynamically adapt the partitioning. The partitioning may be based also on the location-based user knowledge model 42.
[0056] The SCP 30 may be configured to select the content and content types responsively by applying the location-based user knowledge model, such that the selected content type and content thereof are automatically scaled into appropriate size in an assigned partition. For example, on the basis of learned user behaviour, C2C content type(s) for which interest of most users has been detected with a given display device 10, are prioritized in the user knowledge model and during display specification scaled to a larger display partition.
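The scaling of partitions to learned interest described above can be sketched as a proportional allocation with a minimum share so that no content type disappears entirely; this proportional rule and the parameter names are illustrative assumptions, not the scaling actually specified in the document.

```python
def allocate_partitions(interest, total_area=1.0, min_share=0.05):
    """Allocate shares of the display area to content types (C2C, G2C,
    B2C, L2C, I2C) in proportion to learned interest per type.

    interest: dict mapping content type to a non-negative interest value
    taken from the location-based user knowledge model (illustrative).
    Each type gets at least min_share; the remainder is split
    proportionally to interest.
    """
    types = list(interest)
    floor_total = min_share * len(types)
    remaining = total_area - floor_total
    total_interest = sum(interest.values()) or 1.0  # avoid division by zero
    return {t: min_share + remaining * interest[t] / total_interest
            for t in types}
```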
[0057] Thus, the presently disclosed smart display system makes it possible to combine information in such a way that the device location generates surplus and the context increases the synergy of the information. At the same time, each smart info display enables several personalization and earning possibilities supporting each other. This helps to lower the critical point of the investments and may therefore open up digital media to a much wider audience than today. The user interfaces for the smart display devices can be made much more interesting for passing users, and their attention is more likely to be obtained, since the system dynamically specifying the user interface based on a large number of inputs can be trained continuously on detected user reactions.
[0058] An electronic device comprising electronic circuitry may be an apparatus for realizing at least some embodiments. In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above.
[0059] FIGURE 5 illustrates a schematic diagram of an apparatus 501 for a display device management system according to an embodiment. The apparatus may comprise or be formed of a computing unit 501. The computing unit 501 may be configured to operate as a controller of the display device management system, which may comprise at least some of the already illustrated units and functions, such as at least part of the SCP 30 illustrated in FIGURE 1. The system may comprise or be connected to further control unit(s) 510 and network(s)/service(s) 511, such as at least some of the entities 18, 20, 40, 60, 70, and/or 80 illustrated in FIGURE 1. It is to be appreciated that FIGURE 5 illustrates only one example of an applicable apparatus for carrying out at least some of the presently disclosed embodiments.
[0060] The computing unit 501 may comprise a processor 502, a communications unit 503 and a memory 504. The communication unit 503 may comprise a transmitter and/or receiver, which may be configured to operate in accordance with global system for mobile communication, GSM, wideband code division multiple access, WCDMA, long term evolution, LTE, 5G or other cellular communications systems, wireless local area network, WLAN, and/or Ethernet standards, for example. The computing unit 501 may comprise a short range communication, SRC, transceiver, such as a Bluetooth or Bluetooth Low Energy transceiver.
[0061] The memory may comprise random-access memory and/or permanent memory. The memory may comprise at least one RAM chip. The memory may comprise solid-state, magnetic, optical and/or holographic memory, for example. The memory may be at least in part accessible to the processor 502. The memory may be at least in part comprised in the processor 502. The memory 504 may store computer program code 505 and parameters 506 for causing the computing unit 501 to perform at least some of the presently disclosed features, such as the method of FIGURE 2 and some or all further embodiments thereof, when the computer program code is executed by the processor. The memory, processor and computer program code may thus be the means to cause the computing unit 501 to perform at least some of the presently disclosed features related to arranging a learning system for specifying the user interface for public display devices, such as the method of FIGURE 2 and some or all further embodiments thereof. The computing unit 501 may comprise and/or be configured to apply, for example, a mathematical model, a formula library and an AI module.
[0062] The UI 507 may comprise one or more user interface devices, such as a display and input means, such as one or more of a keyboard, a touch screen, a mouse, a gesture input device or another type of input/output device.
[0063] The display device 10 and/or user device 20 may comprise at least some similar elements as the apparatus 501. When computer instructions configured to cause a processor of the user device to perform certain actions are stored in a memory of the display device 10 and/or the user device, and said device as a whole is configured to run under the direction of the processor using computer instructions from the memory, the processor and/or its at least one processing core may be considered to be configured to perform at least some of the actions of said device illustrated above.
[0064] There may be further elements inside or in connection with (508) the apparatus 501, such as a unit of the SCP 30, the display device 10 and/or the user device, not illustrated in FIGURE 5. The user device may comprise or be arranged to accept a user identity module. The user identity module may comprise, for example, a subscriber identity module, SIM, card installable in the device. The user identity module may comprise information identifying a subscription of a user of the device. The user identity module may comprise cryptographic information usable to verify the identity of a user of the device and/or to facilitate encryption of communicated information and billing of the user
of the device for communication effected via the device. In some other example embodiments, as indicated earlier, the apparatus 10 and/or 20 comprises at least one digital camera and/or one or more sensors.
[0065] It is to be understood that the embodiments of the invention disclosed are not limited to the particular structures, process steps, or materials disclosed herein, but are extended to equivalents thereof as would be recognized by those ordinarily skilled in the relevant arts. It should also be understood that terminology employed herein is used for the purpose of describing particular embodiments only and is not intended to be limiting.
[0066] Reference throughout this specification to one embodiment or an embodiment means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Where reference is made to a numerical value using a term such as, for example, about or substantially, the exact numerical value is also disclosed.
[0067] As used herein, a plurality of items, structural elements, compositional elements, and/or materials may be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary. In addition, various embodiments and examples of the present invention may be referred to herein along with alternatives for the various components thereof. It is understood that such embodiments, examples, and alternatives are not to be construed as de facto equivalents of one another, but are to be considered as separate and autonomous representations of the present invention.
[0068] Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the preceding description, numerous specific details are provided, such as examples of lengths, widths, shapes, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
[0069] While the foregoing examples are illustrative of the principles of the present invention in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage and details of implementation can be made without the exercise of inventive faculty, and without departing from the principles and concepts of the invention. Accordingly, it is not intended that the invention be limited, except as by the claims set forth below.
[0070] The verbs “to comprise” and “to include” are used in this document as open limitations that neither exclude nor require the existence of also un-recited features. The features recited in dependent claims are mutually freely combinable unless otherwise explicitly stated. Furthermore, it is to be understood that the use of “a” or “an”, that is, a singular form, throughout this document does not exclude a plurality.
INDUSTRIAL APPLICABILITY [0071] At least some embodiments of the present invention find industrial application in information management systems.
ACRONYMS LIST

AI	Artificial intelligence
ALS	Ambient light sensor
API	Application programming interface
ASIC	Application-specific integrated circuit
CAN	Controller area network
FPGA	Field-programmable gate array
GSM	Global system for mobile communication
LTE	Long term evolution
NFC	Near-field communication
N-RAT	3GPP new radio access technology
OBD	On-board diagnostics
UI	User interface
WCDMA	Wideband code division multiple access
WiMAX	Worldwide interoperability for microwave access
WLAN	Wireless local area network

CLAIMS:

1. An apparatus for an information management system comprising a plurality of public display devices, comprising
    - at least one processor; and
- at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to:
- receive (200) user position information based on a signal from at least one user device,
    - receive (210) display position information of at least a first public display device,
- confirm (220) proximity of the user to the first public display device on the basis of at least the user position information and the display position information,
- specify (230), by a user interface adaptation module (304) applying a location-based user knowledge model, a user interface for the first public display device on the basis of received user identification information and at least one of the user position information and the display position information;
    - receive (240) user reaction input indicative of user attitude to the user interface and collected by one or more sensors associated with the first public display device;
- process (250) the user reaction input by a user reaction analysis module (306), and
    - update (260) the location-based user knowledge model on the basis of the processed user reaction input.
2. The apparatus of claim 1, wherein the apparatus is configured to train the user knowledge model, which indicates incurred relationships between the user and one or more content category or type identifiers associated with location.
3. The apparatus of claim 1 or 2, wherein at least some of the entries of the knowledge model are associated with detected locations of the user device and one or more public display devices.

4. The apparatus of any preceding claim, wherein the user reaction input is indicative of movement of the user device defined on the basis of a signal from the user device.

5. The apparatus of claim 4, wherein the user reaction input is indicative of velocity of the user device within predefined proximity to the first public display device.

6. The apparatus of any preceding claim, wherein the user reaction input is indicative of detected time of the presence of the user device within predefined proximity to the first public display device.

7. The apparatus of any preceding claim, wherein the first public display device is mobile, and the adaptation module is configured to specify the user interface additionally on the basis of current position of the first public display device and available location-specific content.

8. The apparatus of any preceding claim, wherein interaction between the at least one user device and the specified user interface of the first display is analyzed, and the user interface in the first display is modified and/or a user interface of at least one second public display device is controlled on the basis of analysis of the interaction between the at least one user device and the specified user interface of the first display.

9. The apparatus of any preceding claim, wherein the user reaction input is generated by an image analysis module configured to analyze image signal of at least one camera associated with the first display and the user reaction input is indicative of at least one of: facial expressions, gestures, attention time, face direction, number of glances.
10. The apparatus of any preceding claim, wherein the user reaction input is generated by a speech analysis module configured to analyze signal from at least one microphone associated with the first display and the user reaction input is indicative of output of analysis of words and/or tone detected in voice input.

11. The apparatus of any preceding claim, the apparatus being further configured for:
- receiving a request from the at least one user device to a resource locator identifying a network resource configured to control content displayed in the first public display device;
- establishing a session with the at least one user device to the network resource on the basis of the received request;
- sending a remote control user interface from the network resource to the at least one user device over the session;
- receiving a control command from the at least one user device, the control command being indicative of a user input to the remote control user interface; and
- controlling change of information contents in the first public display device in response to the control command.

12. A method comprising:
- receiving (200) user position information based on a signal from at least one user device,
- receiving (210) display position information of at least a first public display device,
- confirming (220) proximity of the user to the first public display device on the basis of at least the user position information and the display position information,
- specifying (230), by applying a location-based user knowledge model, a user interface for the first public display device on the basis of received user identification information and at least one of the user position information and the display position information;
    - receiving (240) user reaction input indicative of user attitude to the user interface and collected by one or more sensors associated with the first public display device;
    - processing (250) the user reaction input, and
- updating (260) the location-based user knowledge model on the basis of processing of the user reaction input.
13. The method of claim 12, wherein the user knowledge model indicates incurred relationships between the user and one or more content category or type identifiers associated with location and is trained on the basis of processing of the user reaction input and other inputs to a smart connection platform unit.
14. The method of claim 12 or 13, wherein the first public display device is mobile, and the user interface is specified additionally on the basis of current position of the first public display device and available location-specific content.
    15. A computer program comprising code for, when executed in a data processing apparatus, to cause a method in accordance with at least one of claims 12 to 14 to be performed.
FI20185600A 2018-06-29 2018-06-29 Public display device management FI20185600A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FI20185600A FI20185600A1 (en) 2018-06-29 2018-06-29 Public display device management
PCT/FI2019/050489 WO2020002767A1 (en) 2018-06-29 2019-06-25 Public display device management


Publications (1)

Publication Number Publication Date
FI20185600A1 true FI20185600A1 (en) 2019-12-30


Country Status (2)

Country Link
FI (1) FI20185600A1 (en)
WO (1) WO2020002767A1 (en)

Also Published As

Publication number Publication date
WO2020002767A1 (en) 2020-01-02
