NL2004709C2 - System and method for communicating information to a public.


Info

Publication number
NL2004709C2
Authority
NL
Netherlands
Application number
NL2004709A
Other languages
Dutch (nl)
Inventor
Vladimir Nedovic
Roberto Valenti
Original Assignee
Univ Amsterdam
Application filed by Univ Amsterdam
Priority to NL2004709A
Application granted
Publication of NL2004709C2

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising


Description

SYSTEM AND METHOD FOR COMMUNICATING INFORMATION TO A PUBLIC
The invention relates to a system for interactively communicating information to a public.
Traditionally, information to a public, for instance for marketing purposes, is communicated by displaying video messages on a display and/or by playing sound messages through speakers. The communication can be interactive by using input means where members of the public can provide information about themselves or their preferences, and by choosing the messages to be communicated based on said input.
One aim of the invention is to provide a system which is able to communicate information in a more effective and/or attractive manner.
According to the invention the system comprises: a processor; a video camera connected to said processor for capturing photo or video data; output means connected to said processor for transmitting audio, visual and/or digital messages; a detector connected to said processor for capturing reaction data from one or more persons in said public, such as a button, a touch screen, a proximity sensor, a depth sensor, an RFID reader, a Bluetooth receiver, a microphone, or said video camera; electronic memory connected to said processor comprising data representing a set of predetermined visual cues; electronic memory connected to said processor comprising data representing a first set of audio and/or visual messages; electronic memory connected to said processor comprising data representing a second set of one or more audio, visual and/or digital messages; electronic memory connected to said processor comprising a database wherein representations of each one of said visual cues are linked to one or more of said first set of audio and/or visual messages; electronic memory connected to said processor comprising data representing at least one predetermined reaction from one or more persons in said public, such as touching said button, touching said screen, making a predetermined move, or making a predetermined sound.
The invention enables interaction between the display surface and passers-by. The system addresses persons based on detected visual cues and detects if the person reacts to this manner of addressing, leading to a more natural way of drawing human attention to the display surface.
For detecting cues in the video capture data said processor is arranged to compare in real time said data received from said video camera with said data representing said visual cues; and said processor is arranged to determine in real time from said comparison if and which visual cue or cues from said set of predetermined visual cues are present in said data received from said video camera.
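The real-time comparison and determination described above can be sketched as follows. The detector functions are illustrative stand-ins for the computer vision algorithms cited later in the description; none of the names below appear in the patent.

```python
from typing import Callable, Dict, List

# Each cue detector maps a captured frame to True/False. Real detectors
# would wrap the computer vision techniques cited in the description;
# the dict-based "frame" and all names here are illustrative assumptions.
CueDetector = Callable[[dict], bool]

def detect_cues(frame: dict, detectors: Dict[str, CueDetector]) -> List[str]:
    """Determine if and which predetermined visual cues are present."""
    return [name for name, is_present in detectors.items() if is_present(frame)]

detectors: Dict[str, CueDetector] = {
    "person_present": lambda f: f.get("people", 0) > 0,
    "red_coat": lambda f: f.get("coat_colour") == "red",
    "child_present": lambda f: f.get("children", 0) > 0,
}
frame = {"people": 2, "coat_colour": "red"}
print(detect_cues(frame, detectors))  # -> ['person_present', 'red_coat']
```

In a deployed system each lambda would be replaced by a real-time classifier running on the camera stream.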
In order to attract the attention of the public said processor is arranged to retrieve in real time from said database which audio and/or visual message or messages from said first set are linked to said visual cue or cues; and said processor is arranged to transmit in real time said retrieved audio and/or visual message or messages from said first set through said output means.
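The retrieval described here amounts to a linked-message lookup in the first database; a minimal sketch, with all identifiers illustrative rather than taken from the patent:

```python
from typing import Dict, List

# Hypothetical first database: each predetermined visual cue is linked to
# one or more first-set messages. All identifiers are illustrative.
CUE_DB: Dict[str, List[str]] = {
    "red_coat": ["red_accessories_ad"],
    "child_present": ["toy_promo", "family_discount"],
}

def messages_for_cues(cues: List[str]) -> List[str]:
    """Retrieve the first-set messages linked to the detected cues."""
    return [msg for cue in cues for msg in CUE_DB.get(cue, [])]

print(messages_for_cues(["child_present", "red_coat"]))
# -> ['toy_promo', 'family_discount', 'red_accessories_ad']
```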
In order to detect attention of the public said processor is arranged to compare in real time said data received from said detector with said data representing at least one predetermined reaction; and said processor is arranged to determine in real time from said comparison if a reaction from said set of predetermined reactions is present in said data received from said detector.
In order to communicate a message if the public's attention has been detected said processor is arranged to transmit in real time a further audio, visual and/or digital message or messages from said second set through said output means if it is determined that a reaction from said set of predetermined reactions is present in said data received from said detector.
For communicating messages in dependency of the type of reaction the system further preferably comprises electronic memory connected to said processor comprising a second database wherein representations of each one of said reactions are linked to one or more of said second set of audio, visual and/or digital messages; wherein said processor is arranged to determine in real time from said comparison which reaction from said set of predetermined reactions is present in said data received from said detector; wherein said processor is arranged to retrieve in real time from said second database which audio, visual and/or digital message or messages from said second set are linked to said reaction; and wherein said processor is arranged to transmit in real time said retrieved audio, visual and/or digital message or messages from said second set through said output means.
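A minimal sketch of this second database, assuming simple string identifiers for reactions and for second-set messages (all names hypothetical):

```python
from typing import Dict, List

# Hypothetical second database: each predetermined reaction is linked to
# one or more messages from the second set. Identifiers are illustrative.
REACTION_DB: Dict[str, List[str]] = {
    "touch_screen": ["product_demo"],
    "move_towards_display": ["welcome_offer", "discount_coupon"],
}

def messages_for_reaction(reaction: str) -> List[str]:
    """Retrieve in real time the second-set messages linked to a reaction."""
    return REACTION_DB.get(reaction, [])

print(messages_for_reaction("move_towards_display"))
# -> ['welcome_offer', 'discount_coupon']
```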
For communicating further messages in dependency of further visual cues the system further preferably comprises electronic memory connected to said processor comprising a second or third database wherein representations of each one of said visual cues are linked to one or more of said second set of audio, visual and/or digital messages; wherein said processor is arranged to compare in real time said data received from said video camera with said data representing said visual cues after it is determined that a reaction from said set of predetermined reactions is present in said data received from said detector; wherein said processor is arranged to determine in real time from said comparison if and which visual cue or cues from said set of predetermined visual cues are present in said data received from said video camera; wherein said processor is arranged to retrieve in real time from said second or third database which audio, visual and/or digital message or messages from said second set are linked to said visual cue or cues; and wherein said processor is arranged to transmit in real time said retrieved audio, visual and/or digital message or messages from said second set through said output means.
Said set of predetermined visual cues preferably comprises one or more from the group consisting of: the presence of people, the location of a person, the gender of a person, the approximate age of a person, motion of people, the number of people, the presence of a baby carriage, the presence of a child, the presence of a pet, the direction of movement of a person, a person following a predetermined trajectory, the speed of a person, the pose of a person, the colour of a person's garment, the type of a person's garment (casual/formal/elegant), the presence of a cap on the head of a person, the presence of glasses on a person, the skin colour of a person, the hair colour of a person, the presence of curls in a person's hair, the direction of gaze of a person, the duration of directed gaze of a person, the presence of predetermined emotions in a person's face, the presence of predetermined gestures, the presence of electronic devices such as telephones or music players, the presence of a car or motorbike, the speed of a car or motorbike, a car or motorbike following a predetermined trajectory, the car or motorbike type, the car or motorbike brand, the colour of a car or motorbike, the license plate number.
For detecting audio cues the system further preferably comprises a microphone connected to said processor for capturing audio data; electronic memory connected to said processor comprising data representing a set of predetermined audio cues; electronic memory connected to said processor comprising an audio cue database wherein representations of each one of said audio cues are linked to one or more of said first and/or second set of audio, visual and/or digital messages; wherein said processor is arranged to compare in real time said data received from said microphone with said data representing said audio cues; wherein said processor is arranged to determine in real time from said comparison if and which audio cue or cues from said set of predetermined audio cues are present in said data received from said microphone; wherein said processor is arranged to retrieve in real time from said audio cue database which audio, visual and/or digital message or messages from said first set are linked to said audio cue or cues; and wherein said processor is arranged to transmit in real time said retrieved audio, visual and/or digital message or messages from said first or second set through said output means.
Said set of predetermined audio cues preferably comprises one or more from the group consisting of: the presence of people, the presence of laughing people, the presence of talking people, the presence of people talking on a telephone, the presence of people shouting, the presence of people talking to a baby or a child, the presence of people singing, the presence of music from small ear speakers. If more than one microphone is used, also a person's location and movement can be detected and used as an audio cue.
Said output means preferably comprises one or more from the group consisting of: a video beamer, a laser lighting display, a CRT display, a flat panel display, a speaker, a wireless data sender.
Said further audio, visual and/or digital messages preferably comprise one or more from the group consisting of: commercials, news, sports, product information, object information, site information, library information, collection information, digital archive information.
Said data received from said detector is preferably retrieved from one or more from the group consisting of: the received signal of a button, the received signal of a touch screen, the received signal of a proximity sensor, the received signal of a depth sensor, the received signal of an RFID reader, the received signal of a Bluetooth receiver, the received signal of a microphone, or the received signal of said video camera. Said RFID reader or said Bluetooth receiver may receive identification information about the device and/or its user.
Said predetermined move reaction preferably comprises said one or more persons in said public moving towards the display surface on which said visual message or messages are displayed, or said one or more persons in said public moving towards an object to which the further audio and/or visual message or messages relate.
The invention further relates to a method for interactively communicating information to a public, wherein a processor performs the steps of: comparing in real time data received from a video camera with data representing visual cues; determining in real time from said comparison if and which visual cue or cues from a set of predetermined visual cues are present in said data received from said video camera; retrieving in real time from a database which audio and/or visual message or messages are linked to said visual cue or cues; transmitting in real time said retrieved audio and/or visual message or messages through said speaker and/or said display means; comparing in real time data received from a detector with data representing at least one predetermined reaction; determining in real time from said comparison if a reaction from a set of predetermined reactions is present in said data received from said detector; and transmitting in real time a further audio, visual and/or digital message or messages through said output means if it is determined that a reaction from said set of predetermined reactions is present in said data received from said detector.
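The sequence of steps above can be sketched as one pass of an attract/interact loop. The helper names and data shapes are assumptions made for illustration; in particular, a "frame" is here pre-reduced to the set of cue labels it contains, standing in for the real-time comparison of raw camera data against stored cue representations.

```python
from typing import Callable, Dict, List, Set

def communicate(frame: Set[str], detector_signal: str,
                stored_cues: Set[str], cue_db: Dict[str, List[str]],
                reaction_set: Set[str], further_message: str,
                output: Callable[[str], None]) -> None:
    """One pass of the claimed method (illustrative sketch)."""
    # Attract: determine which predetermined cues are present, retrieve the
    # linked first-set messages, and transmit them through the output means.
    for cue in sorted(stored_cues & frame):
        for msg in cue_db.get(cue, []):
            output(msg)
    # Interact: if a predetermined reaction is present in the detector data,
    # transmit the further (second-set) message.
    if detector_signal in reaction_set:
        output(further_message)

sent: List[str] = []
communicate(frame={"red_coat"}, detector_signal="touch",
            stored_cues={"red_coat", "child_present"},
            cue_db={"red_coat": ["accessory_ad"]},
            reaction_set={"touch"}, further_message="commercial",
            output=sent.append)
print(sent)  # -> ['accessory_ad', 'commercial']
```

In practice both phases run continuously on the camera and detector streams rather than once per call.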
Said camera may for instance be aimed at one or more from the group consisting of: a shop window, a storefront, a living room, a children's corner, an exhibition room, the interior of a public transportation vehicle, an information screen, a billboard, a store shelf, a book shelf.
The invention also relates to a computer software program arranged to run on a processor to perform the steps of the method for interactively communicating information to a public. The invention furthermore relates to a computer-readable data carrier comprising a computer software program arranged to run on a processor to perform the steps of the method for interactively communicating information to a public. Said data carrier may for instance be one from the group consisting of: a CD-ROM, a floppy disk, a tape, flash memory, system memory, a hard drive. Furthermore the invention relates to a computer comprising a processor and electronic memory connected thereto loaded with a computer software program arranged to perform the steps of the method for interactively communicating information to a public.
The invention is described in more detail below with reference to the drawing, in which: figure 1 is a perspective view of a system in accordance with the invention.
According to figure 1 a system for interactively communicating information to a public comprises a computer 1 with, amongst others, a processor unit, system memory and a hard drive, a video camera 2 with a microphone connected to for instance a USB port of the computer 1, a video beamer 3 connected to a video out port of the computer 1, a projection surface 4 and a speaker 5 connected to an audio out port of the computer 1. A software program is loaded from the hard drive into the system memory of the computer 1 in order to perform the steps of the communication method.
The system enables the surface 4 to actively interact with humans 6 passing by. The system's software comprises an attract component and an interact component.
The function of the attract component is to draw the attention of passers-by 6 by addressing them directly. The input from the camera 2 is interpreted by computer vision algorithms loaded on the computer 1, which analyze the captured environment for visual cues. This technology enables computers "to see", i.e. analyze and interpret visual input from the camera 2. If the camera 2 is recording the environment in front of the surface 4, then the visual input can be analyzed by the computer 1. Consequently, a decision can be made based on the input about what type of content to play. Based on the detected visual cues, a response can be realized via voice, visual and/or audio content, and may consist of a selection from a database of pre-recorded content, and may also be (speech) synthesized in dependence on the detected cues. For instance in a clothing accessories department, if the detected visual cues comprise the colour of a person's coat, the personalized message may be: "Hey you in the <insert colour of coat> coat, we have some nice accessories for you!", and at the same time an image of an umbrella and bag, which is selected from a database based on said detected coat colour, is displayed on the surface 4. Other examples include: a character in a video on the surface 4 smiling or making gestures to a passer-by if said passer-by looks at the character; a video stream on the surface 4 reacting (i.e. stopping, starting, playing in slow motion) to the motion of a passer-by.
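The coat-colour example above amounts to filling a message template with a detected attribute. A minimal sketch, where the template string is taken from the example in the text and the function name is ours:

```python
def personalized_attract_message(coat_colour: str) -> str:
    """Fill the detected coat colour into the example attract message."""
    return (f"Hey you in the {coat_colour} coat, "
            "we have some nice accessories for you!")

print(personalized_attract_message("blue"))
# -> Hey you in the blue coat, we have some nice accessories for you!
```

The same text could also be fed to a speech synthesizer, as the description notes for synthesized responses.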
The interact component of the service can become active after the attention of the person 6 is drawn and he or she becomes engaged with the display surface 4, for example when it is detected from the signal from the video camera 2 that the person 6 approaches the surface 4, or that the person is looking, pointing or waving at the surface 4 or at a certain object on the surface 4. When a reaction to the response is detected, customized content of an advertisement or commercial is displayed on the surface 4 and played through the speaker 5. In this phase there are many possibilities, for example: letting the person 6 play with options on the display surface 4 (in a touch-screen fashion); displaying the person 6 inside the commercial using the advertised product, for instance in the advertised car; showing the person 6 with advertised clothes; letting the person 6 create a customized product, for instance a phone, by drawing loose parts together; letting the person browse an archive/collection; letting the person control the streaming of a video by their body movement. Both the attract and interact components use computer vision technology which interprets input from the camera 2 and/or microphone and determines the customized content.
Some examples of visual cues which can be detected by the system with known prior art technology are: people's motion, number of people in the group, accompanying elements (baby/baby-carriage/child), direction of movement, motion trajectory, speed of movement (detecting if a person is in a hurry), type of motion (walking/running/bending over/falling), gait type and characteristics, appearance (color of clothes, caps/hoods, type of clothes: coat/short-sleeves/suit, style: casual/formal/elegant, sport team jerseys, eyeglasses, skin color, hair color, hair type: curly/long/short), gender, people's approximate age, direction of gaze, duration of directed gaze, face analysis (smiling/laughing/crying) and emotion (joy/fear/disgust/irritation), gestures (waving/pointing/extended arms), use of portable devices (talking on the phone/listening to music), and car or motorbike motion and appearance (speed, trajectory, type and brand, color, license plate details, bicycle type).
Detection of people in a video signal is described in:
- N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), 2005.
- M. Enzweiler and D. M. Gavrila, "Monocular Pedestrian Detection: Survey and Experiments", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179-2195, 2009.
Human motion analysis is described in:
- D. M. Gavrila, "The Visual Analysis of Human Movement: A Survey", Computer Vision and Image Understanding, vol. 73, no. 1, pp. 82-98, 1999.
Gesture analysis and recognition are described in:
- W. Hu, T. Tan, L. Wang and S. Maybank, "A Survey on Visual Surveillance of Object Motion and Behaviors", IEEE Transactions on Systems, Man and Cybernetics, vol. 34, no. 3, 2004.
- S. Mitra and T. Acharya, "Gesture Recognition: A Survey", IEEE Transactions on Systems, Man and Cybernetics, vol. 37, no. 3, 2007.
Detection of types of motion is described in:
- E. Pogalin, A.W.M. Smeulders and A.H.C. Thean, "Visual quasi-periodicity", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008.
- S. Ali and M. Shah, "Human Action Recognition in Videos Using Kinematic Features and Multiple Instance Learning", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 288-303, 2010.
Detection of appearance cues such as hair or clothes color is described in:
- D. Comaniciu and P. Meer, "Robust analysis of feature spaces: Color image segmentation", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 1997), 1997.
- Y. Deng and B.S. Manjunath, "Unsupervised Segmentation of Color-Texture Regions in Images and Video", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800-810, 2001.
- R. Valenti, N. Sebe and T. Gevers, "Image Saliency by Isocentric Curvedness and Color", IEEE International Conference on Computer Vision, 2009.
Detection and recognition of different object categories is described in:
- R. Fergus, P. Perona and A. Zisserman, "Object class recognition by unsupervised scale-invariant learning", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2003), 2003.
- G. Csurka, C. Dance, L. Fan, J. Willamowski and C. Bray, "Visual categorization with bags of keypoints", European Conference on Computer Vision (ECCV 2004), 2004.
- J. Winn, A. Criminisi and T. Minka, "Object categorization by learned universal visual dictionary", IEEE International Conference on Computer Vision (ICCV 2005), 2005.
Object tracking is described in:
- D. Comaniciu, V. Ramesh and P. Meer, "Kernel-based object tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, 2003.
- M. Isard and A. Blake, "Condensation - conditional density propagation for visual tracking", International Journal of Computer Vision, vol. 29, no. 1, pp. 5-28, 1998.
Face analysis, gaze detection and tracking, and emotion analysis are described in:
- T.F. Cootes, G.J. Edwards and C.J. Taylor, "Active Appearance Models", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, 2001.
- R. Valenti, N. Sebe, T. Gevers and I. Cohen, "Machine Learning Techniques for Face Analysis", Springer, pp. 159-188, 2008.
- R. Valenti and T. Gevers, "Accurate Eye Center Location and Tracking Using Isophote Curvature", IEEE Conference on Computer Vision and Pattern Recognition, 2008.
- R. Valenti, Z. Yucel and T. Gevers, "Robustifying Eye Center Localization by Head Pose Cues", IEEE Conference on Computer Vision and Pattern Recognition, 2009.
- L.P. Morency, J. Whitehill and J. Movellan, "Monocular Head Pose Estimation using Generalized Adaptive View-based Appearance Model", Image and Vision Computing, Elsevier, 2009.
These publications are incorporated herein by reference.
Some examples of audio cues, which can be combined with the above visual cues, are: laughing, talking to friends in the group, talking on the phone, calling somebody farther away, talking to a baby/child, singing, playing music from a portable device, talking to the system or the surface 4.
Some examples of situations where the system can be used are: attracting attention of passers-by to a shopping window, inviting customers to a storefront, inviting visitors at company trade shows/symposiums/fairs/automobile salons, personalized service (i.e. based on recognition/needs) of robot assistants, elderly care (measuring attention time-span, action/motion recognition), child care/children's corners (of shopping malls/public institutions/physician's practices), serving personalized content (news/daily schedule details/calls information), personalized ambient intelligence (household/electronics devices/regulation systems), personalized content (guide) in museums/cultural monuments/historical sites, assistance in browsing of a library/video club/archive/collection, serving sports information (e.g. up-to-date match information/results) to fans, serving appropriate traffic information to drivers (congestion routes/accidents/expected arrival times), and serving information about opening hours/waiting times at public institutions.
It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the particular embodiments disclosed, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the appended claims.
10

Claims (15)

1. Systeem voor het interactief communiceren van informatie naar een publiek, omvattende: 5 een processor; een videocamera die is verbonden met de processor voor het vastleggen van foto- of videogegevens; uitvoermiddelen die zijn verbonden met de processor voor het verzenden van audio-, visuele en/of digitale 10 berichten; een detector die is verbonden met de processor voor het vastleggen van reactiegegevens van een of meer personen in het publiek, zoals een knop, een aanraakscherm, een nabijheidssensor, een microfoon, of de videocamera; 15 elektronisch geheugen dat verbonden is met de processor en die gegevens bevat die een verzameling voorafbepaalde visuele signalen vertegenwoordigt; elektronisch geheugen dat verbonden is met de processor en dat gegevens bevat die een eerste verzameling met audio- 20 of visuele berichten vertegenwoordigt; elektronisch geheugen dat verbonden is met de processor en dat gegevens bevat die een tweede verzameling met een of meer audio-, visuele en/of digitale berichten vertegenwoordigen; 25 elektronisch geheugen dat verbonden is met de processor en dat een gegevensbank bevat waarin vertegenwoordigingen van elk van de visuele signalen zijn verbonden met een of meer van de eerste verzameling met audio- en/of visuele berichten; 30 elektronisch geheugen dat verbonden is met de processor en dat gegevens bevat die ten minste een voorafbepaalde reactie van een of meer personen in het publiek vertegenwoordigen, zoals het aanraken van de knop, het aanraken van het aanraakscherm, het maken van een voorafbepaalde beweging, of het maken van een voorafbepaald geluid; waarbij de processor is ingericht om onvertraagd de 5 gegevens die worden ontvangen van de videocamera te vergelijken met de gegevens die de visuele signalen vertegenwoordigen; waarbij de processor is ingericht om onvertraagd uit de vergelijking te bepalen of en welke visueel signaal of 10 visuele signalen uit de verzameling met voorafbepaalde visuele signalen aanwezig is in de 
gegeven die worden ontvangen van de videocamera; waarbij de processor is ingericht om onvertraagd uit de gegevensbank te onttrekken welke audio- en/of visueel 15 bericht of berichten uit de eerste verzameling zijn verbonden met het visuele signaal of de visuele signalen; waarbij de processor is ingericht om onvertraagd de onttrokken audio- en/of visueel bericht of berichten uit de eerste set te verzenden door middel van de uitvoermiddelen; 20 waarbij de processor is ingericht om onvertraagd de gegevens die worden ontvangen van de detector te vergelijken met de gegevens die de ten minste ene voorafbepaalde reactie vertegenwoordigen; waarbij de processor is ingericht om onvertraagd uit de 25 vergelijking te bepalen of een reactie uit de verzameling voorafbepaalde reacties aanwezig is in de gegevens van de detector; en waarbij de processor is ingericht om onvertraagd een verder audio-, visueel of digitaal bericht of berichten uit 30 de tweede verzameling te verzenden door middel van de uitvoermiddelen indien bepaald is dat een reactie uit de verzameling voorafbepaalde reacties aanwezig is in de gegevens van de detector.A system for interactively communicating information to an audience, comprising: a processor; a video camera connected to the processor for recording photo or video data; output means connected to the processor for sending audio, visual and / or digital messages; a detector connected to the processor for recording reaction data of one or more persons in the audience, such as a button, a touch screen, a proximity sensor, a microphone, or the video camera; Electronic memory connected to the processor and containing data representing a set of predetermined visual signals; electronic memory connected to the processor and containing data representing a first set of audio or visual messages; electronic memory connected to the processor and containing data representing a second collection with one or more audio, visual and / or digital messages; Electronic 
memory connected to the processor and containing a database in which representations of each of the visual signals are connected to one or more of the first set of audio and / or visual messages; Electronic memory connected to the processor and containing data representing at least a predetermined response from one or more persons in the audience, such as touching the button, touching the touch screen, making a predetermined movement, or making a predetermined sound; wherein the processor is arranged to compare in real-time the data received from the video camera with the data representing the visual signals; wherein the processor is arranged to determine in real time from the comparison whether and which visual signal or visual signals from the set of predetermined visual signals is present in the data received from the video camera; wherein the processor is arranged to extract from the database in real time which audio and / or visual message or messages from the first set are connected to the visual signal or the visual signals; wherein the processor is adapted to send the extracted audio and / or visual message or messages from the first set in real time by means of the output means; The processor being arranged to compare in real-time the data received from the detector with the data representing the at least one predetermined response; wherein the processor is arranged to determine in real time from the comparison whether a response from the set of predetermined responses is present in the data from the detector; and wherein the processor is arranged to send a further audio, visual or digital message or messages from the second set in real time by means of the output means if it is determined that a reaction from the set of predetermined reactions is present in the data of the detector . 2. 
Systeem volgens conclusie 1, voorts omvattende elektronisch geheugen dat verbonden is met de processor en dat een tweede gegevensbank omvat waarin 5 vertegenwoordigingen van elk van de reacties zijn verbonden met een of meer van de tweede verzameling met audio-, visuele en/of digitale berichten; waarbij de processor is ingericht om onvertraagd uit de vergelijking te bepalen welke reactie uit de verzameling 10 voorafbepaalde reacties aanwezig is in de gegevens die worden ontvangen van de detector; waarbij de processor is ingericht om onvertraagd uit de tweede gegevensbank te onttrekken welke audio-, visueel en/of digitaal bericht of berichten zijn verbonden met de 15 reactie; en waarbij de processor is ingericht om onvertraagd het onttrokken audio-, visueel en/of digitaal bericht of berichten uit de tweede verzameling te verzenden door middel van de uitvoermiddelen. 20The system of claim 1, further comprising electronic memory connected to the processor and comprising a second database in which 5 representations of each of the responses are connected to one or more of the second audio, visual and / or digital collection messages; wherein the processor is arranged to determine from the comparison in real time which response from the set of predetermined responses is present in the data received from the detector; wherein the processor is arranged to extract from the second database in real time which audio, visual and / or digital message or messages are connected to the response; and wherein the processor is arranged to send the extracted audio, visual and / or digital message or messages from the second collection in real time by means of the output means. 20 3. 
3. System as claimed in claim 1 or 2, further comprising electronic memory connected to the processor and comprising a second or third database in which representations of each of the visual signals are connected to one or more of the second set of audio, visual and/or digital messages; wherein the processor is arranged to compare in real time the data received from the video camera with the data representing the visual signals after it has been determined that a response from the set of predetermined responses is present in the data received from the detector; wherein the processor is arranged to determine in real time from the comparison whether and which visual signal or signals from the second set of predetermined visual signals is present in the data received from the video camera; wherein the processor is arranged to extract in real time from the second or third database which audio, visual and/or digital message or messages from the second set are connected to the visual signal or signals; and wherein the processor is arranged to send the audio, visual and/or digital message or messages from the second set in real time by means of the output means.
4. System as claimed in claim 1, 2 or 3, wherein the set of predetermined visual signals comprises one or more from the group consisting of: the presence of people, the location of a person, the gender of a person, an approximation of a person's age, movement of people, the number of people, the presence of a pram, the presence of a child, the presence of a pet, the direction of movement of a person, a person following a predetermined route, the speed of a person, the posture of a person, the colour of a person's clothing, the type of a person's clothing, the presence of a cap on a person's head, the presence of glasses on a person, the skin colour of a person, the hair colour of a person, the presence of curls in a person's hair, the viewing direction of a person, the duration of a person's focused gaze, the presence of predetermined emotions in a person's face, the presence of predetermined gestures, the presence of electronic devices such as telephones or music players, the presence of a car or motorcycle, the speed of a car or motorcycle, a car or motorcycle following a predetermined route, the type of car or motorcycle, the brand of a car or motorcycle, the colour of a car or motorcycle, the registration number.
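The visual-signal catalogue of claim 4 pairs naturally with the databases of claims 1-3: each signal can be stored as an attribute/value pair linked to one or more messages. The sketch below is a hedged illustration only; the attribute names, values, and messages are invented for the example and are not taken from the patent.

```python
# Illustrative storage of claim-4 visual signals: each predetermined
# signal is an (attribute, value) key mapped to its connected
# messages, as the claim-1 database requires. All entries are made up.

VISUAL_SIGNAL_DB = {
    ("presence", "pram"): ["Baby products are on aisle 3."],
    ("clothing_colour", "red"): ["Red jackets are 20% off today."],
    ("vehicle_brand", "example-brand"): ["Spare parts available inside."],
}

def messages_for(observed_signals):
    """Collect every message connected to the observed visual signals;
    signals without a database entry simply contribute nothing."""
    found = []
    for signal in observed_signals:
        found.extend(VISUAL_SIGNAL_DB.get(signal, []))
    return found
```

This keeps the matching step (claim 1) and the signal vocabulary (claim 4) decoupled: extending the set of predetermined signals only means adding rows to the database.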
5. System as claimed in any of the preceding claims, further comprising a microphone connected to the processor for capturing audio data; electronic memory connected to the processor and comprising data representing a set of predetermined audio signals; electronic memory connected to the processor and comprising an audio-signal database in which representations of each of the audio signals are connected to one or more of the first and/or second set of audio, visual and/or digital messages; wherein the processor is arranged to compare in real time the data received from the microphone with the data representing the audio signals; wherein the processor is arranged to determine in real time from the comparison whether and which audio signal or signals from the set of predetermined audio signals are present in the data received from the microphone; wherein the processor is arranged to extract from the audio-signal database in real time which audio, visual and/or digital message or messages from the first set are connected to the audio signal or signals; and wherein the processor is arranged to send the extracted audio, visual and/or digital message or messages from the first or second set in real time by means of the output means.

6. System as claimed in claim 5, wherein the set of predetermined audio signals comprises one or more from the group consisting of: the presence of people; the presence of laughing people; the presence of talking people; the presence of people shouting; the presence of people talking to a baby or a child; the presence of people singing; the presence of music from small ear speakers; the location of a person; a person following a predetermined route.

7. System as claimed in any of the preceding claims, wherein the output means comprise one or more from the group consisting of: a video projector, a laser-light display, a CRT display, a flat-panel display, a loudspeaker, a wireless data transmitter.

8. System as claimed in any of the preceding claims, wherein the further audio, video or digital messages comprise one or more from the group consisting of: advertising messages, news, sports, product information, object information, location information, library information, collection information, digital archive information.
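The audio branch of claims 5 and 6 mirrors the visual branch: microphone data is compared with representations of predetermined audio signals, and a match selects messages. The sketch below illustrates only that structure; the feature labels and messages are invented, and the "comparison" is a deliberately trivial stand-in for real audio classification.

```python
# Illustrative audio branch of claims 5-6. A real system would run an
# audio classifier over the microphone stream; here detected feature
# labels are simply intersected with the predetermined set.

AUDIO_SIGNAL_DB = {
    "laughter": "Glad you are enjoying yourself - see today's offers!",
    "singing": "Concert tickets are on sale at the information desk.",
}

def match_audio_signals(audio_features, predetermined):
    """Stand-in comparison of microphone data with the stored
    representations of predetermined audio signals."""
    return [s for s in predetermined if s in audio_features]

def audio_messages(audio_features):
    """Extract the messages connected to every matched audio signal."""
    return [AUDIO_SIGNAL_DB[s]
            for s in match_audio_signals(audio_features, AUDIO_SIGNAL_DB)]
```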
9. System as claimed in any of the preceding claims, wherein the predetermined movement response comprises the one or more persons in the audience moving towards the display surface on which the visual message or messages are shown, or the one or more persons in the audience moving towards an object to which the further audio and/or visual message or messages relate.

10. Method for interactively communicating information to an audience, wherein a processor performs the following steps: comparing in real time data received from a video camera with data representing visual signals; determining in real time from the comparison whether and which visual signal or signals from the set of predetermined visual signals are present in the data received from the video camera; extracting in real time from a database which audio and/or visual message or messages are connected to the visual signal or signals; sending in real time the extracted audio and/or visual message or messages through the loudspeaker and/or the display means; comparing in real time data received from a detector with data representing at least one predetermined response; determining in real time from the comparison whether a response from a set of predetermined responses is present in the data received from the detector; and sending in real time a further audio, visual and/or digital message or messages through the output means if it is determined that a response from the set of predetermined responses is present in the data received from the detector.

11. Method according to claim 10, wherein the camera is aimed at one or more from the group consisting of: a shop window, a shop front, a living room, a children's corner, an exhibition space, the interior of a public transport vehicle, an information display, an advertising poster, a store shelf, a bookshelf.
12. Method according to claim 10 or 11, wherein the data received from the detector is extracted from one or more of the group consisting of: the received signal from a button, the received signal from a touch screen, the received signal from a proximity sensor, the received signal from a depth sensor, the received signal from an RFID reader, the received signal from a Bluetooth receiver, the received signal from a microphone, or the received signal from the video camera.

13. Computer software program adapted to run on a processor in order to perform the steps of the method according to claim 10, 11 or 12.

14. Computer-readable carrier comprising a computer software program adapted to run on a processor in order to perform the steps of the method according to claim 10, 11 or 12.

15. Computer comprising a processor and electronic memory connected thereto, loaded with a computer software program adapted to perform the steps of the method according to claim 10, 11 or 12.
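The method claims can be summarized in one step function, combined with the claim-12 idea that the "detector" may be any of several sources (button, touch screen, proximity or depth sensor, RFID or Bluetooth reader, microphone, or the camera itself), all normalized to a common stream of response labels. Everything below is an illustrative assumption, not the patented implementation.

```python
# Illustrative single step of the claim-10 method. Detector inputs
# from any claim-12 source are assumed pre-normalized into response
# labels; constants and messages are invented for the example.

VISUAL_DB = {"person_looking": "Hello! Wave to see our catalogue."}
RESPONSE_SET = {"wave"}
FURTHER_MESSAGE = "Here is the catalogue."

def method_step(camera_data, detector_data):
    """Compare, determine, extract and send for one frame, then check
    the detector stream for a predetermined response."""
    sent = [VISUAL_DB[s] for s in camera_data if s in VISUAL_DB]
    if RESPONSE_SET & set(detector_data):    # response present?
        sent.append(FURTHER_MESSAGE)         # send further message
    return sent
```

Normalizing every detector source to the same label stream is what lets claims 10-12 share one processing loop regardless of which sensor produced the response.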
NL2004709A 2010-05-12 2010-05-12 System and method for communicating information to a public. NL2004709C2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2004709A NL2004709C2 (en) 2010-05-12 2010-05-12 System and method for communicating information to a public.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL2004709 2010-05-12
NL2004709A NL2004709C2 (en) 2010-05-12 2010-05-12 System and method for communicating information to a public.

Publications (1)

Publication Number Publication Date
NL2004709C2 true NL2004709C2 (en) 2011-11-15

Family

ID=42335172

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2004709A NL2004709C2 (en) 2010-05-12 2010-05-12 System and method for communicating information to a public.

Country Status (1)

Country Link
NL (1) NL2004709C2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110393922A (en) * 2019-08-26 2019-11-01 徐州华邦益智工艺品有限公司 A kind of acoustic control down toy dog with projection function

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001045004A1 (en) * 1999-12-17 2001-06-21 Promo Vu Interactive promotional information communicating system
US20030088832A1 (en) * 2001-11-02 2003-05-08 Eastman Kodak Company Method and apparatus for automatic selection and presentation of information
US20040044564A1 (en) * 2002-08-27 2004-03-04 Dietz Paul H. Real-time retail display system
WO2007125285A1 (en) * 2006-04-21 2007-11-08 David Cumming System and method for targeting information
US20080059994A1 (en) * 2006-06-02 2008-03-06 Thornton Jay E Method for Measuring and Selecting Advertisements Based Preferences
US20090097712A1 (en) * 2007-08-06 2009-04-16 Harris Scott C Intelligent display screen which interactively selects content to be displayed based on surroundings


Similar Documents

Publication Publication Date Title
US20230230053A1 (en) Information processing apparatus, control method, and storage medium
US8154615B2 (en) Method and apparatus for image display control according to viewer factors and responses
US9922271B2 (en) Object detection and classification
US8723796B2 (en) Multi-user interactive display system
US8810513B2 (en) Method for controlling interactive display system
US9349131B2 (en) Interactive digital advertising system
JP7224488B2 (en) Interactive method, apparatus, device and storage medium
US20180181995A1 (en) Systems and methods for dynamic digital signage based on measured customer behaviors through video analytics
TWI779343B (en) Method of a state recognition, apparatus thereof, electronic device and computer readable storage medium
TWI492150B (en) Method and apparatus for playing multimedia information
CN107206601A (en) Customer service robot and related systems and methods
CN103760968A (en) Method and device for selecting display contents of digital signage
US9589296B1 (en) Managing information for items referenced in media content
Liu et al. Customer behavior classification using surveillance camera for marketing
CN110716641B (en) Interaction method, device, equipment and storage medium
US11528512B2 (en) Adjacent content classification and targeting
Hasanuzzaman et al. Monitoring activity of taking medicine by incorporating RFID and video analysis
CN113126629A (en) Method for robot to actively search target and intelligent robot
US12008808B1 (en) Location tracking system using a plurality of cameras
JP2017156514A (en) Electronic signboard system
US10937065B1 (en) Optimizing primary content selection for insertion of supplemental content based on predictive analytics
CN109947239A (en) A kind of air imaging system and its implementation
NL2004709C2 (en) System and method for communicating information to a public.
El-Yacoubi et al. Vision-based recognition of activities by a humanoid robot
CN113724454A (en) Interaction method of mobile equipment, device and storage medium

Legal Events

Date Code Title Description
V1 Lapsed because of non-payment of the annual fee

Effective date: 20131201