WO2014172777A1 - System and method for personal identification of individuals in images - Google Patents


Info

Publication number
WO2014172777A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
venue
attendee
user
Application number
PCT/CA2014/000366
Other languages
French (fr)
Inventor
Benoit FREDETTE
Original Assignee
Fans Entertainment Inc.
Priority to US Provisional Application 61/814,489 (US201361814489P)
Application filed by Fans Entertainment Inc.
Publication of WO2014172777A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: Arrangements for user-to-user messaging in packet-switching networks, e.g. e-mail or instant messages
    • H04L 51/32: Messaging within social networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06K: RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00: Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00624: Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K 9/00664: Recognising scenes such as could be captured by a camera operated by a pedestrian or robot, including objects at substantially different ranges from the camera
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06Q: DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/01: Social networking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06Q: DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce, e.g. shopping or e-commerce
    • G06Q 30/02: Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination

Abstract

There is provided a system and method for identifying in an image at least one attendee of an event occurring at a venue. Location data identifying a selected one of one or more physical spaces of the venue uniquely assigned to the at least one attendee is retrieved. An image taken during the event is received, the image capturing the at least one attendee occupying the selected physical space. A position of the selected physical space in the image is determined. Identifier data is indicated in the image at the determined position of the selected physical space, thereby generating a modified image, the identifier data uniquely identifying the at least one attendee. The modified image is then output.

Description

SYSTEM AND METHOD FOR PERSONAL IDENTIFICATION OF INDIVIDUALS
IN IMAGES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This patent application claims priority of US provisional Application Serial No. 61/814,489, filed on April 22, 2013.
TECHNICAL FIELD
[0002] The present invention relates to the field of identification of individuals in images.
BACKGROUND OF THE ART
[0003] Artists increasingly use social networks to reach out to and interact with fans. One of the recent ways in which this is done is by taking pictures of the crowd from the stage during an event, such as a concert, and posting the pictures online. The pictures may then be accessed by individuals interested in the artist's activities and/or having attended the event. However, such pictures typically fail to identify the individuals captured in the images and, due to the size of the crowd, it may be difficult for attendees to identify themselves in the pictures when accessing the latter.
[0004] There is therefore a need for an improved system and method for identification of individuals in images.
SUMMARY
[0005] In accordance with a first broad aspect, there is provided a system for identifying in an image at least one attendee of an event occurring at a venue, the system comprising a memory having stored therein identifier data uniquely identifying the at least one attendee; a processor; and at least one application stored in the memory and executable by the processor for receiving location data identifying a selected one of one or more physical spaces of the venue uniquely assigned to the at least one attendee, receiving an image taken during the event, the image capturing the at least one attendee occupying the selected physical space, determining a position of the selected physical space in the image, indicating the identifier data in the image at the determined position of the selected physical space, thereby generating a modified image, and outputting the modified image.
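The steps recited in this first aspect can be illustrated with a minimal sketch. All names and data shapes below (Attendee, ModifiedImage, the seat_positions mapping from location data to pixel coordinates) are assumptions introduced for illustration only and are not taken from the application:

```python
from dataclasses import dataclass, field

@dataclass
class Attendee:
    identifier: str   # identifier data uniquely identifying the attendee
    seat: str         # location data for the physical space uniquely assigned

@dataclass
class ModifiedImage:
    image_id: str
    labels: list = field(default_factory=list)   # (x, y, identifier) overlays

def identify_in_image(image_id, attendee, seat_positions):
    """Sketch of the claimed pipeline: look up the position of the
    attendee's assigned physical space in the image and, if that space
    was captured, indicate the identifier data at that position."""
    position = seat_positions.get(attendee.seat)
    modified = ModifiedImage(image_id)
    if position is not None:   # the selected physical space is in frame
        modified.labels.append((position[0], position[1], attendee.identifier))
    return modified
```

In a real deployment the seat-to-pixel mapping would come from the localization steps described in the embodiments below; here it is simply passed in as a dictionary.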
[0006] In some embodiments, the at least one application is executable by the processor for identifying an area of the venue captured in the image, the area of the venue comprising a plurality of physical spaces including the selected physical space, and correlating the location data with an identification of the plurality of physical spaces captured in the image for determining the position of the physical space in the image.
[0007] In some embodiments, the memory has stored therein at least one template image corresponding to a view of the venue taken from a predetermined viewpoint and the at least one application is executable by the processor for comparing the image to the at least one template image for determining the area of the venue captured in the image.
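Template comparison of this kind could be sketched as a brute-force sum-of-absolute-differences search; a production system would more likely use a library matcher such as OpenCV's matchTemplate, and the list-of-lists grayscale representation here is purely illustrative:

```python
def match_template(image, template):
    """Return the (row, col) offset at which `template` best matches
    inside `image`, both given as grayscale pixel grids (lists of lists).

    Minimal sum-of-absolute-differences (SAD) search: lower SAD means a
    closer match, so the offset with the smallest SAD is returned."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(
                abs(image[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

Comparing the event image against template images taken from predetermined viewpoints in this fashion yields the region of the venue most likely captured in the image.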
[0008] In some embodiments, the memory has stored therein signal data transmitted by an image capturing device having taken the image, the signal data indicative of at least one of a location of the image capturing device at the venue and a direction the image capturing device is pointing towards, and the at least one application is executable by the processor for identifying the area of the venue captured in the image from the signal data.
[0009] In some embodiments, the at least one application is executable by the processor for receiving the image having associated therewith user-provided localization data, the localization data indicative of the area of the venue captured in the image, and for identifying the area of the venue captured in the image from the localization data.
[0010] In some embodiments, the memory has stored therein communication data exchanged between one or more first communication devices each provided at a given location of the venue and a second communication device provided adjacent an image capturing device having taken the image, the communication data comprising an indication of the given location as transmitted by each of the one or more first communication devices to the second communication device, and the at least one application is executable by the processor for identifying the area of the venue captured in the image from the communication data.
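One possible reading of this embodiment is that the second communication device, adjacent to the camera, collects the location indications broadcast by nearby first communication devices, and the most frequently heard location is taken as the imaged area. The message shape and the majority-vote rule below are assumptions, not prescribed by the application:

```python
from collections import Counter

def area_from_beacons(heard_locations):
    """Infer the venue area captured in the image from the location
    indications received by the device adjacent to the camera.

    heard_locations: list of area labels (assumed message payload)
    transmitted by the first communication devices within range."""
    if not heard_locations:
        return None
    # Majority vote over received location reports
    return Counter(heard_locations).most_common(1)[0][0]
```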
[0011] In some embodiments, the memory has stored therein the communication data transmitted by a selected one of the one or more first communication devices to the second communication device, the selected first communication device provided adjacent the physical space and having encoded therein the identifier data and the communication data comprising the encoded identifier data, and the at least one application is executable by the processor for indicating in the image the identifier data transmitted by the selected first communication device.
[0012] In some embodiments, the at least one application is executable by the processor for indicating in the image the identifier data comprising overlaying at least one of a username and a photo of the at least one attendee on the image at the determined position of the selected physical space.
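Overlaying a username or photo at the determined position requires keeping the overlay inside the image frame. A minimal placement helper, with assumed pixel coordinates and an assumed overlay box size, might look like:

```python
def label_box(position, image_size, box_size=(80, 20)):
    """Compute the top-left corner for a username/photo overlay so the
    box is centred on the seat position but clamped to the image bounds.

    position:   (x, y) pixel position of the selected physical space
    image_size: (width, height) of the image in pixels
    box_size:   (width, height) of the overlay box (assumed default)"""
    (x, y), (w, h) = position, image_size
    bw, bh = box_size
    left = min(max(x - bw // 2, 0), w - bw)   # clamp horizontally
    top = min(max(y - bh // 2, 0), h - bh)    # clamp vertically
    return left, top
```

The actual drawing (text or photo compositing) would then be done with an imaging library such as Pillow; only the geometry is sketched here.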
[0013] In some embodiments, the at least one application is executable by the processor for receiving the location data further to one of the location data being manually entered and at least a portion of a ticket controlling access of the at least one attendee to the venue being scanned.
[0014] In some embodiments, the at least one application is executable by the processor for outputting the modified image for presentation on a profile page associated with a social networking service, access to the modified image restricted to at least one user having endorsed the profile page.
[0015] In some embodiments, the at least one application is executable by the processor for sending to the at least one user having endorsed the profile page a notification indicating that the modified image comprising an identification of the at least one attendee is available on the profile page.
[0016] In some embodiments, the memory has stored therein preference data for the at least one attendee, the preference data indicating whether identification of the at least one attendee in the image is desired, and the at least one application is executable by the processor for indicating the identifier data in the image if the preference data indicates that the identification is desired.
[0017] In some embodiments, the at least one application is executable by the processor for receiving timestamp data associated with the image, receiving event data indicative of a timing of the live event, and correlating the timestamp data with the event data for identifying a moment during the event at which the image was taken, and attaching an identification of the moment to the modified image.
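Correlating the image timestamp with the event timing can be sketched as a search over a sorted event timeline; the (start_time, label) pair shape is an assumed encoding of the "event data indicative of a timing of the live event":

```python
import bisect

def moment_at(timestamp, event_timeline):
    """Return the label of the event moment in effect at `timestamp`.

    event_timeline: list of (start_time, label) pairs sorted by
    start_time, e.g. seconds from the start of the event (an assumed
    shape). Returns None for timestamps before the event began."""
    starts = [t for t, _ in event_timeline]
    i = bisect.bisect_right(starts, timestamp) - 1
    return event_timeline[i][1] if i >= 0 else None
```

The returned label (e.g. the song being performed when the picture was taken) can then be attached to the modified image as the identification of the moment.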
[0018] In accordance with a second broad aspect, there is provided a computer-implemented method for identifying in an image at least one attendee of an event occurring at a venue, the method comprising receiving location data identifying a selected one of one or more physical spaces of the venue uniquely assigned to the at least one attendee; receiving an image taken during the event, the image capturing the at least one attendee occupying the selected physical space; determining a position of the selected physical space in the image; indicating identifier data in the image at the determined position of the selected physical space, thereby generating a modified image, the identifier data uniquely identifying the at least one attendee; and outputting the modified image.
[0019] In some embodiments, the method further comprises identifying an area of the venue captured in the image, the area of the venue comprising a plurality of physical spaces including the selected physical space, and correlating the location data with an identification of the plurality of physical spaces captured in the image for determining the position of the physical space in the image.
[0020] In some embodiments, the method further comprises comparing the image to at least one template image corresponding to a view of the venue taken from a predetermined viewpoint for determining the area of the venue captured in the image.
[0021] In some embodiments, the area of the venue captured in the image is identified from signal data transmitted by an image capturing device having taken the image, the signal data indicative of at least one of a location of the image capturing device at the venue and a direction the image capturing device is pointing towards.
[0022] In some embodiments, the area of the venue captured in the image is identified from user-provided localization data associated with the received image, the localization data indicative of the area of the venue captured in the image.
[0023] In some embodiments, the area of the venue captured in the image is identified from communication data exchanged between one or more first communication devices each provided at a given location of the venue and a second communication device provided adjacent an image capturing device having taken the image, the communication data comprising an indication of the given location as transmitted by each of the one or more first communication devices to the second communication device.
[0024] In some embodiments, indicating in the image the identifier data comprises indicating identifier data transmitted by a selected one of the one or more first communication devices to the second communication device, the selected first communication device provided adjacent the physical space and having the identifier data encoded therein.

[0025] In some embodiments, the identifier data indicated in the image comprises at least one of a username and a photo of the at least one attendee overlaid on the image at the determined position of the selected physical space.
[0026] In some embodiments, the location data is received further to one of the location data being manually entered and at least a portion of a ticket controlling access of the at least one attendee to the venue being scanned.
[0027] In some embodiments, the modified image is output for presentation on a profile page associated with a social networking service, access to the modified image restricted to at least one user having endorsed the profile page.
[0028] In some embodiments, the method further comprises sending to the at least one user having endorsed the profile page a notification indicating that the modified image comprising an identification of the at least one attendee is available on the profile page.
[0029] In some embodiments, the identifier data is indicated in the image if preference data for the at least one attendee is indicative that identification of the at least one attendee in the image is desired.
[0030] In some embodiments, the method further comprises receiving timestamp data associated with the image, receiving event data indicative of a timing of the event, correlating the timestamp data with the event data for identifying a moment during the event at which the image was taken, and attaching an identification of the moment to the modified image.
[0031] In accordance with a third broad aspect, there is provided a computer readable medium having stored thereon program code executable by a processor for identifying in an image at least one attendee of an event occurring at a venue, the program code executable for receiving location data identifying a selected one of one or more physical spaces of the venue uniquely assigned to the at least one attendee; receiving an image taken during the event, the image capturing the at least one attendee occupying the selected physical space; determining a position of the selected physical space in the image; indicating identifier data in the image at the determined position of the selected physical space, thereby generating a modified image, the identifier data uniquely identifying the at least one attendee; and outputting the modified image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] Further features and advantages of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
[0033] Figure 1 is a schematic diagram of a system for personal identification of individuals in an image, in accordance with an illustrative embodiment of the present invention;
[0034] Figure 2a is a schematic diagram of an application running on the processor of Figure 1;
[0035] Figure 2b is a schematic diagram of the localization module of Figure 2a;
[0036] Figure 2c is a schematic diagram of the imaged seating area identification module of Figure 2b;
[0037] Figure 3a is a flowchart of a method for personal identification of individuals in an image, in accordance with an illustrative embodiment of the present invention;
[0038] Figure 3b is a flowchart of the step of Figure 3a of identifying user(s) in an image;
[0039] Figure 3c is a flowchart of the step of Figure 3b of determining the position of a user's seat in the image;

[0040] Figure 3d is a flowchart of the step of Figure 3c of identifying a seating area captured in the image;
[0041] Figure 3e is a flowchart of the step of Figure 3d of identifying a seating area captured in the image when the image was not taken from a pre-determined viewpoint;
[0042] Figure 4 is a screen capture of a user interface for logging into the entertainment experience management system of Figure 1;
[0043] Figure 5 is a screen capture of a user interface for registering with the entertainment experience management system of Figure 1;
[0044] Figure 6 is a screen capture of a user interface for viewing entertainment experience management functionalities and services, in accordance with an illustrative embodiment; and
[0045] Figure 7 is a screen capture of a user interface for viewing an image providing personal identification of individuals, in accordance with an illustrative embodiment.
[0046] It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
DETAILED DESCRIPTION
[0047] Referring to Figure 1, a system 100 for personal identification of individuals in an image (not shown) will now be described. The image (not shown) illustratively captures individuals, e.g. spectators, while the latter are attending an event, such as a concert, a sporting event (e.g. sports game), or the like, at a venue (not shown). In some embodiments, the event is a live event but it should be understood that events other than live events (e.g. presentation of pre-recorded events) may occur at the venue. The venue is illustratively a facility, such as a stadium/arena, a theater, a concert hall, or the like, where physical spaces, such as seats, rows, areas or sections of the venue, are uniquely assigned to attendees. It should be understood that, for seat-less venues (e.g. outdoor festival shows) or the like where general admission or open seating is used, the physical spaces assigned to attendees may comprise sections of the venue, such as balcony or floor, rather than specific seats. Thus, although the description below and the drawings refer to seats and to seat location data being associated with the attendee, physical spaces other than seats (e.g. rows, sections, or the like), and accordingly location data therefor, may also apply.
[0048] It should also be understood that although the description below refers to an entertainment venue, e.g. a stadium/arena, other venues, such as convention centers, hospitality venues, hotels, resorts, transportation facilities (e.g. cruise ships, planes, trains, etc.), or the like, may apply. Access to the venue is illustratively controlled by means of a ticket, which may further be used to assign a specific seat to a holder thereof. As such, an attendee wishing to gain access to the venue in order to attend the event thereat purchases or otherwise obtains a ticket. The ticket may be a paper or electronic ticket and typically includes seat location data or indicia (e.g. row number and seat number) uniquely defining the physical location of the attendee's space (e.g. seat) at the venue. The attendee is then typically expected to remain at his/her assigned seat during most of the event. As discussed above, in some venues, no seats will be assigned to attendees and the seat location indicia provided on the ticket then indicates a section of the venue, rather than the specific physical location of a seat.
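Extracting the seat location indicia from a scanned ticket can be sketched with a simple pattern match. The indicia format below ("SEC 102 ROW F SEAT 12") is hypothetical; real tickets vary and the application does not prescribe an encoding:

```python
import re

# Hypothetical ticket indicia format; not specified by the application.
TICKET_RE = re.compile(
    r"SEC\s*(?P<section>\w+)\s+ROW\s*(?P<row>\w+)\s+SEAT\s*(?P<seat>\d+)",
    re.IGNORECASE,
)

def parse_seat_indicia(text):
    """Extract seat location data (section, row, seat) from scanned
    ticket text, returning None when no seat indicia are present,
    e.g. for general-admission tickets."""
    m = TICKET_RE.search(text)
    return m.groupdict() if m else None
```

For general-admission venues, a ticket carrying only a section label would fail this pattern and could instead be mapped to a venue section as described above.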
[0049] In particular, the image illustratively captures an area of the venue and represents a plurality of attendees experiencing the event from the area in question. Indeed, the image is typically taken while the attendees are seated or standing in their assigned seat/venue section as identified by the seat location indicia on their ticket. The image may be taken at random times during the event, at predetermined time intervals, at a period of excitement or key moment during performance of the event, or at any other suitable moment. The image may be taken using any suitable image capturing device (not shown), such as a camera (e.g. pan-and-tilt camera) or a smartphone having photography capabilities. The image may also be taken using a video camera (not shown) with photography capabilities. One or more static or moving cameras may be used. Any suitable type of photography, such as still or video photography, may therefore be used and both digital and analog media may be employed. Although the description herein refers to the image being a still image, it should be understood that moving images, i.e. videos comprising a sequence of successive still images, may also apply. In this case, the moving images may be captured by multiple cameras. Each still image in the sequence constituting the moving image may then be analyzed (in the manner described further below) to identify therein a given attendee.
[0050] In one embodiment, one or more images may be taken during the event by an artist, performer, athlete, or other individual(s) performing at the venue. For example, the artist may take a picture of the crowd from the stage at any time during the event and post the picture online. For instance, the picture may be posted on the artist's website for viewing by users visiting the website. Alternatively, the artist may post the picture on a profile page or social network site associated with an account the artist holds with an online social networking service, such as Facebook™, Twitter™, Google+™, MySpace™, or the like. The picture may then be accessible (e.g. for purposes of viewing, download, printing, and the like) to users associated with, e.g. having subscribed to or otherwise endorsed, the artist's profile page on the social networking service. Such users may be referred to as "fans" of the artist and may be provided with an opportunity to be enrolled in the artist's social media efforts. As known to those skilled in the art, a user may endorse the artist's profile page using a suitable social networking feature, such as the "Like" button on Facebook™. In this case, the user may receive automatic notifications when the artist updates his/her profile and posts new content, such as new photos. In another embodiment, one or more images may be taken by one or more venue-based cameras, which may be static or moving cameras.

[0051] The image(s) may be panoramic views that capture the entire crowd attending the event at the venue. The image(s) may then be posted by a technician, operator, or other suitable personnel associated with the venue, on a website or profile page of the venue and made accessible to users visiting the website (e.g. using their device 104 via mobile sites, applications, web browsers, or the like) or having subscribed to the venue's profile page.
Any picture taken during the event may be posted either in real time during the event or after the event has occurred. As will be discussed further below, upon the picture being posted online, personal identification of users captured in the picture may be automatically provided. Identification in pictures related to past events may also be provided upon the pictures being available to the system 100. It should be understood that identification of users may be provided in a picture without the latter being posted online. In particular, the picture taken during the event may be made available to the system 100 via any suitable communication means known to those skilled in the art, such as email, Short Message Service (SMS), Multimedia Messaging Service (MMS), instant messaging (IM), mobile applications, or the like. Once the picture is available at the system 100, the latter may then proceed with personal identification of users in the picture in the manner discussed further below.
[0052] The memory 114 and/or databases 118 may further store all pictures comprising personal identification of users and the pictures may be categorized by event. In this manner, it becomes possible to build a historical record (e.g. a virtual photo album) that provides an indication of all users (as identified with the system 100) having attended a given event at the venue. For this purpose, the system 100 comprises an entertainment experience management system 102 adapted to communicate with a plurality of mobile devices 104 via a network 106, such as the Internet, a cellular network, Wi-Fi, or others known to those skilled in the art. As will be discussed further below, the devices 104 may provide users access to the entertainment experience management system 102. The devices 104 may comprise any device, such as a laptop computer, a personal digital assistant (PDA), a smartphone, or the like, adapted to communicate over the network 106.

[0053] In some embodiments, the entertainment experience management system 102 may require users to log in or otherwise gain authorized access to the system 102 through the use of a unique identifier. For this purpose, users illustratively register with the entertainment experience management system 102 by completing an application, thereby creating a unique profile or account. This may be done by accessing a website associated with the entertainment experience management system 102 using the user's device 104. Once registration is complete, each user is illustratively provided with a unique identifier, such as an email address, a username, and/or a password, associated with his/her profile. The identifier may be used to verify the identity of the user upon the latter attempting to access the entertainment experience management system 102. Access to the entertainment experience management system 102 may then be effected by logging on to the website using the identifier.
Alternatively, the entertainment experience management system 102 may be installed on the device 104 as a software application, which may be launched by the user on the device. It should be understood that the entertainment experience management system 102 may be accessed by multiple users simultaneously. It should also be understood that the user may log into the entertainment experience management system 102 using an identifier (e.g. username and password) associated with an online social network or social networking application (e.g. Facebook™, Google+™, Twitter™ or the like) to which the user has subscribed.
[0054] The entertainment experience management system 102 may further be accessed by one or more social network/web server(s) 108. When the one or more images captured during the event are made available, (e.g. posted on a website, such as the venue's website or the artist's website), a web server (e.g. associated with the website in question) may access the entertainment experience system 102 in order to personally identify users of the system 102 captured in the image(s). If the image has been posted on a profile page associated with a social networking service, a social network server associated with the profile page in question may access the entertainment experience system 102 in order to personally identify in the image users of the system 102. It should be understood that the image(s) may be posted on at least one of the venue's website, the venue's profile page, the artist's website, and the artist's profile page. Other websites or web pages may also apply. As such, one or more social network servers and/or one or more web servers may access the entertainment experience system 102 to provide personal identification of users of the system 102 in the image(s). Also, as discussed above, it should be understood that the picture need not always be posted on a webpage or website but may be made available to the system 102 using any suitable communication means. Once identification of user(s) in the picture has been performed by the system 102, the resulting modified image (with personal identification of users embedded therein) may be sent to users (e.g. to devices 104) using similar communication means. The modified image may further be provided on a webpage or website for user access (e.g. via the devices 104).
[0055] The entertainment experience management system 102 may comprise one or more server(s) 110. For example, a series of servers corresponding to a web server, an application server, and a database server may be used. These servers are all represented by server 110 in Figure 1. The server 110 may comprise, amongst other things, a processor 112 coupled to a memory 114 and having a plurality of applications 116a, 116n running thereon. The processor 112 may access the memory 114 to retrieve data. The processor 112 may be any device that can perform operations on data. Examples are a central processing unit (CPU), a microprocessor, and a front-end processor. The applications 116a, 116n are coupled to the processor 112 and configured to perform various tasks as explained below in more detail. It should be understood that while the applications 116a, 116n presented herein are illustrated and described as separate entities, they may be combined or separated in a variety of ways. It should be understood that an operating system (not shown) may be used as an intermediary between the processor 112 and the applications 116a, 116n.
[0056] The memory 114 accessible by the processor 112 may receive and store data. The memory 114 may be a main memory, such as a high speed Random Access Memory (RAM), or an auxiliary storage unit, such as a hard disk or flash memory. The memory 114 may be any other type of memory, such as a Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), or optical storage media such as a videodisc and a compact disc. Also, although the entertainment experience management system 102 is described herein as comprising the processor 112 having the applications 116a, 116n running thereon, it should be understood that cloud computing may also be used. As such, the memory 114 may comprise cloud storage.
[0057] One or more databases 118 may be integrated directly into the memory 114 or may be provided separately therefrom and remotely from the server 110 (as illustrated). In the case of a remote access to the databases 118, access may occur via any type of network 106, as indicated above. The databases 118 described herein may be provided as collections of data or information organized for rapid search and retrieval by a computer. The databases 118 may be structured to facilitate storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. The databases 118 may consist of a file or sets of files that can be broken down into records, each of which consists of one or more fields. Database information may be retrieved through queries using keywords and sorting commands, in order to rapidly search, rearrange, group, and select the fields. The databases 118 may be any organization of data on a data storage medium, such as one or more servers. As discussed above, the system 100 may use cloud computing and it should therefore be understood that the databases 118 may comprise cloud storage.
[0058] In one embodiment, the databases 118 are hosted on secure web servers supporting Hypertext Transfer Protocol Secure (HTTPS) and Transport Layer Security (TLS), which are protocols used for access to the data. Communications to and from the secure web servers may be secured using Secure Sockets Layer (SSL). Identity verification of a user may be performed using usernames and passwords for all users. Various levels of access rights may be provided to multiple levels of users.
[0059] Alternatively, any known communication protocols that enable devices within a computer network to exchange information may be used. Examples of protocols are as follows: IP (Internet Protocol), UDP (User Datagram Protocol), TCP (Transmission Control Protocol), DHCP (Dynamic Host Configuration Protocol), HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), Telnet (Telnet Remote Protocol), SSH (Secure Shell Remote Protocol).
[0060] Figure 2a is an exemplary embodiment of an application 116a running on the processor 112 of Figure 1. The application 116a illustratively comprises a receiving module 202, a user profile management module 204, a localization module 206, a personal identification module 208, and an output module 210.
[0061] The receiving module 202 illustratively receives one or more input signals from one or more device(s) 104 and/or the social network/web server(s) 108. The input signal(s) received from each device 104 may comprise data uniquely identifying the user, e.g. the user's identifier associated with his/her account in the entertainment experience management system 102. The user identifier may indeed be received upon the user attempting to gain access to the entertainment experience management system 102. The user identifier may then be sent by the receiving module 202 to the user profile management module 204 for authenticating the user prior to providing the latter access to functionalities of the entertainment experience management system 102. The user profile management module 204 may, upon receiving the user identifier, retrieve from the memory 114 and/or databases 118 a stored user identifier associated with the user's account. The user profile management module 204 may then compare the retrieved user identifier and the user identifier received from the receiving module 202. If both identifiers match, the user profile management module 204 successfully authenticates the user and generates a message to that effect. Otherwise, if the identifiers do not match, the user profile management module 204 determines that the user attempting to access the entertainment experience management system 102 should not be authorized to do so. A message to that effect is then generated. The message output by the user profile management module 204 is then sent to the output module 210 for rendering on a suitable output device, e.g. a screen, of the device 104. The output module 210 may transmit data to the device 104 through instant push notifications sent via the network 106. Email, Short Message Service (SMS), Multimedia Messaging Service (MMS), instant messaging (IM), or other suitable communication means known to those skilled in the art may also apply.
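The identifier comparison described for the user profile management module 204 can be sketched as follows. This is a minimal illustration only; the in-memory store, function name, and message format are hypothetical stand-ins for the memory 114 / databases 118 and the generated messages.

```python
# Hypothetical stand-in for stored user identifiers in the memory 114
# and/or databases 118, keyed by account.
STORED_IDENTIFIERS = {"acct-001": "user-7f3a", "acct-002": "user-9c41"}

def authenticate(account: str, received_identifier: str) -> dict:
    """Compare a received user identifier against the stored identifier
    for the account and generate an authentication message."""
    stored = STORED_IDENTIFIERS.get(account)
    if stored is not None and stored == received_identifier:
        return {"account": account, "authenticated": True,
                "message": "access granted"}
    # Identifiers do not match (or account unknown): deny access.
    return {"account": account, "authenticated": False,
            "message": "access denied"}
```

The resulting message would then be passed to the output module 210 for rendering on the device 104, e.g. via a push notification.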
[0062] The input signal(s) received from a device 104 may also comprise seat location data uniquely identifying the location of the user's physical space (e.g. seat, row, or section number) at the venue. Once the user has been authenticated, the receiving module 202 may transmit the seat location data to the user profile management module 204 so that the seat location data is associated in the memory 114 and/or the databases 118 with the user's profile. In this manner, the user's profile may be updated to indicate that the user has attended the event and where the user was seated or otherwise located at the venue during the event. The seat location data may be loaded by the user (or venue personnel, or other suitable person) scanning a portion, e.g. a barcode (one dimensional or two dimensional, i.e. a matrix barcode) encoding the seat location data, or the entirety of the user's ticket using a suitable scanning device, e.g. a camera, coupled to their device 104. In this manner, the user profile management module 204 may then obtain information associated with the ticket, e.g. ticket/seat number, from a ticket issuer or ticketing system. The ticketing system may have access to information associated with each one of a plurality of tickets issued for the event occurring at the venue. In particular, the ticketing system may record (e.g. in the memory 114 and/or databases 118) the location and numbers of all physical spaces (e.g. seats) for the venue, an identification of all issued tickets (and corresponding ticket numbers), an identification of the number of the physical space associated with each ticket, and/or an identification of each patron having purchased a ticket for the event.

[0063] The received seat location data may further be stored in the memory 114 and/or databases 118 for subsequent use. The user profile management module 204 may further load the information into the memory 114 and/or the databases 118 further to the scanning process.
It should be understood that the seat location data may also be manually entered or submitted by the user (or venue personnel, or other suitable person) using suitable input means or interface elements (not shown), such as a keyboard or touchscreen, provided with the device 104. In one embodiment, the user (or venue personnel, or other suitable person) may also be provided with the option to select an electronic ticket issued by the venue in order to provide the seat location data.
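Decoding a scanned barcode into seat location data might proceed as sketched below. The payload format (`SEC:…;ROW:…;SEAT:…`) is an assumption for illustration; an actual ticketing system would define its own encoding.

```python
def parse_seat_barcode(payload: str) -> dict:
    """Parse a barcode payload of the assumed form
    'SEC:12;ROW:C;SEAT:34' into seat location data fields."""
    # Split 'KEY:VALUE' pairs separated by semicolons.
    fields = dict(part.split(":", 1) for part in payload.split(";"))
    return {"section": fields["SEC"],
            "row": fields["ROW"],
            "seat": int(fields["SEAT"])}
```

The resulting record could then be associated with the user's profile in the memory 114 and/or databases 118, as described above.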
[0064] The input signal(s) received at the receiving module 202 from the device 104 may also optionally comprise endorsement data indicative of the user's endorsement of the artist and/or venue. As discussed above, the endorsement data may be received further to a suitable social networking feature, such as the "Like" button on Facebook™, being used. This endorsement data may then be sent to the user profile management module 204 to be associated with the user's profile stored in the memory 114 and/or the databases 118.
[0065] Still referring to Figure 2a, the receiving module 202 may also receive image and/or event data. As discussed above, this data may be provided by one or more social network/web server(s) 108 accessing the entertainment experience management system 102 in order to personally identify users captured in an image posted on a website and/or profile page. Alternatively, the image and/or event data may be provided to the receiving module 202 over the network 106 using suitable communication means (e.g. email, SMS, MMS, IM, and the like). The image data may comprise metadata, timestamp data, resolution and other camera information, and the like, for the one or more images for which identification is needed. The event data may comprise an identification of the event, e.g. event name and date, and/or of the venue, e.g. venue name, address, where the event took place and for which the image data has been obtained. The event data may also comprise timing data that may be indicative of a timing (e.g. clock information, timing of interruptions, breaks, time-outs, intermissions, advertisements, actions or activities during the performance) for the event. Such event timing data may be obtained from the venue, an official game clock, or a content provider broadcasting the event. The timing data may also be obtained from a timer or clock provided with the image capturing device. Other suitable means of obtaining timing data may apply.
[0066] The receiving module 202 may then send the image/event data to the localization module 206. The localization module 206 may further communicate with the user profile management module 204 in order to retrieve profile information of users having attended the event identified in the event data. More particularly, the localization module 206 may retrieve seat location data for each user having attended the event. It should be understood that the localization module 206 may alternatively retrieve the seat location data directly from the memory 114 and/or databases 118 without communicating with the user profile management module 204. As will be discussed further below, the retrieved data may be used along with venue data retrieved from the memory 114 and/or the databases 118 to determine the position of the users' seats in the image. It should be understood that, for seat-less venues, the position of an area or section of the venue where the users were located during the event may be determined. The venue data may comprise information about the venue, such as template images of one or more areas of the venue, characteristics of equipment (e.g. cameras) provided at the venue, data acquired by the equipment, and the like.
[0067] Once the localization module 206 has determined the positions of the users' seats/venue sections in the image, the determined positions may be sent to the personal identification module 208. The personal identification module 208 may further access the user profile management module 204 in order to retrieve the user profile data associated with the seat location data. Again, it should be understood that the personal identification module 208 may alternatively retrieve the user profile data directly from the memory 114 and/or databases 118 without communicating with the user profile management module 204. It should also be understood that in some embodiments, the user profile data stored in memory may not yet be linked to the received seat location data. This may be the case when the seat location data is received upon manual entry of information (e.g. ticket number) provided on a paper ticket, rather than from an electronic or digital ticket for which corresponding data, e.g. user profile data, may automatically be stored in memory at the time of purchase of the ticket. Upon manually entering the information provided on the paper ticket into the system, it may then be possible to associate the ticket information with the user's profile data.
[0068] In this manner, upon obtaining the user profile data, the personal identification module 208 can identify users captured in the image. In particular, the personal identification module 208 may retrieve the user identifier(s) associated with the seat location data provided to the localization module 206. In one embodiment, each user identifier comprises the complete first and last names of the user as well as the username associated with the user's profile with the system 102. As discussed above, the retrieved user identifier may also comprise an identifier (e.g. username) associated with an online social network or social networking application to which the user has subscribed. The personal identification module 208 may then associate the user identifier(s) with the localization of the users' seats/venue sections in the image, as determined by the localization module 206. The personal identification module 208 may further generate a modified image in which the user identifier(s) are indicated in the image, e.g. overlaid or otherwise positioned on the image, adjacent to the position of the users' seats/venue sections, which may also be indicated in the modified image.
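The association of user identifiers with located seat positions might be sketched as below. The data shapes (seat numbers mapped to pixel positions, profiles keyed by seat) are illustrative assumptions, not the patent's required representation.

```python
def build_annotations(seat_positions: dict, profiles: dict) -> list:
    """Pair each located seat (seat number -> (x, y) pixel position in
    the image) with the identifier from the matching user profile,
    producing the overlay annotations for the modified image."""
    annotations = []
    for seat, (x, y) in seat_positions.items():
        profile = profiles.get(seat)
        if profile:  # skip seats with no known attendee profile
            annotations.append({"x": x, "y": y,
                                "label": profile["identifier"]})
    return annotations
```

Each annotation could then be rendered adjacent to the corresponding seat position when generating the modified image.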
[0069] In some embodiments, the identifier(s) may be indicated in an image different from the image received from the image capturing device. For instance, having knowledge of the user's location at the venue (e.g. from the seat location data), the personal identification module 208 may retrieve from the memory 114 and/or databases 118 a venue map and position thereon the user's identifier(s) at the location of the user's seat, row, or section. This may cause generation of a new image that may be output for presenting a live map of users attending the event.
[0070] It should be understood that one or more identifiers may be indicated in the image. It should also be understood that the identifier(s) may be indicated using any suitable means other than being overlaid on the image. For instance, indicia, such as an asterisk, may be indicated adjacent the position of the user's physical space in the image and a corresponding caption provided to specify the user's identifier(s). In addition, the identification data presented on the image may comprise an identification of a moment during performance of the event at which the image has been taken. For this purpose, the image data (e.g. the timestamp data) may be correlated with the event data (e.g. the timing of the event) to determine one or more activities or actions (e.g. a goal, a specific song played, a specific advertisement presented) having occurred during the event at the moment the image was taken. An identification of the event activities or actions (and/or accordingly of the moment at which they have occurred) may be attached to the modified image. In this manner, images may be categorized in accordance with the timing of the event, e.g. activities or actions having occurred during the event. Upon accessing their identification in the image, users may therefore also be presented with information indicating the action or activity having occurred during the event at the moment the image was captured.
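The correlation of the image timestamp with the event timing data could be sketched as follows. The timeline representation (activity intervals in seconds from event start) is a hypothetical simplification of the timing data described above.

```python
def activity_at(timestamp, timeline):
    """Return the event activity whose [start, end) interval contains
    the image timestamp (seconds from event start), or None if the
    timestamp falls outside all recorded activities."""
    for start, end, activity in timeline:
        if start <= timestamp < end:
            return activity
    return None
```

The returned activity label (e.g. "goal scored") could then be attached to the modified image so users see what was occurring when the image was captured.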
[0071] Although not illustrated, it should be understood that the personal identification module 208 may also retrieve from the user profile data a picture associated with each user's profile. As such, the profile pictures may also be associated with the position of the users' seat/venue section for display in the modified image. Depending on the user's preferences, other suitable information may also be associated with the position of the user's physical space in the image for display in the modified image. The personal identification module 208 may then send the modified image data to the output module 210 for causing the modified image to be transmitted to the social network/web server(s) 108 using any suitable communication means discussed above. The social network/web server(s) 108 may then cause the modified image to be presented on the venue/artist website or profile page for subsequent access by the user. Alternatively, the personal identification module 208 may send the modified image data to the output module 210 for causing the modified image to be transmitted directly to the devices 104 using any suitable communication means discussed above.
[0072] In one embodiment, the personal identification module 208 may determine from the user profile data preferences of the users, and more particularly preferences indicating that a given user does not wish to be identified in the image. A user may indeed be provided with the option to "turn off" the identification feature. If this is the case, the personal identification module 208 may not associate the user's identifier with the position of the user's seat/venue section in the image. As a result, the modified image will not comprise an identification of the user. The preferences retrieved by the personal identification module 208 may also indicate that a given user does not wish for one or more other users to see his/her identification in the image. In this case, the personal identification module 208 may cause the modified image to be presented on the venue/artist website or profile page (or output using any suitable communication means discussed above) in such a manner that only approved users may access each other's identification in the image.
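The preference check could be sketched as a simple filter over the overlay annotations. The annotation and preference shapes are hypothetical; the point is that users whose identification feature is turned off are dropped before the modified image is generated.

```python
def filter_identifiable(annotations, preferences):
    """Drop annotations for users whose profile preferences indicate
    the identification feature is turned off; users without an explicit
    preference are assumed (here) to allow identification."""
    return [a for a in annotations
            if preferences.get(a["label"], {}).get("identify", True)]
```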
[0073] In one embodiment, a user having endorsed the profile page of the venue and/or artist may receive a notification that a new photo has been posted. The user may then access the profile page using their device 104. Alternatively, the user may randomly visit the website of the venue and/or artist to get an update on the activities of the artist or the events at the venue. In order to access the profile page or the website, the user may launch a browser on their device 104 and be directed to the desired website/profile page. The user may then view the image along with his/her identification in the image. As such, the user is provided with social recognition for being part of the event performance and post-event engagement may be fostered. In some embodiments, only authorized users may be allowed access to the image. As will be discussed further below, it should be understood that a user may view the image upon accessing the entertainment experience management system 102 with their device 104. Also, the image along with the user's identification in the image may automatically appear on the profile page associated with an account the user holds with an online social network or social networking application. In this case, access to the image may be limited to users being "friends" with, "followers" of, or otherwise endorsers of the user on the social network. Notifications may be sent to such endorsers of the user to indicate that an image comprising the user's identification is made available on the user's profile page. Additional image access restrictions may also apply.
[0074] In one embodiment, the user, upon viewing the image comprising his/her identification, may add a caption, signature, personalized message, explanatory information, or other information to the image. The information may be provided as text, video, audio, or any other suitable format. In other embodiments, the user, upon viewing the image comprising the identification of his/her endorsers on the social network, may be provided with the option to select the endorsers' identification data attached to the image in order to gain access to additional features. For instance, upon selecting the identification, the user may be directed towards a live chat or messaging service for initiating a live discussion with the selected endorser(s). The user may also be directed towards the selected endorser(s)' profile page upon selecting the identification.
[0075] Referring to Figure 2b, the localization module 206 illustratively comprises an imaged seating area identification module 212 and a user seat positioning module 214. The imaged seating area identification module 212 may be used to determine the area of the venue (e.g. seating area) captured in the image, as will be discussed further below. For instance, the imaged seating area identification module 212 may determine the seating area, e.g. the seating rows or sections, captured in the image. Once the imaged seating area has been determined, the imaged seating area identification module 212 may communicate with the user seat positioning module 214, which may be used to determine the position of the users' seats in the image. As discussed above, it should be understood that the user seat positioning module 214 may be used to determine the position of the users' venue sections in the image when seat-less venues apply. For this purpose, the user seat positioning module 214 may then correlate the seat location data with the imaged seating area data in order to find in the image the position of the seat(s)/venue section(s) corresponding to the seat location data. In particular, the user seat positioning module 214 may correlate the seat location data (e.g. seat number) with an identification (e.g. number) of physical spaces (e.g. seats, areas, rows) identified as present in the area of the venue captured in the image. When a match is found, a conclusion as to the position of the user's seat in the image can be achieved.
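The correlation step just described amounts to intersecting the attendees' seat numbers with the seats identified in the imaged area. A minimal sketch, assuming the imaged area is represented as a mapping from seat number to pixel position:

```python
def locate_seats_in_image(seat_location_data, imaged_area):
    """Correlate attendee seat numbers with the seats identified as
    present in the imaged venue area; return the matches along with
    their positions in the image."""
    matches = {}
    for seat in seat_location_data:
        if seat in imaged_area:  # a match concludes the seat's position
            matches[seat] = imaged_area[seat]
    return matches
```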
[0076] Referring now to Figure 2c, the imaged seating area identification module 212 may use at least one of a plurality of localization techniques in order to determine the seating area captured in the image. The imaged seating area identification module 212 may thus comprise a localization technique determination module 216, a known viewpoint analysis module 218, a transmitter/receiver data analysis module 220, a camera signal analysis module 222, and an image data analysis module 224. It should be understood that one or more of the localization techniques, and accordingly at least one of the modules 218, 220, 222, and 224, may be used.
[0077] The localization technique determination module 216 may be used to determine from the retrieved venue data which localization technique is best suitable to determine the seating area captured in the image. The localization technique determination module 216 may first determine from the venue data whether the image has been taken from a predetermined viewpoint. This may be achieved by comparing the image data to predetermined image templates comprised in the venue data. The image templates may correspond to one or more images of the venue taken from known viewpoints, such as known camera angles or directions. If the image data corresponds to at least one image template, the localization technique determination module 216 can conclude that the image has been taken from a predetermined viewpoint, e.g. from a known position or camera angle. This may for example be the case if the image has been taken by a static venue-based camera, which is known to always take images of a given area of the venue from a known angle or direction. When it is determined that the image has been taken from a predetermined viewpoint, the localization technique determination module 216 may then communicate with the known viewpoint analysis module 218, which may be used to determine the imaged seating area.
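The template comparison could be sketched as below. As a loose illustration, images and templates are reduced to feature vectors compared by cosine similarity; the patent does not prescribe a particular matching method, and a real system might instead use image-based template matching.

```python
def matches_known_viewpoint(image_vec, templates, threshold=0.9):
    """Compare an image feature vector against stored template vectors;
    return the name of the best-matching known viewpoint, or None if
    no template clears the similarity threshold."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best_name, best_score = None, 0.0
    for name, template_vec in templates.items():
        score = cosine(image_vec, template_vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

A `None` result would send the module 216 on to the next localization technique (transmitter/receiver data, camera signals, or coded localization information).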
[0078] For this purpose, the known viewpoint analysis module 218 may determine from the received image data the position (e.g. the known viewpoint) from which the image has been taken. Indeed, the image data may be coded with information indicating the position (e.g. coordinates) of the image capturing device (e.g. the camera) having taken the image. The image data may also comprise information about the features of the image capturing device, such as zooming features, used when taking the image. On the basis of this information, the known viewpoint analysis module 218 may, upon comparing the information to the image template data, then identify the area of the venue, e.g. the seating area, captured in the image. For instance, the known viewpoint analysis module 218 may determine, from the image template matching the image data, the viewpoint from which the image was taken and accordingly the area of the venue associated with the viewpoint.
[0079] If the localization technique determination module 216 has determined that the image has not been taken from a known viewpoint, the localization technique determination module 216 may further determine whether the seats provided at the venue are equipped with communication devices, such as transmitters and/or receivers (not shown). This may be determined from the equipment characteristics provided in the venue data. The communication devices may use any suitable technology, such as iBeacon™, Bluetooth, or the like. Although the description below refers to the seats being provided with the communication devices, it should be understood that other structural elements (e.g. lighting structures, fences, advertising boards, tribunes, or the like) provided at the venue may also be equipped with the communication devices and used for determining the area of the venue captured in the image. Still, it is desirable for the communication devices to be positioned so as to allow the location of the attendee's seat at the venue to be inferred (e.g. using triangulation or any other suitable technique) from the communication device's position.
[0080] For instance, a seat may be equipped with a transmitter with a matching receiver being provided at the venue in or adjacent an image capturing device, e.g. the camera, having taken the image. Alternatively, a seat may be equipped with a receiver with a matching transmitter being provided in or adjacent the image capturing device. It should be understood that any given seat may be equipped with both a transmitter and a receiver. In one embodiment, each transmitter and each receiver may have a given communication range associated therewith such that communication (i.e. data exchange) is only possible with given corresponding transmitter(s) / receiver(s) within the range. In this manner, a transmitter or receiver of an image capturing device may only communicate with a corresponding number of receivers or transmitters. In other embodiments, each communication device (e.g. provided adjacent a physical space, e.g. under a seat of a given section of the venue) may be provided with both receive and transmit capabilities and adapted to communicate with a given control module (not shown), e.g. provided for the section in question. The control module may be provided with receive capabilities and adapted to store the received data along with coordinates (e.g. associated with the section in question) of each communication device the data is received from. The control module may then communicate with the image capturing device to transmit thereto information (e.g. coordinates) received from the communication device(s). The image capturing device may further communicate with the control module to position itself relative thereto. In this manner, the image capturing device need only communicate with a single device, i.e. the control module, and need not communicate with a plurality of communication devices.
[0081] In some embodiments, a transmitter may send a transmit signal, such as a "ping" signal or the like, towards the matching receiver, which upon receiving the transmit signal may output a return signal to the transmitter. In one embodiment, when the venue seats are equipped with such transmitters and/or receivers, the latter may each be coded so as to be uniquely matched to the identifier of the user to which the corresponding seat has been assigned. In particular, the identifier may be coded in the corresponding seat transmitter and/or receiver. In this manner, each transmitter attached to a venue seat may then be adapted to output to a receiver a signal comprising the identifier of the user assigned to the seat in question. Each seat transmitter may further output to the receiver its coordinates or physical position (e.g. seat, row, and/or section number) at the venue, thereby providing an identification of the seat. In some embodiments, the identifier data may be coded in a transmitter provided in the attendee's device 104. Alternatively, the transmitter may be provided in any wearable device held by the attendee. As such, in one embodiment, while taking one or more images of the area of the venue where a given seat is located, the image capturing device's receiver may receive, from the transmitter (e.g. of each seat present in the device's range or viewpoint), a signal comprising the identifier of the user assigned to the seat and/or identification (e.g. position) of the seat. This information may in turn be used to determine the position of a given seat in an image taken by the image capturing device. The return signal output by the receiver may also comprise data confirming coordinates of the receiver. Data exchanged by the transmitters and receivers may further be stored in the memory 114 and/or databases 118.
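The range-limited communication described above can be illustrated by determining which seat transmitters a camera's receiver would hear. The positions, range value, and data shapes are hypothetical; real beacon systems estimate range from received signal strength rather than known coordinates.

```python
def transmitters_in_range(receiver_pos, transmitters, max_range):
    """Return the user identifiers reported by seat transmitters whose
    position lies within the receiver's communication range
    (Euclidean distance in venue coordinates)."""
    rx, ry = receiver_pos
    heard = []
    for user_id, (tx, ty) in transmitters.items():
        if ((tx - rx) ** 2 + (ty - ry) ** 2) ** 0.5 <= max_range:
            heard.append(user_id)
    return sorted(heard)
```

The identifiers heard while an image is taken would then indicate which attendees' seats fall within the imaged area.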
[0082] If the localization technique determination module 216 determines that the seats are indeed equipped with transmitters and/or receivers, the localization technique determination module 216 may then communicate with the transmitter/receiver data analysis module 220. The transmitter/receiver data analysis module 220 may in turn retrieve from the memory 114 and/or databases 118 the data output by the transmitter(s) and/or receiver(s) associated with (e.g. adjacent or communicating with) the image capturing device having taken the image and analyze this data in order to identify which area of the venue has been imaged. For instance, the transmitter/receiver data analysis module 220 may determine from the transmitter data the coordinates obtained from the receiver return signal in order to identify the imaged venue area. When the receiver is located adjacent the image capturing device, the transmitter/receiver data analysis module 220 may further process the transmitter data in order to take into account and compensate for the distance between the receiver and the image capturing device.
[0083] If the localization technique determination module 216 has determined that the image has not been taken from a known viewpoint and that the venue seats are not equipped with transmitters and/or receivers, the localization technique determination module 216 may further determine from the venue data whether the image capturing device, e.g. the camera, having taken the image is equipped with a transceiver (not shown). Such a transceiver may be a wireless transceiver and may be adapted to communicate radio signals (e.g. at the time the image is taken) with at least one other transceiver using any suitable communication technology, such as Wi-Fi or the like. In one embodiment, the at least one other transceiver may be a network antenna positioned in a given location of the venue, at a known position relative to given physical space(s), e.g. seat(s), row(s), or section(s), of the venue. If the localization technique determination module 216 determines that the camera is indeed equipped with a wireless transceiver, the localization technique determination module 216 may communicate the venue data (e.g. retrieved from the memory 114 and/or databases 118) to the camera signal analysis module 222. The camera signal analysis module 222 may in turn retrieve the camera signal data (e.g. the radio signal(s) from the camera's transceiver) from the venue data and localize the camera using triangulation or any other suitable localization technique (e.g. GPS or the like). The camera signal data may further comprise signal data emitted by one or more directional sensors provided with the camera. Using the signals from the sensors (which may be obtained from the venue data) and the location of the camera as determined using triangulation (or any other suitable technique), the camera signal analysis module 222 may further determine the direction the camera was pointing towards when the image was taken, thereby identifying the venue area captured in the image.
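One concrete way to localize the camera from its radio signals is 2-D trilateration from distances to three antennas at known positions (a sketch of one possible realization of the triangulation mentioned above; the patent does not fix the method, and distances would in practice be estimated from signal properties).

```python
def trilaterate(p1, p2, p3, r1, r2, r3):
    """Locate the camera from its distances (r1, r2, r3) to three
    venue antennas at known 2-D positions p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the three circle equations pairwise eliminates the
    # quadratic terms, leaving two linear equations in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # non-zero when antennas are not collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

Combined with a heading from the directional sensors, the resulting camera position determines which venue area lies in the camera's field of view.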
[0084] If the localization technique determination module 216 has determined that the image has not been taken from a known viewpoint, that the venue seats are not equipped with transmitters, and that the camera is not equipped with a transceiver as discussed above, the localization technique determination module 216 may further determine from the image data whether the latter is coded with localization information. Indeed, the person, e.g. the artist, having taken the picture may have attached thereto localization information identifying the captured venue area. For instance, the person may have indicated the seating area, e.g. rows and/or sections, the imaged area corresponds to. This user-provided localization information may therefore be contained in the image data generated upon the image being captured. The image data may then be transmitted by the localization technique determination module 216 to the image data analysis module 224. The image data analysis module 224 may in turn retrieve the localization information from the image data in order to determine the imaged seating area accordingly. In some instances, the image data analysis module 224 may need to calibrate the image data in order to accurately identify the imaged seating area in accordance with the localization information provided in the image data. This may for instance be the case when the image data corresponds to a zoomed image. In this case, it may be desirable for the image data analysis module 224 to calibrate the image data in order to adjust, e.g. increase or decrease, the spacing between the seats so as to take into account the zoom effect, e.g. zoom-in or zoom-out.

[0085] Referring to Figure 3a, a method 300 for personal identification of individuals in an image (not shown) will now be described. The method 300 may comprise receiving at step 302 image/event data.
This data may be received upon at least one image capturing a given area of the venue being posted on an online website and/or social network page or transmitted via any suitable communication means, as discussed above. User identification data, such as a unique identifier associated with a user's account with the entertainment experience management system (reference 102 in Figure 1), may then be received at step 304. Seat location data indicative of a physical location of a seat/venue section assigned to the user during the event may further be received at step 306. The next step 308 may then be to identify in the received image data at least one user having provided their seat location data at step 306. Image data containing the user identification may then be output at step 310.
[0086] Referring to Figure 3b, the step 308 of identifying each user in the image may comprise determining at step 314 the position of the user's seat/venue section in the image. The user identification data may then be associated at step 316 with the seat position determined at step 314. As illustrated in Figure 3c, the step 314 of determining the position of the user's seat/venue section in the image may comprise identifying at step 318 the seating area captured in the image, correlating at step 320 the seat location data with the identified seating area, and locating at step 322 the user's seat/venue section within the identified seating area on the basis of the correlation.
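The correlation of steps 318 to 322 can be sketched, purely illustratively, as a mapping from a seat location to pixel coordinates within the identified seating area. The `area` description used here (first row and seat, pixel origin, per-seat and per-row spacing) is an assumed representation, not the system's actual data model:

```python
def position_in_image(seat, area):
    """Sketch of steps 318-322: once the imaged seating area has been
    identified, correlate a user's seat location with that area to locate
    the seat within the picture."""
    row, number = seat  # e.g. ("B", 3) for seat 3 in row B
    row_index = ord(row) - ord(area["first_row"])
    seat_index = number - area["first_seat"]
    if row_index < 0 or seat_index < 0:
        return None  # seat falls outside the identified seating area
    x = area["origin"][0] + seat_index * area["seat_spacing"]
    y = area["origin"][1] + row_index * area["row_spacing"]
    return (x, y)
```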
[0087] Referring to Figure 3d, the step 318 of identifying the seating area captured in the image may comprise retrieving venue image templates at step 324 and determining at step 326 from the retrieved image templates whether the image was taken from a pre-determined viewpoint. If this is not the case, the method 300 may flow to the step 330 discussed below in reference to Figure 3e. Otherwise, the next step 328 may be to determine the imaged venue area from the known viewpoint information found in the image data, as discussed above with reference to Figure 2c.
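A minimal, illustrative take on the template comparison of steps 324 to 328 follows. The pixel-agreement similarity measure and the 0.9 threshold are assumptions made for this sketch, not the comparison actually employed:

```python
def match_known_viewpoint(image, templates, threshold=0.9):
    """Sketch of steps 324-328: score the captured image against stored
    venue templates (each corresponding to a known viewpoint) and return
    the venue area of the best match, or None so that the method can fall
    through to step 330."""
    def similarity(a, b):
        # fraction of pixels in two equally sized grayscale rasters that
        # agree to within a small tolerance
        matches = sum(1 for pa, pb in zip(a, b) if abs(pa - pb) <= 8)
        return matches / len(a)
    best_area, best_score = None, threshold
    for template in templates:
        score = similarity(image, template["pixels"])
        if score >= best_score:
            best_area, best_score = template["area"], score
    return best_area
```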
[0088] Referring now to Figure 3e, the step 330 illustratively comprises determining at step 332 whether the image data contains localization information. As discussed above, this may be the case if the person having taken the image attached localization information, e.g. an identification of the venue area captured, to the image data. If the image data contains such information, the seating area may be identified at step 334 from the localization information. Otherwise, the next step 336 may be to retrieve venue data and determine at step 338 from the venue data whether the seats provided at the venue are equipped with transmitters/receivers. If this is the case, the next step 340 may be to obtain from the venue data seat transmitter/receiver data acquired at the time the image was taken. The seating area may then be identified at step 342 from the seat transmitter/receiver data.
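The user-provided localization path (steps 332 to 334), together with the zoom calibration discussed in paragraph [0084], might be sketched as follows. The metadata field names (`localization`, `rows`, `zoom`) are illustrative assumptions only:

```python
def locate_seats_in_image(image_data, seats_per_row, base_spacing_px=40.0):
    """Sketch of the image data analysis module (224): read the
    user-provided localization information attached to the image, scale
    the nominal seat spacing by the zoom level, and return approximate
    pixel offsets for each seat in the imaged area."""
    info = image_data.get("localization")
    if info is None:
        return None  # defer to the other localization techniques
    # Zooming in spreads the seats apart on screen; zooming out packs
    # them closer together, hence the spacing calibration.
    spacing = base_spacing_px * info.get("zoom", 1.0)
    positions = {}
    for r, row in enumerate(info["rows"]):
        for s in range(seats_per_row):
            positions[(row, s + 1)] = (s * spacing, r * spacing)
    return positions
```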
[0089] If it is determined at step 338 that the seats are not equipped with transmitters, the next step 344 may be to determine whether the image capturing device, e.g. the camera, having taken the image was equipped with a transceiver. If this is not the case, the method 300 may end. Otherwise, the next step 346 may be to obtain from the venue data camera signal data acquired at the time the image was taken. The seating area may then be identified at step 348 from the camera signal data using triangulation or any other suitable localization technique.
[0090] Figure 4 illustrates a screen capture of a user interface 400 presented on the screen of a device 104. The user interface 400 prompts the user to log into the entertainment experience management system (reference 102 of Figure 1) by presenting a login interface element 402. Using such an interface element 402, the patron may enter the unique identifier, e.g. an email address (as illustrated) or username, and a password associated with their profile. As discussed above, it should be understood that users may log into the system 102 using an identifier associated with an online social network or social networking application (e.g. Facebook™, Google+™, Twitter™ or the like) to which the user has subscribed. For this purpose, a corresponding user interface element (not shown) may be presented to the user on the interface 400. Once the information has been provided, the user may select a "Login" option 404 in order to access the system 102. The user may enter information or select a given option, as in 404, using one of a variety of input/selection means. For example, if the screen of the device 104 is a touchscreen, selection may be effected by touching the desired option on the screen. Other selection means, such as a mouse, a keyboard, a pointing device, and the like (not shown), coupled to the device 104 may also be used.
[0091] The user interface 400 may also comprise a "Sign up" option 406, which allows a user who is not registered with the system 102 to create an account. As shown in Figure 5, a sign up screen 500 may be presented to the user upon the latter selecting the "Sign up" option 406. The sign up screen 500 illustratively comprises a plurality of user interface elements as in 502, such as text boxes allowing for lines of free text to be entered. In this manner, the user can provide the information required for completing his/her application, thereby creating a unique profile. For example, the user may enter their first and last names, gender, country, zip code, date of birth, and others. Although not illustrated, it should be understood that the user may also provide an email address, which may be used as the user's login name for accessing the system 102, and a password that will be associated with the user's account in the system 102. Once the information has been properly entered, a "Done" option 504 may be selected on the user interface 500 to submit the information. The user may also select a "Back" option 506 in order to return to the previous screen 400.
[0092] Referring to Figure 6, once the user has been authenticated and has accessed the system 102, a user interface 600 may present to the user a menu of functionalities provided by the system 102. In particular, relevant account and/or event/venue information can be presented. For instance, the event name may be presented in a first field 602 of the interface 600 while the user's ticket/seat information may be presented in a field 604 of the interface. As discussed above, the ticket/seat information may be loaded by the user scanning their ticket or manually entering the information using suitable interface elements (not shown) presented on the device 104. The interface 600 may also present the user with a plurality of functionalities or services as selectable icons as in 6061 and 6062. Upon selection of a given icon as in 6061 or 6062, the user may then be presented with sub-menus detailing the corresponding service. It should be understood that a variety of screen selection/manipulation means other than icons, e.g. tabs, buttons, and the like, may apply.
[0093] It should be understood that the label, number, placement, order, and format of the icons as in 6061 and 6062 may vary depending on the content, products, and services offered at the venue. Also, the main menu may be tailored to the preferences of the user, as indicated in their profile. Examples of venue services comprise, but are not limited to, concession services, fan store or fan club services, season ticket holder services, event information, live content, venue map, interactive content, live chat, upcoming events, notification services, social media integration, localization of social network friends present at the venue, parking management, suite management, fan cam, fundraising, charity lottery, silent auctions, loyalty programs, badges or ticket history, fine dining reservation services, gaming marking, and statistics. Using their devices 104 and through their online social network or social networking application, users may recommend and/or share with other users any content, product, or service associated with the icons 606.
[0094] In order to access information about an artist (performer, athlete, or other individual) having performed at the venue during the event, the user may select the "Artist info" option 6062. In some embodiments, the user may also select the "Interactive" option (not shown) to gain access to the artist information. This may direct the user to a website or profile page of the artist where the user can view photos posted by the artist, as discussed above. Alternatively, the user may launch a browser (not shown) on the device 104 in order to be provided access to the website or profile page. It should be understood that a similar process may be used to access a website or profile page associated with the venue that held the event.
[0095] Referring to Figure 7, an interface 700 may then present the user with the website or profile page and more particularly with one or more pictures or images as in 702 taken during the event. As discussed above, it should be understood that the picture(s) 702 may be transmitted using any suitable communication means and stored in a memory (e.g. that of the user's device). The interface 700 may accordingly present an image 702 retrieved from memory. Upon accessing a given picture 702, the user may also access his/her identification in the picture 702. Indeed, the identification of users of the entertainment experience management system 102 having attended the event may be presented along with the picture 702. For instance, the name and identifier of each user may be provided in a box (or other suitable visual means) as in 7041, 7042, 7043, 7044 positioned adjacent the corresponding position as indicated by the circles (or other suitable visual means) as in 7061, 7062, 7063, 7064 of the user in the picture 702. As discussed above, the identification information may be overlaid on the picture 702 and displayed in accordance with the user preferences.
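Purely as a sketch, the overlay of identification boxes (as in 7041) and position markers (as in 7061), filtered by per-user preference data as discussed above, might be built as follows. The record structures are assumptions made for illustration:

```python
def build_overlays(annotations, preferences):
    """Sketch of the overlay step of Figure 7: pair each identified user's
    name box with a marker circle at the determined seat position, and
    honour per-user preference data so that attendees who have opted out
    of identification are not labelled."""
    overlays = []
    for a in annotations:
        if not preferences.get(a["user"], True):
            continue  # this attendee declined identification in images
        x, y = a["position"]
        overlays.append({
            "circle": (x, y),  # position marker, as in 7061
            "label": {"text": a["name"], "at": (x, y - 15)},  # box, as in 7041
        })
    return overlays
```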
[0096] While illustrated in the block diagrams as groups of discrete components communicating with each other via distinct data signal connections, it will be understood by those skilled in the art that the present embodiments are provided by a combination of hardware and software components, with some components being implemented by a given function or operation of a hardware or software system, and many of the data paths illustrated being implemented by data communication within a computer application or operating system. The structure illustrated is thus provided for efficiency of teaching the present embodiment.
[0097] It should be noted that the present invention can be carried out as a method, can be embodied in a system, and/or on a computer readable medium. The embodiments of the invention described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the scope of the appended claims.

Claims

CLAIMS:
1. A system for identifying in an image at least one attendee of an event occurring at a venue, the system comprising:
a memory having stored therein identifier data uniquely identifying the at least one attendee;
a processor; and
at least one application stored in the memory and executable by the processor for
receiving location data identifying a selected one of one or more physical spaces of the venue uniquely assigned to the at least one attendee,
receiving an image taken during the event, the image capturing the at least one attendee occupying the selected physical space,
determining a position of the selected physical space in the image, indicating the identifier data in the image at the determined position of the selected physical space, thereby generating a modified image, and
outputting the modified image.
2. The system of claim 1, wherein the at least one application is executable by the processor for identifying an area of the venue captured in the image, the area of the venue comprising a plurality of physical spaces including the selected physical space, and correlating the location data with an identification of the plurality of physical spaces captured in the image for determining the position of the physical space in the image.
3. The system of claim 2, wherein the memory has stored therein at least one template image corresponding to a view of the venue taken from a predetermined viewpoint and further wherein the at least one application is executable by the processor for comparing the image to the at least one template image for determining the area of the venue captured in the image.
4. The system of any one of claims 2 to 3, wherein the memory has stored therein signal data transmitted by an image capturing device having taken the image, the signal data indicative of at least one of a location of the image capturing device at the venue and a direction the image capturing device is pointing towards, and further wherein the at least one application is executable by the processor for identifying the area of the venue captured in the image from the signal data.
5. The system of any one of claims 2 to 4, wherein the at least one application is executable by the processor for receiving the image having associated therewith user-provided localization data, the localization data indicative of the area of the venue captured in the image, and for identifying the area of the venue captured in the image from the localization data.
6. The system of any one of claims 2 to 5, wherein the memory has stored therein communication data exchanged between one or more first communication devices each provided at a given location of the venue and a second communication device provided adjacent an image capturing device having taken the image, the communication data comprising an indication of the given location as transmitted by each of the one or more first communication devices to the second communication device, and further wherein the at least one application is executable by the processor for identifying the area of the venue captured in the image from the communication data.
7. The system of claim 6, wherein the memory has stored therein the communication data transmitted by a selected one of the one or more first communication devices to the second communication device, the selected first communication device provided adjacent the physical space and having encoded therein the identifier data and the communication data comprising the encoded identifier data, and further wherein the at least one application is executable by the processor for indicating in the image the identifier data transmitted by the selected first communication device.
8. The system of any one of claims 1 to 7, wherein the at least one application is executable by the processor for indicating in the image the identifier data comprising overlaying at least one of a username and a photo of the at least one attendee on the image at the determined position of the selected physical space.
9. The system of any one of claims 1 to 8, wherein the at least one application is executable by the processor for receiving the location data further to one of the location data being manually entered and at least a portion of a ticket controlling access of the at least one attendee to the venue being scanned.
10. The system of any one of claims 1 to 9, wherein the at least one application is executable by the processor for outputting the modified image for presentation on a profile page associated with a social networking service, access to the modified image restricted to at least one user having endorsed the profile page.
11. The system of claim 10, wherein the at least one application is executable by the processor for sending to the at least one user having endorsed the profile page a notification indicating that the modified image comprising an identification of the at least one attendee is available on the profile page.
12. The system of any one of claims 1 to 11, wherein the memory has stored therein preference data for the at least one attendee, the preference data indicating whether identification of the at least one attendee in the image is desired, and wherein the at least one application is executable by the processor for indicating the identifier data in the image if the preference data indicates that the identification is desired.
13. The system of any one of claims 1 to 12, wherein the at least one application is executable by the processor for receiving timestamp data associated with the image, receiving event data indicative of a timing of the event, correlating the timestamp data with the event data for identifying a moment during the event at which the image was taken, and attaching an identification of the moment to the modified image.
14. A computer-implemented method for identifying in an image at least one attendee of an event occurring at a venue, the method comprising:
receiving location data identifying a selected one of one or more physical spaces of the venue uniquely assigned to the at least one attendee;
receiving an image taken during the event, the image capturing the at least one attendee occupying the selected physical space;
determining a position of the selected physical space in the image;
indicating identifier data in the image at the determined position of the selected physical space, thereby generating a modified image, the identifier data uniquely identifying the at least one attendee; and
outputting the modified image.
15. The method of claim 14, further comprising identifying an area of the venue captured in the image, the area of the venue comprising a plurality of physical spaces including the selected physical space, and correlating the location data with an identification of the plurality of physical spaces captured in the image for determining the position of the physical space in the image.
16. The method of claim 15, further comprising comparing the image to at least one template image corresponding to a view of the venue taken from a predetermined viewpoint for determining the area of the venue captured in the image.
17. The method of any one of claims 15 to 16, wherein the area of the venue captured in the image is identified from signal data transmitted by an image capturing device having taken the image, the signal data indicative of at least one of a location of the image capturing device at the venue and a direction the image capturing device is pointing towards.
18. The method of any one of claims 15 to 17, wherein the area of the venue captured in the image is identified from user-provided localization data associated with the received image, the localization data indicative of the area of the venue captured in the image.
19. The method of any one of claims 15 to 18, wherein the area of the venue captured in the image is identified from communication data exchanged between one or more first communication devices each provided at a given location of the venue and a second communication device provided adjacent an image capturing device having taken the image, the communication data comprising an indication of the given location as transmitted by each of the one or more first communication devices to the second communication device.
20. The method of claim 19, wherein indicating in the image the identifier data comprises indicating identifier data transmitted by a selected one of the one or more first communication devices to the second communication device, the selected first communication device provided adjacent the physical space and having the identifier data encoded therein.
21. The method of any one of claims 14 to 20, wherein the identifier data indicated in the image comprises at least one of a username and a photo of the at least one attendee overlaid on the image at the determined position of the selected physical space.
22. The method of any one of claims 14 to 21, wherein the location data is received further to one of the location data being manually entered and at least a portion of a ticket controlling access of the at least one attendee to the venue being scanned.
23. The method of any one of claims 14 to 22, wherein the modified image is output for presentation on a profile page associated with a social networking service, access to the modified image restricted to at least one user having endorsed the profile page.
24. The method of claim 23, further comprising sending to the at least one user having endorsed the profile page a notification indicating that the modified image comprising an identification of the at least one attendee is available on the profile page.
25. The method of any one of claims 14 to 24, wherein the identifier data is indicated in the image if preference data for the at least one attendee is indicative that identification of the at least one attendee in the image is desired.
26. The method of any one of claims 14 to 25, further comprising receiving timestamp data associated with the image, receiving event data indicative of a timing of the event, correlating the timestamp data with the event data for identifying a moment during the event at which the image was taken, and attaching an identification of the moment to the modified image.
27. A computer readable medium having stored thereon program code executable by a processor for identifying in an image at least one attendee of an event occurring at a venue, the program code executable for:
receiving location data identifying a selected one of one or more physical spaces of the venue uniquely assigned to the at least one attendee;
receiving an image taken during the event, the image capturing the at least one attendee occupying the selected physical space;
determining a position of the selected physical space in the image;
indicating identifier data in the image at the determined position of the selected physical space, thereby generating a modified image, the identifier data uniquely identifying the at least one attendee; and
outputting the modified image.
PCT/CA2014/000366 2013-04-22 2014-04-22 System and method for personal identification of individuals in images WO2014172777A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201361814489P true 2013-04-22 2013-04-22
US61/814,489 2013-04-22

Publications (1)

Publication Number Publication Date
WO2014172777A1 true WO2014172777A1 (en) 2014-10-30

Family

ID=51790937


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9697348B1 (en) 2016-10-24 2017-07-04 International Business Machines Corporation Location specific image based authentication
US9811653B1 (en) 2016-10-24 2017-11-07 International Business Machines Corporation Location specific image based authentication

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2666187A1 (en) * 2008-05-23 2009-11-23 Research In Motion Limited Systems and methods for presenting an image on a display of a mobile device
US20120056898A1 (en) * 2010-09-06 2012-03-08 Shingo Tsurumi Image processing device, program, and image processing method
US20120069131A1 (en) * 2010-05-28 2012-03-22 Abelow Daniel H Reality alternate
US20120269380A1 (en) * 2006-01-27 2012-10-25 Spyder Lynk Llc Encoding and Decoding Data in an Image for Social Networking Communication
US20130242064A1 (en) * 2012-03-15 2013-09-19 Ronaldo Luiz Lisboa Herdy Apparatus, system, and method for providing social content



Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14788018; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 14788018; Country of ref document: EP; Kind code of ref document: A1)