WO2018092016A1 - Providing location specific point of interest and guidance to create visual media rich story - Google Patents

Providing location specific point of interest and guidance to create visual media rich story

Info

Publication number
WO2018092016A1
WO2018092016A1 (PCT/IB2017/057082)
Authority
WO
WIPO (PCT)
Prior art keywords: user, location, data, interest, destinations
Application number: PCT/IB2017/057082
Other languages: French (fr)
Inventor: Yogesh Chunilal Rathod
Original Assignee: Yogesh Chunilal Rathod
Application filed by Yogesh Chunilal Rathod
Publication of WO2018092016A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25808: Management of client data
    • H04N 21/25841: Management of client data involving the geographical location of the client
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/266: Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N 21/2668: Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles

Definitions

  • The present invention relates generally to displaying, on the device display, information about the nearest contextual points of interest or places specific to the user device's current location, and to providing contextual instructions that tell the user where, when, what and how to take visual media, including photos and videos, and to share, add, broadcast and post them to one or more automatically identified destinations, destinations selected from suggestions, or one or more destinations selected by the user.
  • The present invention also enables the user to create, start, pause and manage a rich story, rich story gallery, feed or album.
  • Snapchat TM or Instagram TM enables the user to add one or more photos or videos to "My stories" or a feed for publishing, broadcasting or presenting the added photos or videos, or a sequence of photos or videos, to one or more or all friends, contacts, connections or followers, or to a particular category or type of user.
  • Snapchat TM or Instagram TM enables the user to add one or more photos or videos to "Our stories" or a feed, i.e. add photos or videos to a particular event, place, location, activity or category and make them available to requesting users, searching users, or connected or related users.
  • Snapchat TM enables the user to capture a photo or record a video and send, share or broadcast that photo or video to one or more selected contacts, connections or followers of the sender, or send it to all friends or to users related to selected events.
  • Twitter TM and other social networks enable users to create and add hashtags and to search or view hashtag-specific contents.
  • Another object of the present invention is to contextually guide or teach the user, providing one or more techniques, tips and tricks that instruct how to take the best photo or video of the current point of interest in terms of pose, effect, scene, style, angle, focus, light, sequence, theme, expression, arrangement, concept and idea.
  • Enabling the user to invite one or more contacts to participate in a gallery; in response to acceptance of the invitation or of a request to join, publishing, presenting or displaying said gallery, and displaying the gallery-named visual media capture controller label and/or identified information about points of interest (based on matching each monitored member device's current geo-location with points-of-interest data) to each participant member, enabling them to take visual media and auto-post it to said gallery.
  • An electronic device comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access information about points of interest or places or locations or positions data, user data and a rule base; monitor, track and identify the current geo-location or position of the user device; identify one or more contextual points of interest or positions based on matching pre-stored points of interest or places or locations or positions data with the user's current geo-location or position information, user data and contextually selected and executed rules from the rule base; notify or provide an indication or present information about the one or more contextual points of interest or positions on the display of the user device; present said identified one or more points of interest or position related corresponding visual media capture controller label(s) or icon(s) on the camera display screen of the user device; and, in response to access of a visual media capture controller, alternately capture a photo or start recording of a video.
  • One or more types of content or media, including captured or selected photo(s), recorded or selected video(s), product or service review(s), comments, ratings, suggestions, feedback, complaints, microblogs, posts, stored or auto-generated or identified or provided user activities, actions, events, transactions, logs, status, and one or more types of user-generated contents or media.
  • A computer implemented method comprising: receiving geo-location data for a device; determining whether the geo-location data corresponds to a geo-location fence associated with a particular position or place or spot or point of interest; and supplying a notification or indication list to the device in response to the geo-location data corresponding to the geo-location fence associated with the particular position or place or spot or point of interest.
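A minimal sketch of the geo-fence check described in the preceding method, assuming a circular fence defined by a centre coordinate and radius; the names `GeoFence`, `check_geofences` and the example coordinates are illustrative, not from the application.

```python
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 coordinates."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


@dataclass
class GeoFence:
    poi_name: str      # point of interest the fence is associated with
    lat: float
    lon: float
    radius_m: float    # circular fence radius


def check_geofences(device_lat, device_lon, fences):
    """Return the fences whose boundary contains the reported device location."""
    return [f for f in fences
            if haversine_m(device_lat, device_lon, f.lat, f.lon) <= f.radius_m]


# Example: a notification is supplied only when the geo-location data
# corresponds to a fence around a point of interest.
fences = [GeoFence("Eiffel Tower selfie spot", 48.8584, 2.2945, 150.0)]
for hit in check_geofences(48.8589, 2.2950, fences):
    print(f"notify device: you are near '{hit.poi_name}'")
```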
  • Enable the user to select from a list of suggested destinations. In an embodiment, auto-post one or more media item(s) to one or more auto-determined destinations.
  • A server comprising: a processor; and a memory storing instructions executed by the processor to: maintain a gallery comprising a plurality of messages posted by a user for viewing by one or more recipients, wherein each of the messages comprises a photograph, a video or one or more other media types, the maintaining of the gallery comprising making the gallery available for viewing, upon request, via respective user devices associated with the one or more recipients; receive a request, by the server system, to create a particular named gallery; create, by the server system, said named gallery; receive a request, by the server system, to start or initiate said created or selected gallery, wherein in the event of starting or initiating the gallery, the server system presents on the display of the user device said created gallery specific visual media capture controller label or icon, for enabling the user to take, capture or record said created named gallery specific one or more types of visual media, and starts monitoring and identifying the geo-location or positions of the user device; and match, by the server system, the identified current geo-location or positions of the user device with pre-stored points of interest or places or positions or spots data.
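The server-side "named gallery" record that this claim describes could be modelled roughly as follows. This is a sketch under assumptions: the class names (`NamedGallery`, `MediaItem`) and fields are illustrative, and real storage, POI matching and recipient delivery are omitted.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class MediaItem:
    media_type: str                 # "photo" | "video"
    url: str
    poi_name: Optional[str] = None  # point of interest where it was captured
    captured_at: datetime = field(default_factory=datetime.utcnow)


@dataclass
class NamedGallery:
    """Server-side record of a named gallery / rich story."""
    name: str
    owner_id: str
    recipients: List[str] = field(default_factory=list)
    items: List[MediaItem] = field(default_factory=list)
    started: bool = False           # geo-location monitoring runs only while started

    def start(self):
        # Starting the gallery is what surfaces the gallery-named capture
        # controller on the owner's camera screen and begins POI matching.
        self.started = True

    def post(self, item: MediaItem):
        if not self.started:
            raise RuntimeError("gallery must be started before posting")
        self.items.append(item)


gallery = NamedGallery(name="Paris weekend", owner_id="user-510")
gallery.start()
gallery.post(MediaItem("photo", "https://example.com/p/1.jpg", poi_name="Louvre"))
print(len(gallery.items), "item(s) in", gallery.name)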
  • A computer-implemented notification method comprising acts of: inputting a query to monitor a user computing device relative to a geographical point of interest; configuring preferences associated with the user computing device; monitoring the geographical location of the user computing device relative to the geographical point of interest; executing rules based on the user data and the geographical location of the user computing device relative to the geographical point of interest; automatically communicating a notification to the user computing device based on the geographical location of the target computing device relative to the point of interest; and utilizing a processor that executes instructions stored in memory to perform at least one of the acts of configuring, monitoring, communicating, or processing.
  • An electronic device comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access guide system associated data and resources; monitor the geo-location or positions of the user device; identify a point of interest or place or spot or location based on matching the nearest point of interest or place with the current geo-location or positions of the user device or the check-in place associated position information; present information about the identified point of interest on the display of the user device; and, based on the identified point of interest or place or spot or location or position, or in response to capturing a photo or recording a video, or in response to initiating the capture of a photo or recording of a video or the taking of visual media via the camera display screen or the current scene displayed by the camera display screen, provide contextual information, instruction, a step-by-step guide or wizard.
  • GPS Global Positioning System
  • Contextually providing information, instruction, a step-by-step guide or wizard for capturing a photo or recording a video comprises providing one or more contextual techniques, angles, directions, styles, sequences, story or script types or ideas or sequences, scenes, shot categories, types & ideas, transitions, spots, motions or actions, costume and makeup ideas, levels, effects, positions, focus, arrangement styles of a group, step-by-step rehearsal to take visual media, identifying required lights, identifying flash range, contextual tips, tricks, concepts, expressions, poses, contextually suggested settings or modes or options of the camera application including focus, resolution, brightness and shooting mode, contextual information or links to purchase accessories required to take a particular style or quality of visual media including lenses, stickers & the like, a guided turn-by-turn location route or start of turn-by-turn voice guided navigation to reach a particular identified or selected place or POI or position, contextual or recognized-object-specific digital photo filter(s), lenses, editing tools, text, media, music, emoticons, stickers, backgrounds, location information, and contextual or matched photos or videos previously taken by other users at the same current location, POI or place.
  • OCR Optical Character Recognition
  • A server comprising: a processor; and a memory storing instructions executed by the processor to: maintain user data, point of interest data, a rule base, hashtags data and monitored and identified user device current location or position information including coordinates; based on the identified current location and/or user data and/or point of interest data and/or hashtags data and/or auto selected or identified or updated one or more executed rule(s), identify, by the server system, one or more contextual hashtag(s); present, by the server system, an accessible link or control of the contextual hashtag(s) on the display; and in the event of access or a tap on a particular hashtag link or control, present, by the server system, the associated or contextual or requested one or more digital items from one or more sources, including presenting a microblog application, presenting a review application, or presenting a hashtag-named visual media capture controller label or icon (see figure 6 - 610) for enabling the user to one-tap capture a photo, or tap & hold to start recording a video and end it at any time, or tap & hold to start video and stop on release, and auto-post within said one tap to said hashtag or its associated one or more destination(s).
  • Display on the user interface the auto-presented hashtag-specific contents, wherein the contents comprise content posted by other users of the network.
  • Destinations comprise one or more user phone contacts, social connections, email addresses, groups, followers, networks, hashtags, keywords, categories, events, web sites or addresses of web sites including 3rd-party social networks & microblogging sites including Twitter, Instagram, Facebook, Snapchat & the like, web pages, points of web pages, user profiles, rich stories, applications, instant messenger services or web services, servers, devices, networks, databases or storage mediums, albums, galleries, stories, folders, feeds and one or more types of digital identities or unique identities where the server or storage medium can post, send, advertise, broadcast or update and the destination(s) or recipient(s) can be notified and/or receive, access and view said post(s).
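Posting one captured media item to many heterogeneous destinations amounts to a fan-out over destination types. A minimal sketch follows; the destination types and sender callbacks (`send_to_contact`, `post_to_hashtag`, `post_to_story`) are hypothetical stand-ins, since real integrations would call the respective network or service APIs.

```python
from typing import Callable, Dict, List, Tuple


def send_to_contact(dest: str, media_url: str):
    print(f"sent {media_url} to contact {dest}")


def post_to_hashtag(dest: str, media_url: str):
    print(f"posted {media_url} under hashtag #{dest}")


def post_to_story(dest: str, media_url: str):
    print(f"added {media_url} to story '{dest}'")


SENDERS: Dict[str, Callable[[str, str], None]] = {
    "contact": send_to_contact,
    "hashtag": post_to_hashtag,
    "story": post_to_story,
}


def auto_post(media_url: str, destinations: List[Tuple[str, str]]):
    """Fan a single captured media item out to every (type, id) destination."""
    for dest_type, dest_id in destinations:
        sender = SENDERS.get(dest_type)
        if sender is None:
            print(f"skipping unsupported destination type: {dest_type}")
            continue
        sender(dest_id, media_url)


auto_post("https://example.com/p/565.jpg",
          [("contact", "alice"), ("hashtag", "CafeCoffeeDay"), ("story", "Paris weekend")])
```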
  • Metadata or data of a hashtag comprise one or more keywords, categories, taxonomy, date & time of creation, creator identity, source name including user, advertiser, 3rd-party web sites or servers or storage mediums or applications or web services or devices or networks, associated or defined related rules, privacy settings, access rights & privileges, triggers and events, associated one or more digital items, number of followers and/or viewers, number of contents posted, verified or non-verified status or icon, and descriptions.
  • Auto-present hashtag(s) based on highest rank, most followers, current trend, most useful, most liked or ranked or viewed, or present based on location, place, event, activity, action, status, contacts, date & time, and any combination thereof.
  • Present a contextual menu on the accessible hashtag control or link to enable the user to access one or more contextual menu items including take photo, record video, provide comments, provide a structured review, provide a microblog, like, dislike, provide a rating, like to buy, make an order, buy, add to cart, share, and refer.
  • Auto-present hashtags based on physical-surroundings context awareness, including the user's current location, surrounding events, current location or place or point of interest related information, weather, date & time, check-in place, user selected or provided status, rules, and any combination thereof.
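One way to picture the contextual hashtag selection described in the preceding items is a scoring function over popularity, trend and location match. The weights and fields below are purely illustrative assumptions; the application only lists the signals, not how they are combined.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Hashtag:
    tag: str
    followers: int
    posts: int
    trending_score: float    # e.g. recent-usage growth, assumed precomputed
    places: List[str]        # places or POIs the tag is associated with


def suggest_hashtags(tags: List[Hashtag], current_place: str, limit: int = 3):
    """Rank hashtags by a blend of popularity, trend and location match."""
    def score(t: Hashtag) -> float:
        location_bonus = 2.0 if current_place in t.places else 0.0
        return (0.4 * (t.followers / 1_000) + 0.2 * (t.posts / 100)
                + 0.4 * t.trending_score + location_bonus)

    return sorted(tags, key=score, reverse=True)[:limit]


tags = [
    Hashtag("coffee", 50_000, 900, 0.3, ["Cafe Coffee Day"]),
    Hashtag("paris", 80_000, 1200, 0.5, ["Eiffel Tower", "Louvre"]),
    Hashtag("monday", 10_000, 300, 0.9, []),
]
print([t.tag for t in suggest_hashtags(tags, "Eiffel Tower")])
```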
  • a geo-fence is a virtual perimeter for a real-world geographic area.
  • a geo-fence could be dynamically generated— as in a radius around a store or point location, or a geo-fence can be a predefined set of boundaries, like school attendance zones or neighborhood boundaries.
  • the use of a geo-fence is called geo-fencing, and one example of usage involves a location- aware device of a location-based service (LBS) user entering or exiting a geo-fence. This activity could trigger an alert to the device's user as well as messaging to the geo-fence operator.
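The enter/exit trigger described above reduces to comparing the device's previous containment state with its current one. A small sketch, with the state bookkeeping kept in a plain dictionary for illustration:

```python
def fence_events(previous_inside: dict, current_hits: set):
    """Compare the previous containment state with the current one and emit
    'enter'/'exit' events, which is what triggers the alert or message to the
    device's user and to the geo-fence operator."""
    events = []
    for fence, was_inside in previous_inside.items():
        now_inside = fence in current_hits
        if now_inside and not was_inside:
            events.append((fence, "enter"))
        elif was_inside and not now_inside:
            events.append((fence, "exit"))
        previous_inside[fence] = now_inside
    return events


state = {"school zone": False, "office": True}
print(fence_events(state, current_hits={"school zone"}))
# -> [('school zone', 'enter'), ('office', 'exit')]
```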
  • This information, which could contain the location of the device, could be sent to a mobile telephone or an email account.
  • Geo-fencing, used with child location services, can notify parents if a child leaves a designated area. Geo-fencing used with locationized firearms can allow those firearms to fire only in locations where their firing is permitted, thereby making them unable to be used elsewhere. Geo-fencing is critical to telematics: it allows users of the system to draw zones around places of work, customers' sites and secure areas. These geo-fences, when crossed by an equipped vehicle or person, can trigger a warning to the user or operator via SMS or email. In some companies, geo-fencing is used by the human resources department to monitor employees working in special locations, especially those doing field work. Using a geofencing tool, an employee is allowed to log his attendance using a GPS-enabled device when within a designated perimeter.
  • Geofencing, in a security strategy model, provides security to wireless local area networks. This is done by using predefined borders, e.g., an office space with borders established by positioning technology attached to a specially programmed server. The office space becomes an authorized location for designated users and wireless mobile devices.
  • Geo-fencing is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries.
  • RFID radio frequency identification
  • a geofence is a virtual barrier.
  • Programs that incorporate geo-fencing allow an administrator to set up triggers so when a device enters (or exits) the boundaries defined by the administrator, a text message or email alert is sent.
  • Many geo-fencing applications incorporate Google Earth, allowing administrators to define boundaries on top of a satellite view of a specific geographical area. Other applications define boundaries by longitude and latitude or through user-created and Web-based maps. The technology has many practical uses. For example, a network administrator can set up alerts so that when a hospital-owned iPad leaves the hospital grounds, the administrator can disable the device.
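For boundaries defined by longitude/latitude vertices (as opposed to a simple radius), a point-in-polygon test is the usual containment check. The sketch below uses ray casting; the "hospital grounds" rectangle and coordinates are made up for illustration, and a geodesic library would be preferable for large areas.

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test for a boundary given as a list of (lat, lon) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        crosses = (lat_i > lat) != (lat_j > lat)
        if crosses and lon < (lon_j - lon_i) * (lat - lat_i) / (lat_j - lat_i) + lon_i:
            inside = not inside
        j = i
    return inside


# Illustrative "hospital grounds" rectangle; an exit alert fires when a
# managed device is reported outside it.
hospital = [(40.000, -75.000), (40.000, -74.990), (40.010, -74.990), (40.010, -75.000)]
device = (40.015, -74.995)
if not point_in_polygon(*device, hospital):
    print("alert: device has left the hospital grounds")
```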
  • a marketer can geo-fence a retail store in a mall and send a coupon to a customer who has downloaded a particular mobile app when the customer (and his smartphone) crosses the boundary.
  • Beacons can achieve the same goal as app-based geo-fencing without invading anyone's privacy or using a lot of data. They can't pinpoint the user's exact location on a map like a geo-fence can, but they can still send signals when triggered by certain events (like entering or exiting the beacon's signal range, or getting within a certain distance of the beacon), and they can determine approximately how close the user is to the beacon, down to a few inches. Best of all, because beacons rely on Bluetooth technology, they hardly use any data and won't affect the user's battery life.
  • A beacon is a piece of hardware an advertiser or merchant can add to their location or place.
  • Beacons are great for proximity and for knowing at a very granular level that someone is near a certain object or product. Beacons send out messages via a Bluetooth connection to application users that enter a specified range, and are perfect for in-store and micro channels.
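The "approximately how close" estimate that beacons provide is typically derived from received signal strength. A minimal sketch using the common log-distance path-loss model follows; the calibration constants (RSSI at 1 m, path-loss exponent) and the zone thresholds are assumptions, since these vary per beacon and environment.

```python
def beacon_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance to a Bluetooth beacon from its received signal strength.

    tx_power_dbm is the RSSI measured at 1 m (a per-beacon calibration value)
    and the exponent depends on the environment, so the result is approximate.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))


def proximity_zone(distance_m):
    """Coarse zones comparable to what beacon SDKs typically report."""
    if distance_m < 0.5:
        return "immediate"
    if distance_m < 4.0:
        return "near"
    return "far"


for rssi in (-55, -70, -90):
    d = beacon_distance_m(rssi)
    print(f"RSSI {rssi} dBm -> ~{d:.1f} m ({proximity_zone(d)})")
```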
  • Geo-location: identifying the real-world location of a user with GPS, Wi-Fi, and other sensors
  • Geo-fencing: taking an action when a user enters or exits a geographic area
  • Geo-awareness: customizing and localizing the user experience based on a rough approximation of user location, often used in browsers
  • One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer- implemented method.
  • Programmatically means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device.
  • a programmatically performed step may or may not be automatic.
  • a programmatic module, engine, or component can include a program, a sub-routine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions.
  • a module or component can exist on a hardware component independently of other modules or components.
  • a module or component can be a shared element or process of other modules, programs or machines.
  • Some embodiments described herein can generally require the use of computing devices, including processing and memory resources.
  • One or more embodiments described herein may be implemented, in whole or in part, on computing devices such as servers, desktop computers, cellular or smartphones, personal digital assistants (PDAs), laptop computers, printers, digital picture frames, network equipment (e.g., routers) and tablet devices.
  • Memory, processing, and network resources may all be used in connection with the establishment, use, or performance of any embodiment described herein (including with the performance of any method or with the implementation of any system).
  • one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium.
  • Machines shown or described with figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing embodiments of the invention can be carried and/or executed.
  • the numerous machines shown with embodiments of the invention include processor(s) and various forms of memory for holding data and instructions.
  • Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers.
  • Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash memory (such as carried on smartphones, multifunctional devices or tablets), and magnetic memory.
  • Computers, terminals, network enabled devices are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums. Additionally, embodiments may be implemented in the form of computer-programs, or a computer usable carrier medium capable of carrying such a program.
  • FIG. 1 is a network diagram depicting a network system having a client-server architecture configured for exchanging data over a network, according to one embodiment.
  • FIG. 2 illustrates components of an electronic device implementing notification system in accordance with the invention.
  • FIG. 3 illustrates a notification system in accordance with the disclosed architecture.
  • FIG. 4 illustrates flowchart explaining notification system, according to an embodiment.
  • FIG. 5 and FIG. 6 illustrate exemplary graphical user interfaces, describing the disclosed architecture with examples.
  • FIG. 7 illustrates flowchart explaining guide system, according to an embodiment.
  • FIG. 8 illustrates an auto determined, auto suggested, auto identified contextual or matched or dynamic destination(s) and auto send to auto identified destination(s) system with some examples, according to one embodiment.
  • FIG. 9 illustrates a rich story system configured in accordance with an embodiment of the invention.
  • FIG. 10 illustrates flowchart explaining rich story system, according to an embodiment.
  • FIG. 11 illustrates an exemplary graphical user interface, describing the rich story system with some examples.
  • FIG. 12 illustrates an exemplary graphical user interface of the rich story related gallery and a generated or updated map of visited POIs, enabling access to each POI specific information or shared media items via a contextual menu, describing the rich story system with some examples.
  • FIG. 13 is a block diagram that illustrates a mobile computing device upon which embodiments described herein may be implemented.
  • FIG. 1 is a network diagram depicting a network system 100 having a client-server architecture configured for exchanging data over a network, according to one embodiment.
  • the network system 100 may be a messaging system where clients may communicate and exchange data within the network system 100.
  • the data may pertain to various functions (e.g., sending and receiving notifications, text and media communication, media items, and receiving search query or search result) associated with the network system 100 and its users.
  • Although described with respect to a client-server architecture, other embodiments may include other network architectures, such as peer-to-peer or distributed network environments.
  • A platform, in an example, includes a server 110 which includes various applications including Guide System Application 156, Rich Story Application 154, Auto Identify and Auto Present Destinations Application 152 and Notification or Indication Application 150, and may provide server-side functionality via a network 125 (e.g., the Internet) to one or more clients.
  • the one or more clients may include users that utilize the network system 100 and, more specifically, the server applications 136, to exchange data over the network 125. These operations may include transmitting, receiving (communicating), and processing data to, from, and regarding content and users of the network system 100.
  • The data may include, but is not limited to, content and user data such as user profiles, user status, user location or checked-in place, search queries, saved search results or bookmarks, privacy settings, preferences, created events, feeds, stories related settings & preferences, user contacts, connections, groups, networks, opt-in contacts, followed feeds, stories & hashtags, following users & followers, user logs of the user's activities, actions, events, transactions, messaging content, shared or posted contents or one or more types of media including text, photo, video, edited photo or video (e.g. with one or more applied photo filters, lenses, emoticons, overlay drawings or text), messaging attributes or properties, media attributes or properties, client device information, geolocation information, and social network information, among others.
  • the data exchanges within the network system 100 may be dependent upon user-selected functions available through one or more client or user interfaces (UIs).
  • UIs may be associated with a client machine, such as mobile devices or one or more types of computing device 130, 135, 140, 145, 175.
  • the mobile devices e.g. 130 and 135 may be in communication with the server application(s) 136 via an application server 160.
  • the mobile devices e.g. 130, 135 include wireless communication components, and audio and optical components for capturing various forms of media including photos and videos as described with respect to FIG. 2.
  • For the server messaging application 136, an application program interface (API) server is coupled to, and provides a programmatic interface to, the application server 160.
  • the application server 160 hosts the server application(s) 136.
  • the application server 160 is, in turn, shown to be coupled to one or more database servers 164 that facilitate access to one or more databases 115.
  • the Application Programming Interface (API) server 160 communicates and receives data pertaining to notifications, messages, media items, and communication, among other things, via various user input tools.
  • the API server 162 may send and receive data to and from an application running on another client machine (e.g., mobile devices 130, 135, 140, 145 or one or more types of computing devices e.g. 175 or a third party server).
  • the server application(s) 136 provides messaging mechanisms for users of the mobile devices e.g. 130, 135 to send messages that include text and media content such as pictures and video and search request, feed request or request to view shared media or contents by user or authorized user, subscribe or follow request, request to access search query based feeds and stories or galleries or album.
  • the mobile devices 130, 135 can access and view the messages from the server application(s) 136.
  • the server application(s) 136 may utilize any one of a number of message delivery networks and platforms to deliver messages to users.
  • FIG. 1 illustrates an example platform, under an embodiment.
  • system 100 can be implemented through software that operates on a portable computing device, such as a mobile computing device 110.
  • System 100 can be configured to communicate with one or more network services, databases, objects that coordinate, orchestrate or otherwise provide advertised contents of each user to other users of network. Additionally, the mobile computing device can integrate third-party services which enable further functionality through system 100.
  • the system for enabling users to use platform for receiving indication(s) or notification(s) or information related to contextual point of interest or place or spots where user can prepare one or more type of media or content including capture photo(s) or record video(s) or broadcast live stream or draft post(s) and share with auto identified contextual one or more types of one or more destinations or entities or selected one or more types of destinations including one or more contacts, groups, networks, feeds, stories, categories, hashtags, servers, applications, services, networks, devices, domains, web sites, web pages, user profiles and storage mediums.
  • Various embodiments of the system also enable the user to create events or groups, so invited participants or members present at a particular place or location can share media, including photos and videos, with each other.
  • While FIG. 1 illustrates them as separate entities, the server 110, database 115 and gateway 120 may be implemented in the system as separate systems, a single system, or any combination of systems.
  • the system may include a posting or sender user device or mobile devices 130/140 and viewing or receiving user device or mobile devices 135/ 145.
  • Devices or Mobile devices 130/140/135/145 may be particular set number of or an arbitrary number of devices or mobile devices which may be capable of posting, sharing, publishing, broadcasting, advertising, notifying, sensing, sending, presenting, searching, matching, accessing and managing shared contents.
  • Each device or mobile device in the set of posting or sending or broadcasting or advertising or sharing user(s) 130/140 and viewing or receiving user(s) devices or mobile devices 135/145 may be configured to communicate, via a wireless connection, with each one of the other mobile devices 130/140/135/145.
  • Devices 130/140/135/145 may also be configured to communicate, via a wireless connection, to a network 125, as illustrated in FIG. 1.
  • Devices 130/140/135/145 may be implemented within a wireless network such as a Bluetooth network or a wireless LAN.
  • the system may include gateway 120.
  • Gateway 120 may be a web gateway which may be configured to communicate with other entities of the system via wired and/or wireless network connections. As illustrated in FIG. 1, gateway 120 may communicate with mobile devices 130/140/135/145 via network 125. In various embodiments, gateway 120 may be connected to network 125 via a wired and/or wireless network connection. As illustrated in FIG. 1, gateway 120 may be connected to database 115 and server 110 of system. In various embodiments, gateway 120 may be connected to database 115 and/or server 110 via a wired or a wireless network connection.
  • Gateway 120 may be configured to send and receive user contents or posts or data to targeted or prospective, matched & contextual viewers based on preferences (wherein user data comprises user profile, user connections, connected users' data, user shared data or contents, user logs, activities, actions, events, senses, transactions, status, updates, presence information, locations, check-in places and the like) to/from mobile devices 130/140/135/145.
  • gateway 120 may be configured to receive posted contents provided by posting users or publishers or content providers to database 115 for storage.
  • gateway 120 may receive a request from a mobile device and may query database 115 with the request for searching and matching request specific matched posted contents, sources, followers, following users and viewers. Gateway 120 may be configured to inform server 110 of updated data. For example, gateway 120 may be configured to notify server 110 when a new post has been received from a mobile device or device of posting or publishing or content broadcaster(s) or provider(s) stored on database 115.
  • Database 115 may be configured to store a database of registered user's profile, accounts, posted or shared contents, followed updated keyword(s), key phrase(s), named entities, nodes, ontology, semantic syntax, categories & taxonomies, user data, payments information received from mobile devices 130/140/135/145 via network 125 and gateway 120.
  • the system may include a server, such as server 110.
  • Server may be connected to database 115 and gateway 120 via wired and/or wireless connections.
  • server 110 may be notified, by gateway 120, of new or updated user profile, user data, user posted or shared contents, user followed updated keyword(s), key phrase(s), named entities, nodes, ontology, semantic syntax, categories & taxonomies & various types of status stored in database 115.
  • While FIG. 1 illustrates a gateway 120, a database 115 and a server 110 as separate entities, the illustration is provided for example purposes only and is not meant to limit the configuration of the system. In some embodiments, gateway 120, database 115 and server 110 may be implemented in the system as separate systems, a single system, or any combination of systems.
  • the position sensor 242 measures a physical position of the mobile device relative to a frame of reference.
  • the position sensor 242 may include a geomagnetic field sensor to determine the direction in which the optical sensor 240 or the image sensor 244 of the mobile device is pointed and an orientation sensor 237 to determine the orientation of the mobile device (e.g., horizontal, vertical etc.).
  • the processor 230 may be a central processing unit that includes a media capture application 278, a media display application 280, and a media sharing application 282.
  • the media display application 280 includes executable instructions to determine whether the geolocation of the mobile device 200 corresponds to the geolocation of one of the media item stored at server 110 or accessed by server 110.
  • the media display application 280 displays the corresponding media item in the display 210 when the mobile device 200 is at the geolocation where the media item was previously generated by other users.
  • the display 210 includes, for example, a touch screen display.
  • the display 210 displays the media items generated by the media capture application 278.
  • A user captures, records and selects media items for adding to a rich story by touching the corresponding media items on the display 210.
  • a touch controller monitors signals applied to the display 210 to coordinate the capturing, recording, and selection of the media items.
  • The notification application 260 includes a notification module or component 312, a geolocation module or component 317, a position module or component 317, a user data module or component 353, a points of interest or points module 351, a rules module or rules engine or component 367, and a digital items module or component 372 (e.g., to access or present the camera display screen via a camera module, or a dynamically generated review form module 378).
  • the geolocation module or component 317 communicates with the GPS sensor 238 to access geolocation information of the user device 200 and media item captured or selected location by the user.
  • the geolocation information may include GPS coordinates of the mobile device 200 when the mobile device 200 enters, exit point of interest or position or point or place or generated the media items.
  • the geo-location or position module 317 communicates with the position sensor 242 to access direction information and position information of the mobile device 200 at the time the mobile device 200 or when the user reaches, enters, and/or exits the nearest or contextual or specified point of interest and generated the media item.
  • the direction information may include a direction (e.g., North, South, East, West, or other azimuth angle) in which the mobile device 100 was pointed when the mobile device 200 generated the media item.
  • the orientation information may identify an orientation (e.g., horizon, vertical or one or more types of angles) at which the mobile device 200 was pointed when the mobile device 200 generated the media item.
  • The notification component or module 314 accesses the current geolocation of the mobile device 200, the current direction and position of the mobile device 200, and the corresponding boundaries for the nearest or relevant or contextual points of interest/places/points/spots (e.g. 310, 305 or 307), or points of interest/places/points/spots (e.g. 310, 305 or 307) within a set particular radius of boundaries.
  • The notification component or module 314 compares the current geolocation, direction, and position of the mobile device 200 with the corresponding boundaries of the nearest or relevant or contextual points of interest/places/points/spots, or of points of interest/places/points/spots within a set particular radius of boundaries.
  • When the notification component or module 314 determines that a current geolocation of the mobile device 200 is within a geolocation boundary of an identified or matched nearest or relevant or contextual point of interest/place/point/spot (e.g. 310, 305 or 307), or of a point of interest/place/point/spot (e.g. 310, 305 or 307) within a set particular radius of boundaries, regardless of the current direction and position of the mobile device 200, the notification component or module 314 generates a notification 314. The notification component or module 314 causes the notification 314 to be displayed in the display 210.
  • the media display application 280 generates a visual guide, such as an arrow or guided map (turn by turn routes), in the display of the mobile device 200 to guide and direct the user of the mobile device 200 to one or more nearest or relevant or contextual or matched points of interest or places or positions or spots or locations or points.
  • the mobile device 200 may display a right arrow to instruct the user to move and point the mobile device 200 further to the right.
  • a notification component 112 of the system 300 monitors the geographical location of the user device 200 (or 130/135/140/145) and communicates a notification 314 to a user device 200 (or 130/135/140/145) according to the user preferences, privacy settings, one or more types of user data including user profile, logs, activities, actions, events, transactions, behavior, senses, interactions, shared contents and user connections or contacts 355 when the geographic location of the user device 200 (or 130/135/140/145) meets geo-location criteria (e.g., near, in, or exiting) related to the point of interest 310.
  • the user device 200 can be a mobile device (e.g., a cellphone) the geographical location of which is monitored, and can be a mobile device (e.g., a cellphone) to which the notification 314 is communicated.
  • the user device can include one or more of a computing device (e.g., portable computer, desktop computer, tablet computer, etc.), a web server, a mobile phone, etc.
  • the point of interest 310 can be one of a class of locations (e.g., all restaurants, all shopping malls in a five mile radius, etc.) specified in association with the advertisement and the notification 314 is communicated when the user device 200 meets the geo-location criteria for one location (e.g., POI 310) of the class.
  • a query provided to the system 300 can be "all shopping malls".
  • the notification 314 and advertisement 343 are triggered for communication to all matched, intended and approved user devices (e.g., user device 200).
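Resolving a class-of-locations query such as "all shopping malls in a five mile radius" is essentially a filter over the POI database by category and distance. A minimal sketch, with illustrative POIs and an assumed flat category field:

```python
import math
from dataclasses import dataclass


@dataclass
class PointOfInterest:
    name: str
    category: str   # e.g. "shopping mall", "restaurant"
    lat: float
    lon: float


def distance_m(lat1, lon1, lat2, lon2):
    # Compact haversine; fine for the few-mile radii used in the example.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(a))


def matches_for_query(device_lat, device_lon, pois, category, radius_m):
    """Resolve a class-of-locations query (e.g. "all shopping malls in a
    five mile radius") against the POI database."""
    return [p for p in pois
            if p.category == category
            and distance_m(device_lat, device_lon, p.lat, p.lon) <= radius_m]


pois = [
    PointOfInterest("Westfield", "shopping mall", 40.752, -73.993),
    PointOfInterest("Joe's Diner", "restaurant", 40.751, -73.990),
]
FIVE_MILES_M = 5 * 1609.34
hits = matches_for_query(40.7506, -73.9935, pois, "shopping mall", FIVE_MILES_M)
print([p.name for p in hits])  # a notification would be triggered for each hit
```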
  • the notification component 312 can receive geo-location information 318 that determines the geographical relationship 306 between the user device 308 and the point of interest 310.
  • The geo-location information 318 can be obtained from a technology that identifies the location of an entity, such as the global positioning system (GPS), triangulation, Wi-Fi, Bluetooth, iBeacons, 3rd-party accurate location identification technologies including Accuware TM (which provides up to approximately 6-10 feet of indoor or outdoor location accuracy for user device 200 and can be integrated via an Application Programming Interface (API)), access points, pre-defined geo-fence boundaries, and other techniques used to ascertain the geographical location of the entity (e.g., a cell phone).
  • Geo-fencing technology can be employed to determine the proximity relative to a point of interest.
  • a geo-fence is a predefined virtual perimeter (e.g., within a two mile radius) relative to a physical geographic area.
  • specified events can be triggered to occur, such as sending the notification 314 to the user device 200.
  • FIG.3 illustrates a more detailed embodiment of a notification application 260 in accordance with the disclosed architecture.
  • The notification component 312 can include a geo-location component 317 that continuously identifies geo-location information 318 of the user device 392 and, based on current position information and user data, prepares a query to match one or more contextual points of interest (e.g., POI 310) relative to a query (e.g., the user device is near that particular location or place or position or coordinates, or e.g. all shopping malls) or relative to the user's current location, place or check-in place or auto-identified geo-location data or position or exact position, from the Points of Interest / Points / Positions / Places / Locations Database 345.
  • the query can be refreshed to include added points of interest (e.g., POI 307) and removed points of interest (e.g., POI 305) associated with a geo-fence.
  • the notification 314 is then processed based on the refreshed query.
  • For example, for advertised entities in New York City: when the current location of any user device (e.g. 200) arrives near or comes within a particular radius of the boundaries of the advertised one or more products, services, shop(s), item(s), thing(s), person(s), or one or more other types of entities, a notification is provided to all of the nearest user devices.
  • The advertiser can provide an advertisement description, bids, target criteria including sending notifications to target user devices that are near or within a particular radius, provide offers, media including photo or video or text, and select or set or associate one or more digital items 370, customized digital items, or digital items available from 3rd-party sources, storage mediums, domains, servers, devices, and networks, for enabling the notification-receiving user to access, open, download, install, use or invoke said digital item(s) 378 from storage medium 370 via the digital item component 372.
  • The advertiser can send a notification to all user devices near the advertised shop or product or showroom and, in the event of acceptance of the notification, present one or more types of contextual or advertisement-associated digital item(s) 370 including, e.g., information, new product arrival information, customized offers for regular customers, and photos of new products.
  • The advertiser creates, selects and applies rules with the advertisement which will apply to target user devices.
  • Example rules: send notifications only to new customers; send notifications to customers of other shops (e.g. jeweler shops) for upsells.
  • Send a notification and an associated digital item, e.g. a personalized review form with customized and dynamically selected or generated fields based on user data including the purchase of a product or an order of food.
  • Present an interface to play a game and win a lottery; present the camera screen display to capture a brand's photo and add it to a rich story and/or send or refer it to the user's contacts, and, based on the number of shares or reactions, provide benefits to the referring user.
  • The notification component 312 accesses, identifies, selects, sequences, applies and executes one or more rules from the rule base 366 via the rules component 367, and/or sensor data generated from one or more types of sensors, and/or the current location(s) or position information 318 accessed via the geo-location or position component 317, and/or a camera scan, and/or date & time, and/or user data 355 accessed via the user data component 353 (including the user's structured profile or fields and associated values such as gender, income range, age range, education, skills, interests, home address, work address and the like, and logs or identifications of user activities, actions, events, transactions, interactions, communications, sharing, participations, collaborations, connections, behavior, one or more types of senses and status), and identifies, customizes, dynamically or automatically updates, selects, sets, applies and executes one or more rules from the rule base 366 via the rule component 367 for processing, preparing, generating and sending one or more notifications 314. For example, automatically or manually identify various statuses of the user, including that the user starts, is currently consuming or finishes a food intake, visits a particular place or tourist place, or finishes viewing; that the user is at an airport or mall and is passing or near a particular shop, or ordered particular types of food items; identify that the user exits from a particular place, e.g. a movie theatre, airplane, bus or cruise; identify that the user is traveling at particular place(s) or location(s) or point(s) of interest on a particular date and time; and track the user's locations, routes, status, position, interaction, voice and behavior, and based on numerous combinations present one or more types of contextual or pre-selected or auto-selected or customized digital items.
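The rule-driven selection described above can be pictured as predicates and actions evaluated against a user context. This is a sketch under assumptions: the `Rule` structure, the context keys ("status", "place", "place_type") and the example rules are illustrative, not the application's own rule format.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    condition: Callable[[Dict], bool]   # evaluated against the user context
    action: Callable[[Dict], str]       # returns the notification / digital item to present


def run_rules(context: Dict, rules: List[Rule]) -> List[str]:
    """Evaluate every rule against the current user context (location, status,
    profile, sensor data, etc.) and collect the notifications to send."""
    return [rule.action(context) for rule in rules if rule.condition(context)]


rules = [
    Rule("post-meal review",
         lambda c: c.get("status") == "finished_meal",
         lambda c: f"Present review form for {c.get('place')}"),
    Rule("tourist photo spot",
         lambda c: c.get("place_type") == "tourist_place",
         lambda c: f"Show photo-spot capture controller at {c.get('place')}"),
]

context = {"status": "finished_meal", "place": "Cafe Coffee Day", "place_type": "restaurant"}
for notification in run_rules(context, rules):
    print(notification)
```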
  • User interface 378 enables the user to provide or add one or more points of interest, places, points, positions, spots and locations and to provide associated metadata, categories, tags, keywords, phrases, taxonomy, descriptions and one or more types of media.
  • User interface 380 enables the user to provide one or more preferences or interests or filters, including one or more types or categories or tags or key phrases or favorite places, and keywords with Boolean operators (AND/OR/NOT/+/-/Phrases), related to points of interest, places, points, positions, spots and locations including shops, products, services, brands, named one or more types of entities, items, tourist places, selfie spots, monuments, structures, roads, restaurants, hotels, foods and the like.
  • The user can apply notification settings, including receiving a particular number of notifications within a particular period of time, turning receiving of notifications on or off, stopping receiving of notifications from particular source(s), receiving notifications only from a particular range or radius or boundary of locations, selecting and applying one or more rules for receiving notifications, configuring do-not-disturb settings (including receive all, scheduled, or from selected or all source(s)), muting notifications for a particular period of time or until un-muted, changing ringtones, alert tones or vibration type(s), sending notifications as per the user's selected or manually provided status, sending notifications only when the user's current location is relative to photo spots or review spots and the like, and sending notifications only from scanned object(s), QR code(s), person(s), product(s), shop(s), item(s), thing(s) or one or more types of entities via the camera display screen. The user can also apply one or more privacy settings, including disclosing or not disclosing the user's identity, location information, and one or more types of fields and associated values related to user profile data including name, address, contact information, gender, photo, video, age, qualification, income, skills, interests, work place address and the like.
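A minimal sketch of how such notification settings might gate delivery, assuming a simple per-user settings object with a rate-limit window, a mute deadline and blocked sources; the field names and limits are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional, Set


@dataclass
class NotificationSettings:
    enabled: bool = True
    max_per_window: int = 3
    window: timedelta = timedelta(hours=1)
    muted_until: Optional[datetime] = None     # "mute for a particular period of time"
    blocked_sources: Set[str] = field(default_factory=set)
    recent: List[datetime] = field(default_factory=list)


def should_deliver(settings: NotificationSettings, source: str,
                   now: Optional[datetime] = None) -> bool:
    """Apply the user's notification settings before a notification is shown."""
    now = now or datetime.utcnow()
    if not settings.enabled or source in settings.blocked_sources:
        return False
    if settings.muted_until and now < settings.muted_until:
        return False
    # keep only deliveries inside the rate-limit window
    settings.recent = [t for t in settings.recent if now - t < settings.window]
    if len(settings.recent) >= settings.max_per_window:
        return False
    settings.recent.append(now)
    return True


s = NotificationSettings(blocked_sources={"spammy-advertiser"})
print(should_deliver(s, "Cafe Coffee Day"))    # True
print(should_deliver(s, "spammy-advertiser"))  # False
```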
  • One or more types of user interfaces 375 enable the user to provide one or more types of user data, including an updated user profile and structured field or field-combination specific values.
  • User database 355 stores plurality types of user data generated by user, received from one or more sources, and auto generated data.
  • The points of interest or positions or spots or places or locations database 355, related to one or more types of entities including shop, product, item, thing, art, person, showroom, showcase, school, college, road, tourist places, selfie spots and user identified or provided or suggested places, comprises advertised and/or non-advertised (including user suggested or 3rd-party provided) points of interest or positions or spots or places or locations related to one or more types of entities.
  • FIG. 4 illustrates a computer-implemented notification method in accordance with the disclosed architecture.
  • the geographical location of the user computing device e.g. mobile phone is monitored by server module 150 relative to the geographical point of interest.
  • Matchmaking, by server module 150, of the user device's current location or position information and user data with points of interest or positions or places or points or locations or spots data and associated one or more types of data, and further selecting, customizing, auto-selecting or auto-updating based on parameters, applying and executing one or more rule(s) from the rule base to identify the nearest contextual one or more point(s) of interest or points or positions or places or spots or locations.
  • the notification is processed and generated, by server module 150, based on the geographical location of the user computing device relative to the geographical point of interest and plurality types of user data.
  • Server module 150 checks whether the application is open; if it is open, the flow follows process 432, and if it is not open, it follows process 416.
  • a notification is automatically communicated, by server module 150, to a user computing device 200 at client application 260 based on the geographical location of the user computing device 200 relative to the point of interest.
  • server module 150 presents one or more notifications at client application 260 of user device 200.
  • Figure 5 illustrates an example prospective places or spots notification service, wherein user is notified about contextual or matched or associated prospective places or spots where user can capture photo or record video or broadcast live stream or prepare one or more type of media or contents related to that place or spot.
  • Advertiser(s) or merchant(s) 505 is/are enabled to provide or create one or more advertisement campaigns, associated advertisement groups and advertisements, including the geo-location or positional or place information of the advertised entity (for example a shop, product, particular part of a shop, department, showroom, showcase, item, thing, person or physical establishment), an advertisement description, offer information including discount, coupon, redeemable points, offers or parts or components of the notification message including one or more types of media such as text, links, image, video, user actions (e.g. a "Like" button, "Take Photo" button, "Provide Video Review" button) 545/548/550/552, advertisement targeted keywords, bids, and conditions associated with said offer or discount (e.g. a photo filter or media to overlay on the photo by the user), as well as target criteria including the type of user or user profile, for example gender, age, age range, education, qualification, skills, income range, hobbies, languages, region, home or work or other location(s) or place(s) or location boundaries, preferences of the user, and one or more types of user profile data or fields or associated values or ranges of values or selections.
  • Information provided by advertiser(s) or merchant(s) 505 is sent to server 110 and stored at database or storage medium 525 via the prospective places or spots module 512, wherein prospective places or spots are places or spots where a contextual or associated or matched notified user can prepare or provide user generated contents including one or more types of media, for example photo, video, live video stream, review, blog, video review, ratings, likes, dislikes, notes, suggestions, micro blogging and the like.
  • User(s) 510 is/are enabled to provide information about prospective places or spots, including geolocation or positional information, user actions, details, and the name of the entity (including school, college, shop, hotel, restaurant, home or house, apartment, tourist place, art gallery, beach, road, movable people or person, temple, mall, railway station, airport, bus stop, particular showroom, showcase & the like), wherein, in the event of detecting or matching the current location of other users of the network with said prospective place or location, the user is notified about the prospective place where the user can prepare and post or share or broadcast or send one or more types of media, including selecting or capturing a photo or recording a video or live streaming video, to one or more destinations, sources, users, contacts, connections, followers, groups, networks, devices, storage mediums, web services, web sites, applications, web pages, user profiles, hashtags, tags, categories, events and the like.
  • Information provided by user(s) 510 is sent to server 110 and stored at database or storage medium 525 via the prospective places or spots module 512, wherein prospective places or spots are places or spots where a contextual or associated or matched notified user can prepare or provide user generated contents including one or more types of media, for example photo, video, live video stream, review, blog, video review, ratings, likes, dislikes, notes, suggestions, micro blogging and the like.
  • Database or storage medium 530 stores information about the user provided by the user or any other sources, including user profile or user information, activities, actions, events, transactions, shared media, current location or check-in place or positional information, user preferences, privacy settings and other one or more types of settings.
  • database or storage medium 535 stores or updates rules.
  • database or storage medium 540 stores various types of media including text, photo, video, voice, files, emoticons, virtual goods, links & like shared or provided by user
  • server 110 can access various types of media from 3rd parties' applications, services, servers, networks, web sites, devices and storage mediums.
  • database or storage medium 541 stores digital items, applications, message templates, web service links, objects, interfaces, media, fonts, photo filters, virtual goods, digital coupons, user actions which advertisers or merchants select while preparing advertisements or notification message or user can select while preparing or providing prospective places or locations or spots information.
  • server 110, based on user's 510 current location or place or check-in place or positional information and user's one or more types of data 530, matches user data 530 with data of prospective places or spots 525 and applies one or more contextual or associated or identified rules stored at rule base database or storage medium 535 to identify or recognize or detect matched prospective locations or places or spots from database or storage medium 525, where user 510 is alerted or notified by sending a rich notification message (e.g. prepared based on data stored at 525/530/535/540/541) at user device 560, wherein the rich notification message is prepared by advertiser 505 or user 510 or server 110 based on data stored at 525/530/540/541, and enabling user 510 to access rich notification information and associated links or user action links 545/548/550/552 for providing or sending or posting or sharing or broadcasting one or more types of media or information 565 related to that place or location or that place or location associated entity including shop, product, person, company, firm, club, school, college, organization, hotel, restaurant, food, dress, item, thing, object, service, arts, art gallery, event, activity, action, transaction and the like.
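As an illustrative, non-limiting sketch (hypothetical names; not the claimed implementation), the matching performed by server 110 between user data 530 and prospective places or spots 525 under rules 535 can be approximated as a radius test plus rule filtering:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two WGS-84 points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def matched_prospective_spots(user, spots, rules, radius_m=150):
    """Return spots (database 525) near the user's current location that pass every rule (535)."""
    hits = []
    for spot in spots:
        dist = haversine_m(user["lat"], user["lon"], spot["lat"], spot["lon"])
        if dist <= radius_m and all(rule(user, spot) for rule in rules):
            hits.append((dist, spot))
    return [spot for _, spot in sorted(hits, key=lambda h: h[0])]  # nearest first

# Example rule (hypothetical): only notify users inside the advertiser's target age range.
age_rule = lambda user, spot: spot.get("min_age", 0) <= user.get("age", 0) <= spot.get("max_age", 200)
```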
  • user 510 can receive or view said server 110 provided rich notification(s) and can access said one or more rich notification messages (e.g. prepared based on data 525/530/535/540/541).
  • user 510 receives and views rich notification 548 and taps on "Take Photo/Video" user action or active link or accessible link, wherein based on user access or tap or click on said link, server 110 presents or downloads or installs or enables user to access said link associated application or interface or object or digital item or form or one or more controls (buttons, combo box, textbox etc.).
  • For example, based on user's 510 tapping on "Take Photo/Video" link 548, server 110 presents photo or video application or camera display feature of application 583 at user device 560 so user 510 can take one or more photos or videos related to rich notification 548, for example capturing a "Cafe Coffee Day" photo or video 565 via "photo" icon or button 586.
  • server 110 identifies or detects or verifies or matches object(s) e.g. "coffee cup" 580 / e.g. "cafe coffee day" text 599 inside said user's 510 said captured photo 565 with said advertiser's 505 said notification message 548 associated entity or details to identify that user 510 took photo or video 565 related to said clicked notification 548.
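A minimal sketch of this verification step, assuming OCR text and object labels have already been produced by any object recognition / OCR engine (the field names are hypothetical):

```python
def photo_matches_notification(ocr_text, detected_objects, notification):
    """Check whether a captured photo relates to the advertised entity of the
    tapped notification, e.g. brand text "cafe coffee day" 599 or object
    "coffee cup" 580, via a simple keyword / label intersection."""
    text = ocr_text.lower()
    labels = {label.lower() for label in detected_objects}
    brand_ok = any(kw.lower() in text for kw in notification["brand_keywords"])
    object_ok = bool(labels & {o.lower() for o in notification["expected_objects"]})
    return brand_ok or object_ok

# photo_matches_notification("CAFE COFFEE DAY", {"coffee cup", "table"},
#     {"brand_keywords": ["cafe coffee day"], "expected_objects": ["coffee cup"]})  # -> True
```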
  • user 510 can view offer 572 associated with that notification message 548 related advertisements stored at 525 and posted by advertiser 505.
  • User 510 can get offer when user 510 captures a photo 565 and shares said captured photo with user's 510 contacts via posting or sending to selected contacts or followers or story or feed 595, which is stored at database or storage medium 540 of server 110 via Media Receiving Module 566.
  • server 110 enables user 510 to view media or photos or videos of other users related to that place or spot or to similar other places or spots, so user can view angles and scenes and learn from them how, what, where and when to prepare media or capture photo or record video or broadcast live stream.
  • user 510 can receive notification(s) e.g. 525/530/535/540/541 at spectacles 590 or device e.g. 560 connected with spectacles 590, enabling user 510 to capture photo or video via spectacles 590 which have an integrated wireless video camera 598 that enables user to capture photo or record video clips and save them in spectacles 590 or to user device 560 connected with spectacles 590 via one or more communication interfaces, or save to server 110 database or storage medium 540.
  • the glasses begin to capture photo or record video after user 510 taps a small button near the left camera 598.
  • the camera can record videos for a particular period of time or until user stops it.
  • the snaps will live on user's Spectacles until user transfers them to smartphone 560 and uploads to server database or storage medium 540 via Bluetooth or Wi-Fi or any communication interface, channel, medium, application or service.
  • user 510 is enabled to capture every possible photo or record videos or share what user is thinking about particular location or place or said location or place related entity including shop, product, movie, person, thing, item, object, service & the like and add to feed or day to day story and/or share to one or more contacts, groups, followers, categories of users of network, auto identified contextual users, servers, destinations, devices, networks, web sites, web pages, user profiles, applications, services, databases, storage mediums, social networks and the like.
  • User can describe whole day visually without missing any possible selfies or photos or videos related to user's day to day life or activities or actions or events or transactions and share with friends and contacts or followers.
  • user 510 visits various places in New City, then user is notified at every possible prospective spot or place or location during user's travelling, so user can capture every possible moment via photos or videos or describe moments via micro blogging or notes without missing prospective photos or videos or selfies at various visited places, locations, spots and those visited places', locations', spots' related people, products, services, foods, items, natural scenes, objects, arts, photos, artistic roads, monuments, structures & buildings, fairs, light shows e.g. Times Square, showrooms, dresses, selfie booths or spots.
  • At present user misses many prospective photos or videos or micro blog sharings because user does not know where and when to capture, what to record, what to broadcast or what to write or post or share.
  • server 110 can auto identify and store at storage medium 525 prospective places or spots based on location or place or positional information where a particular number of users took a particular number of photos or videos or made media posts, or where a particular number of reactions on them occurred within a particular period of time or at a current particular range of time.
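One possible (assumed, simplified) way to auto identify such prospective spots is to bucket recent geo-tagged posts into grid cells and keep the dense cells:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def auto_identify_spots(media_posts, min_posts=50, window=timedelta(days=30), cell=0.001):
    """Group geo-tagged posts into ~100 m grid cells; cells holding at least
    `min_posts` recent posts become candidate prospective spots (database 525)."""
    cutoff = datetime.utcnow() - window
    cells = defaultdict(list)
    for post in media_posts:                       # post: {"lat", "lon", "taken_at"}
        if post["taken_at"] >= cutoff:
            key = (round(post["lat"] / cell), round(post["lon"] / cell))
            cells[key].append(post)
    return [{"lat": k[0] * cell, "lon": k[1] * cell, "post_count": len(v)}
            for k, v in cells.items() if len(v) >= min_posts]
```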
  • rule base comprises identifying or matching or executing contextual or associated various rules based on intelligently identifying, requesting, alerting, storing and processing various data of user or prospective places or spots, including enabling user to provide check-in place(s) or updated status, or auto identifying check-in place based on monitoring current location of user for a particular period of time; identifying user associated or accompanied friends based on monitoring of similar current location for a particular period of time or check-in place; identifying user's intent to capture photo or video based on sensors; and, based on monitoring of a particular speed of change of current location, identifying that user or connected users of user are moving.
  • Point of Interest Positions, Places, Spots Map Generation Module 520 is discussed in detail in Figure 12.
  • Figure 6 illustrates exemplary graphical user interface(s); in the event of tapping on notification 548, user device 200 is presented with the notification associated digital item, e.g. camera display screen interface 583, and in the camera display screen interface user is auto presented with notification associated destination(s) visual media capture controller (e.g. a hashtag).
  • user is also presented with one or more user actions 613 including selected pre-created message to send with said captured photo or recorded video, Like, provide status including purchased, viewed, ate, drank and watched, instruct receiver including Refer others, Re-share to others, provide comments and the like.
  • user can remove 606 said auto presented or notification associated contextual visual media capture controller 610.
  • system automatically identifies one or more objects inside captured photos or recorded videos, e.g. brand name "cafe coffee day" 694 and coffee cup 694, and based on matching user's exact place position or geo-location information with location of captured photo or recorded video, system identifies and verifies that user captured said advertised notification related photo or video and sent it to advertiser specified destinations, e.g. all contacts of user, and based on number of recipients system identifies sending of photo or video to advertiser set particular number of recipients, e.g. if sent to more than 10 contacts then provide pre-specified benefits to said user including digital coupon, discount, gift, offer, cash back, redeemable points, voucher, invite to take sample and the like.
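The advertiser-set recipient threshold can be evaluated with a trivial rule, sketched below with hypothetical field names:

```python
def benefit_for_share(offer, recipient_count):
    """Grant the advertiser-defined benefit (e.g. digital coupon, cash back)
    only when the captured photo was shared with enough recipients."""
    if recipient_count > offer.get("min_recipients", 10):
        return offer["benefit"]
    return None

# benefit_for_share({"min_recipients": 10, "benefit": "20% discount coupon"}, 12)
# -> "20% discount coupon"
```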
  • user is auto presented with contextual or matched destination visual media capture controller 610 based on matching of points of interest or points or advertised places or products positions or spots database 345 with user's current location 317, user data, preferences, privacy settings 355 as discussed above.
  • Based on monitoring, tracking and identification of current user device's 200 current updated geo-location and position information 317, system auto removes previously presented visual media capture controller 610 and associated presented information 611 and associated presented user action(s) 613 and presents another contextual or matched destination visual media capture controller (if any) based on matching of points of interest or points or advertised places or products positions or spots database 345 with user's current updated or nearest or particular set radius specific location 318, user data, preferences, privacy settings 355 as discussed above.
  • user is presented with pre-set particular duration of delay sending message 671 and passed or updated remaining duration indicator 672, so within said duration 672 user is able to preview and remove 673 captured photo or video 631 and capture again if user wants to send another photo or video.
  • system identifies that user consumed tea or particular type of tea at "Tea House", so based on that system sends notification to user, and in response to accepting or tapping or clicking on notification 645 user device 200 is presented with customized or pre-created and pre-stored or dynamically generated or "Tea House" as an advertiser provided digital item e.g. Review Interface 655, so user can provide review immediately after consuming tea and user provided review is auto sent to authorized person of "Tea House" for further process and action.
  • System, continuously or until switched off or muted by user, monitors, tracks and identifies user device's 200 current location or position information 318, identifies or matches or relates nearest or surrounding place or point of interest or advertised place or points or positions or selfie spots from database 355, filters based on user data including user preferences and privacy settings, and if application is not open then sends notification, and in the event of tap on notification presents associated digital items including one or more applications, interfaces, dynamically generated or customized or pre-created forms, web sites, web pages, sets of user actions or controls, objects and one or more types of associated web services, data or contents or one or more types of media from one or more sources.
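A rough sketch of such a monitoring loop follows; `get_location`, `find_nearest_poi` and `ui` are placeholder interfaces standing in for the sensors, matching logic and client application described above:

```python
import time

def monitor_and_present(get_location, find_nearest_poi, ui, prefs, poll_seconds=15):
    """Poll the device location, swap the presented visual media capture
    controller when the nearest matched POI changes, and fall back to a push
    notification when the application is not in the foreground."""
    current_poi = None
    while not prefs.get("muted", False):
        location = get_location()                  # current position 318
        poi = find_nearest_poi(location, prefs)    # match against databases 345/355
        if poi != current_poi:
            ui.remove_controller()                 # hide previous controller 610/611/613
            if poi is not None:
                if ui.app_in_foreground():
                    ui.present_controller(poi)     # new contextual controller
                else:
                    ui.send_push_notification(poi) # notification such as 548
            current_poi = poi
        time.sleep(poll_seconds)
```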
  • FIG. 7 illustrates a computer-implemented guided system for dynamic, context-aware, contextual and artificial-intelligence-assisted facilitation of user to take visual media including capturing photo, recording video or preparing one or more types of media including text, microblog & the like in accordance with the disclosed architecture.
  • the geographical location of the user computing device e.g. mobile phone 200 is monitored, by server module 156, relative to the geographical point of interest e.g. 310.
  • At 730, identify that user taps on indication to access, open or invoke indication, and in the event of tapping or clicking on particular indication, open or auto open or allow access to one or more associated digital items including one or more applications including camera application to capture, display and share visual media, interfaces, objects, user controls, user actions, web site, web page, dynamically created or customized or pre-selected forms, one or more types of media including one or more or set or series or sequence of photos, videos, text, data or content or information, voice, emoticons, photo filters, digital coupons, multimedia, interactive contents, advertisements, and enable access to associated web services and data from one or more sources.
  • start guide system in the event of identifying 737 that user wants to take visual media or prepare one or more types of media or content for current context or current location or environment including current or identified nearest point of interest, position or place or associated one or more types of entities including object(s), product(s), item(s), shop, person, infrastructure & like.
  • server module 156 presents curated or contextual or pre-stored or pre-configured or pre-selected one or more types of one or more media items which was/were previously taken or generated or provided from/at identified POI e.g. 394, or 3rd parties' contextual stock photos or videos, or photos or videos from similar types or patterns of places or locations or POIs or positions.
  • contextual information or links to purchase accessories required to take particular style or quality of visual media including lenses, stickers & the like; guided turn-by-turn location route to reach particular identified or selected place or POI or position; contextual or recognized-objects-inside-media specific digital photo filter(s), lenses, editing tools, text, media, music, emoticons, stickers, background; and location information based on user device's 200 current location 318 specific particular identified or selected POI or place or position or entity (person, object, item, product, shop etc.) related to said POI or place or position.
  • Figure 8 illustrates auto presenting, presenting suggested, or enabling user to select one or more destinations for sharing one or more types of media or content.
  • Advertiser 805 is enabled to prepare a listing of one or more types of destinations, via advertiser user interface, including provide, select, edit, update and set one or more brand pages, brand web sites, brand related hashtags, tags, categories, servers, services or web services, databases or storage mediums, applications, objects, interfaces, applets, servers, devices, networks, created galleries, albums, stories, feeds, events, profiles and the like, in which users can share, send, broadcast and provide one or more types of media or user generated or user prepared contents including photos, videos, live stream, text, posts, reviews, micro blogs, comments, user reactions and the like, and advertiser 805 is also enabled to provide or update or select associated details, and to select, set and apply one or more associated rules 835, policies, privacy settings, target criteria, bids, preferences, geo-location or positional information, set or define geo-fence boundaries information and target location query.
  • user 810 is also enabled to provide, create, select, update & set one or more destination lists including user connections like phone contacts, events, networks, groups & followers 830, web sites, web pages, user profiles, storage mediums, applications, web services, categories, hashtags, galleries, albums, stories, feeds, events, social network accounts and the like; user 810 is also enabled to provide, select, update and set or apply or associate one or more rules, privacy settings, policies, preferences and details which are stored at database 825 via prospective destination module 812 of server 110.
  • user device 200 presents camera application 855, visual media capture controller 898 and associated information 896. User can remove visual media capture controller 898 via remove icon 893 or can switch to other visual media capture controllers via previous icon 894 or next icon 895.
  • When user device 200 captures photo 850 via photo capture button or icon 862 or visual media capture controller 898, then based on the auto presented visual media capture controller associated destination or user device's location or position specific and identified or recognized object(s) inside photo, e.g. "coffee cup" 852 and "cafe coffee day" text or word or logo 858, via employing one or more types of object recognition technologies and optical character recognition (OCR) technologies, system auto identifies or auto matches contextual one or more destinations, e.g. destinations 869 and 876, and auto selects and initiates auto sending to said auto identified destinations within a set particular period of duration, so user can preview 885 and cancel 884 sending of photo before expiration of said duration 882.
  • system auto sends captured photo 885 to associated or defined or set or auto identified destinations 869 and 876.
  • user can change destination via manually selecting one or more destinations from list of destinations 882 before expiration of said timer or duration 882.
  • user can manually select one or more destinations from list of destinations 875 including user contacts and groups, opt-in contacts, and auto suggested destinations e.g. 877, 878 and 879.
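The delayed auto-send with preview and cancel described above can be sketched with a simple timer (hypothetical helper, not the disclosed implementation):

```python
import threading

class DelayedSend:
    """Auto-send captured media to the identified destinations (e.g. 869 and 876)
    after a countdown, unless the user cancels (884) during the preview (885)."""
    def __init__(self, media, destinations, send_fn, delay_seconds=10):
        self.media, self.destinations, self.send_fn = media, destinations, send_fn
        self.cancelled = False
        self._timer = threading.Timer(delay_seconds, self._fire)

    def start(self):
        self._timer.start()

    def cancel(self):                              # user taps cancel before expiry
        self.cancelled = True
        self._timer.cancel()

    def change_destinations(self, destinations):   # manual re-selection from list 875
        self.destinations = destinations

    def _fire(self):
        if not self.cancelled:
            self.send_fn(self.media, self.destinations)
```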
  • user can provide one or more types of admin rights to one or more members and provide one or more or all types of access rights 987.
  • User can accept one or more requests 933 from other users of network to become member and/or admin of particular rich story e.g. rich story "My USA Tour" 903.
  • User can provide preferences and apply privacy settings and notification settings 934 to receive notifications or indications of contextual POI or places that match with user's location when user arrives or dwells at that POI or place, including selecting one or more pre-created or presented categories, types, hashtags, tags, key phrases, keywords specific (based on keywords related to POI related information or metadata or comments or contents associated with photos or videos captured or shared or photos or videos that were previously taken from that POI or place), prospective objects related (i.e. recognized objects inside collections of photos or videos that were previously taken from that POI or place) or prospective objects related keyword specific.
  • user is enabled to receive all, or to limit daily or within schedule or up to end of rich story 935, the receiving of a number of notifications or alerts or indications or presentations of a number of contextual POIs 936 and associated information.
  • user is enabled to apply do not disturb settings 937 including do not receive notification while user is on call, at night (scheduled or default), while paused, while moving (in car or in ride, but not while walking), while eating food (based on place), while at a fixed location and not much moving.
  • user is enabled to turn ON or OFF one or more types of ring tones or vibrations, make silent, select and set ringtones and/or vibrations for receiving of one or more types of alerts from one or more types of triggers including notification or indication in the event of identification of new POI 938.
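These notification-limit and do-not-disturb settings reduce to a predicate checked before each alert; a condensed, assumed version:

```python
from datetime import datetime

def may_notify(settings, state, sent_today):
    """Evaluate settings 934-938 before alerting the user about a new POI."""
    if sent_today >= settings.get("max_per_day", 50):          # daily limit 936
        return False
    if state.get("on_call") or state.get("paused"):            # do-not-disturb 937
        return False
    hour = datetime.now().hour
    if settings.get("quiet_at_night", True) and (hour >= 22 or hour < 7):
        return False
    if state.get("speed_kmh", 0) > 8:                          # moving in a vehicle, not walking
        return False
    return True
```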
  • User can provide rights to receive and view rich story to one or more types of viewers including user only, or make it private so only the rich story creator user can access it 940, and/or user can provide rights to receive and view rich story to all or selected one or more contacts, groups or networks 941 and/or all or selected one or more followers of user 942 and/or participants or members 943 of rich story 903 and/or contacts of participants or members 943 of rich story 903 and/or followers of participants 946 and/or contacts of contacts of participants 947 and/or one or more target criteria including age, age range, gender, location, place, education, skills, income range, interest, college, school, company, categories, keywords, and one or more named entities specific and/or provided, selected, applied, updated one or more rules specific users of networks or users of one or more 3rd parties' networks, domains, web sites, servers, applications via integrating or accessing Application Programming Interface (API).
  • User can provide presentation settings and duration or scheduled to view said rich story including enabling viewers to view during start and end of rich story period 955, user can view anytime 966, user can view based on one or more rules 967 including particular one or more dates and associate time or ranges of date and time.
  • User can select auto determined option 968, so system can determine when to send or broadcast or present one or more content(s) or media item(s) related to rich story e.g. rich story 903.
  • enable user to set to notify target viewers or recipients 974 as and when media item(s) related to rich story e.g. rich story 903 shared by user or participant one or more members of rich story.
  • FIG. 10 illustrates a computer-implemented rich story method in accordance with the disclosed architecture. At 1003, if new rich story or gallery e.g.
  • server module 154 (A) ask user to provide contextual information by dynamically generating and presenting contextual form(s) and/or (B) auto identifies associated information based on information about POI or place and user data from one or more sources and/or (C) auto identifies information based on recognized object(s) inside visual media and auto identifies said identified object associated information from one or more sources and/or (D) auto identifies contextual digital items or identifies user associated digital item(s).
  • Figure 11 illustrates exemplary graphical user interface(s) 265 for providing or explaining rich story system.
  • user can provide title or name of rich story e.g. "My USA Tour Story” and tap or click on "Start” button or icon or link or accessible control 1113 to start preparing rich story or adding presented or identified POIs specific one or more types of media specific one or more media item(s) including selected or captured photo(s), selected or recorded video(s) and user generated or provided content item(s).
  • User can configure and manage created story via clicking or tapping on "Manage" icon or label or button or accessible control 1115 (as discussed in Figures 9 and 10).
  • User can input title at 903 and tap on "start" button 1113 to immediately start rich story which is created, managed and viewed by user only, and later user can configure story and invite one or more contacts or groups or followers or one or more types of users of networks and set or apply or update privacy settings for viewers, members and can provide or update presentation settings via clicking or tapping on "Manage" icon or label or button or accessible control 1115 (as discussed in Figures 9 and 10).
  • user can remove or skip or ignore or hide or close said presented POI by tapping on remove or skip or hide icon 1208 and instruct system to present next available POI or place at 1127, or based on updated geo-location or position information of user device 200, server or rich story system updates or presents next nearest contextual POI or place or spot or location or position and hides or removes earlier POI or place or location or spot.
  • system enables user to show previous and next POI(s) or place(s) or location(s) or spot(s) for view only and shows current POI for taking associate one or more types of one or more visual media item(s).
  • user can tap on default camera photo capture icon e.g. 1129 or video record icon e.g. 1131 to capture photo and send to selected one or more contacts via icon 1133 in normal ways.
  • user is enabled to pause or re-start or stop 1136 rich story e.g. 903 and manage rich story 1135 (as discussed in Figures 9 and 10).
  • user is presented with more than one visual media capture controller(s) or menu item(s) e.g. 1180 and 1190 related to more than one created rich stories and display of information about current contextual identified POI specific information e.g. 1187, and is enabled to capture photo (one tap) or record video (hold on label to start and release label when video is finished) and add to selected or clicked or tapped visual media capture controller label or icon e.g. 1180 or 1190 specific or related rich story.
  • User is enabled to pause, restart, and stop rich story or gallery 1180 via icon 1186 and manage via 1185 (as discussed in Figures 9 and 10) and view number of views on shared media item(s) indicator 1182, or pause, restart, and stop rich story or gallery 1190 via icon 1196 and manage via 1195 (as discussed in Figures 9 and 10) and view number of views on shared media item(s) indicator 1194.
  • User is enabled to skip or hide or remove or instruct to present next nearest or next prospective POI or place or spot or location or position via icon 1188.
  • user is enabled to turn ON the guide system via tap on icon 1104 to start guide system (as discussed in Figure 7).
  • system starts guide system (as discussed in Figure 7) and, based on current user device 200 geo-location or position information 318 specific particular identified or selected POI or place or position or entity (person, object, item, product, shop etc.) related to said POI or place or position, contextual rules 366 provide or present or instruct or guide user for one or more techniques, angles, directions, styles, sequences, story or script type or idea or sequence, scenes, shot categories, types or ideas, transition, spots, motion or actions, costume or makeup ideas, levels, effects, positions, focus, arrangement styles of group, step by step rehearsal to take visual media, identify required lights, identify flash range, contextual tips, tricks, concepts, expressions, poses, contextual suggested settings or modes or options of camera application, contextual information or links to purchase accessories required to take particular style or quality of visual media including lenses, stickers & the like, guided turn-by-turn location route or start of turn-by-turn voice guided navigation to reach particular identified or selected place or POI or position, contextual or recognized-objects-inside-media specific digital photo filter(s), lenses, editing tools, text, media, music, emoticons, stickers, background, and location information.
  • system identifies and presents contextual or matched photos or videos 1117 previously taken by other users at the same current location or POI or place or at similar types of POI(s) or place(s) or location(s) based on matching recognized or scanned object(s) inside scene of camera display screen (before capturing photo) with stored media 115 or 394 or 540 at server 110, and provides contextual tips and tricks e.g. 1116 based on one or more types of sensor data, user data, recognized object(s) inside camera display screen (before capturing photo or recording video) and identified rules.
  • system provides above resources based on captured photo and recorded video and based on provided resources including tips and tricks and matched previously taken curated photos or videos user can retake photo or video and add to rich story or gallery.
  • user is enabled to view statistics including number of visual media item(s) or content item(s) created or shared or added to rich story by user and participant member(s) (if any), number of views and reactions on each or all visual media item(s) or content item(s) created or shared or added to rich story by user and each participant member(s) (if any), number of missed POIs or places where user or participant member(s) (if any) missed to capture photo or record video or provide related content, number of POIs or places where user or participant member(s) (if any) captured photos or recorded videos or provided related content(s), number of total media item(s) in particular rich story or all rich stories and the like.
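The statistics above can be aggregated from the story's media items; a minimal sketch with assumed record fields:

```python
def rich_story_statistics(story):
    """Summarise a rich story (e.g. 903): totals, per-member counts,
    captured vs. missed POIs."""
    items = story["media_items"]        # each: {"member", "poi_id", "views", "reactions"}
    per_member = {}
    for item in items:
        per_member[item["member"]] = per_member.get(item["member"], 0) + 1
    captured = {item["poi_id"] for item in items}
    return {
        "total_items": len(items),
        "views": sum(item["views"] for item in items),
        "reactions": sum(item["reactions"] for item in items),
        "items_per_member": per_member,
        "captured_pois": captured,
        "missed_pois": [p for p in story["presented_pois"] if p not in captured],
    }
```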
  • Figure 12 illustrates logical flow and example of rich story e.g. rich story 903.
  • system presents visual media capture controller label (e.g. named same as rich story title) e.g. "My USA Tour Story" 1140.
  • user is presented with all contextual prospective POIs and routes.
  • user is provided with route information from 1st to 2nd POI including distance, time taken to reach or estimated time to reach, and start of turn-by-turn voice guided navigation to reach particular identified or selected place or POI.
  • user can view next prospective one or more POIs and view associated contextual or created photos or videos to become well prepared and learn in advance before user reaches the next POI.
  • system maintains logs of routes of user's visits and visually presents routes on map with missed POIs, POIs suggested by user, POIs where user captured or recorded visual media including photos or videos or provided one or more types of contents, and places or positions or locations where a POI was not shown to user but user captured or recorded visual media including photos or videos or provided one or more types of contents.
  • At POI e.g. "Mumbai Airport" 1201, user is presented with POI name and details at 1127, and when user taps on 1140 to capture photo then the photo is saved to rich story 903 gallery or album or folder in user device 200 storage medium and/or server 110 database 115 and/or 3rd parties' one or more storage mediums or databases or cloud storage.
  • user can view, search, browse, select, edit, update, augment, apply photo filters or lenses or overlays, provide details, remove, sort, filter, drag and drop, order, rank one or more media items of selected rich story e.g. 903 gallery or album or folder.
  • user can view captured photo 1227 at POI1 1201.
  • user can also view other details related to said captured photo or media item including date & time, location, metadata, auto identified keywords based on auto recognized objects associated keywords, file type, size, resolution and the like; view statistics including number of receivers, viewers or views, likes, comments, dislikes and ratings; view POI1 related information and, based on recognized object(s) inside photo(s) or video(s) taken at POI1, identified similar photos and videos, so user can compare and view and determine quality of his captured photo or video; view routes from start to POI1 and estimated time to reach; view route from start to POI1 on map; and view calculated time spent on capturing or recording photo(s) or video(s) and providing associated details 1227.
  • user can visually view visited POIs on map 1212 specific access logs of each POI by tapping or clicking or utilizing contextual menu on each POI and can view logs, captured photos or videos and associated details, route details, 3 rd parties provided details or advertisers or sponsored provided contextual contents, various statistics (discussed above), status, activities, actions, events, transactions conduct or done by user.
  • system e.g. presents POI2 1202, but user e.g. misses or skips capturing visual media at POI2, so rich story 903 gallery or album or presentation interface 1200 shows missed status 1230 and enables user to access said missed POI2 specific contextual contents including information about said POI and photos or videos previously taken by other users.
  • user is enabled to auto generate visual media later (i.e. without capturing at particular POI location) based on merging of, as foreground, user's pre-stored one or more photos or series of images or video with or without particular color transparent background, with, as background, visual media selected by user from a list of curated or pre-stored visual media without any human body inside said photos or videos related to said missed POI.
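Such a merge can be approximated with basic alpha compositing; the sketch below assumes the Pillow imaging library and a foreground cut-out stored as a transparent PNG:

```python
from PIL import Image

def compose_missed_poi_photo(foreground_path, background_path, out_path):
    """Paste the user's pre-stored cut-out (transparent-background PNG) over a
    curated background photo of the missed POI."""
    background = Image.open(background_path).convert("RGBA")
    foreground = Image.open(foreground_path).convert("RGBA")
    # Scale the cut-out to roughly half of the background height, keeping aspect ratio.
    scale = (background.height / 2) / foreground.height
    foreground = foreground.resize((int(foreground.width * scale), background.height // 2))
    # Centre horizontally, align to the bottom of the scene.
    x = (background.width - foreground.width) // 2
    y = background.height - foreground.height
    background.paste(foreground, (x, y), foreground)   # alpha channel acts as the mask
    background.convert("RGB").save(out_path, "JPEG")
```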
  • in the event rich story has more than one member, i.e. participant members, user or authorized participants or members can view photos or videos related to one or more POIs of one or more other members.
  • User can filter, search, match and select one or more rich galleries including filter one or more selected POIs wise and/or filter one or more participant member(s) wise and/or filter as per date & time or ranges of date & time and/or view chronologically and/or view as per one or more object keywords, object model(s) or image sample(s) specific and/or one or more keywords, key phrases, Boolean operators and any combination thereof specific media items related to one or more selected galleries.
  • After e.g. "Mumbai Airport", when user reaches Boston then based on user's current location and user data, system identifies and hides or removes previous POI2 1202 and updates or presents new POI3 1203 related information 1127 on user device 200 at interface or display 1123. For example when user taps on customized or contextual or auto presented visual media capture controller label or icon 1140 then scene 1122 on camera display screen is captured and the captured and stored photo 1122 is automatically posted 1232 to rich story e.g. 903 or gallery of rich story 903, making it viewable for other authorized viewers. System updates logs and maps of user visited places or POIs or positions or locations or spots on map 1212 continuously. Authorized user or participant member(s) can view and access, as per rights & privileges, other participant member(s)'s map 1212.
  • After exiting from POI3 1203, when user enters into POI4 1204, user is presented with new POI 1204 specific information 1127 or 1169 or 1187 and the guide system facilitates or enables user to take better photo and video. For example user asks another user to record video clip 1235 of user at POI4 1204, which is auto posted to rich story 903 gallery or interface 1200, so user can view, access, remove, edit and update it 1235. In an embodiment user is enabled to provide relation notes 1233 on relation of first media item(s) with second media item.
  • When user arrives at POI5, user is presented with POI5 specific details 1127 or 1169 or 1187 and/or ringing of pre-set ringtone and/or pre-set vibration type and/or sending of push notification with or without notification tone in the event the device is closed.
  • When a member of rich story 903 captures photo 1238 and posts it at rich story 903 gallery or interface e.g. 1200, then user can view or learn based on shared photos by participant member of rich story e.g. 903.
  • User can tap on indicator 1270 to view all shared media item(s) and associated all information shared by user and/or one or more members of rich gallery e.g. 903.
  • system presents new POI6 specific visual media capture controller label or icon 1127 (labeled as POI name or title) to user on user device 200 display 1123 or 1150 or 1175, so user can tap on said dynamically presented label or on rich story specific labeled visual media capture controller 1140 to capture photo, or tap and hold to start video and release when video recording is finished, and post to rich story 903 gallery e.g. interface 1200.
  • user can tap on photo 1240 to sequence wise view all shared media item(s) by all participant members of rich gallery 903 as per set period of interval between presented media items.
  • User can tap on slideshow to close it or swipe to skip present POI related slideshow and show next POI related slideshow of shared media item(s).
  • User can view various status of user and/or participant members at rich story e.g. 903 interfaces e.g. 1200.
  • User can restart 1245 a paused 1245 rich story e.g. 903 via e.g. tap on icon or button or accessible control 990 or 1136 (play icon) or 1162 (play icon).
  • After re-start, system again starts to present information about current POI (last paused POI) or presents information about newly identified POIs e.g. POI7 1207 and POI8 1208 (as both are very near to user or both are within set particular ranges of radius of boundaries), so user can capture photo or record video at that POI (e.g. POI7 advertised by "Baristro"), send to user's contacts or have it viewed by participant members via rich story e.g. 903, and based on number of sharings or viewings user can get benefits or offers provided by said advertiser "Baristro" (as discussed in Figures 5 and 6).
  • After dwelling in POI7, user can visit POI8, tap on next icon 1199, view information about POI8 1208 and tap on photo capture icon 1164 or video record icon 1166 to take visual media and auto post to rich story 903 gallery interface e.g. 1200.
  • After exiting POI8, user can view information about further updated newly identified POI9 1209 and can view information 1254 about POI9 for preparing to take POI9 related visual media.
  • user can view information 1254 about next POI e.g. POI10 1210 and be prepared for taking visual media at next POI e.g. POI10.
  • Rich story 903 creator or authorized user can stop rich story 903 via e.g.
  • Rich story 903 creator or authorized user can re-start rich story 903 via e.g. button 990 or icon 1136 (play icon) or 1162 (play icon); in the event of re-starting of rich story e.g. 903, system presents rich story 903 specific labeled visual media capture controller label or icon 1140 or 1174 on display of all participant members' device(s) and also presents last stopped POI related information or newly updated POI specific information e.g. 1127 or 1169.
  • one or more types of presentation interface is used for viewers selections or preferences including present newly shared or updated received media item(s) related to one or more stories or sources in slide show format, visual format, ephemeral format, show on feeds or albums or gallery format or interface, sequence of media items with interval of display timer duration, show filtered media item(s) including filter story(ies) wise, user(s) or source(s) wise, date & time wise, date & time range(s) wise, location(s) or place(s) or position(s) or POI(s) specific and any combination thereof, show push notification associate media item(s) only.
  • computer system 1000 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device.
  • computer system 1000 includes one or more processors 1010 coupled to a system memory 1020 via an input/output (I/O) interface 1030.
  • Computer system 1000 further includes a network interface 1040 coupled to I/O interface 1030, and one or more input/output devices 1050, such as cursor control device 1060, keyboard 1070, multitouch device 1090, and display(s) 1080.
  • embodiments may be implemented using a single instance of computer system 1000, while in other embodiments multiple such systems, or multiple nodes making up computer system 1000, may be configured to host different portions or instances of embodiments.
  • some elements may be implemented via one or more nodes of computer system 1000 that are distinct from those nodes implementing other elements.
  • computer system 1000 may be a uniprocessor system including one processor 1010, or a multiprocessor system including several processors 1010 (e.g., two, four, eight, or another suitable number).
  • Processors 1010 may be any suitable processor capable of executing instructions.
  • processors 1010 may be general- purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA.
  • each of processors 1010 may commonly, but not necessarily, implement the same ISA.
  • at least one processor 1010 may be a graphics processing unit.
  • a graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device.
  • Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms.
  • a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU).
  • the methods as illustrated and described in the accompanying description may be implemented by program instructions configured for execution on one of, or parallel execution on two or more of, such GPUs.
  • the GPU(s) may implement one or more application programmer interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s).
  • Suitable GPUs may be commercially available from vendors such as NVIDIA Corporation, ATI Technologies, and others.
  • System memory 1020 may be configured to store program instructions and/or data accessible by processor 1010.
  • system memory 1020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory.
  • program instructions and data implementing desired functions are shown stored within system memory 1020 as program instructions 1025 and data storage 1035, respectively.
  • program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 1020 or computer system 1000.
  • a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system 1000 via I/O interface 1030.
  • Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1040.
  • I/O interface 1030 may be configured to coordinate I/O traffic between processor 1010, system memory 1020, and any peripheral devices in the device, including network interface 1040 or other peripheral interfaces, such as input/output devices 1050.
  • I/O interface 1030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 1020) into a format suitable for use by another component (e.g., processor 1010).
  • I/O interface 1030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
  • I/O interface 1030 may be split into two or more separate components, such as a north bridge and a south bridge, for example.
  • some or all of the functionality of I/O interface 1030, such as an interface to system memory 1020, may be incorporated directly into processor 1010.
  • Network interface 1040 may be configured to allow data to be exchanged between computer system 1000 and other devices attached to a network, such as other computer systems, or between nodes of computer system 1000.
  • network interface 1040 may support communication via wired and/or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fiber Channel SANs, or via any other suitable type of network and/or protocol.
  • Input/output devices 1050 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer system 1000. Multiple input/output devices 1050 may be present in computer system 1000 or may be distributed on various nodes of computer system 1000. In some embodiments, similar input/output devices may be separate from computer system 1000 and may interact with one or more nodes of computer system 1000 through a wired and/or wireless connection, such as over network interface 1040.
  • memory 1020 may include program instructions 1025, configured to implement embodiments of methods as illustrated and described in the accompanying description, and data storage 1035, comprising various data accessible by program instructions 1025.
  • program instruction 1025 may include software elements of methods as illustrated and described in the accompanying description.
  • Data storage 1035 may include data that may be used in embodiments. In other embodiments, other or different software elements and/or data may be included.
  • computer system 1000 is merely illustrative and is not intended to limit the scope of methods as illustrated and described in the accompanying description.
  • the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including computers, network devices, internet appliances, PDAs, wireless phones, pagers, etc.
  • Computer system 1000 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system.
  • the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components.
  • the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available.
  • instructions stored on a computer-accessible medium separate from computer system 1000 may be transmitted to computer system 1000 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
  • Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations.
  • a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link.
  • a program is written as a series of human understandable computer instructions that can be read by a compiler and linker, and translated into machine code so that a computer can understand and run it.
  • a program is a list of instructions written in a programming language that is used to control the behavior of a machine, often a computer (in this case it is known as a computer program).
  • a programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.
  • the syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be a correctly structured document or fragment in that language. This applies both to programming languages, where the document represents source code, and markup languages, where the document represents data.
  • the syntax of a language defines its surface form. Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical or flowchart(s)). Documents that are syntactically invalid are said to have a syntax error. Syntax - the form - is contrasted with semantics - the meaning.
  • In processing computer languages, semantic processing generally comes after syntactic processing, but in some cases semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently.
  • the syntactic analysis comprises the frontend, while semantic analysis comprises the backend (and middle end, if this phase is distinguished).
  • Embodiments of the invention may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a tangible computer readable storage medium or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.
  • any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • Embodiments of the invention may also relate to a computer data signal embodied in a carrier wave, where the computer data signal includes any embodiment of a computer program product or other data combination described herein.
  • the computer data signal is a product that is presented in a tangible medium or carrier wave and modulated or otherwise encoded in the carrier wave, which is tangible, and transmitted according to any suitable transmission method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A system and methods for enabling a user to auto identify one or more points of interest surrounding or nearest to the user's current location where the user can take a photo or video, and to contextually guide or teach or provide one or more techniques, tips and tricks instructing the user how to take the best photo or video related to the current point of interest in terms of pose, effect, scene, style, angle, focus, light, sequence, theme, expression, arrangement, concept and idea. In an embodiment the user is presented with auto identified destinations, so the user can prepare content and post to selected one or more destinations or auto post to auto identified destinations.

Description

TITLE
Providing location specific point of interest and guidance to create visual media rich story
FIELD OF INVENTION
The present invention relates generally to displaying information about the user device's current location specific nearest contextual point of interest or places on the display, and to providing contextual instructions enabling the user to know where and when to take visual media and what and how to take visual media, including a photo and a video, and to share, add, broadcast and post it to one or more auto identified destinations, or to select from suggested destinations, or to user selected one or more destinations. The present invention also enables the user to create, start, pause and manage a rich story or rich story gallery or feed or album.
BACKGROUND OF THE INVENTION
At present Snapchat TM or Instagram TM enables user to add one or more photos or videos to "My Stories" or feed for publishing or broadcasting or presenting said added photos or videos or sequence of photos or videos to one or more or all friends or contacts or connections or followers or particular category or type of user. Snapchat TM or Instagram TM enables user to add one or more photos or videos to "Our Stories" or feed, i.e. add photos or videos to particular events or place or location or activity or category and make them available to requesting user or searching user or connected or related user.
At present Snapchat TM enables user to capture photo or record video and send or share or broadcast said photo or video to one or more selected contacts or connections or followers of sender or send to all friends or selected events related users.
At present Twitter TM and other social networks enable user to create and add hashtags and search or view hashtags specific contents.
There are no mechanisms available to provide to user information about auto identified one or more points of interest surrounding or nearest to user's current location where user can take a photo or video, and there are no mechanisms available to provide current camera display screen scene (before capture or record) or captured photo or recorded video specific contextual guidance including providing one or more techniques, tips, tricks to user for instructing user how to take current point of interest related best photo or video in terms of pose, effects, filters, scene, style, angle, focus, light, sequence, theme, expression, arrangement, concept, idea and auto identified overlays. There are no mechanisms available for auto identifying and/or auto presenting contextual destinations to prepare and post contents. There are no mechanisms available to contextually present hashtags on user device and remind user what to share and enable one tap sharing of visual media to particular auto presented hashtag or instantly sharing microblog or one or more types of contents to particular auto presented hashtag.
Therefore, it is with respect to these considerations and others that the present invention has been made.
OBJECT OF THE INVENTION
The principal object of the present invention is to continuously auto identify one or more points of interest and provide associated or related details to enable the user to take visual media and auto share to a created story or to said presented point of interest or place or position associated destination(s).
The other object of the present invention is to contextually guide or teach or provide one or more techniques, tips and tricks instructing the user how to take the best photo or video related to the current point of interest in terms of pose, effect, scene, style, angle, focus, light, sequence, theme, expression, arrangement, concept and idea.
Another important object of the present invention is to identify and present contextual destinations to the user, enabling the user to select one or more destinations from the presented destinations for sharing one or more types of media.
DETAILED DESCRIPTION
Although the present disclosure is described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The present invention now will be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments by which the invention may be practiced. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Among other things, the present invention may be embodied as methods or devices. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase "in one embodiment" or "in an embodiment" as used herein does not necessarily refer to the same embodiment, though it may. Furthermore, the phrase "in another embodiment" as used herein does not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments of the invention may be readily combined, without departing from the scope or spirit of the invention.
In addition, as used herein, the term "or" is an inclusive "or" operator, and is equivalent to the term "and/or," unless the context clearly dictates otherwise. The term "based on" is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."
In one embodiment an electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; monitoring geo-location or positions of user device; identify point of interest based on matching nearest point of interest with current geo-location or positions of user device; presenting information about identified point of interest on the display of user device; in response to capturing of photo or recording of video or taking visual media, post said captured or recorded visual media to point of interest associated gallery. In an embodiment enabling user to pause or stop monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device. In an embodiment enabling user to re-start monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device.
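The nearest-point-of-interest match described above can be illustrated with a minimal sketch; the POI class, haversine_km and find_nearest_pois names and the 0.5 km radius below are illustrative assumptions rather than details taken from this disclosure.

```python
# Minimal sketch of matching the device's current geo-location against stored POIs.
import math
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_nearest_pois(device_lat, device_lon, pois, radius_km=0.5):
    """Return POIs within radius_km of the device, nearest first."""
    hits = [(haversine_km(device_lat, device_lon, p.lat, p.lon), p) for p in pois]
    return [p for d, p in sorted(hits, key=lambda x: x[0]) if d <= radius_km]

# Example: the device reports its location and the nearest POIs are shown on screen.
pois = [POI("Fountain", 48.8584, 2.2945), POI("Bridge", 48.8606, 2.2980)]
print([p.name for p in find_nearest_pois(48.8590, 2.2950, pois)])
```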
In another embodiment an electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access one or more galleries; identify user selected gallery; monitoring geo-location or positions of user device; identify point of interest based on matching nearest point of interest with current geo-location or positions of user device; presenting information about identified point of interest on the display of user device; in response to capturing of photo or recording of video or taking visual media, post said captured or recorded visual media to said identified user selected gallery. In an embodiment enabling user to pause or stop monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device. In an embodiment enabling user to restart monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device. In an embodiment enabling user to remove one or more selected galleries. In an embodiment enabling user to provide access rights to view one or more galleries created and managed by user. In an embodiment enabling user to provide filter preferences and privacy settings to receive information about certain types of points of interests. In an embodiment enabling user to provide or apply rules and access criteria to enable viewing of gallery for targeted viewers. In an embodiment enabling user to invite one or more contacts to participate in gallery, in response to acceptance of invitation or acceptance of request to join, publish or present or display said gallery and display gallery named visual media capture controller label and/or identified information about points of interest based on matching monitored and identified member device's current geo-location with points of interests data to each participant members for enabling to take visual media and auto post to said gallery. In an embodiment an electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; receiving request to create gallery; creating of gallery; monitoring geo-location or positions of user device; identify point of interest based on matching nearest point of interest with current geo-location or positions of user device; presenting information about identified point of interest on the display of user device; in response to capturing of photo or recording of video or taking visual media, post said captured or recorded visual media to said created gallery.
In an embodiment an electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; monitoring geo-location or positions of user device; identify and present contextual point of interest; presenting information about identified point of interest on the display or the camera display screen of user device.
An electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access information about points of interests or places or locations or positions data, user data and rule base; monitors, tracks and identifies current geo-location or position of user device; identify contextual one or more points of interests or positions based on matching pre-stored points of interests or places or locations or positions data with user's current geo-location or position information, user data and contextually selected and executed rules from rules base; notify or provide indication or present information about contextual one or more points of interests or positions on the display of user device; present said identified one or more points of interest or position related corresponding visual media capture controller label(s) or icon(s) on the camera display screen of the user device; in response to access of a visual media capture controller, alternately capture a photo or start recording of a video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release; a non-transient computer readable storage medium, comprising executable instructions to: process haptic contact signals from a display; record a photo based upon a first haptic contact signal; and start recording of a video based upon a second haptic contact signal, wherein the second haptic contact signal is a haptic contact release signal that occurs after a specified period of time after the first haptic contact signal. In an embodiment enable user to pause or stop or end or cancel recording of video. In another embodiment the video auto ends based on expiry of a set period of time.
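A minimal sketch of the single capture control described above, where a short haptic contact captures a photo and a longer contact records a video; the 0.3 second threshold and the CaptureController names are assumptions for illustration only.

```python
import time

TAP_THRESHOLD_S = 0.3  # assumed threshold separating a tap (photo) from a hold (video)

class CaptureController:
    """Sketch of the capture control: tap for a photo, hold for a video."""
    def __init__(self):
        self._engaged_at = None
        self._recording = False

    def on_haptic_engagement(self):
        self._engaged_at = time.monotonic()

    def on_haptic_persistence(self):
        # If the contact persists past the threshold, start video recording.
        if (not self._recording and self._engaged_at is not None
                and time.monotonic() - self._engaged_at > TAP_THRESHOLD_S):
            self._recording = True
            print("start video recording")

    def on_haptic_release(self):
        held = time.monotonic() - self._engaged_at
        if self._recording:
            print("stop video recording and post")
        elif held <= TAP_THRESHOLD_S:
            print("capture photo and post")
        self._engaged_at, self._recording = None, False
```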
In an embodiment a computer implemented method, comprising: accessing information about positions and digital items; identifying current position of user; identifying contextual position(s) based on matching information about positions data with user's current position, user data and executed rules; notifying or alerting or indicating or presenting to user information about contextual position(s); and presenting identified position specific contextual or associated or requested digital items. In an embodiment positions include location, place, point of interest, sightseeing, spot, entity, person, shop, product, showcase, showroom, department, mall, tourist place, favorite or identified selfie place, art, monuments, garden, tree, restaurant, resort, beach and the like. In an embodiment digital items comprise an application including a camera application, camera display interface, form including review form, a web page, a web site, one or more controls, an object, a media, a database or data and one or more types of user actions including Rate, Like, Dis-like, Comment. In an embodiment auto send or post one or more types of content or media prepared by user by using one or more presented digital items, to said notification related or associated one or more destinations.
In an embodiment one or more types of content or media including captured or selected photo(s), recorded or selected video(s), product or service review(s), comments, ratings, suggestions, feedbacks, complaint, microblog, post, stored or auto generated or identified or provided user activities, actions, events, transactions, logs, status and one or more types of user generated contents or media.
In an embodiment destination including one or more user contacts, connections, followers, groups, categories, hashtags, user profile(s), applications, interfaces, objects, processes, feeds, stories, folders, albums, galleries, web sites, web pages, web services, servers, networks, data storage mediums, devices, authorized persons or one or more types of entity including owner of shop, manufacturer, seller, retailer, salesperson, manager, distributor, government department. In an embodiment rules include pre-created, user provided and updated rules, and contextually select and execute rule(s) from the rule base based on one or more types of monitored or tracked or updated user data.
In an embodiment a computer implemented method, comprising: receiving a geo-location data for a device; determining whether the geo-location data corresponds to a geo-location fence associated with a particular position or place or spot or point of interest; supplying a notification or indication list to the device in response to the geo-location data corresponding to the geo-location fence associated with the particular position or place or spot or point of interest.
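One way such a geo-location fence check might be implemented is sketched below, assuming circular fences defined by a centre and a radius; the GeoFence type, the equirectangular distance approximation and the example coordinates are illustrative, not taken from the disclosure.

```python
import math
from dataclasses import dataclass

@dataclass
class GeoFence:
    poi_id: str
    lat: float
    lon: float
    radius_m: float

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation, adequate at geofence scale."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def fences_containing(lat, lon, fences):
    """Return every fence whose circular boundary contains the given location."""
    return [f for f in fences if distance_m(lat, lon, f.lat, f.lon) <= f.radius_m]

# When the device reports a location, supply a notification for each matching fence.
fences = [GeoFence("museum-entrance", 40.7794, -73.9632, 75.0)]
for fence in fences_containing(40.7796, -73.9630, fences):
    print(f"notify user about {fence.poi_id}")
```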
In an embodiment a computer implemented method, receiving a geo-location data from a user device; determining whether the geo-location data corresponds to a geo-location fence associated with stored or identified prospective or permitted position(s) including an advertised place or entity, curated place and an event; notify about identified prospective or permitted position(s); presenting contextual one or more digital items; and supplying one or more destinations to the device or determining one or more destinations for posting or sharing or sending or broadcasting user generated contents.
In an embodiment enabling to prepare, select, capture, record, augment or edit or update and post or send or broadcast one or more or series of one or more types of media or user generated contents including photos or videos or microblog or review.
In an embodiment identify prospective or permitted position(s) based on the number of users currently taking photos or videos at that particular position or place or location, the number of, rank of or number of reactions on posted media including photos or videos posted at a particular place or location or associated geo-location information including a geo-location fence, or based on information associated with said position or place or location, for example a fair currently going on at that particular position or place or location, a birthday or marriage event happening, a sunset, a sunrise, or a migration of birds.
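A hedged sketch of how such prospective positions might be scored from current activity signals; the PositionStats fields, weights and threshold are illustrative assumptions, not values from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PositionStats:
    position_id: str
    users_capturing_now: int   # users currently taking photos or videos there
    posts_last_24h: int        # media recently posted from that position
    reactions_last_24h: int    # reactions received by that media
    event_in_progress: bool    # e.g. a fair, a sunset, a migration of birds

def position_score(s: PositionStats) -> float:
    """Weighted popularity score; the weights are illustrative only."""
    score = (2.0 * s.users_capturing_now
             + 1.0 * s.posts_last_24h
             + 0.5 * s.reactions_last_24h)
    if s.event_in_progress:
        score *= 1.5
    return score

def prospective_positions(stats, threshold=10.0):
    """Return position ids ranked by score, keeping only those above the threshold."""
    ranked = sorted(stats, key=position_score, reverse=True)
    return [s.position_id for s in ranked if position_score(s) >= threshold]
```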
In an embodiment store prospective or permitted position(s) based on curation, identified by server based on user data or 3rd parties data, provided by user, provided by advertiser including advertised place, entity, shop, product, event and events created and posted by user.
In an embodiment, based on the identified notification, presenting or auto presenting associated or contextual or requested one or more types of interfaces, applications, user actions, one or more or a set of controls, one or more types of media, data, web services, objects, web page(s), forms, and any combination thereof.
In an embodiment determination based on whether the user data matched with data associated with prospective or permitted position(s) including an advertised place or entity, curated place and an event.
In an embodiment data associated with prospective or permitted position(s) comprise details, rules, access rights, offers, photo filters, advertisement details including bids, description, media, offers, discount, gift, redeemable points, and rules or conditions to avail one or more types of benefits provided by advertisers, and created event associated settings, preferences, rules, access rights, privacy and selections.
In an embodiment, based on receiving of a notification, automatically open the camera display screen to enable the user to capture a photo or record a video. In an embodiment enable the user to select or tap on the notification. In an embodiment, in response to tapping on or selection of the notification, open the camera display screen to enable the user to capture a photo or record a video.
In an embodiment enable user to prepare one or more media item(s) or contents including a photo or a video.
In an embodiment determine one or more destinations based on the notification. In an embodiment, in response to tapping on or selection of the notification, determine the associated one or more destinations. In an embodiment determine one or more destinations based on user data, user pre-set destinations, followers, and user preferences & settings. In an embodiment determine based on last selections. In an embodiment determine based on user settings.
In an embodiment enable user to post one or more media item(s) to user selected one or more destinations.
In an embodiment destinations comprise user contacts or connections, groups, followers, networks, hashtags, categories, events, suggested, web sites, web pages, user profiles, applications, services or web services, servers, devices, networks, databases or storage medium, albums or galleries, stories, folders, and feeds.
In an embodiment enable user to post one or more media item(s) to user selected one or more destinations, wherein the user is enabled to select from a list of suggested destinations. In an embodiment auto post one or more media item(s) to auto determined one or more destinations.
In an embodiment based on determining whether the geo-location data corresponds to a geo-location fence associated with prospective or permitted position(s) present one or more types of media including photo and video previously taken at said determined geo-location data specific corresponding geo-location fence associated prospective or permitted position(s).
In an embodiment a server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain a gallery comprising a plurality of messages posted by a user for viewing by one or more recipients, wherein each of the messages comprises a photograph or a video or one or more media type, the maintaining of the gallery comprising making the gallery available for viewing, upon request, via respective user devices associated with the one or more recipients; receiving request, by the server system, to create particular named gallery; creating, by the server system, said named gallery; receiving request, by the server system, to start or initiate said created or selected gallery, wherein in the event of starting or initiating gallery, by the server system, present on the display of the user device said created gallery specific visual media capture controller label or icon, for enabling user to take or capture or record said created named gallery specific one or more types of visual media and start monitoring and identifying of geo-location or positions of user device; matching, by the server system, identified current geo-location or positions of user device with pre-stored points of interests or places or positions or spots data to identify nearest one or more point of interest; notify and/or present, by the server system, identified one or more point of interest related information on display of user device; enabling, by the server system, to take visual media including capturing of photo by tapping or clicking on said presented visual media capture controller label or icon or recording of video by tap and hold on said presented visual media capture controller label or icon and release to end or stop said video; store, by the server system, said visual media at a server database; generating news item(s); and presenting news item(s) at viewing user's device.
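The named-gallery flow above (create, start, post while active, generate news items) could be sketched on the server side roughly as follows; the GalleryStore class and its method names are illustrative assumptions, not the actual server implementation.

```python
from collections import defaultdict
from datetime import datetime

class GalleryStore:
    """Sketch of the server-side gallery flow: create a named gallery, start it,
    accept posts while it is active, and build news items for viewers."""
    def __init__(self):
        self.galleries = defaultdict(list)   # gallery name -> list of posts
        self.active = set()                  # galleries currently accepting posts

    def create_gallery(self, name):
        self.galleries[name] = []

    def start_gallery(self, name):
        # Starting a gallery is what turns on POI monitoring on the client side.
        self.active.add(name)

    def post_visual_media(self, name, media, poi_id):
        if name not in self.active:
            raise ValueError("gallery is not started")
        post = {"media": media, "poi": poi_id, "at": datetime.utcnow().isoformat()}
        self.galleries[name].append(post)
        return post                          # also used to generate a news item

    def news_items(self, name):
        return list(self.galleries[name])    # presented on the viewing user's device
```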
In an embodiment receiving, by the server system, request to pause or stop monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device. In an embodiment receiving, by the server system, request to re-start monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device.
In an embodiment receiving, by the server system, request to remove one or more selected galleries.
In an embodiment receiving, by the server system, request to provide access rights to view one or more galleries created and managed by user.
In an embodiment receiving, by the server system, filter preferences and privacy settings to filter receiving of information about certain types of points of interests.
In an embodiment receiving, by the server system, request to apply rules and access criteria to enabling viewing of gallery for targeted viewers.
In an embodiment receiving, by the server system, request to invite one or more contacts of user to participate in selected gallery, and in response to acceptance of invitation or acceptance of request to join, publish or present or display said gallery and display gallery named visual media capture controller label and/or identified information about points of interest on the display of each member device based on matching monitored and identified member device's current geo-location with points of interests data to each participant members for enabling to take visual media and auto post to said gallery.
In an embodiment a computer-implemented notification method, comprising acts of: inputting a query to monitor a user computing device relative to a geographical point of interest; configuring preferences associated with the user computing device; monitoring the geographical location of the user computing device relative to the geographical point of interest; executing rules based on the user data and the geographical location of the user computing device relative to the geographical point of interest; automatically communicating a notification to a user computing device based on the geographical location of the target computing device relative to the point of interest; and utilizing a processor that executes instructions stored in memory to perform at least one of the acts of configuring, monitoring, communicating, or processing.
In an embodiment a computer-implemented notification system, comprising: a storage medium to access point of interest, digital items, user data, and rule base; a notification component that monitors the geographic location of the user device and communicates a notification to a user device when the geographic location of the user device meets geo-location criteria related to the point of interest; a digital items component that presents contextual digital item(s); and a processor that executes computer-executable instructions associated with at least one of the digital items presentation component or the notification component.
In an embodiment a server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain destinations data comprising a plurality of destinations, maintain a rules base and maintain user data comprising a plurality of types of updated user data; receiving, by the server system, request to list one or more destinations and associated details; store, by the server system, said one or more destinations and associated details at the storage medium; receiving, by the server system, posting request; identify, by the server system, one or more destinations based on user data and one or more rules; presenting, by the server system, one or more identified destinations on the display of the user device.
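A minimal sketch of rule-driven destination identification as described above, where each rule inspects the user data attached to a posting request and proposes destinations; the example rules and destination identifiers are assumptions for illustration.

```python
# Each rule inspects user data and returns candidate destinations. Rule contents are assumed.
def near_checked_in_place(user_data):
    place = user_data.get("checked_in_place")
    return [f"gallery:{place}"] if place else []

def followed_hashtags(user_data):
    return [f"hashtag:{tag}" for tag in user_data.get("followed_hashtags", [])]

def preset_destinations(user_data):
    return user_data.get("preset_destinations", [])

RULE_BASE = [near_checked_in_place, followed_hashtags, preset_destinations]

def identify_destinations(user_data, rule_base=RULE_BASE):
    """Run every rule over the user data and return de-duplicated destinations."""
    seen, result = set(), []
    for rule in rule_base:
        for destination in rule(user_data):
            if destination not in seen:
                seen.add(destination)
                result.append(destination)
    return result

print(identify_destinations({"checked_in_place": "city-museum",
                             "followed_hashtags": ["sunset"],
                             "preset_destinations": ["my-story"]}))
```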
In an embodiment destinations comprise one or more user phone contacts, social connections, email addresses, groups, followers, networks, hashtags, keywords, categories, events, web sites or addresses of web sites, web pages, point of web page, user profiles, applications, instant messenger services or web services, servers, devices, networks, databases or storage medium, albums, galleries, stories, folders, feeds and one or more types of digital identities or unique identity where server or storage medium can post or send or advertise or broadcast or update and destination(s) or recipient(s) can be notified and/or receive, access and view said post(s).
In an embodiment auto identifying, by the server system, destination(s) based on current location or position information of user device. In an embodiment auto identifying, by the server system, destination(s) based on user settings. In an embodiment auto post to auto identified destinations.
In an embodiment a server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain destinations data comprising a plurality of destinations, maintain a rules base and maintain user data comprising a plurality of types of updated user data; receiving, by the server system, request to list one or more destinations and associated details; store, by the server system, said one or more destinations and associated details at the storage medium; identify, by the server system, one or more destinations based on user data and one or more rules; presenting, by the server system, one or more identified destinations on the display of the user device for user selection to select one or more destinations and post one or more types of user generated contents to selected one or more destinations. A server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain destinations data comprising a plurality of destinations, maintain a rules base and maintain user data comprising a plurality of types of updated user data; receiving, by the server system, request to list one or more destinations and associated details; store, by the server system, said one or more destinations and associated details at the storage medium; identify, by the server system, one or more destinations based on user data and one or more rules; presenting, by the server system, one or more identified destinations specific visual media capture controller on the display of the user device, for enabling user to one tap capture photo or tap and long press and release to start video and end after expiry of a pre-set duration or tap and hold to start video and release to end video and post said captured photo or recorded video to said accessed visual media capture controller associated destination(s).
In an embodiment destinations comprise one or more user phone contacts, social connections, email addresses, groups, followers, networks, hashtags, keywords, categories, events, web sites or addresses of web sites, web pages, point of web page, user profiles, applications, instant messenger services or web services, servers, devices, networks, databases or storage medium, albums, galleries, stories, folders, feeds and one or more types of digital identities or unique identity where server or storage medium can post or send or advertise or broadcast or update and destination(s) or recipient(s) can be notified and/or receive, access and view said post(s).
In an embodiment auto identifying destination(s) based on current location or position information of user device. In an embodiment auto identifying destination(s) based on user settings. In an embodiment auto post to auto identified destinations.
In an embodiment auto identifying destination(s) based on user data comprising user profile, user check-in place(s) or current or past location(s), user activities, actions, interactions, sense, status, connections or contacts, events, transactions, logs, behavior, posted or received contents and reactions on contents, preferences, privacy settings, notification settings.
In an embodiment data associated with advertised or listed destination comprise details, associated rules, access rights, offers, photo filters, advertisement details including bids, description, media, offers, discount, gift, redeemable points, and rules or conditions to avail one or more types of benefits provided by advertisers in the event of user posts to advertised destination(s).
In an embodiment an electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access guide system associated data and resources; monitoring geo-location or positions of user device; identify point of interest or place or spot or location based on matching nearest point of interest or place with current geo-location or positions of user device or check-in place associated position information; presenting information about identified point of interest on the display of user device; based on identified point of interest or place or spot or location or position or in response to capturing of photo or recording of video or in response to initiating capturing of photo or recording of video or taking of visual media via camera display screen or current scene displayed by camera display screen, providing contextual information, instruction, step-by-step guide or wizard or instruction.
In an embodiment contextually providing information, instruction, step-by-step guide or wizard or instruction for capturing photo or recording of video comprises provide one or more contextual techniques, angles, directions, styles, sequences, story or script type or idea or sequences, scenes, shot categories, types & ideas, transition, spots, motion or actions, costumes and makeups idea, levels, effects, positions, focus, arrangement styles of group, step by step rehearsal to take visual media, identify required lights, identify flash range, contextual tips, tricks, concepts, expressions, poses, contextual suggested settings or modes or options of camera application including focus, resolution, brightness, shooting mode, contextual information or links to purchase accessories required to take particular style or quality of visual media including lenses, stickers & like, guided turn-by-turn location route or start turn-by-turn voice guided navigation to reach at particular identified or selected place or POI or position, contextual or recognized objects inside media specific digital photo filter(s), lenses, editing tools, text, media, music, emoticons, stickers, background, location information, contextual or matched photos or videos previously taken by other users at same current location or POI or place or similar types of POI(s) or place(s) or location(s) based on matching recognized or scanned object(s) inside scene of camera display screen (before capturing photo) with stored media and provide contextual tips and tricks based on one or more types of sensor data, user data, recognized object(s) inside camera display screen (before capture photo or record video) and identified rules.
In an embodiment contextually providing information, instruction, step-by-step guide or wizard or instruction for capturing photo or recording of video based on geo-location or position information of current user device, identified POI or place associated information including previously taken photos or videos from similar POI or place, recognized object or entity inside photo or video or camera display screen and identified recognized object associated information, contextual rules, guide system resources data, object recognitions including face recognition, text recognition via Optical Character Recognition (OCR), one or more types of sensor data generated or acquired from one or more types of sensors and plurality types of user data.
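One plausible shape for the guide step described above is a small rule base matched against the current context (identified POI type, recognized objects, sensor readings); the rules, field names and tips below are illustrative assumptions, not content from the disclosure.

```python
# Contextual tips are collected from every rule whose conditions match the current context.
GUIDE_RULES = [
    {"when": {"poi_type": "monument"},
     "tips": ["Shoot from a low angle to emphasise height",
              "Keep the monument on a rule-of-thirds line"]},
    {"when": {"recognized_object": "face", "light_lux_below": 50},
     "tips": ["Enable flash or move the subject toward a light source"]},
    {"when": {"recognized_object": "sunset"},
     "tips": ["Lower exposure slightly to keep sky colours saturated"]},
]

def matches(condition, context):
    if "poi_type" in condition and context.get("poi_type") != condition["poi_type"]:
        return False
    if ("recognized_object" in condition
            and condition["recognized_object"] not in context.get("recognized_objects", [])):
        return False
    if ("light_lux_below" in condition
            and context.get("light_lux", 1000) >= condition["light_lux_below"]):
        return False
    return True

def contextual_tips(context):
    """Collect the tips of every rule whose conditions match the current context."""
    return [tip for rule in GUIDE_RULES if matches(rule["when"], context)
            for tip in rule["tips"]]

print(contextual_tips({"poi_type": "monument",
                       "recognized_objects": ["face"], "light_lux": 30}))
```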
In an embodiment a server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain user data, point of interest data, rule base, hashtags data and monitored and identified user device current location or position information including coordinates; based on identified current location and/or user data and/or point of interest data and/or hashtags data and/or auto selected or identified or updated one or more executed rule(s), identifying by the server system, one or more contextual hashtag(s); presenting, by the server system, accessible link or control of contextual hashtag(s) on the display; in the event of access or tap on particular link or control of hashtag, presenting by the server system, associated or contextual or requested one or more digital items from one or more sources including present microblog application or present review application or present hashtag named visual media capture controller label or icon (see figure 6 - 610) for enabling user to one tap capture photo or tap & hold to start recording of video and end by user anytime or tap & hold to start video and stop when released and auto post within said one tap to said hashtag or associated one or more destination(s) or present application for preparing, editing one or more types of media or content or microblog and post to said hashtag or associated destination(s).
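A minimal sketch of the contextual hashtag flow above: hashtags matched to the current location or user data are presented as controls, and one tap on a hashtag-named capture control posts straight to that hashtag; all function and field names here are illustrative assumptions.

```python
def contextual_hashtags(location, user_data, hashtag_index):
    """hashtag_index maps each hashtag to the data used for matching (places, keywords)."""
    hits = []
    for tag, meta in hashtag_index.items():
        if location in meta.get("places", []):
            hits.append(tag)
        elif set(meta.get("keywords", [])) & set(user_data.get("interests", [])):
            hits.append(tag)
    return hits

def on_hashtag_capture_tap(tag, capture_photo, post):
    # One tap on the hashtag-named capture control: capture, then auto post to the tag.
    photo = capture_photo()
    post(destination=f"hashtag:{tag}", media=photo)

index = {"#citypark": {"places": ["city-park"], "keywords": ["nature"]},
         "#foodfest": {"places": ["market-square"], "keywords": ["food"]}}
print(contextual_hashtags("city-park", {"interests": ["food"]}, index))
```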
In an embodiment display on the user interface auto presented hashtag(s) specific contents, wherein contents comprise contents posted by other users of networks.
In an embodiment digital item including an application, an interface, a media, a web page, a web site, a set of controls or user actions including visual media capture controller, like button, chat button, rate control to provide ratings, order or buy or add to cart button or icon, an object and a form or dynamically generated form or structured form.
In an embodiment destinations comprise one or more user phone contacts, social connections, email addresses, groups, followers, networks, hashtags, keywords, categories, events, web sites or addresses of web sites including 3rd party social networks & microblogging sites including Twitter, Instagram, Facebook, Snapchat & the like, web pages, point of web page, user profiles, rich story, applications, instant messenger services or web services, servers, devices, networks, databases or storage medium, albums, galleries, stories, folders, feeds and one or more types of digital identities or unique identity where server or storage medium can post or send or advertise or broadcast or update and destination(s) or recipient(s) can be notified and/or receive, access and view said post(s).
In an embodiment auto present and auto hide hashtags, including verified, advertised, sponsored & user created hashtags, based on or as per context-aware data, sensor data from one or more types of sensors of user device, geo-location or positions data, user data, point of interest data, hashtags data and rules from rule base.
In an embodiment metadata or data of hashtag comprise one or more keywords, categories, taxonomy, date & time of creation, creator identity, source name including user, advertiser, 3rd parties web sites or servers or storage medium or applications or web services or devices or networks, associate or define related rules, privacy settings, access rights & privileges, triggers and events, associated one or more digital item, number of followers and/or viewers, number of contents posted, verified or non-verified status or icon and provide descriptions.
In an embodiment identify hashtag(s) based on most used, most liked or highest ranked, most viewed, most viewed content of hashtags, most viewed within a particular number of recent days, or most content provided on or related to the hashtag.
In an embodiment auto present hashtag(s) based on most ranked, most followers, current trend, most useful, most liked or ranked or viewed or present based on location, place, event, activity, action specific, status, contacts, date & time and any combination thereof.
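The ranking signals listed above could be combined into a single score roughly as sketched below; the HashtagStats fields and weights are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class HashtagStats:
    tag: str
    followers: int
    likes: int
    views_last_7d: int
    posts_last_7d: int

def hashtag_rank(s: HashtagStats) -> float:
    """Combine the popularity signals into one score; the weights are illustrative."""
    return (1.0 * s.followers + 0.5 * s.likes
            + 0.2 * s.views_last_7d + 2.0 * s.posts_last_7d)

def hashtags_to_present(stats, limit=3):
    """Pick the top-ranked hashtags for auto-presentation on the camera screen."""
    return [s.tag for s in sorted(stats, key=hashtag_rank, reverse=True)[:limit]]
```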
In an embodiment enable user to remove hashtag(s) presented on the display via swipe or tap on close icon.
In an embodiment enable user to add one or more hashtags via search, match, browse directory, via more button or select from list of suggested hashtags.
In an embodiment enable user to drag and drop hashtag link or icon or accessible control to reorder or change position anywhere on camera screen.
In an embodiment enable user to follow hashtags.
In an embodiment hashtags are verified automatically or manually to identify uniqueness, whether they are related to a brand or not, whether payment has been made, whether they are spam or inappropriate, or whether they duplicate the meaning of existing keywords, and a hashtag is made available to other users after one or more types of verification. In an embodiment enable user to use auto-fill or a suggestion list for searching hashtags.
In an embodiment user can use pre-created, created by user including verified or not-verified and created by brands including paid verified hashtags and server created hashtags.
In an embodiment hashtag comprise keyword(s), key phrase(s), categories, taxonomy, keyword(s) icon, and allow space in hashtag keywords.
In an embodiment present contextual menu on accessible hashtag control or link of hashtag to enable user to access one or more contextual menu items including take photo, record video, provide comments, provide structured review, provide microblog, like, dis-like, provide rating, like to buy, make order, buy, add to cart, share, and refer.
In an embodiment enabling user to chat with followers of hashtag(s).
In an embodiment enabling advertiser to advertise created or selected hashtag(s) by advertiser based on pay per auto presenting of said hashtag on user device model or pay per use of hashtag by user model.
In an embodiment auto present hashtag based on physical surround context-aware including user's current location, surround events, current location or place or point of interest related information, weather, date & time, check-in-place, user selected provided status, rules and any combination thereof.
In an embodiment auto present hashtag based on logical context-aware based on user data, subscribed hashtags, search, logs and any combination thereof.
In an embodiment user data comprise user's detail structured profile, user check-in place(s) or current or past location(s), user activities, actions, interactions, sense(s) generated or provided by or acquired from one or more types of sensors, updated status, connections or contacts, events, transactions, logs, behavior, posted or received contents and reactions on contents, preferences, privacy settings, notification settings and any combination thereof.
A geo-fence is a virtual perimeter for a real-world geographic area. A geo-fence could be dynamically generated, as in a radius around a store or point location, or a geo-fence can be a predefined set of boundaries, like school attendance zones or neighborhood boundaries. The use of a geo-fence is called geo-fencing, and one example of usage involves a location-aware device of a location-based service (LBS) user entering or exiting a geo-fence. This activity could trigger an alert to the device's user as well as messaging to the geo-fence operator. This info, which could contain the location of the device, could be sent to a mobile telephone or an email account. Geo-fencing, used with child location services, can notify parents if a child leaves a designated area. Geo-fencing used with locationized firearms can allow those firearms to fire only in locations where their firing is permitted, thereby making them unable to be used elsewhere. Geo-fencing is critical to telematics. It allows users of the system to draw zones around places of work, customers' sites and secure areas. These geo-fences, when crossed by an equipped vehicle or person, can trigger a warning to the user or operator via SMS or Email. In some companies, geo-fencing is used by the human resource department to monitor employees working in special locations, especially those doing field work. Using a geofencing tool, an employee is allowed to log his attendance using a GPS-enabled device when within a designated perimeter. Other applications include sending an alert if a vehicle is stolen and notifying rangers when wildlife stray into farmland. Geofencing, in a security strategy model, provides security to wireless local area networks. This is done by using predefined borders, e.g., an office space with borders established by positioning technology attached to a specially programmed server. The office space becomes an authorized location for designated users and wireless mobile devices.
Geo-fencing (geofencing) is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries. A geofence is a virtual barrier. Programs that incorporate geo-fencing allow an administrator to set up triggers so when a device enters (or exits) the boundaries defined by the administrator, a text message or email alert is sent. Many geo-fencing applications incorporate Google Earth, allowing administrators to define boundaries on top of a satellite view of a specific geographical area. Other applications define boundaries by longitude and latitude or through user-created and Web-based maps. The technology has many practical uses. For example, a network administrator can set up alerts so when a hospital-owned iPad leaves the hospital grounds, the administrator can disable the device. A marketer can geo-fence a retail store in a mall and send a coupon to a customer who has downloaded a particular mobile app when the customer (and his smartphone) crosses the boundary.
Geo-fencing has many uses including: Fleet management- e.g. When a truck driver breaks from his route, the dispatcher receives an alert. Human resource management - e.g. An employee smart card will send an alert to security if an employee attempts to enter an unauthorized area. Compliance management - e.g. Network logs record geo-fence crossings to document the proper use of devices and their compliance with established rules. Marketing - e.g. A restaurant can trigger a text message with the day's specials to an opt-in customer when the customer enters a defined geographical area. Asset management - e.g. An RFID tag on a pallet can send an alert if the pallet is removed from the warehouse without authorization. Law enforcement - e.g. An ankle bracelet can alert authorities if an individual under house arrest leaves the premises.
Rather than using a GPS location, network-based geofencing "uses carrier-grade location data to determine where SMS subscribers are located." If the user has opted in to receive SMS alerts, they will receive a text message alert as soon as they enter the geofence range. As always, users have the ability to opt-out or stop the alerts at any time.
A geofence is a virtual perimeter for a real-world geographic area. It can be a radius around a location or a pre-defined set of boundaries. Some third-party APIs, e.g. Locate, enable geo-fences within 10 meters as well.
Beacons can achieve the same goal as app-based geo-fencing without invading anyone's privacy or using a lot of data. They can't pinpoint the user's exact location on a map like a geo-fence can, but they can still send signals when triggered by certain events (like entering or exiting the beacon's signal, or getting within a certain distance of the beacon) and they can determine approximately how close the user is to the beacon, down to a few inches. Best of all, because beacons rely on Bluetooth technology, they hardly use any data and won't affect the user's battery life. A Beacon is a piece of hardware an advertiser or merchant can add to their location or place, e.g. a shop, and target customers or users as they arrive, as they leave or as they dwell. Beacons are great for proximities and knowing at a very granular level that someone is near a certain object or product. Beacons send out messages via a Bluetooth connection to application users that enter a specified range and are perfect for in-store and micro channels.
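Beacon proximity of the kind described above is commonly estimated from received signal strength; the sketch below uses a log-distance path-loss model with assumed constants and is not taken from the disclosure.

```python
def beacon_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance to a beacon from RSSI; tx_power_dbm is the calibrated
    RSSI measured at 1 metre (an assumed constant)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def proximity_zone(rssi_dbm):
    d = beacon_distance_m(rssi_dbm)
    if d < 0.5:
        return "immediate"   # user is next to the shelf or product
    if d < 4.0:
        return "near"        # user is inside the shop area
    return "far"

print(proximity_zone(-65))   # e.g. trigger an in-store offer when "near"
```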
Geo-location: identifying the real-world location of a user with GPS, Wi-Fi, and other sensors.
Geo-fencing: taking an action when a user enters or exits a geographic area.
Geo-awareness: customizing and localizing the user experience based on a rough approximation of user location, often used in browsers.
One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device. A programmatically performed step may or may not be automatic.
One or more embodiments described herein can be implemented using programmatic modules, engines, or components. A programmatic module, engine, or component can include a program, a sub-routine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions. As used herein, a module or component can exist on a hardware component independently of other modules or components. Alternatively, a module or component can be a shared element or process of other modules, programs or machines.
Some embodiments described herein can generally require the use of computing devices, including processing and memory resources. For example, one or more embodiments described herein may be implemented, in whole or in part, on computing devices such as servers, desktop computers, cellular or smartphones, personal digital assistants (e.g., PDAs), laptop computers, printers, digital picture frames, network equipment (e.g., routers) and tablet devices. Memory, processing, and network resources may all be used in connection with the establishment, use, or performance of any embodiment described herein (including with the performance of any method or with the implementation of any system).
Furthermore, one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium. Machines shown or described with figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing embodiments of the invention can be carried and/or executed. In particular, the numerous machines shown with embodiments of the invention include processor(s) and various forms of memory for holding data and instructions. Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash memory (such as carried on smartphones, multifunctional devices or tablets), and magnetic memory. Computers, terminals, network enabled devices (e.g., mobile devices, such as cell phones) are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums. Additionally, embodiments may be implemented in the form of computer-programs, or a computer usable carrier medium capable of carrying such a program.
The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention that fall within the true spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following drawings. In the drawings, like reference numerals refer to like parts throughout the various figures unless otherwise specified.
For a better understanding of the present invention, reference will be made to the following Detailed Description, which is to be read in association with the accompanying drawings, wherein:
FIG. 1 is a network diagram depicting a network system having a client-server architecture configured for exchanging data over a network, according to one embodiment.
FIG. 2 illustrates components of an electronic device implementing notification system in accordance with the invention.
FIG. 3 illustrates a notification system in accordance with the disclosed architecture.
FIG. 4 illustrates flowchart explaining notification system, according to an embodiment.
FIG. 5 and FIG. 6 illustrate exemplary graphical user interface, describing disclosed architecture with examples;
FIG. 7 illustrates flowchart explaining guide system, according to an embodiment.
FIG. 8 illustrates an auto determined, auto suggested, auto identified contextual or matched or dynamic destination(s) and auto send to auto identified destination(s) system with some examples, according to one embodiment.
FIG. 9 illustrates a rich story system configured in accordance with an embodiment of the invention.
FIG. 10 illustrates flowchart explaining rich story system, according to an embodiment.
FIG. 11 illustrates an exemplary graphical user interface, describing the rich story system with some examples;
FIG. 12 illustrates an exemplary graphical user interface of a rich story related gallery and a generated or updated map of visited POIs, enabling access to each POI specific information or shared media items via a contextual menu, describing the rich story system with some examples;
FIG. 13 is a block diagram that illustrates a mobile computing device upon which embodiments described herein may be implemented.
While the invention is described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described. It should be understood, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word "may" is used in a permissive sense (e.g., meaning having the potential to), rather than the mandatory sense (e.g., meaning must). Similarly, the words "include", "including", and "includes" mean including, but not limited to.
DETAILED DESCRIPTION OF THE DRAWINGS
FIG. 1 is a network diagram depicting a network system 100 having a client-server architecture configured for exchanging data over a network, according to one embodiment. For example, the network system 100 may be a messaging system where clients may communicate and exchange data within the network system 100. The data may pertain to various functions (e.g., sending and receiving notifications, text and media communication, media items, and receiving search query or search result) associated with the network system 100 and its users. Although illustrated herein as client-server architecture, other embodiments may include other network architectures, such as peer-to-peer or distributed network environments.
A platform, in an example, includes a server 110 which includes various applications including Guide System Application 156, Rich Story Application 154, Auto Identify and auto present Destinations Application 152 and Notification or Indication Application 150, and may provide server-side functionality via a network 125 (e.g., the Internet) to one or more clients. The one or more clients may include users that utilize the network system 100 and, more specifically, the server applications 136, to exchange data over the network 125. These operations may include transmitting, receiving (communicating), and processing data to, from, and regarding content and users of the network system 100. The data may include, but is not limited to, content and user data such as user profiles, user status, user location or checked-in place, search queries, saved search results or bookmarks, privacy settings, preferences, created events, feeds, stories related settings & preferences, user contacts, connections, groups, networks, opt-in contacts, followed feeds, stories & hashtags, following users & followers, user logs of user's activities, actions, events, transactions, messaging content, shared or posted contents or one or more types of media including text, photo, video, edited photo or video e.g. applied one or more photo filters, lenses, emoticons, overlay drawings or text, messaging attributes or properties, media attributes or properties, client device information, geolocation information, and social network information, among others.
In various embodiments, the data exchanges within the network system 100 may be dependent upon user-selected functions available through one or more client or user interfaces (UIs). The UIs may be associated with a client machine, such as mobile devices or one or more types of computing device 130, 135, 140, 145, 175. The mobile devices e.g. 130 and 135 may be in communication with the server application(s) 136 via an application server 160. The mobile devices e.g. 130, 135 include wireless communication components, and audio and optical components for capturing various forms of media including photos and videos as described with respect to FIG. 2.
For the server application(s) 136, an application program interface (API) server is coupled to, and provides a programmatic interface to, the application server 160. The application server 160 hosts the server application(s) 136. The application server 160 is, in turn, shown to be coupled to one or more database servers 164 that facilitate access to one or more databases 115.
The Application Programming Interface (API) server 162 communicates and receives data pertaining to notifications, messages, media items, and communication, among other things, via various user input tools. For example, the API server 162 may send and receive data to and from an application running on another client machine (e.g., mobile devices 130, 135, 140, 145 or one or more types of computing devices e.g. 175 or a third party server).
The server application(s) 136 provides messaging mechanisms for users of the mobile devices e.g. 130, 135 to send messages that include text and media content such as pictures and video and search request, feed request or request to view shared media or contents by user or authorized user, subscribe or follow request, request to access search query based feeds and stories or galleries or album. The mobile devices 130, 135 can access and view the messages from the server application(s) 136. The server application(s) 136 may utilize any one of a number of message delivery networks and platforms to deliver messages to users. For example, the messaging application(s) 136 may deliver messages using electronic mail (e-mail), instant message (IM), Push Notifications, Short Message Service (SMS), text, facsimile, or voice (e.g., Voice over IP (VoIP)) messages via wired (e.g., the Internet), plain old telephone service (POTS), or wireless networks (e.g., mobile, cellular, WiFi, Long Term Evolution (LTE), Bluetooth).
FIG. 1 illustrates an example platform, under an embodiment. According to some embodiments, system 100 can be implemented through software that operates on a portable computing device, such as a mobile computing device 110. System 100 can be configured to communicate with one or more network services, databases, objects that coordinate, orchestrate or otherwise provide advertised contents of each user to other users of network. Additionally, the mobile computing device can integrate third-party services which enable further functionality through system 100.
The system enables users to use the platform for receiving indication(s) or notification(s) or information related to a contextual point of interest or place or spot where the user can prepare one or more types of media or content, including capturing photo(s), recording video(s), broadcasting a live stream or drafting post(s), and share them with auto identified contextual one or more types of destinations or entities or with selected one or more types of destinations including one or more contacts, groups, networks, feeds, stories, categories, hashtags, servers, applications, services, networks, devices, domains, web sites, web pages, user profiles and storage mediums. Various embodiments of the system also enable the user to create events or groups, so that invited participants or members present at a particular place or location can share media including photos and videos with each other. While FIG. 1 illustrates a gateway 120, a database 115 and a server 110 as separate entities, the illustration is provided for example purposes only and is not meant to limit the configuration of the system. In some embodiments, gateway 120, database 115 and server 110 may be implemented in the system as separate systems, a single system, or any combination of systems.
As illustrated in FIG. 1, the system may include posting or sender user devices or mobile devices 130/140 and viewing or receiving user devices or mobile devices 135/145. Devices or mobile devices 130/140/135/145 may be a particular set number of or an arbitrary number of devices or mobile devices which may be capable of posting, sharing, publishing, broadcasting, advertising, notifying, sensing, sending, presenting, searching, matching, accessing and managing shared contents. Each device or mobile device in the set of posting or sending or broadcasting or advertising or sharing user(s) devices 130/140 and viewing or receiving user(s) devices or mobile devices 135/145 may be configured to communicate, via a wireless connection, with each one of the other mobile devices 130/140/135/145. Each one of the mobile devices 130/140/135/145 may also be configured to communicate, via a wireless connection, to a network 125, as illustrated in FIG. 1. The wireless connections of mobile devices 130/140/135/145 may be implemented within a wireless network such as a Bluetooth network or a wireless LAN.
As illustrated in FIG. 1, the system may include gateway 120. Gateway 120 may be a web gateway which may be configured to communicate with other entities of the system via wired and/or wireless network connections. As illustrated in FIG. 1, gateway 120 may communicate with mobile devices 130/140/135/145 via network 125. In various embodiments, gateway 120 may be connected to network 125 via a wired and/or wireless network connection. As illustrated in FIG. 1, gateway 120 may be connected to database 115 and server 110 of system. In various embodiments, gateway 120 may be connected to database 115 and/or server 110 via a wired or a wireless network connection.
Gateway 120 may be configured to send and receive user contents or posts or data to targeted or prospective, matched & contextual viewers based on preferences (wherein user data comprises user profile, user connections, connected users' data, user shared data or contents, user logs, activities, actions, events, senses, transactions, status, updates, presence information, locations, check-in places and the like) to/from mobile devices 130/140/135/145. For example, gateway 120 may be configured to receive posted contents provided by posting users or publishers or content providers and forward them to database 115 for storage.
As another example, gateway 120 may be configured to send or present posted contents stored in database 115 to contextual viewers at mobile devices 130/140/135/145. Gateway 120 may be configured to receive search requests from mobile devices 130/140/135/145 for searching and presenting posted contents.
For example, gateway 120 may receive a request from a mobile device and may query database 115 with the request for searching and matching request-specific matched posted contents, sources, followers, following users and viewers. Gateway 120 may be configured to inform server 110 of updated data. For example, gateway 120 may be configured to notify server 110 when a new post has been received from a mobile device or a device of posting or publishing users or content broadcaster(s) or provider(s) and stored on database 115.
As illustrated in FIG. 1, the system may include a database, such as database 115. Database 115 may be connected to gateway 120 and server 110 via wired and/or wireless connections.
Database 115 may be configured to store a database of registered users' profiles, accounts, posted or shared contents, followed updated keyword(s), key phrase(s), named entities, nodes, ontology, semantic syntax, categories & taxonomies, user data, and payments information received from mobile devices 130/140/135/145 via network 125 and gateway 120.
Database 115 may also be configured to receive and service requests from gateway 120. For example, database 115 may receive, via gateway 120, a request from a mobile device and may service the request by providing, to gateway 120, user profile, user data, posted or shared contents, user followers, following users, viewers, contacts or connections, user or provider account's related data which meet the criteria specified in the request. Database 115 may be configured to communicate with server 110.
As illustrated in FIG. 1, the system may include a server, such as server 110. Server may be connected to database 115 and gateway 120 via wired and/or wireless connections. As described above, server 110 may be notified, by gateway 120, of new or updated user profile, user data, user posted or shared contents, user followed updated keyword(s), key phrase(s), named entities, nodes, ontology, semantic syntax, categories & taxonomies & various types of status stored in database 115.
FIG. 1 illustrates a block diagram of a system configured to implement the various embodiments related to a platform where user(s) can be notified about contextual points of interest, places, positions, locations or spots, so the user comes to know that he/she can capture or prepare contents related to said identified or notified or indicated point of interest, place or spot, or one or more entities related to that point of interest, place or spot, including a person, shop, showcase, product, item, thing & service, and post automatically to said point of interest or notification associated destination, or to an auto-identified destination based on an object inside the captured photo or recorded video (e.g. based on a logo of a brand or shop and/or the location of the shop), e.g. a page or feed or gallery or album of said notification associated entity, e.g. seller, person, brand, company, manufacturer, organization & the like. The system intelligently provides real-time contextual notifications to the user, informing the user where, when and what to photograph, record, live stream or prepare as one or more types of media, and share or post to automatically identified destinations or selected one or more destinations. While FIG. 1 illustrates a gateway 120, a database 115 and a server 110 as separate entities, the illustration is provided for example purposes only and is not meant to limit the configuration of the system. In some embodiments, gateway 120, database 115 and server 110 may be implemented in the system as separate systems, a single system, or any combination of systems.
FIG. 2 illustrates an electronic device 200 implementing operations of the invention. In one embodiment, the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236. The processor 230 may be a central processing unit and/or a graphics processing unit. The memory 236 is a combination of flash memory and random access memory. The memory 236 stores a notification application 260 to implement operations of one of the embodiments of the invention. The notification or indication application 260 may include executable instructions to access a server which coordinates operations disclosed herein.
Alternately, the notification application 260 may include executable instructions to coordinate some of the operations disclosed herein, while the server module 150 implements other operations. The memory 236 stores an auto identify and auto present destinations application 261 to contextually identify destinations and to auto present the auto-identified destinations based on a plurality of contextual factors, to implement operations of another embodiment of the invention. The auto identify and auto present destinations application 261 may include executable instructions to access a server which coordinates operations disclosed herein. Alternately, the auto identify and auto present destinations application 261 may include executable instructions to coordinate some of the operations disclosed herein, while the server module 152 implements other operations. The memory 236 stores a rich story application 265 to implement operations of one of the embodiments of the invention. The rich story application 265 may include executable instructions to access a server which coordinates operations disclosed herein. Alternately, the rich story application 265 may include executable instructions to coordinate some of the operations disclosed herein, while the server module 154 implements other operations. The memory 236 stores a guide system application 276 to implement operations of one of the embodiments of the invention. The guide system application 276 may include executable instructions to access a server which coordinates operations disclosed herein. Alternately, the guide system application 276 may include executable instructions to coordinate some of the operations disclosed herein, while the server module 156 implements other operations. The processor 230 is also coupled to image sensors 238. The image sensors 238 may be known digital image sensors, such as charge coupled devices. The image sensors 238 capture visual media and present the visual media on the display 210 so that a user can observe the captured visual media.
A touch controller 215 is connected to the display 210 and the processor 230. The touch controller 215 is responsive to haptic signals applied to the display 210.
The electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220 to provide connectivity to a wireless network. A power control circuit 225 and a global positioning system (GPS) processor 235 may also be utilized. While many of the components of FIG. 2 are known in the art, new functionality is achieved through the notification application 260 operating in conjunction with a server module 150, Auto Identify and auto present Destinations Application 261 operating in conjunction with a server module 152, Rich Story Application 265 operating in conjunction with a server module 154 and Guide System Application 276 in conjunction with a server module 156.
The notification application 260 includes executable instructions to present or send contextual point of interest or prospective location or place information, or location- or place-specific information about one or more types of entities, to the user, for enabling the user to prepare content or one or more types of media specific to that prospective place or spot, including a photo, a video, a live stream broadcast or a draft post, for sending to auto-identified contextual one or more types of one or more destinations or entities, or to selected one or more types of destinations, including one or more contacts, groups, networks, feeds, stories, categories, hashtags, servers, applications, services, networks, devices, domains, web sites, web pages, user profiles and storage mediums, as discussed below. The notification application 260 presents the notification to the user at user device 200/360.
FIG. 2 shows a block diagram illustrating one example embodiment of a mobile device 200. The mobile device 200 includes an optical sensor 244 or image sensor 238, a Global Positioning System (GPS) sensor 235, a position sensor 242, a processor 230, a storage device 286, and a display 210. The optical sensor 244 includes an image sensor 238, such as a charge-coupled device. The optical sensor 244 captures visual media. The optical sensor 244 can be used to capture media items such as pictures and videos.
The GPS sensor 235 determines the geolocation of the mobile device 200 and generates geolocation information (e.g., coordinates including latitude, longitude, altitude). In another embodiment, other sensors may be used to detect a geolocation of the mobile device 200. For example, a WiFi sensor, Bluetooth sensor or Beacons, including iBeacons, or other accurate indoor or outdoor location determination and identification technologies can be used to determine the geolocation of the mobile device 200.
The position sensor 242 measures a physical position of the mobile device relative to a frame of reference. For example, the position sensor 242 may include a geomagnetic field sensor to determine the direction in which the optical sensor 244 or the image sensor 238 of the mobile device is pointed, and an orientation sensor 237 to determine the orientation of the mobile device (e.g., horizontal, vertical, etc.).
The processor 230 may be a central processing unit that includes a media capture application 278, a media display application 280, and a media sharing application 282.
The media capture application 278 includes executable instructions to generate media items such as pictures and videos using the optical sensor 244 or image sensor 238. The media capture application 278 also associates a media item with the geolocation and the position of the mobile device 200 at the time the media item is generated, using the GPS sensor 235 and the position sensor 242.
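By way of a non-limiting illustration, the following Python sketch shows one possible way such a media capture application could tag a freshly captured media item with the geolocation and device position read at capture time. The names DevicePose, MediaItem, capture_with_metadata and read_pose are hypothetical and do not correspond to any numbered module in the figures.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DevicePose:
    latitude: float       # from the GPS sensor
    longitude: float
    altitude: float
    azimuth_deg: float    # compass direction the camera is pointed
    orientation: str      # e.g. "horizontal" or "vertical"

@dataclass
class MediaItem:
    path: str                              # local file path of the photo or video
    captured_at: float = field(default_factory=time.time)
    pose: Optional[DevicePose] = None      # geolocation + position at capture time

def capture_with_metadata(path, read_pose):
    """Associate a captured media file with the device pose read at capture time."""
    return MediaItem(path=path, pose=read_pose())

# Usage: read_pose would wrap the platform's GPS and position sensors.
item = capture_with_metadata(
    "IMG_0001.jpg",
    read_pose=lambda: DevicePose(40.7580, -73.9855, 15.0, 270.0, "vertical"),
)
```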
The rich story application 265 includes executable instructions to enable a user of the mobile device 200 to create, start, pause, end, cancel or remove, update and manage one or more stories, and facilitates, guides, notifies or navigates the user to identify contextual points of interest based on user data and the user's current location or position information, or to identify contextual points or places that fall within the user's nearest locations, surround the user's location, or fall within a set particular radius of geographical location boundaries, and enables the user to capture or record a photo or video related to that place or point of interest and add it to a created gallery, story, feed, album or folder. In one embodiment, the media display application 280 includes executable instructions to determine whether the geolocation of the mobile device 200 corresponds to the geolocation of one of the media items stored at server 110 or accessed by server 110. The media display application 280 displays the corresponding media item in the display 210 when the mobile device 200 is at the geolocation where the media item was previously generated by other users.
The media sharing application 282 includes executable instructions to enable the user to share rich story or one or more types of visual media to one or more selected or auto identified destinations or users of network.
The storage device 286 includes a memory that may be or include flash memory, random access memory, any other type of memory accessible by the processor 230, or any suitable combination thereof. The storage device 286 stores the media items generated or shared or received by the user and also stores the corresponding geolocation information, auto-identified system data including date & time, auto-recognized keywords, metadata, and user provided information. The storage device 286 also stores executable instructions corresponding to the media capture application 278, the media display application 280, the media sharing application 282, the notification application 260, rich story application 265, auto identify and auto present application 261, and guide system application 276.
The display 210 includes, for example, a touch screen display. The display 210 displays the media items generated by the media capture application 278. A user captures, records and selects media items for adding to a rich story by touching the corresponding media items on the display 210. A touch controller monitors signals applied to the display 210 to coordinate the capturing, recording, and selection of the media items.
The mobile device 200 also includes a transceiver that interfaces with an antenna. The transceiver may be configured to both transmit and receive cellular network signals, wireless data signals, or other types of signals via the antenna, depending on the nature of the mobile device 200. Further, in some configurations, the GPS sensor 235 may also make use of the antenna to receive GPS signals.
Figure 3 illustrates the notification application 260. The notification application 260 can access one or more digital items storage medium 370, user data 355, rule base storage medium 366, Points of Interest or points or places or positions storage medium 345, media storage medium, and guide system resources database or storage medium 346 (discussed in detail in Figure 7, particularly at 755) from server 110 and/or one or more server(s) of one or more domain(s).
The notification application 260 includes a notification module or component 312, a geolocation module or component 317, a position module or component 317, a user data module or component 353, a points of interest or points module 351, a rules module or rules engine or component 367, a digital items module or component 372 e.g. to access or present camera display screen via camera module or dynamic generated review form module 378.
The camera module 202 communicates with the media capture application 278 to access the media items generated at the mobile device 200. In one example, the camera module 202 accesses the media items selected for adding to rich story or sharing to one or more selected or auto identified destinations by the user of the mobile device 200. In another example, the camera module 202 accesses media items generated from other mobile devices.
The geolocation module or component 317 communicates with the GPS sensor 235 to access geolocation information of the user device 200 and the location at which a media item was captured or selected by the user. The geolocation information may include GPS coordinates of the mobile device 200 when the mobile device 200 enters or exits a point of interest, position, point or place, or generates the media items.
The geo-location or position module 317 communicates with the position sensor 242 to access direction information and position information of the mobile device 200 at the time the mobile device 200, or the user, reaches, enters, and/or exits the nearest or contextual or specified point of interest and generates the media item. The direction information may include a direction (e.g., North, South, East, West, or other azimuth angle) in which the mobile device 200 was pointed when the mobile device 200 generated the media item. The orientation information may identify an orientation (e.g., horizontal, vertical or one or more types of angles) at which the mobile device 200 was pointed when the mobile device 200 generated the media item.
The notification module or component 312 of the notification application 260 accesses predefined ranges for the geolocation information, direction information, and position information. The predefined ranges identify a range for each parameter (e.g., geolocation, direction, and position). The geolocation or position component or module 317 communicates with the GPS sensor 235 to access an updated or current geolocation of the mobile device 200. The geolocation information may include updated GPS coordinates of the mobile device 200. In one example, the geolocation or position component or module 317 periodically accesses the geolocation information every minute. In another example, the geolocation or position component or module 317 may dynamically access the geolocation information based on other usage (e.g., every time the mobile device 200 is used by the user). In another embodiment, the geolocation or position component or module 317 may use various available technologies which determine and identify accurate user or user device location or position, including Accuware™, which provides up to approximately 6-10 feet of indoor or outdoor location accuracy for user device 200 and can be integrated via an Application Programming Interface (API). Various types of Beacons, including iBeacons, help in identifying the user's exact location or position. Many companies tap into Wi-Fi signals that are all around us, including when we are indoors.
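For illustration only, a minimal Python sketch of the periodic or usage-driven geolocation refresh described above is given below; GeolocationTracker, read_location and max_age_s are illustrative names, not elements of the figures.

```python
import time

class GeolocationTracker:
    """Keeps a cached device geolocation that is refreshed either on a fixed
    schedule (e.g. once a minute) or whenever the application is used."""

    def __init__(self, read_location, max_age_s=60.0):
        self._read = read_location     # callable returning (lat, lon) from the GPS/Wi-Fi/beacon layer
        self._max_age = max_age_s
        self._cached = None
        self._read_at = 0.0

    def current(self, force=False):
        """Return the cached fix, re-reading it if stale or if forced."""
        if force or self._cached is None or time.time() - self._read_at > self._max_age:
            self._cached = self._read()
            self._read_at = time.time()
        return self._cached

# Usage: tracker.current() inside a periodic notification loop, and
# tracker.current(force=True) every time the device is used (e.g. camera opened).
tracker = GeolocationTracker(read_location=lambda: (40.7580, -73.9855))
```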
The position module 317 communicates with the position sensor 242 to access direction information and position information of the mobile device 200. The direction information may include a direction in which the mobile device 200 is currently pointed. The position information may identify an orientation in which the mobile device 200 is currently kept.
The notification component or module 314 accesses the current geolocation of the mobile device 200, the current direction and position of the mobile device 200, and the corresponding boundaries for the nearest or relevant or contextual points of interest/places/points/spots e.g. 310 or 305 or 307, or points of interest/places/points/spots e.g. 310 or 305 or 307 within a set particular radius of boundaries. The notification component or module 314 compares the current geolocation, direction, and position of the mobile device 200 with the corresponding boundaries of the nearest or relevant or contextual points of interest/places/points/spots, or points of interest/places/points/spots within a set particular radius of boundaries. If the notification component or module 314 determines that the current geolocation and/or direction and/or position of the mobile device 200 are within the boundaries of a nearest or relevant or contextual point of interest/place/point/spot e.g. 310 or 305 or 307, or a point of interest/place/point/spot e.g. 310 or 305 or 307 within a set particular radius of boundaries, the notification component or module 314 displays the notification 314 on the display 210 of the user device 200. In another example, if the notification component or module 314 determines that any combination of the current geolocation, direction, and position of the mobile device 200 is within a corresponding boundary of an identified or matched nearest or relevant or contextual point of interest/place/point/spot, or a point of interest/place/point/spot e.g. 310 or 305 or 307 within a set particular radius of boundaries, the notification component or module 314 displays the notification 314 in the display 210 of the device 200. For example, the notification component or module 314 displays the notification 314 when the notification component or module 314 determines that a current geolocation of the mobile device 200 is within a geolocation boundary of an identified or matched nearest or relevant or contextual point of interest/place/point/spot e.g. 310 or 305 or 307, or a point of interest/place/point/spot e.g. 310 or 305 or 307 within a set particular radius of boundaries, regardless of a current direction and position of the mobile device 200.
In another example, once the notification component or module 314 determines that a current geolocation of the mobile device 200 is within a geolocation boundary of an identified or matched nearest or relevant or contextual point of interest/place/point/spot e.g. 310 or 305 or 307, or a point of interest/place/point/spot e.g. 310 or 305 or 307 within a set particular radius of boundaries, regardless of a current direction and position of the mobile device 200, the notification component or module 314 generates a notification 314. The notification component or module 314 causes the notification 314 to be displayed in the display 210. The notification informs the user of the mobile device 200 that the user can capture a photo, record a video or prepare user generated contents and post to one or more selected or auto-identified destinations including one or more contacts, connections, users of networks, targeted-criteria specific users, locally saved or made private for the user's access only, followers, public, one or more applications, servers, domains, web sites, web pages, user profiles, web services, networks, devices, storage mediums, types of albums, galleries, folders, feeds, stories, categories, hashtags, and keywords for making them available or searchable or viewable for recipients or users of networks.
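As a non-limiting sketch of the boundary comparison just described, the Python example below checks a device's geolocation, direction and orientation against per-POI ranges and produces a notification payload for each match. PoiBoundary, matches_poi and build_notifications are hypothetical names chosen for illustration, and the boundary fields are assumed rather than taken from the figures.

```python
from dataclasses import dataclass

@dataclass
class PoiBoundary:
    name: str
    lat_range: tuple                 # (min_lat, max_lat)
    lon_range: tuple                 # (min_lon, max_lon)
    direction_range: tuple = None    # optional (min_azimuth, max_azimuth) in degrees
    orientations: tuple = None       # optional, e.g. ("vertical",)

def _within(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

def matches_poi(lat, lon, azimuth, orientation, poi):
    """A POI matches when the geolocation lies inside its boundary and any
    configured direction/orientation constraints are also satisfied."""
    if not (_within(lat, poi.lat_range) and _within(lon, poi.lon_range)):
        return False
    if poi.direction_range and not _within(azimuth, poi.direction_range):
        return False
    if poi.orientations and orientation not in poi.orientations:
        return False
    return True

def build_notifications(lat, lon, azimuth, orientation, pois):
    """Return one notification payload per matched nearby POI."""
    return [
        {"title": "Photo spot nearby: " + p.name,
         "body": "Tap to open the camera and share to the suggested destinations."}
        for p in pois if matches_poi(lat, lon, azimuth, orientation, p)
    ]
```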
In yet another example, the media display application 280 generates a visual guide, such as an arrow or guided map (turn by turn routes), in the display of the mobile device 200 to guide and direct the user of the mobile device 200 to one or more nearest or relevant or contextual or matched points of interest or places or positions or spots or locations or points. For example, the mobile device 200 may display a right arrow to instruct the user to move and point the mobile device 200 further to the right.
FIG. 3 illustrates the notification application 260 or a notification system 300 in accordance with the disclosed architecture. The system 300 can include a notification module or component 312, a geolocation module or component 317, a position module or component 317, a user data module or component 353, a points of interest or points module 351, a rules module or rules engine or component 367, and a digital items module or component 372, e.g. to access or present a camera display screen via a camera module or a dynamically generated review form module 378. The notification application 260 or notification system 300 can access digital items storage medium 370, user data 355, rule base storage medium 366, Points of Interest or points or places or positions storage medium 345 and media storage medium.
The notification component 312 manages, identifies, generates, provides, configures and presents the notification 314 related to (via a geographical relationship 306) the geographic location of a user device 200 relative to a point of interest (POI) e.g. 310. The relationship 306 between the user device 200 (or 130/135/140/145) and the point of interest 310 can be defined according to proximity of the user device 200 to the point of interest 310, whether the user device 200 is detected to have entered the point of interest 310, whether the user device 200 has exited the point of interest 310, and/or lingered (dwell time) at the point of interest 310, for example.
When the geo-location of the user device 200 matches the geo-location information that defines the virtual perimeter (denoted by the dotted line object surrounding the Point of Interest 310 and other Point of Interests) or geographic location of the user device meets geo-location criteria related to the point of interest, specified events can be triggered to occur, such as sending the notification 314 to the user device 200.
A notification component 312 of the system 300 monitors the geographical location of the user device 200 (or 130/135/140/145) and communicates a notification 314 to a user device 200 (or 130/135/140/145) according to the user preferences, privacy settings, and one or more types of user data including user profile, logs, activities, actions, events, transactions, behavior, senses, interactions, shared contents and user connections or contacts 355, when the geographic location of the user device 200 (or 130/135/140/145) meets geo-location criteria (e.g., near, in, or exiting) related to the point of interest 310. Although only a single user device 200 is shown, it is within contemplation of the disclosed architecture that there can be multiple user devices, e.g. 130/135/140/145, as described herein below.
The user device 200 (or 130/135/140/145) can be a mobile device (e.g., a cellphone) the geographical location of which is monitored, and can be a mobile device (e.g., a cellphone) to which the notification 314 is communicated. Alternatively, the user device can include one or more of a computing device (e.g., portable computer, desktop computer, tablet computer, etc.), a web server, a mobile phone, etc.
The point of interest 310 can be a single geographic location specified in association with the advertisement and details related to advertisement including geo-location or place or position information including coordinates including latitude, longitude, and altitude.
The point of interest 310 can be one of a class of locations (e.g., all restaurants, all shopping malls in a five mile radius, etc.) specified in association with the advertisement, and the notification 314 is communicated when the user device 200 meets the geo-location criteria for one location (e.g., POI 310) of the class. In other words, a query provided to the system 300 can be "all shopping malls". Thus, when the user device 200 (as carried by a user) enters any shopping mall, such as the point of interest 310, the notification 314 and advertisement 343 are triggered for communication to all matched, intended and approved user devices (e.g., user device 200).
In this embodiment, the notification component 312 can receive geo-location information 318 that determines the geographical relationship 306 between the user device 200 and the point of interest 310. The geo-location information 318 can be obtained from a technology that identifies the location of an entity, such as global positioning system (GPS), triangulation, Wi-Fi, Bluetooth, iBeacons, third-party accurate location identification technologies including Accuware™ (which provides up to approximately 6-10 feet of indoor or outdoor location accuracy for user device 200 and can be integrated via an Application Programming Interface (API)), access points, pre-defined geo-fence boundaries, and other techniques used to ascertain the geographical location of the entity (e.g., cell phone).
Geo-fencing technology can be employed to determine the proximity relative to a point of interest. A geo-fence is a predefined virtual perimeter (e.g., within a two mile radius) relative to a physical geographic area. When the geo-location of the user device 200 matches the geo-location information that defines the virtual perimeter (denoted by the dotted line object surrounding the POI 310 and other POIs), specified events can be triggered to occur, such as sending the notification 314 to the user device 200.
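The geo-fence test can be sketched, purely for illustration, as a great-circle distance check against a circular perimeter; the function names haversine_m and geofence_event are assumptions and not part of the disclosed modules.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two coordinates."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def geofence_event(prev_inside, device, fence_center, radius_m):
    """Classify the device's relationship to a circular geo-fence as
    'enter', 'exit', 'dwell', or None (still outside)."""
    inside = haversine_m(*device, *fence_center) <= radius_m
    if inside and not prev_inside:
        return "enter", inside
    if not inside and prev_inside:
        return "exit", inside
    if inside:
        return "dwell", inside
    return None, inside

# Example: a two-mile (~3219 m) perimeter around a point of interest.
event, now_inside = geofence_event(False, (40.7590, -73.9845), (40.7580, -73.9855), 3219)
```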
FIG. 3 also illustrates a more detailed embodiment of the notification application 260 in accordance with the disclosed architecture. In this implementation, the notification component 312 can include a geo-location component 317 that continuously identifies geo-location information 318 of the user device and, based on current position information and user data, prepares a query to match contextual one or more points of interest (e.g., POI 310) relative to a query (e.g., the user device is near that particular location or place or position or coordinates, or e.g. all shopping malls) or relative to the user's current location, place, check-in place or auto-identified geo-location data or position or exact position from the Points of Interest / Points / Positions / Places / Locations Database 345. The query can be refreshed to include added points of interest (e.g., POI 307) and removed points of interest (e.g., POI 305) associated with a geo-fence. The notification 314 is then processed based on the refreshed query.
In an embodiment, advertisers can use the Advertiser User Interface 342 to prepare an advertisement 343, which is stored to the server 110 point of interest database 345, including defining geo-fence boundaries around their premise or shop or product or showcase, so that when a user or customer is close by, they receive a notification and associated digital items. The advertiser provides geo-location or position information about the advertised one or more products, services, shop(s), item(s), thing(s), person(s), and one or more types of entities by selecting from a map, selecting pre-identified & stored places from a map or list of places, setting the current location as the geo-location or position information, searching and providing geo-location or position information, or providing keywords, e.g. all "Esbada TM" shops of a particular location, e.g. New York City, so that when the current location of any user device e.g. 200 arrives or comes within a particular radius of the boundaries of the advertised one or more products, services, shop(s), item(s), thing(s), person(s), and one or more types of entities, a notification is provided to all nearest user devices. The advertiser can provide an advertisement description, bids, target criteria including sending notifications to target user devices which are near or within a particular radius, provide offers, media including photo or video or text, and select or set or associate one or more digital items 370, customized digital items, or digital items available from third parties' one or more sources, storage mediums, domains, servers, devices, and networks, for enabling the notification receiving user to access, open, download, install, use or invoke said digital item(s) 378 from storage medium 370 via digital item component 372, e.g. a product review or customized form for getting suggestions, feedback, complaints, comments, structured information & ratings from the user, or a camera display screen, so that when the user captures a photo or records a video of the advertised object(s) or product(s) or shop(s), it is automatically sent to the advertised destinations and/or all or selected contacts of the user, and in exchange offers, discounts, redeemable points and coupons are provided. So the advertiser can send a notification to all user devices near the advertised shop or product or showroom, and in the event of acceptance of the notification, present one or more types of contextual or advertisement-associated digital item(s) 370 including e.g. information, new product arrival information, customized offers for regular customers, and photos of new products. In an embodiment the advertiser creates, selects and applies rules with the advertisement which will apply to target user devices.
For example: send notifications only to new customers; send notifications to customers of other shops, e.g. jeweler shops, for upsells; send a notification and associated digital item, e.g. a customized, dynamically selected fields or generated personalized review form based on user data including purchase of a product or order of food; present an interface to play a game and win a lottery; or present a camera screen display to capture a brand's photo and add it to a rich story and/or send or refer it to the user's contacts, and based on the number of sharings or reactions provide benefits to the referring user.
The notification component 312 accesses, identifies, selects, sequences, applies, and executes one or more rules from the rule base 366 via rules component 367, and/or sensor data generated from one or more types of sensors, and/or accesses current location(s) or position information 318 via the geo-location or position component 317, and/or camera scan and/or date & time, and/or accesses user data 355 via the user data component 353, including the user's structured profile or fields and associated values such as gender, income range, age range, education, skills, interests, home address, work address and the like, and logs or identification of user activities, actions, events, transactions, interactions, communications, sharing, participations, collaborations, connections, behavior, one or more types of senses and status, and identifies, customizes, dynamically or automatically customizes or updates, selects, sets, applies and executes one or more rules from rule base 366 via rule component 367 for processing, preparing, generating and sending one or more notifications 314. For example, the system may automatically or manually identify various statuses of the user, including identifying that the user starts, is currently consuming or finishes a food intake, visits a particular place or tourist place, finishes viewing a movie or play or drama, enters a shop and asks about particular products, viewed particular products, or purchased or ordered products or services and paid bills; identify contacts or users who are with the user; identify that the user is in a moving position (walking, in a vehicle) and the associated or surrounding contextual places or positions or points of interest; identify that the user is viewing television; identify recording of video and auto-recognize objects, products and brands inside the video or series of images; identify that the user is sitting at a particular place, is free and available, is strolling inside a particular place e.g. an airport or mall and passing or near a particular shop, ordered particular types of food items, has exited a particular place e.g. a movie theatre, airplane, bus, cruise etc., or is traveling at particular place(s) or location(s) or point(s) of interest on a particular date and time; and track the user's locations, routes, status, position, interaction, voice and behavior, and based on numerous combinations present one or more types of contextual or pre-selected or auto-selected or customized digital items.
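A minimal, illustrative sketch of such rule-base evaluation is given below: each rule pairs a condition over a user-context dictionary with an action that builds a notification payload. The Rule class, the context keys and the example rule names are hypothetical and are not drawn from the rule base 366 itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # predicate over a user-context dict
    action: Callable[[dict], dict]      # builds a notification payload

def evaluate_rules(context, rules):
    """Run every rule whose condition matches the current user context and
    collect the notifications the matching rules produce."""
    return [rule.action(context) for rule in rules if rule.condition(context)]

# Hypothetical example rules; names and context keys are illustrative only.
rules = [
    Rule(
        name="finished-meal-review",
        condition=lambda c: c.get("status") == "finished_food_intake",
        action=lambda c: {"digital_item": "review_form", "place": c.get("place")},
    ),
    Rule(
        name="exited-airplane-feedback",
        condition=lambda c: c.get("status") == "exited" and c.get("place_type") == "airplane",
        action=lambda c: {"digital_item": "service_feedback_form", "place": c.get("place")},
    ),
]

notifications = evaluate_rules(
    {"status": "finished_food_intake", "place": "Cafe Coffee Day", "place_type": "cafe"},
    rules,
)
```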
There are numerous examples to practice the present invention. For example, when the user moves on a particular road, present a suggestion or complaint or feedback form related to said road and send it to associated destinations, or dynamically or contextually or automatically select or match or identify one or more destinations; for example, in this case send the filled-up form to government or municipal or toll contextual or associated officers to take further one or more actions or communications or provide support or answers to queries. In another example, when the user moves out of or exits a particular airplane, present associated or pre-configured review forms related to that particular airplane and enable the user to provide service feedback or suggestions or complaints, which are automatically sent to said particular airplane's related personnel. In another example, an advertiser or listing user can target female customers, and when any female user's device(s) are identified near the advertised shop or product or showcase, ask them to collect a gift or sample or try it at the shop, etc. In another example, when the user visits a particular site, after viewing the site or apartment, ask about the site.
In an embodiment, the advertiser is enabled to use the advertiser interface 342 and to select one or more types of triggers for sending notifications to target users, including: send the notification a particular set period of time before the user device enters the particular listed place or point of interest or location or point or position; when it enters; while it dwells; after a particular set duration of dwell has passed; while it exits; after a particular set period of time or duration after exit; after purchase of a product or taking of one or more user actions, activities and transactions, or the happening of particular event(s), send subsequent notification(s); send the notification at a particular date & time or range of date & time or scheduled period only; send notifications to pre-stored customers or selected or identified users only; and apply customized rules to specific users or categories of users only.
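The following Python sketch illustrates, under stated assumptions, how such advertiser-configured triggers could be evaluated against the latest geo-fence event; the Trigger fields and the should_fire function are illustrative names only.

```python
from dataclasses import dataclass
import time

@dataclass
class Trigger:
    kind: str               # "enter", "dwell", "exit" or "scheduled"
    dwell_s: float = 0.0    # minimum dwell before firing (kind == "dwell")
    delay_s: float = 0.0    # delay after exit before firing (kind == "exit")
    window: tuple = None    # optional (start_epoch, end_epoch) delivery window

def should_fire(trigger, event, entered_at=None, exited_at=None, now=None):
    """Decide whether an advertiser-configured trigger fires for the latest
    geo-fence event ('enter', 'dwell' or 'exit')."""
    now = now if now is not None else time.time()
    if trigger.window and not (trigger.window[0] <= now <= trigger.window[1]):
        return False                 # outside the scheduled delivery window
    if trigger.kind == "enter":
        return event == "enter"
    if trigger.kind == "dwell":
        return (event == "dwell" and entered_at is not None
                and now - entered_at >= trigger.dwell_s)
    if trigger.kind == "exit":
        # For delayed follow-ups, the caller re-invokes this check delay_s after exit.
        return exited_at is not None and now - exited_at >= trigger.delay_s
    if trigger.kind == "scheduled":
        return True                  # window already checked above
    return False
```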
In an embodiment, user interface 378 enables the user to provide or add one or more points of interest, places, points, positions, spots and locations and to provide associated metadata, categories, tags, keywords, phrases, taxonomy, description and one or more types of media.
In an embodiment, user interface 380 enables the user to provide one or more preferences or interests or filters, including one or more types or categories or tags or key phrases or favorite places, or keywords with Boolean operators (AND/OR/NOT/+/-/Phrases), related to points of interest, places, points, positions, spots and locations including shops, products, services, brands, named one or more types of entities, items, tourist places, selfie spots, monuments, structures, roads, restaurants, hotels, foods and the like. The user can apply notification settings including: receiving a particular number of notifications within a particular period of time; turning receiving of notifications on or off; stopping receiving of notifications from particular source(s); receiving notifications from a particular range or radius or boundaries of locations; selecting and applying one or more rules for receiving notifications; configuring do-not-disturb settings including receive all, scheduled, or from selected or all source(s); muting notifications for a particular period of time or until un-muted; changing ringtones, alert tones or vibration type(s); sending notifications as per the user's selected or manually provided status; sending notifications only when the user's current location is relative to photo spots or review spots and the like; and sending notifications only from scanned object(s), QR code(s), person(s), product(s), shop(s), item(s), thing(s) or one or more types of entities via the camera display screen. The user can apply one or more privacy settings including disclose or do not disclose the user's identity, location information, or one or more types of fields and associated values related to user profile data including name, address, contact information, gender, photo, video, age, qualification, income, skills, interests, work place address, one or more types of activities, actions, events, transactions, senses, behavior and the like.
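Purely as an illustrative sketch, the user-side notification preferences above could be applied as a delivery filter like the following; NotificationSettings, allow_notification and their field names are assumptions, not elements of interface 380.

```python
from dataclasses import dataclass, field
import time

@dataclass
class NotificationSettings:
    enabled: bool = True
    max_per_hour: int = 5                     # rate limit per trailing hour
    muted_until: float = 0.0                  # epoch seconds; 0 means not muted
    blocked_sources: set = field(default_factory=set)
    max_radius_m: float = 500.0               # only notify about nearby spots

def allow_notification(settings, source, distance_m, history, now=None):
    """Apply the user's notification preferences before delivering anything.
    `history` is a list of epoch timestamps of recently delivered notifications."""
    now = now if now is not None else time.time()
    if not settings.enabled or now < settings.muted_until:
        return False
    if source in settings.blocked_sources:
        return False
    if distance_m > settings.max_radius_m:
        return False
    recent = [t for t in history if now - t < 3600]
    return len(recent) < settings.max_per_hour

# Usage: check before each delivery and append time.time() to history on success.
ok = allow_notification(NotificationSettings(), "cafe-coffee-day", 120.0, history=[])
```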
One or more types of user interfaces 375 enable the user to provide one or more types of user data, including an updated user profile and structured fields or field-combination specific values.
User database 355 stores a plurality of types of user data generated by the user, received from one or more sources, or auto-generated.
The point of interest or positions or spots or places or locations database 345, related to one or more types of entities including shop, product, item, thing, art, person, showroom, showcase, school, college, road, tourist places, selfie spots, and user identified or provided or suggested places, comprises advertised and/or non-advertised, including user suggested or third-party provided, points of interest or positions or spots or places or locations related to one or more types of entities.
FIG. 4 illustrates a computer-implemented notification method in accordance with the disclosed architecture. At 405, the geographical location of the user computing device, e.g. mobile phone, is monitored by server module 150 relative to the geographical point of interest. At 408, server module 150 performs matchmaking of the user device's current location or position information and user data with the points of interest or positions or places or points or locations or spots data and associated one or more types of data, and further selects, customizes, auto-selects or auto-updates based on parameters, and applies and executes one or more rule(s) from the rule base to identify the nearest contextual one or more point(s) of interest or points or positions or places or spots or locations. At 412, the notification is processed and generated, by server module 150, based on the geographical location of the user computing device relative to the geographical point of interest and plurality types of user data. At 413, server module 150 checks whether the client application is open; if it is open, the method follows process 432, and if it is not open, the method follows process 416. At 416, a notification is automatically communicated, by server module 150, to the user computing device 200 at client application 260 based on the geographical location of the user computing device 200 relative to the point of interest. At 418, server module 150 presents one or more notifications at client application 260 of user device 200. At 430, client application 260 identifies that the user taps on a notification to access, open or invoke the notification, and in the event of tapping or clicking on a particular notification, at 432 client application 260 opens, auto-opens, or allows access to one or more associated digital items including one or more applications, interfaces, objects, user controls, user actions, web sites, web pages, dynamically created or customized or pre-selected forms, one or more types of media including one or more or a set or series or sequence of photos, videos, text, data or content or information, voice, emoticons, photo filters, digital coupons, multimedia, interactive contents, advertisements, and enables access to associated web services and data from one or more sources. At 434, server module 150 auto-identifies the notification and/or user and/or digital item(s) associated one or more destinations, or enables selection from suggested destinations or selection of one or more destinations from a list of destinations including one or more contacts, connections, groups, target criteria specific users, contextual or associated or preselected users of the network, feeds, stories, categories, hashtags, servers, applications, services, networks, devices, domains, web sites, web pages, user profiles and storage mediums.
Figure 5 illustrates an example prospective places or spots notification service, wherein user is notified about contextual or matched or associated prospective places or spots where user can capture photo or record video or broadcast live stream or prepare one or more type of media or contents related to that place or spot.
In an example, advertiser(s) or merchant(s) 505 is/are enabled to provide or create one or more advertisement campaigns, associated advertisement groups and advertisements, including geo-location or positional or place information of the advertised entity, for example a shop, product, particular part of a shop, department, showroom, showcase, item, thing, person or physical establishment; advertisement description; offer information including discount, coupon, redeemable points; offers or parts or components of the notification message including one or more types of media such as text, links, image, video, and user actions (e.g. "Like" button, "Take Photo" button, "Provide Video Review" button) 545/548/550/552; advertisement targeted keywords; bids; conditions associated with said offer or discount (e.g. send the photo to at least 10 users or get 10 likes or reactions on the captured photo and the like); photo filter or media to be overlaid on the photo by the user; and target criteria including type of user or user profile, for example gender, age, age range, education, qualification, skills, income range, hobbies, languages, region, home or work or other location(s) or place(s) or location boundaries, preferences of the user, and one or more types of user profile data or fields or associated values or ranges of values or selections. Information provided by advertiser(s) or merchant(s) 505 is sent to server 110 and stored at database or storage medium 525 via the prospective places or spots module 512, wherein prospective places or spots are places or spots where the contextual or associated or matched notified user can prepare or provide user generated contents including one or more types of media, for example photo, video, live video stream, review, blog, video review, ratings, likes, dislikes, notes, suggestions, micro blogging and the like.
In an example, user(s) 510 is/are enabled to provide information about prospective places or spots including geolocation or positional information, user actions, details, and the name of the entity, including school, college, shop, hotel, restaurant, home or house, apartment, tourist places, art gallery, beaches, roads, movable people or person, temple, malls, railway station, airport, bus stops, particular show rooms, show cases & the like, wherein in the event of detecting or matching the current location of other users of the network with said prospective place or location, the user is notified about the prospective place where the user can prepare and post or share or broadcast or send one or more types of media, including selecting or capturing a photo or recording a video or live streaming video, to one or more destinations, sources, users, contacts, connections, followers, groups, networks, devices, storage mediums, web services, web sites, applications, web pages, user profiles, hashtags, tags, categories, events and the like. Information provided by user(s) 510 is sent to server 110 and stored at database or storage medium 525 via the prospective places or spots module 512, wherein prospective places or spots are places or spots where the contextual or associated or matched notified user can prepare or provide user generated contents including one or more types of media, for example photo, video, live video stream, review, blog, video review, ratings, likes, dislikes, notes, suggestions, micro blogging and the like.
In an example, database or storage medium 530 stores information about the user provided by the user or any other sources, including user profile or user information, activities, actions, events, transactions, shared media, current location or check-in place or positional information, user preferences, privacy settings and other one or more types of settings.
In an example, database or storage medium 535 stores or updates rules.
In an example, database or storage medium 540 stores various types of media including text, photo, video, voice, files, emoticons, virtual goods, links & the like shared or provided by the user. In another embodiment server 110 can access various types of media from third-party applications, services, servers, networks, web sites, devices and storage mediums.
In an example, database or storage medium 541 stores digital items, applications, message templates, web service links, objects, interfaces, media, fonts, photo filters, virtual goods, digital coupons, and user actions which advertisers or merchants select while preparing advertisements or notification messages, or which the user can select while preparing or providing prospective places or locations or spots information.
In an example, based on user 510's current location or place or check-in place or positional information and the user's one or more types of data 530, server 110 matches user data 530 with the data of prospective places or spots 525 and applies one or more contextual or associated or identified rules stored at the rule base database or storage medium 535 to identify or recognize or detect matched prospective locations or places or spots from database or storage medium 525, whereupon user 510 is alerted or notified by sending a rich notification message e.g. 525/530/535/540/541 to user device 560, wherein the rich notification message is prepared by advertiser 505 or user 510 or server 110 based on data stored at 525/530/540/541, and user 510 is enabled to access the rich notification information and associated links or user action links 545/548/550/552 for providing or sending or posting or sharing or broadcasting one or more types of media or information 565 related to that place or location or that place's or location's associated entity including shop, product, person, company, firm, club, school, college, organization, hotel, restaurant, food, dress, item, thing, object, service, arts, art gallery, event, activity, action, transaction and the like.
In an example, user 510 can receive or view said server 110 provided rich notification(s) e.g. 525/530/535/540/541 at client device 560 via one or more communication interfaces or services or applications or channels including a push notification service. In an example, user 510 can access said one or more e.g. 525/530/535/540/541 rich notification messages. For example, user 510 receives and views rich notification 548 and taps on the "Take Photo/Video" user action or active link or accessible link, wherein based on the user's access or tap or click on said link, server 110 presents or downloads or installs or enables the user to access said link's associated application or interface or object or digital item or form or one or more controls (buttons, combo box, textbox etc.). For example, based on user 510's tapping on the "Take Photo/Video" link 548, server 110 presents the photo or video application or camera display feature of application 583 at user device 560 so user 510 can take one or more photos or videos related to rich notification 548, for example capturing a "Cafe Coffee Day" photo or video 565 via the "photo" icon or button 586. In an embodiment server 110 identifies or detects or verifies or matches an object, e.g. "coffee cup" 580, and/or text, e.g. "cafe coffee day" 599, inside said user 510's captured photo 565 with said advertiser 505's notification message 548 associated entity or details, to identify that user 510 took the photo or video 565 related to said clicked notification 548.
In an example, user 510 can view the offer 572 associated with that notification message 548 related advertisement stored at 525 and posted by advertiser 505. User 510 can get the offer when user 510 captures a photo 565 and shares said captured photo with user 510's contacts via posting or sending to selected contacts or followers or story or feed 595, which is stored at database or storage medium 540 of server 110 via Media Receiving Module 566.
In another embodiment server 110 enables user 510 to view media or photos or videos of other users related to that place or spot or to similar other places or spots, so the user can view angles and scenes and learn from them how, what, where and when to prepare media or capture a photo or record a video or broadcast a live stream.
In another embodiment user 510 is enabled to edit the media or photo or video, including editing the content, adding media, and applying one or more photo filters or doodles or lenses 585 related to that advertisement or notification message 548.
In another embodiment user 510 can receive notification(s) e.g. 525/530/535/540/541 at spectacles 590 or at a device e.g. 560 connected with spectacles 590, and user 510 is enabled to capture a photo or video via spectacles 590, which have an integrated wireless video camera 598 that enables the user to capture photos or record video clips and save them in spectacles 590 or to user device 560 connected with spectacles 590 via one or more communication interfaces, or save them to server 110 database or storage medium 540. The glasses begin to capture a photo or record video after user 510 taps a small button near the left camera 598. The camera can record videos for a particular period of time or until the user stops it. The snaps will live on the user's Spectacles until the user transfers them to smartphone 560 and uploads them to server database or storage medium 540 via Bluetooth or Wi-Fi or any communication interface, channel, medium, application or service.
In an example, by providing a rich notification service related to identifying user-specific contextual prospective media generation locations or places or spots, user 510 is enabled to capture each possible photo or record videos or share what the user is thinking about a particular location or place or said location's or place's related entity including shop, product, movie, person, thing, item, object, service & the like, and add it to a feed or day-to-day story and/or share it to one or more contacts, groups, followers, category users of the network, auto-identified contextual users, servers, destinations, devices, networks, web sites, web pages, user profiles, applications, services, databases, storage mediums, social networks and the like. The user can describe the whole day visually without missing any possible selfies or photos or videos related to the user's day-to-day life or activities or actions or events or transactions and share them with friends and contacts or followers.
For example, when user 510 visits various places in New York City, the user is notified at every possible prospective spot or place or location during the user's travelling, so the user can capture each possible moment via photos or videos, or describe moments via micro blogging or notes, without missing prospective photos or videos or selfies at the various visited places, locations and spots and those visited places', locations' and spots' related people, products, services, foods, items, natural scenes, objects, arts, photos, artistic roads, monuments, structures & buildings, fairs, light shows e.g. Times Square, showrooms, dresses, and selfie booths or spots. At present the user misses many prospective photos or videos or micro blog sharings because the user does not know where and when and what to capture, what to record, what to broadcast or what to write or post or share.
In an embodiment server 110 can auto-identify and store at storage medium 525 prospective places or spots based on the location or place or positional information of a particular number of users, a particular number of photos or videos taken by users or media posts by users, or a particular number of reactions on them, within a particular period of time or within a current particular range of time.
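One possible, purely illustrative way to derive such prospective spots is to bucket recent geo-tagged posts into small grid cells and keep the cells whose post count crosses a threshold within a trailing time window; identify_prospective_spots and its parameters below are assumptions for the sketch.

```python
from collections import defaultdict
import time

def identify_prospective_spots(media_posts, cell_deg=0.001, min_posts=50,
                               window_s=7 * 24 * 3600, now=None):
    """Bucket recent geo-tagged posts into small lat/lon grid cells and keep the
    cells that received at least `min_posts` posts within the trailing window."""
    now = now if now is not None else time.time()
    counts = defaultdict(int)
    for post in media_posts:                  # post: {"lat": ..., "lon": ..., "timestamp": ...}
        if now - post["timestamp"] > window_s:
            continue                          # outside the trailing time window
        cell = (round(post["lat"] / cell_deg), round(post["lon"] / cell_deg))
        counts[cell] += 1
    return [
        {"lat": cell[0] * cell_deg, "lon": cell[1] * cell_deg, "posts": n}
        for cell, n in counts.items() if n >= min_posts
    ]
```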
In an embodiment server 110 generates or updates a map or presentation of prospective places or spots via the prospective places or spots module for user 510, based on the database of prospective places or spots 525, user data including the user's current location 530, and the rule base 535, and presents it to the user or requesting user or searching user, and enables the user to access information related to each prospective place or spot and to access associated one or more digital items, applications, interfaces, photo filters, objects, links, user actions, advertisements, offers, digital coupons, media and the like.
In an embodiment the rule base comprises identifying or matching or executing various contextual or associated rules based on intelligently identifying, requesting, alerting, storing and processing various data of the user or of prospective places or spots, including: enabling the user to provide check-in place(s) or updated status, or auto-identifying the check-in place based on monitoring the current location of the user for a particular period of time; identifying the user's associated or accompanying friends based on monitoring of a similar current location for a particular period of time or check-in place; identifying the user's intent to capture a photo or video based on sensors; based on monitoring a particular speed of change of current location, identifying that the user or connected users of the user are moving, i.e. traveling via car or any other vehicle, walking, or seated at a fixed location; identifying the place or spot where a number of the user's photos or videos were taken or posts were made, or where the last photo or video was or was not taken; and asking for structured information from the user regarding the user's current or daily or particular date & time or range of date & time or scheduled activities, actions, events, transactions, plans, participation, interests & preferences.
In an embodiment the system updates the user with various statistics including the number of prospective places or spots where the user took photos or videos or shared posts, the number of identified prospective places or spots which the user missed (i.e. where the user did not take photos or videos or share posts), the number of reactions the user's shared media received, and logs of the user's activities including offers, discounts and coupons the user received from advertisers or merchants.
In an embodiment the Point of Interest, Positions, Places, Spots Map Generation Module 520 is discussed in detail in Figure 12.
Figure 6 illustrates exemplary graphical user interface(s). In the event of tapping on notification 548, user device 200 is presented with the notification-associated digital item, e.g. camera display screen interface 583, and in the camera display screen interface the user is automatically presented with the notification-associated destination(s) visual media capture controller (e.g. a hashtag (e.g. for posting to Twitter and/or Instagram and/or Facebook and/or one or more web sites, applications, storage mediums, servers, devices, networks and domains) and all contacts of user 610) and associated one or more types of information and media 611, so the user can tap once on said auto-presented contextual visual media capture controller 610 to capture a photo or record a video of a pre-set particular duration 631 and automatically send it to the notification-associated destination as well as to advertiser pre-set criteria specific destinations including all phone contacts, all social contacts, top 10 friends, family, one or more types of groups including college, school, work and associates, and followers. In another embodiment the user is also presented with one or more user actions 613 including selecting a pre-created message to send with said captured photo or recorded video, Like, providing a status including purchased, viewed, ate, drank and watched, and instructing the receiver including Refer others, Re-share to others, provide comments and the like. In another embodiment the user can remove 606 said auto-presented or notification-associated contextual visual media capture controller 610. In another embodiment the system automatically identifies one or more objects inside captured photos or recorded videos, e.g. the brand name "cafe coffee day" 694 and a coffee cup 694, and based on matching the user's exact place position or geo-location information with the location of the captured photo or recorded video the system identifies and verifies that the user captured said advertised-notification-related photo or video and sent it to advertiser-specified destinations, e.g. all contacts of the user; and based on the number of recipients the system identifies sending of the photo or video to an advertiser-set particular number of recipients, e.g. if sent to more than 10 contacts then the system provides pre-specified benefits to said user including a digital coupon, discount, gift, offer, cash back, redeemable points, voucher, invitation to take a sample and the like.
In another embodiment, in the event that an application or interface, e.g. the camera display screen, is already open, the user is automatically presented with the contextual or matched destination visual media capture controller 610 based on matching of the points of interest or points or advertised places or products positions or spots database 345 with the user's current location 317, user data, preferences and privacy settings 355 as discussed above.
In another embodiment, based on monitoring, tracking and identification of the user device's 200 current updated geo-location and position information 317, the system automatically removes the previously presented visual media capture controller 610 and the associated presented information 611 and user action(s) 613 and presents another contextual or matched destination visual media capture controller (if any) based on matching of the points of interest or points or advertised places or products positions or spots database 345 with the user's current updated or nearest or particular set radius specific location 318, user data, preferences and privacy settings 355 as discussed above. So the user is continuously updated with a notification and/or contextual visual media capture controller label or icon, with or without information and/or user action(s), and is thereby continuously informed about a new or nearest point of interest, place, point, advertised place, product or shop and is enabled with one tap to capture a photo or record a pre-set particular duration of video and automatically send it, or send it to selected destinations including one or more contacts, groups, and advertised destination(s) including the advertiser feed, story, album, gallery, web site, web page, database, application, server, storage medium, device and the like.
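A minimal Python sketch of this controller-refresh behaviour is given below for illustration only: the nearest point of interest within range (after filtering by user preferences) is selected, and the on-screen controller is swapped whenever it changes. The ui object, its methods and the record fields are hypothetical placeholders for the device-side presentation layer.

import math

def approx_m(lat1, lon1, lat2, lon2):
    # small-distance approximation, adequate for a few hundred metres
    kx = 111320.0 * math.cos(math.radians(lat1))
    return math.hypot((lat2 - lat1) * 111320.0, (lon2 - lon1) * kx)

def nearest_poi(location, pois, user_prefs, max_radius_m=200):
    allowed = [p for p in pois
               if p.get("category") not in user_prefs.get("muted_categories", [])]
    in_range = [(approx_m(location["lat"], location["lon"], p["lat"], p["lon"]), p)
                for p in allowed]
    in_range = [(d, p) for d, p in in_range if d <= max_radius_m]
    return min(in_range, key=lambda t: t[0], default=(None, None))[1]

def refresh_capture_controller(ui, location, pois, user_prefs):
    poi = nearest_poi(location, pois, user_prefs)
    if poi is None:
        ui.remove_capture_controller()      # nothing nearby: clear controller 610/611/613
    elif poi["id"] != ui.current_poi_id():
        ui.remove_capture_controller()      # drop the stale controller
        ui.show_capture_controller(label=poi["name"],
                                   info=poi.get("info"),
                                   destinations=poi.get("destinations", []))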
In an embodiment, in the event of capturing a photo or recording a video and auto-sending it to the associated one or more destination(s) via tapping or clicking on the auto-presented visual media capture controller 610, the user is presented with a pre-set particular duration delayed sending message 671 and an elapsed/remaining duration indicator 672, so within said duration 672 the user is able to preview and remove 673 the captured photo or video 631 and capture again if the user wants to send another photo or video.
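One possible shape of this delayed-send behaviour is sketched below in Python, assuming hypothetical names: the captured media is queued, a countdown runs (which could drive indicator 672), and the item is delivered only if the user has not cancelled it via the remove control 673.

import threading

class DelayedSender:
    def __init__(self, send_fn, delay_s=8):
        self.send_fn, self.delay_s = send_fn, delay_s
        self._timer, self._cancelled = None, False

    def queue(self, media, destinations):
        self._cancelled = False
        self._timer = threading.Timer(
            self.delay_s, lambda: self._fire(media, destinations))
        self._timer.start()               # remaining time could drive indicator 672

    def cancel(self):                     # user tapped remove 673 during the window
        self._cancelled = True
        if self._timer:
            self._timer.cancel()

    def _fire(self, media, destinations):
        if not self._cancelled:
            self.send_fn(media, destinations)

A background timer is used here only for compactness; an actual client would more likely tie the countdown to its UI event loop so the remaining time can be rendered.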
In another example, when user device 200 dwells in another place "Tea House", then based on the time spent or dwelled in "Tea House" for a particular duration, or based on the user selected status "Drinking" or "Drinking Tea" or the manual user status "Drinking Tea at Tea House", or based on an order or payment of a bill or a transaction indication from "Tea House" or from the user or server, or after spending a particular period of time and exiting from that place, the system identifies that the user consumed tea or a particular type of tea at "Tea House"; based on that, the system sends a notification to the user, and in response to accepting, tapping or clicking on notification 645 user device 200 is presented with a customized or pre-created and pre-stored or dynamically generated or "Tea House" (as an advertiser) provided digital item, e.g. Review Interface 655. So the user can provide a review immediately after consuming tea and the user-provided review is automatically sent to an authorized person of "Tea House" for further process and action.
The system, continuously or until switched off or muted by the user, monitors, tracks and identifies the user device's 200 current location or position information 318, identifies or matches or relates the nearest or surrounding place or point of interest or advertised place or points or positions or selfie spots from database 355, filters based on user data including user preferences and privacy settings, and, if the application is not open, sends a notification; in the event of a tap on the notification it presents the associated digital items including one or more applications, interfaces, dynamically generated or customized or pre-created forms, web sites, web pages, sets of user actions or controls, objects and one or more types of associated web services, data or contents or one or more types of media from one or more sources.
FIG. 7 illustrates a computer-implemented guided system for dynamically, context-aware, contextually and artificially intelligently facilitating the user in taking visual media including capturing a photo, recording a video or preparing one or more types of media including text, microblog and the like in accordance with the disclosed architecture. At 705, the geographical location of the user computing device, e.g. mobile phone 200, is monitored, by server module 156, relative to the geographical point of interest, e.g. 310. At 708, server module 156 matches the user device's 200 current location or position information 318 and user data 355 with points of interest or positions or places or points or locations or spots data and associated one or more types of data 345 and further selects, customizes, auto-selects or auto-updates based on parameters, applies and executes one or more rule(s) from rule base 366 to identify the nearest contextual one or more point(s) of interest or points or positions or places or spots or locations, e.g. 310 (or 307 or 305). At 712, the indication is processed and generated, by server module 156, based on the geographical location 318 of the user computing device 200 relative to the geographical point of interest 310 and plural types of user data 355. At 713, client application 276 checks whether the application is open; if the application is open then the process follows 732, and if the application is not open then the process follows 716. At 716 an indication is automatically communicated, by server module 156, to a user computing device 200 based on the geographical location 318 of the user computing device 200 relative to the point of interest 310. At 718, one or more indications are presented, by server module 156, at user device 200. At 730 the system identifies that the user tapped on an indication to access, open or invoke it, and in the event of tapping or clicking on a particular indication, at 732 the system opens, auto-opens or allows access to one or more associated digital items including one or more applications including a camera application to capture, display and share visual media, interfaces, objects, user controls, user actions, web sites, web pages, dynamically created or customized or pre-selected forms, one or more types of media including one or more or a set or series or sequence of photos, videos, text, data or content or information, voice, emoticons, photo filters, digital coupons, multimedia, interactive contents and advertisements, and enables access to associated web services and data from one or more sources. At 735 the guide system starts in the event of identifying 737 that the user wants to take visual media or prepare one or more types of media or content for the current context or current location or environment including the current or identified nearest point of interest, position or place or associated one or more types of entities including object(s), product(s), item(s), shop, person, infrastructure and the like. At 740 the system displays, by server module 156, curated or contextual or pre-stored or pre-configured or pre-selected one or more types of one or more media items which were previously taken or generated or provided from/at the identified POI, e.g. 394, or 3rd-party contextual stock photos or videos or similar types or patterns of places or locations or POIs or positions. At 750 the system provides, by server module 156, identified nearest or relative POI-related information including information that is pre-stored, auto-identified based on recognized objects inside visual media, or provided by other users of the network at/from/for the same POI or place or position.
At 755 the system provides or dynamically, context-aware, presents or instructs or augments or guides the user (visually, via text, voice etc.) regarding one or more techniques, angles, directions, styles, sequences, story or script types or ideas or sequences, scenes, shot categories, types or ideas, transitions, spots, motions or actions, costume or makeup ideas, levels, effects, positions, focus, arrangement styles of a group, step-by-step rehearsal to take visual media, identifying required lights, identifying flash range, contextual tips, tricks, concepts, expressions, poses, contextually suggested settings or modes or options of the camera application (e.g. focus, resolution, brightness, shooting mode etc.), contextual information or links to purchase accessories required to take a particular style or quality of visual media including lenses, stickers and the like, a guided turn-by-turn location route to reach a particular identified or selected place or POI or position, and contextual or recognized-objects-inside-media specific digital photo filter(s), lenses, editing tools, text, media, music, emoticons, stickers, background and location information, based on: the user device's 200 current location 318 specific particular identified or selected POI or place or position or entity (person, object, item, product, shop etc.) related to said POI or place or position, e.g. 310, and related data 345; guide system resources data 346 via guide system 276 or 348 of server 110; contextual or selected or pre-set or identified or matched and executed one or more Rules 366 via rules component 367; object recognition; user device sensors and associated or generated or acquired sensor data; User data 355 and surrounding or nearest user contact(s)' data including user profile, identity, user device type or various types of configurations, the camera display or camera application's current settings, current date and time and position or geo-location information; access from 3rd parties' one or more databases or storage mediums, servers, devices, networks, domains and web sites; identification of current angles, light level and focus based on sensors; and utilization of various types of object recognition technologies on the camera display screen related scene before, while or after capturing a photo or recording a video or after first or subsequent takes of a photo or video. At 780, the system auto-identifies the indication and/or user and/or digital item(s) associated one or more destinations, or enables selection from suggested destinations, or enables selection of one or more destinations from a list of destinations including one or more contacts, connections, groups, target criteria specific users, contextual or associated or pre-selected users of the network, feeds, stories, categories, hashtags, servers, applications, services, networks, devices, domains, web sites, web pages, user profiles and storage mediums.
Figure 8 illustrates auto-presenting, presenting suggested, or enabling the user to select one or more destinations for sharing one or more types of media or content. Advertiser 805 is enabled to prepare a listing of one or more types of destinations, via an advertiser user interface, including providing, selecting, editing, updating and setting one or more brand pages, brand web sites, brand-related hashtags, tags, categories, servers, services or web services, databases or storage mediums, applications, objects, interfaces, applets, servers, devices, networks, created galleries, albums, stories, feeds, events, profiles and the like, in which users can share, send, broadcast and provide one or more types of media or user-generated or user-prepared contents including photos, videos, live stream, text, posts, reviews, microblogs, comments, user reactions and the like. Advertiser 805 is also enabled to provide, update or select associated details; select, set and apply one or more associated rules 835, policies, privacy settings, target criteria, bids, preferences, geo-location or positional information, set or define geo-fence boundary information and a target location query, e.g. "all Cafe Coffee Day TM shops at NYC"; select, associate, pre-configure or set one or more digital items 841 from one or more sources; provide offers, discounts, points, cash back, redeemable points, gifts, samples, free trials, digital coupons and vouchers in the event of sharing a particular number of one or more types of media items to one or more or a number of destinations; and set advertisement or listing running schedules, all of which are stored at destination information database 825 via prospective destination module 812 of server 110. In another embodiment user 810 is also enabled to provide, create, select, update and set one or more destination lists including user connections like phone contacts, events, networks, groups and followers 830, web sites, web pages, user profiles, storage mediums, applications, web services, categories, hashtags, galleries, albums, stories, feeds, events, social network accounts and the like; user 810 is also enabled to provide, select, update and set or apply or associate one or more rules, privacy settings, policies, preferences and details, which are stored at database 825 via prospective destination module 812 of server 110.
In an embodiment the system auto-determines or auto-suggests or auto-matches or provides contextual destinations based on the user device's current position or geo-location information, check-in place, date and time, points of interest or place databases, auto-identified or auto-selected and executed one or more contextual rules from the rule base, user or connected user(s)' data including user profile, user connections, status, activities, actions, events, transactions, senses, behavior, communications, sharing, collaborations, interactions, preferences and privacy settings, and auto-recognized objects and auto-identified object-associated details. For example, in the event of identifying that the user device's 200 current location is near to or relative to an advertised location or position specific destination, e.g. "Cafe Coffee Day", user device 200 is presented with associated digital items, e.g. camera application 855 and visual media capture controller 898 and associated information 896. The user can remove visual media capture controller 898 via remove icon 893 or switch to other visual media capture controllers via previous icon 894 or next icon 895. When user device 200 captures photo 850 via photo capture button or icon 862 or visual media capture controller 898, then based on the auto-presented visual media capture controller's associated destination or the user device's location or position and the identified or recognized object, e.g. the "coffee cup" 852 and the "cafe coffee day" text or word or logo 858 inside the photo, via employing one or more types of object recognition technologies and optical character recognition (OCR) technologies, the system auto-identifies or auto-matches contextual one or more destinations, e.g. destinations 869 and 876, auto-selects them and initiates auto-sending to said auto-identified destinations within a set particular duration, so the user can preview 885 and cancel 884 sending of the photo before expiration of said duration 882. After expiration of said duration 882, the system auto-sends the captured photo 885 to the associated or defined or set or auto-identified destinations 869 and 876. In an embodiment the user can change destinations by manually selecting one or more destinations from the list of destinations 882 before expiration of said timer or duration 882. In another embodiment the user can manually select one or more destinations from the list of destinations 875 including user contacts and groups, opt-in contacts, and auto-suggested destinations, e.g. 877, 878 and 879.
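The label-and-location based destination matching described above might, purely as a non-limiting sketch, look like the following in Python. The recognize_labels and ocr_text callables stand in for whatever object recognition and OCR back-ends are available, the approx_m helper is the one from the controller sketch earlier, and all field names are hypothetical.

def match_destinations(photo, location, destination_db, recognize_labels, ocr_text):
    """Return destinations whose keywords match recognised content and whose
    geo-fence contains the capture location."""
    labels = {l.lower() for l in recognize_labels(photo)}      # e.g. {"coffee cup"}
    words = {w.lower() for w in ocr_text(photo).split()}       # e.g. {"cafe", "coffee", "day"}
    matched = []
    for dest in destination_db:
        keyword_hit = bool(set(dest["keywords"]) & (labels | words))
        geo_hit = approx_m(location["lat"], location["lon"],
                           dest["lat"], dest["lon"]) <= dest.get("radius_m", 150)
        if keyword_hit and geo_hit:
            matched.append(dest)
    return matched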
In an embodiment the system receives geo-location data 318 from a user device 200; determines whether the geo-location data 318 corresponds to a geo-location fence associated with stored or identified prospective or permitted position(s) 306 or 345 including an advertised place or entity, a curated place and an event 345; notifies or indicates about the identified prospective or permitted position(s) 314; presents contextual one or more digital items 378; and supplies one or more destinations and information about destination(s) from destination database 825 to the device 200, or determines one or more destinations for posting or sharing or sending or broadcasting user-generated contents based on one or more associated rules, user data, privacy settings and preferences.
In an embodiment the system identifies or presents destinations based on recognized object(s) inside the photo or video.
In an embodiment the system enables the user to prepare, select, capture, record, augment, edit or update and post, send or broadcast one or a series of one or more types of media or user-generated contents including photos, videos, a microblog or a review to one or more auto-matched, auto-identified, suggested or selected destinations.
In an embodiment the system determines destination(s) based on the notification-associated one or more destinations.
In an embodiment the system determines destination(s) based on user data, user pre-set destinations, followers, and user preferences and settings. In an embodiment the system determines destination(s) based on the last selection of destination(s).
In an embodiment the system enables the user to post one or more media item(s) to user-selected one or more destinations.
In an embodiment destinations comprise user contacts or connections, groups, followers, networks, hashtags, categories, events, suggested destinations, web sites, web pages, user profiles, applications, services or web services, servers, devices, networks, databases or storage mediums, albums or galleries, stories, folders, and feeds.
In an embodiment the system enables the user to post one or more media item(s) to user-selected one or more destinations, wherein the user is enabled to select from a list of suggested destinations.
In an embodiment the system auto-posts one or more media item(s) to auto-determined one or more destinations. A sketch of one possible resolution order over these sources follows.
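By way of illustration only, the preceding destination embodiments could be combined into a simple precedence order, sketched in Python below; the order shown (notification-associated destinations, then recognised-object matches, then pre-set destinations, then the last manual selection, then a suggestion prompt) is one assumed arrangement among many, and every name is hypothetical.

def resolve_destinations(notification, recognized_dests, user_profile,
                         last_selection, suggest_fn):
    # 1. destinations carried by the triggering notification
    if notification and notification.get("destinations"):
        return notification["destinations"]
    # 2. destinations matched from recognised objects inside the media
    if recognized_dests:
        return recognized_dests
    # 3. the user's pre-set destinations / followers
    if user_profile.get("preset_destinations"):
        return user_profile["preset_destinations"]
    # 4. the most recent manual selection
    if last_selection:
        return last_selection
    # 5. otherwise ask the user to pick from a suggested list
    return suggest_fn()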
Figure 9 illustrates user interface 265 for enabling the user to configure a rich story, wherein the rich story comprises one or more media items created, selected, generated, updated, managed and shared by the user and/or one or more participants and, based on privacy settings and preferences, made available to one or more types of destinations or users. The interface includes providing a name or title of the rich story 903; selecting or uploading or searching and selecting or dragging and dropping an icon or image or photo or video or animation for the rich story icon 903; providing or adding or updating or searching or selecting one or more categories, tags, taxonomy, keywords, key phrases and hashtags 905; and providing or updating details or a description related to the rich story 908. In another embodiment the system dynamically presents category or title or keyword or detail or subject or story specific structured forms (not shown in the figure). The user can provide a schedule (date and time) 913 for the start and/or end of the story. The user can select or search and select 931 and invite 930 one or more or all contacts, groups, networks, followers, dynamically created group(s) based on location or the location of an event and one or more rules and criteria, or selected or target criteria specific matched users of the network, based on their privacy settings and preferences, to participate or become members in the rich story.
In an embodiment the user can provide one or more types of admin rights to one or more members and provide one or more or all types of access rights 987. The user can accept one or more requests 933 from other users of the network to become a member and/or admin of a particular rich story, e.g. rich story "My USA Tour" 903. The user can provide preferences and apply privacy settings and notification settings 934 to receive notifications or indications of contextual POIs or places that match the user's location when the user arrives or dwells at that POI or place, including selecting one or more pre-created or presented categories, types, hashtags, tags, key phrases or keywords (based on keywords related to POI-related information or metadata or comments or contents associated with photos or videos captured or shared, or photos or videos that were previously taken from that POI or place), prospective-object-related POIs (i.e. based on recognized objects inside collections of photos or videos that were previously taken from that POI or place), or prospective-object-related keyword specific POIs
(i.e. keywords identified from recognized objects inside collections of photos or videos that were previously taken from that POI or place), advertiser POIs, user-provided POIs, location or geo-fence boundary specific POIs, selected one or more types or named one or more entities including brands, companies, schools, shops, malls, products and the like, provided or selected or created queries including natural queries or Structured Query Language (SQL) specific queries, e.g. "all malls of New York City, downtown", supplied or uploaded sample image specific or image-associated recognized object specific POIs, e.g. logo(s), one or more types of selfie spots, while-in-move POIs, defined radius or range of radius specific POIs, event specific POIs, most reacted POIs including most liked, most captured, most shared, most commented, most ranked and most viewed, and all contextual POIs auto-suggested by the system or server 110 937. In another embodiment the user is enabled to receive all notifications or to limit, daily or within a schedule or up to the end of the rich story 935, the number of notifications or alerts or indications or presentations of contextual POIs 936 and associated information. In another embodiment the user is enabled to apply do-not-disturb settings 937 including: do not receive notifications while the user is on a call, at night (scheduled or default), while paused, while moving (in a car or other ride, but not while walking), while eating food (based on place), while at a fixed location and not moving much (e.g. while seated), while using one or more types of applications, while the system auto-identifies a busy status of the user (e.g. in a queue, e.g. at an airport or taxi stand, identified based on position information), at tourist places, and the like. In another embodiment the user is enabled to turn ON or OFF one or more types of ring tones or vibrations, make them silent, and select and set ringtones and/or vibrations for receiving one or more types of alerts from one or more types of triggers, including a notification or indication in the event of identification of a new POI 938.
The user can provide rights to receive and view the rich story to one or more types of viewers, including the user only (i.e. making it private so that only the rich story creator user can access it) 940, and/or to all or selected one or more contacts, groups or networks 941, and/or to all or selected one or more followers of the user 942, and/or to participants or members 943 of rich story 903, and/or to contacts of participants or members 943 of rich story 903, and/or to followers of participants 946, and/or to contacts of contacts of participants 947, and/or to users of the network or of one or more 3rd-party networks, domains, web sites, servers or applications (via integrating or accessing an Application Programming Interface (API)) matching one or more target criteria including age, age range, gender, location, place, education, skills, income range, interest, college, school, company, categories, keywords and one or more named entities, and/or matching provided, selected, applied or updated one or more rules, e.g. viewing by users situated or dwelling only in particular location(s) or within a defined radius or defined geo-fence boundary, or viewing when the system detects one or more types of pre-defined activities, actions, events, statuses, senses via one or more types of sensors, or transactions, or by users who scan one or more QR codes or an object or product or shop or one or more types of pre-defined objects or items or entities via the camera display screen 948, and/or to all or one or more selected networks 951 and/or groups 952, and/or by allowing anybody to receive or view, or allowing the system to auto-determine or auto-identify 954 to whom to send the rich story associated media items.
The user can provide presentation settings and a duration or schedule for viewing said rich story, including enabling viewers to view it between the start and end of the rich story period 955, allowing viewing at any time 966, or allowing viewing based on one or more rules 967 including particular one or more dates and associated times or ranges of dates and times. The user can select an auto-determined option 968, so the system can determine when to send or broadcast or present one or more content(s) or media item(s) related to the rich story, e.g. rich story 903. In another embodiment the user is enabled to set notification of target viewers or recipients 974 as and when media item(s) related to the rich story, e.g. rich story 903, are shared by the user or by participant members of the rich story. In an embodiment the user is enabled to set a view or display duration for the rich story or for one or more or each media item(s) related to the rich story 958, so recipients or viewers can view it only for said set period of duration, and in the event of expiration of said set period of time it is removed or hidden from the recipient(s)' or viewer(s)' device and/or the server and/or the user's device. The user can also allow target viewer(s) to view a set particular number of times only, within a set particular period of duration 960. The user can also allow target viewer(s) to view an unlimited number of times within a set particular period of duration 962. The user can also set each media item to be auto-posted to selected or set target viewers or auto-determined target viewer(s) or recipient(s) or destination(s) 970, or the user can set the system to ask the user each time whether to post, send, share, broadcast or present the media item(s) to target recipient(s) or viewer(s) 972. The user is enabled to manage and view a list of one or more rich stories 980 including removing one or more selected rich stories and updating a particular selected story and its associated configuration settings, privacy settings and preferences including adding members, removing members, inviting members, changing target viewers or viewership criteria and changing the view duration and
presentation settings. The user is enabled to add or create one or more rich stories 982. The user is enabled to save or draft 982 one or more rich stories at the client device and/or the server 110 databases 115 via server module 154. The user can share or publish the created story to/with one or more participant members of the rich story 986, so participants can become members of the rich story and add media item(s) to said rich story, remove their membership from the rich story, or request to become an admin of the rich story.
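For illustration only, the configuration options discussed around elements 903 through 982 could be held in a record such as the following Python dataclass; the field names, types and defaults are assumptions used to summarise the settings, not the actual storage schema of the disclosed system.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class RichStoryConfig:
    title: str                                           # 903, e.g. "My USA Tour"
    icon_url: Optional[str] = None                       # 903 icon/image/video
    hashtags: List[str] = field(default_factory=list)    # 905 categories/tags/hashtags
    description: str = ""                                 # 908
    start_at: Optional[datetime] = None                  # 913 schedule
    end_at: Optional[datetime] = None
    members: List[str] = field(default_factory=list)     # invited via 930/931
    admins: List[str] = field(default_factory=list)      # admin/access rights 987
    max_daily_notifications: Optional[int] = None        # limits 935/936
    do_not_disturb: List[str] = field(default_factory=list)   # 937, e.g. "on_call", "at_night"
    viewer_scope: str = "contacts"                        # 940-954: private/contacts/followers/...
    view_window: str = "story_period"                     # 955/966/967/968
    per_item_view_seconds: Optional[int] = None          # 958 ephemeral display duration
    max_views_per_viewer: Optional[int] = None            # 960 vs unlimited 962
    auto_post_to_viewers: bool = False                    # 970 auto-post vs ask-each-time 972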
In another embodiment the rich story creator user can allow one or more or all participants or one or more admins of the rich story to pause the rich story, e.g. rich story 903, and/or stop 992 the rich story 903, and/or remove 993 the rich story 903, and/or invite or add members to the rich story, and/or change the configuration settings of the rich story (not shown in Figure 9).
In another embodiment any participant member is enabled to pause 991 or re-start 990 rich story 903, so as to stop receiving notifications and stop adding or updating media item(s) to the rich story until re-start 990.
In an embodiment the system auto-pauses the rich story's notifications or indications based on predefined or determined events or triggers, for example when the phone is busy on a phone call, when do-not-disturb is applied by the user, or when the user would otherwise feel disturbed or obstructed.
After creation and configuration of the story the user is enabled to manually start 990 rich story 903 or it auto-starts as per the pre-set schedule 913. The user is enabled to pause a particular selected story, e.g. rich story 903; in the event of pausing of the rich story, e.g. rich story 903, the system stops providing real-time or location specific contextual notifications or indications to the user and the participant members of the rich story, e.g. rich story 903. In the event of stop or done by the user or an authorized user, the system stops adding any further media item(s) to the rich story and stops changes to the rich story configuration, including adding or removing members and the like, for the creator user or admin user(s) or any participant members of the rich story, until the rich story is re-started 990 by the creator of the rich story or an authorized user. In the event of removal 993 of the rich story by the user or authorized user(s), based on settings, the system removes the rich story from the server and/or the creator user device and/or all participant members' device(s) and/or viewer device(s) of the rich story, e.g. rich story 903.
FIG. 10 illustrates a computer-implemented rich story method in accordance with the disclosed architecture. At 1003, if there is a new rich story or gallery, e.g. 903, then at 1005 the rich story or gallery is created at server 110 database 115 via server module 154 and/or at client device 200 storage medium 286 via rich story application 265; if not, or if it already exists, then the rich story or gallery creator user or authorized user(s) is enabled to start 1007 the rich story system for all participant members, or a member can start it for his own purpose, for said created rich story, e.g. 903, via e.g. clicking or tapping on the "start" button 990 or 1113. At 1010 said created rich story specific named Visual Media Capture Controller icon and/or label, e.g. 1140 or 1180 or 1190, is presented on the display, e.g. 1123 or 1150 or 1175, at device(s), e.g. 200/140/145, via server module 154, of all participant members of the rich story, e.g. 903, and/or information or one or more contextual or associated digital items, e.g. 1127 or 1169 or 1187 or 1197, related to the created gallery or story, e.g. 903 or 1110, is presented on the camera display screen, e.g. 1123 or 1150 or 1175, at user device 200/140/145. At 1015 a check is made, by server module 154, whether the rich story or gallery creator user or authorized user(s) paused the rich story, e.g. 903, for all participant members, or a member paused it for his own purpose, or the system auto-paused the rich story, e.g. 903; if pause or auto-pause = yes then the flow resumes when the rich story or gallery creator user or authorized user(s) starts 1007 the rich story system for all participant members, or a member starts it for his own purpose, for said created rich story, e.g. 903, via e.g. clicking or tapping on the "start" button 990 or 1113. If not paused at 1015, then a check is made whether the rich story or gallery creator user or authorized user stopped the rich story, e.g. 903; if stop = yes then the visual media capture controller is hidden from all participant members, or if stopped by a member then the visual media capture controller is hidden from that member device's display. If stop = no then at 1020 a check is made whether the rich story or gallery creator user or authorized user(s) removed the rich story, e.g. 903; if yes then the rich story is removed from all participant members 1021, or if a member removed the rich story or gallery, e.g. 903, then the rich story, e.g. 903, is removed from that member's device. If at 1020 the rich story, e.g. 903, is not removed, then at 1025 the geographical location of the user computing device, e.g.
mobile phone 200, is monitored, by server module 154, relative to the geographical point of interest, e.g. 310. At 1027 server module 154 matches the user device's 200 current location or position information 318 and user data 355 with points of interest or positions or places or points or locations or spots data and associated one or more types of data 345 and further selects, customizes, auto-selects or auto-updates based on parameters, applies and executes one or more rule(s) from rule base 366 to identify the nearest contextual one or more point(s) of interest or points or positions or places or spots or locations, e.g. 310 (or 307 or 305). At 1028, if server module 154 identifies a new POI or place or location, then at 1030 the indication is processed and generated, by server module 154, based on the geographical location 318 of the user computing device 200 relative to the geographical point of interest 310 and plural types of user data 355, and server module 154 notifies or indicates or displays information, e.g. 1127 or 1169 or 1187 or 1197, about the new POI or spot or place or location or position at the user device, e.g. 200 (optionally also providing associated one or more types of details or media item(s) or digital item(s)). Based on the presented identified or contextual and generated visual media capture controller label or icon, e.g. 1140 or 1180 or 1190, and POI specific information, e.g. 1127 or 1169 or 1187 or 1197, or presented information 1174 and 1169 about the POI or place or location, displayed by server module 154 below the camera display's default photo capture button or at a prominent place, the user is enabled to tap or click on the visual media capture controller label or icon, e.g. 1140 or 1180 or 1190, or e.g. the photo capture icon 1164 or video icon 1166, to capture a photo or record a video related to said identified POI or place or location or spot. If the system detects at 1032 that the user captured a photo or recorded a video or prepared content, then at 1033 the user is enabled to augment and edit the captured photo or recorded video or prepared content including applying one or more photo filters, lenses, overlays, drawings, text, emoticons and stickers, editing the visual media, and providing notes, comments, structured information (fields are presented to enable providing of associated values) and metadata. At 1035 system 265 stores said captured photo or recorded video at user device 200 and/or at server 110 database 115 or 394 or 540 or 840 via server module 154. At 1040 server module 154 (A) asks the user to provide contextual information by dynamically generating and presenting contextual form(s) and/or (B) auto-identifies associated information based on information about the POI or place and user data from one or more sources and/or (C) auto-identifies information based on recognized object(s) inside the visual media and auto-identifies said identified object associated information from one or more sources and/or (D) auto-identifies contextual digital items or identifies user-associated digital item(s). At 1042 server 110 module 154 generates one or more rich story item(s) or news item(s) or newsfeed item(s) based on the identified or associated information, digital item(s) and metadata or system data from 1040, and at 1044 server 110 module 154 posts or sends or updates or broadcasts said generated one or more story item(s) or news item(s) or newsfeed item(s) to/in said created rich story or gallery, e.g.
903, making them available to allowed or authorized viewers based on the privacy settings and preferences and/or the media item creator user's selected or pre-set one or more destinations.
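The FIG. 10 control flow can be summarised, purely as a non-limiting sketch, by the following Python loop: while the story is started and not paused, stopped or removed, the tracked location is matched against the POI data, an indication is raised for each newly identified POI, and any captured media is posted back into the story gallery. The story, device, notify and post_item objects are hypothetical stand-ins, and nearest_poi is the assumed helper from the controller sketch above.

import time

def rich_story_loop(story, device, poi_db, post_item, notify):
    seen = set()                                   # POIs already indicated for this story
    while True:
        if story.state == "removed":
            break                                  # 1020/1021: removal ends the loop
        if story.state in ("paused", "stopped"):
            time.sleep(5)                          # 1015: wait until re-started
            continue
        loc = device.current_location()            # 1025: monitor geo-location 318
        poi = nearest_poi(loc, poi_db, device.user_prefs)   # 1027: matchmaking + rules
        if poi and poi["id"] not in seen:          # 1028: new POI identified
            seen.add(poi["id"])
            notify(device, poi)                    # 1030: indicate / show capture controller
        for media in device.poll_captured_media(): # 1032: did the user capture anything?
            post_item(story, media, poi)           # 1035-1044: store, enrich and post the item
        time.sleep(5)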
Figure 11 illustrates exemplary graphical user interface(s) 265 for providing or explaining the rich story system. At 903 the user can provide a title or name of the rich story, e.g. "My USA Tour Story", and tap or click on the "Start" button or icon or link or accessible control 1113 to start preparing the rich story, i.e. adding one or more media item(s) of one or more types specific to presented or identified POIs, including selected or captured photo(s), selected or recorded video(s) and user-generated or user-provided content item(s). The user can configure and manage the created story via clicking or tapping on the "Manage" icon or label or button or accessible control 1115 (as discussed in Figures 9 and 10). The user can input a title at 903 and tap on the "Start" button 1113 to immediately start a rich story which is created, managed and viewed by the user only; later the user can configure the story and invite one or more contacts or groups or followers or one or more types of users of the network, set or apply or update privacy settings for viewers and members, and provide or update presentation settings via clicking or tapping on the "Manage" icon or label or button or accessible control 1115 (as discussed in Figures 9 and 10).
At 1101 the user can turn on or off the guide system (as discussed in Figure 7). For example, in the event of creation of a rich story, e.g. 903, user device 200 is automatically presented with visual media capture controller label or icon 1140, and in the event of monitoring or tracking of the user device's geo-location or position 318, the system or server module 154 identifies the nearest POI(s) or place(s), e.g. 310, where the user can capture a photo or record a video, e.g. 1122, associated with said identified and presented POI or place or location, or an object or entity or product or item associated with said POI or place or location 1127, by tapping on said auto-presented created gallery or rich story specific visual media capture controller label or icon, e.g. 1140. In another embodiment the user can remove or skip or ignore or hide or close said presented POI by tapping on the remove or skip or hide icon 1208 and instruct the system to present the next available POI or place at 1127; or, based on updated geo-location or position information of user device 200, the server or rich story system updates or presents the next nearest contextual POI or place or spot or location or position and hides or removes the earlier POI or place or location or spot. In an embodiment the system enables the user to view previous and next POI(s) or place(s) or location(s) or spot(s) for viewing only and shows the current POI for taking the associated one or more types of one or more visual media item(s). In an embodiment the user can tap on the default camera photo capture icon, e.g. 1129, or video record icon, e.g. 1131, to capture a photo and send it to selected one or more contacts via icon 1133 in the normal way. In another embodiment the user is enabled to pause or re-start or stop 1136 the rich story, e.g. 903, and manage the rich story 1135 (as discussed in Figures 9 and 10).
In another example, when the user goes to another city, e.g. New York City, the system identifies another POI, e.g. "Baristro", based on the user device's 200 place, e.g. "Downtown" 1169. In another presentation the user device is presented with the started created rich story or gallery label or name or title, e.g. "My USA Tour Story" 1197, and the associated currently identified POI information, e.g. 1169, enabling the user to tap on the camera photo capture icon 1164 or record video icon 1166 to capture a photo or record a video and post said captured photo or recorded video to the created rich story or gallery, e.g. 903. In another embodiment the user is enabled to switch to another rich story via the previous rich story icon 1174 or next rich story icon 1178. In another embodiment the user is enabled to view the number of views 1157 or 1144 or 1182 by viewers of the media item(s) or content item(s) shared by the user. In another embodiment the user can view, use or skip more than one presented nearest or next prospective or contextual or identified or presented POIs or places or spots or locations or positions via tapping on the previous icon 1198 or next icon 1199. In another embodiment the user can view the number of newly received media item(s) 151 shared by other participant members of the rich story, e.g. 903. In another embodiment, when the user pauses the rich story, e.g. 903, via 1162 (pause icon), the user is enabled to take a normal photo or video via camera icons 1164 or 1166 and send it to selected contact(s) and/or group(s) and/or my story and/or our story via icon 1168.
In another example and embodiment, the user is presented with more than one visual media capture controller(s) or menu item(s), e.g. 1180 and 1190, related to more than one created rich stories, and information about the currently identified contextual POI, e.g. 1187, is displayed, enabling the user to capture a photo (one tap) or record a video (hold on the label to start and release the label when the video is finished) and add it to the rich story related to the selected or clicked or tapped visual media capture controller label or icon, e.g. 1180 or 1190. The user is enabled to pause, restart and stop rich story or gallery 1180 via icon 1186, manage it via 1185 (as discussed in Figures 9 and 10) and view the number of views on shared media item(s) via indicator 1182, or pause, restart and stop rich story or gallery 1190 via icon 1196, manage it via 1195 (as discussed in Figures 9 and 10) and view the number of views on shared media item(s) via indicator 1194. The user is enabled to skip or hide or remove the presented POI or instruct the system to present the next nearest or next prospective POI or place or spot or location or position via icon 1188.
In another embodiment the user is enabled to turn ON the guide system via a tap on icon 1104 to start the guide system (as discussed in Figure 7). In the event of the guide system being turned ON by the user via icon 1104, the system starts the guide system (as discussed in Figure 7) and, based on the current user device 200 geo-location or position information 318 specific particular identified or selected POI or place or position or entity (person, object, item, product, shop etc.) related to said POI or place or position, e.g. 310 or 1127 or 1198 or 1187, contextual rules 366, guide system resources data 346 via guide system 276 or 348 of server 110, object recognition and user data 355, provides or presents or instructs or guides the user regarding one or more techniques, angles, directions, styles, sequences, story or script types or ideas or sequences, scenes, shot categories, types or ideas, transitions, spots, motions or actions, costume or makeup ideas, levels, effects, positions, focus, arrangement styles of a group, step-by-step rehearsal to take visual media, identifying required lights, identifying flash range, contextual tips, tricks, concepts, expressions, poses, contextually suggested settings or modes or options of the camera application (e.g. focus, resolution, brightness, shooting mode etc.), contextual information or links to purchase accessories required to take a particular style or quality of visual media including lenses, stickers and the like, a guided turn-by-turn location route or the start of turn-by-turn voice guided navigation to reach a particular identified or selected place or POI or position, and contextual or recognized-objects-inside-media specific digital photo filter(s), lenses, editing tools, text, media, music, emoticons, stickers, background and location information. For example, in the event of starting to capture a photo or video, the system identifies and presents contextual or matched photos or videos 1117 previously taken by other users at the same current location or POI or place or similar types of POI(s) or place(s) or location(s), based on matching recognized or scanned object(s) inside the scene on the camera display screen (before capturing the photo) with stored media 115 or 394 or 540 at server 110, and provides contextual tips and tricks, e.g. 1116, based on one or more types of sensor data, user data, recognized object(s) inside the camera display screen (before capturing the photo or recording the video) and identified rules. In another embodiment the system provides the above resources based on the captured photo or recorded video, and based on the provided resources including tips and tricks and matched previously taken curated photos or videos the user can retake the photo or video and add it to the rich story or gallery.
In an embodiment the user is enabled to view statistics including the number of visual media item(s) or content item(s) created or shared or added to the rich story by the user and participant member(s) (if any), the number of views and reactions on each or all visual media item(s) or content item(s) created or shared or added to the rich story by the user and each participant member (if any), the number of missed POIs or places where the user or participant member(s) (if any) did not capture a photo or record a video or provide related content, the number of POIs or places where the user or participant member(s) (if any) captured photos or recorded videos or provided related content(s), the number of total media item(s) in a particular rich story or in all rich stories, and the like.
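These per-member statistics could be aggregated, for example, as sketched below in Python; the input record shapes (story_items with member/poi_id/views/reactions, presented_pois with poi_id/member) are assumptions chosen only to illustrate the counts described above.

def story_statistics(story_items, presented_pois):
    """story_items: [{member, poi_id, views, reactions}];
    presented_pois: [{poi_id, member}] i.e. which POIs were shown to which member."""
    stats = {}
    captured = {(i["member"], i["poi_id"]) for i in story_items}
    for member in {p["member"] for p in presented_pois}:
        offered = {p["poi_id"] for p in presented_pois if p["member"] == member}
        done = {poi for m, poi in captured if m == member}
        items = [i for i in story_items if i["member"] == member]
        stats[member] = {
            "items": len(items),                                   # media items added
            "pois_captured": len(offered & done),                  # POIs with media
            "pois_missed": len(offered - done),                    # POIs without media
            "views": sum(i.get("views", 0) for i in items),
            "reactions": sum(i.get("reactions", 0) for i in items),
        }
    return stats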
Figure 12 illustrates the logical flow and an example of a rich story, e.g. rich story 903. At 1225, when the creator of the rich story or an authorized user starts the rich story, e.g. 903, via the "Start" icon 1113 or 990, the system presents a visual media capture controller label (e.g. named the same as the rich story title), e.g. "My USA Tour Story" 1140. At 1227, when user device 200 arrives within or near a particular radius boundary or reaches near pre-defined geo-fence boundaries or the system determines that the user is near POI 1201, the user is presented with information about POI 1201 including the place name, location information, and place details from one or more sources or curated information 1127; and in the event that the guide system is ON, e.g. 1104, the system presents POI1 1201 specific contextual sample or curated (selected by a server admin, editor-picked, or most liked, ranked, commented and viewed) photos or videos, or photos or videos previously taken by other users related to that particular POI, e.g. POI 1201, and provides associated contextual tips and tricks, e.g. 1116, from guide system resources data 346 via guide system 276 or 348 of server 110, so the user can view contextual sample photos or videos and learn various tips, tricks, techniques, poses, angles, directions, ideas, concepts, movements, expressions, styles, shot sequences, effects, types of settings used to get a better photo or video, applied photo filters or lenses, provided text overlays and the like. In an embodiment, based on the user's travel plan defined by the user on a digital interactive map, the user is presented with all contextual prospective POIs and routes. In an embodiment the user is provided with route information from the 1st to the 2nd POI including distance, time taken to reach or estimated time to reach, and the start of turn-by-turn voice guided navigation to reach the particular identified or selected place or POI. In an embodiment the user can view the next prospective one or more POIs and view associated contextual or created photos or videos to become well prepared and learn in advance before the user reaches the next POI. In an embodiment the system maintains logs of the routes of the user's visits and visually presents the routes on a map with missed POIs, POIs suggested by the user, POIs where the user captured or recorded visual media including photos or videos or provided one or more types of contents, and places or positions or locations where a POI was not shown to the user but the user captured or recorded visual media including photos or videos or provided one or more types of contents. For example, when the user reaches a POI (e.g. "Mumbai Airport") 1201, the user is presented with the POI name and details at 1127, and when the user taps on 1140 to capture a photo, the photo is saved to the rich story 903 gallery or album or folder in the user device 200 storage medium and/or server 110 database 115 and/or one or more 3rd-party storage mediums or databases or cloud storage. In an embodiment the user can view, search, browse, select, edit, update, augment, apply photo filters or lenses or overlays to, provide details for, remove, sort, filter, drag and drop, order and rank one or more media items of a selected rich story, e.g. 903, gallery or album or folder. For example the user can view the captured photo 1227 at POI1 1201.
In an embodiment the user can also view other details related to said captured photo or media item including date and time, location, metadata, keywords auto-identified from auto-recognized objects' associated keywords, file type, size, resolution and the like; view statistics including the number of receivers, viewers or views, likes, comments, dislikes and ratings; view POI1 related information; and, based on recognized object(s) inside photo(s) or video(s) taken at POI1, view identified similar photos and videos, so the user can compare them and determine the quality of his captured photo or video. The user can view the route from start to POI1 and the estimated time to reach it, view the route from start to POI1 on a map, and view the calculated time spent on capturing or recording photo(s) or video(s) and providing associated details 1227. In another embodiment the user can visually view visited POIs on map 1212 and access logs specific to each POI by tapping or clicking or utilizing a contextual menu on each POI, and can view logs, captured photos or videos and associated details, route details, 3rd-party provided details or advertiser or sponsor provided contextual contents, various statistics (discussed above), and statuses, activities, actions, events and transactions conducted or done by the user. After exiting from POI 1201, when e.g. the user enters "Mumbai Airport", the system e.g. presents POI2 1202; but if the user e.g. did not capture a photo or record a video or provide content for said presented POI2, then the rich story 903 gallery or album or presentation interface 1200 shows a missed status 1230 and enables the user to access said missed POI2 specific contextual contents including information about said POI and photos or videos previously taken by other users. In an embodiment the user is enabled to auto-generate visual media later (i.e. without capturing at the particular POI location) based on merging, as the foreground, the user's pre-stored one or more photos or series of images or video, with or without a particular color transparent background, with, as the background, media selected by the user from a list of curated or pre-stored visual media without any human body inside said photos or videos related to said missed POI. In an embodiment, if the rich story has more than one member, i.e. members other than the creator of the rich story, then the user or authorized participants or members can view photos or videos related to one or more POIs of one or more other members. The user can filter, search, match and select one or more rich galleries, including filtering by one or more selected POIs and/or by one or more participant member(s) and/or by date and time or ranges of date and time, and/or viewing chronologically, and/or viewing media items related to one or more selected galleries that are specific to one or more object keywords, object model(s) or image sample(s) and/or one or more keywords, key phrases, Boolean operators and any combination thereof. For example, after exiting from POI2 1202, i.e. "Mumbai Airport", when the user reaches Boston, then based on the user's current location and user data the system identifies and hides or removes the previous POI2 1202 or updates or presents new POI3 1203 related information 1127 on user device 200 at interface or display 1123. For example, when the user taps on the customized or contextual or auto-presented visual media capture controller label or icon 1140, the scene 1122 on the camera display screen is captured and the captured and stored photo 1122 is automatically posted 1232 to the rich story, e.g. 903, or the gallery of rich story 903, making it viewable to other authorized viewers.
The system updates logs and maps of user visited places or POIs or positions or locations or spots on map 1212 continuously. An authorized user or participant member(s) can view and access, as per rights and privileges, other participant member(s)' map 1212. After exiting from POI3 1203, when the user enters POI4 1204, the user is presented with new POI 1204 specific information 1127 or 1169 or 1187 and the guide system facilitates or enables the user to take a better photo and video. For example the user asks another user to record a video clip 1235 of the user at POI4 1204, which is automatically posted to the rich story 903 gallery or interface 1200, so the user can view, access, remove, edit and update it 1235. In an embodiment the user is enabled to provide relation notes 1233 on the relation of first media item(s) with a second media item. After exiting from POI4, when the user enters POI5 1205, the user is presented or notified or alerted or indicated or instructed with POI5 specific details 1127 or 1169 or 1187 and/or ringing of a pre-set ringtone and/or a pre-set vibration type and/or a push notification sent with or without a notification tone in the event the device is closed. For example, one of the members of rich story 903 captures photo 1238 and posts it at the rich story 903 gallery or interface, e.g. 1200, so the user can view or learn from the photos shared by the participant member of the rich story, e.g. 903. The user can tap on indicator 1270 to view all shared media item(s) and all associated information shared by the user and/or one or more members of the rich gallery, e.g. 903. After exiting POI5, when the user enters into or dwells within POI6 boundaries or reaches near them (set particular radius range area boundaries or as determined by the system), the system presents new POI6 specific
information or a visual media capture controller label or icon 1127 (labeled with the POI name or title) to the user on user device 200 display 1123 or 1150 or 1175, so the user can tap on said dynamically presented label, or tap on the rich story specific labeled visual media capture controller 1140, to capture a photo, or tap and hold to start a video and release when the video recording is finished, and post to the rich story 903 gallery, e.g. interface 1200. In an embodiment the user can tap on photo 1240 to view, sequence-wise, all shared media item(s) by all participant members of rich gallery 903 as per a set interval period between presented media items. The user can tap on the slideshow to close it, or swipe to skip the present POI related slideshow and show the next POI related slideshow of shared media item(s). In the event of pause 1245 of rich story 903 by the user or authorized user(s) of user device 200, the system stops presenting information about new or updated POI(s) or stops sending notifications or providing indications of information about new POIs to all participant members of rich story 903. If a member of rich story 903 pauses 1245 the rich story 903, then the system stops presenting or sending information about new POIs to said member only. The user can view various statuses of the user and/or participant members at the rich story, e.g. 903, interfaces, e.g. 1200. The user can restart 1245 the paused 1245 rich story, e.g. 903, via e.g. a tap on icon or button or accessible control 990 or 1136 (play icon) or 1162 (play icon). After the re-start, the system again starts to present information about the current POI (last paused POI) or presents information about newly identified POIs, e.g. POI7 1207 and POI8 1208 (because both are very near to the user or both are within the set particular range of radius boundaries), so the user can capture a photo or record a video at that POI (e.g. POI7 advertised by "Baristro") and send it to the user's contacts or have it viewed by participant members via the rich story, e.g. 903, and based on the number of sharings or viewings the user can get benefits or offers provided by said POI advertised by "Baristro" (as discussed in Figures 5 and 6). After dwelling in POI7 the user can visit POI8, tap on the next icon 1199, view information about POI8 1208, and tap on photo capture icon 1164 or video record icon 1166 to take visual media which is auto-posted to the rich story 903 gallery interface, e.g. 1200. After exiting POI8, the user can view information about the further updated newly identified POI9 1209 and can view information 1254 about POI9 in preparation for taking POI9 related visual media. In an embodiment the user can view information 1254 about the next POI, e.g. POI10 1210, and be prepared for taking visual media at the next POI, e.g. POI10. The rich story 903 creator or authorized user (if any) can stop rich story 903 via e.g. button 992 or icon 1136 (stop icon) or 1162 (stop icon); in the event of stopping of the rich story, e.g. 903, the system hides or removes the rich story visual media capture controller label or icon 1140 or 1174 and hides or removes any information about the current POI, e.g. 1127 or 1169, from all participant members of rich story 903, so that they stop receiving notifications about the next POI, capturing or recording associated visual media, or sharing visual media and posts to rich story 903. The rich story 903 creator or authorized user (if any) can re-start rich story 903 via e.g. button 990 or icon 1136 (play icon) or 1162 (play icon); in the event of re-starting of the rich story, e.g.
903, system presents rich story 903 specific labeled visual media capture controller label or icon 1140 or 1 174 on display of all participant members device(s) and also present last stopped POI related information or newly updated POI specific information e.g. 1127 or 1169.
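By way of a non-limiting illustration only (this sketch is not part of the disclosure), the geo-fence matching and the pause/stop behavior described above could be modeled roughly as follows; the POI radius, the haversine helper, and all class and field names are assumptions introduced for the example:

```python
import math
from dataclasses import dataclass

@dataclass
class POI:
    poi_id: str
    lat: float
    lon: float
    radius_m: float   # assumed geo-fence radius around the POI
    info: str         # POI specific details to present to the member

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class RichStorySession:
    """Tracks one member's story state and the POI geo-fence currently occupied."""

    def __init__(self, pois):
        self.pois = pois
        self.state = "running"      # running | paused | stopped
        self.current_poi = None

    def pause(self): self.state = "paused"
    def resume(self): self.state = "running"
    def stop(self): self.state = "stopped"

    def on_location_update(self, lat, lon):
        """Return POI info to present to this member, or None if nothing changed."""
        if self.state != "running":
            return None              # paused or stopped members receive no POI notifications
        nearest = min(self.pois, key=lambda p: haversine_m(lat, lon, p.lat, p.lon))
        inside = haversine_m(lat, lon, nearest.lat, nearest.lon) <= nearest.radius_m
        if inside and nearest is not self.current_poi:
            self.current_poi = nearest   # entered a new geo-fence
            return nearest.info          # present the POI label/info and capture controller
        if not inside:
            self.current_poi = None      # exited the geo-fence
        return None
```

In this sketch a paused or stopped session simply ignores location updates, mirroring the behavior in which a member who pauses the rich story stops receiving new POI information while other members continue to receive it.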
In another embodiment one or more types of presentation interface are used according to viewers' selections or preferences, including: presenting newly shared or updated received media item(s) related to one or more stories or sources in a slide show format, visual format, or ephemeral format; showing them in a feeds, albums or gallery format or interface; presenting a sequence of media items with a set interval of display timer duration; showing filtered media item(s), including filtering story(ies) wise, user(s) or source(s) wise, date & time wise, date & time range(s) wise, location(s) or place(s) or position(s) or POI(s) specific, and any combination thereof; and showing only push notification associated media item(s).
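Purely as an illustrative sketch under assumed field names (story_id, poi_id, taken_at and so on are not taken from the disclosure), the viewer-side filtering and ordering described in this embodiment might look like the following:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Sequence

@dataclass
class MediaItem:
    story_id: str
    user_id: str
    poi_id: str
    taken_at: datetime
    url: str

def filter_media(items: Sequence[MediaItem],
                 stories: Optional[set] = None,
                 users: Optional[set] = None,
                 pois: Optional[set] = None,
                 since: Optional[datetime] = None,
                 until: Optional[datetime] = None):
    """Apply viewer preferences; any criterion left as None is ignored."""
    selected = []
    for item in items:
        if stories and item.story_id not in stories:
            continue
        if users and item.user_id not in users:
            continue
        if pois and item.poi_id not in pois:
            continue
        if since and item.taken_at < since:
            continue
        if until and item.taken_at > until:
            continue
        selected.append(item)
    # Slide-show order: oldest first, to be presented with a fixed display interval.
    return sorted(selected, key=lambda item: item.taken_at)
```

Any criterion the viewer leaves unset is ignored, and the surviving items are ordered oldest-first so they can be presented as a slide show with a fixed display interval between items.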
It is contemplated for embodiments described herein to extend to individual elements and concepts described herein, independently of other concepts, ideas or system, as well as for embodiments to include combinations of elements recited anywhere in this application.
Although embodiments are described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments. As such, many modifications and variations will be apparent to practitioners skilled in this art. Accordingly, it is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment can be combined with other individually described features, or parts of other embodiments, even if the other features and embodiments make no mention of the particular feature. Thus, the absence of describing combinations should not preclude the inventor from claiming rights to such combinations.
Various components of embodiments of methods as illustrated and described in the
accompanying description may be executed on one or more computer systems, which may interact with various other devices. One such computer system is illustrated by FIG. 13. In different embodiments, computer system 1000 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device.
In the illustrated embodiment, computer system 1000 includes one or more processors 1010 coupled to a system memory 1020 via an input/output (I/O) interface 1030. Computer system 1000 further includes a network interface 1040 coupled to I/O interface 1030, and one or more input/output devices 1050, such as cursor control device 1060, keyboard 1070, multitouch device 1090, and display(s) 1080. In some embodiments, it is contemplated that embodiments may be implemented using a single instance of computer system 1000, while in other embodiments multiple such systems, or multiple nodes making up computer system 1000, may be configured to host different portions or instances of embodiments. For example, in one embodiment some elements may be implemented via one or more nodes of computer system 1000 that are distinct from those nodes implementing other elements.
In various embodiments, computer system 1000 may be a uniprocessor system including one processor 1010, or a multiprocessor system including several processors 1010 (e.g., two, four, eight, or another suitable number). Processors 1010 may be any suitable processor capable of executing instructions. For example, in various embodiments, processors 1010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 1010 may commonly, but not necessarily, implement the same ISA. In some embodiments, at least one processor 1010 may be a graphics processing unit. A graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device. Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms. For example, a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU). In various embodiments, the methods as illustrated and described in the accompanying description may be implemented by program instructions configured for execution on one of, or parallel execution on two or more of, such GPUs. The GPU(s) may implement one or more application programmer interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s). Suitable GPUs may be commercially available from vendors such as NVIDIA Corporation, ATI Technologies, and others.
System memory 1020 may be configured to store program instructions and/or data accessible by processor 1010. In various embodiments, system memory 1020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing desired functions, such as those for methods as illustrated and described in the accompanying description, are shown stored within system memory 1020 as program instructions 1025 and data storage 1035, respectively. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 1020 or computer system 1000. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system 1000 via I/O interface 1030. Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1040.
In one embodiment, I/O interface 1030 may be configured to coordinate I/O traffic between processor 1010, system memory 1020, and any peripheral devices in the device, including network interface 1040 or other peripheral interfaces, such as input/output devices 1050. In some embodiments, I/O interface 1030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 1020) into a format suitable for use by another component (e.g., processor 1010). In some embodiments, I/O interface 1030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 1030 may be split into two or more separate components, such as a north bridge and a south bridge, for example. In addition, in some embodiments some or all of the functionality of I/O interface 1030, such as an interface to system memory 1020, may be incorporated directly into processor 1010.
Network interface 1040 may be configured to allow data to be exchanged between computer system 1000 and other devices attached to a network, such as other computer systems, or between nodes of computer system 1000. In various embodiments, network interface 1040 may support communication via wired and/or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fiber Channel SANs, or via any other suitable type of network and/or protocol.
Input/output devices 1050 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 1000. Multiple input/output devices 1050 may be present in computer system 1000 or may be distributed on various nodes of computer system 1000. In some embodiments, similar input/output devices may be separate from computer system 1000 and may interact with one or more nodes of computer system 1000 through a wired and/or wireless connection, such as over network interface 1040.
As shown in FIG. 13, memory 1020 may include program instructions 1025, configured to implement embodiments of methods as illustrated and described in the accompanying description, and data storage 1035, comprising various data accessible by program instructions 1025. In one embodiment, program instructions 1025 may include software elements of methods as illustrated and described in the accompanying description. Data storage 1035 may include data that may be used in embodiments. In other embodiments, other or different software elements and/or data may be included. Those skilled in the art will appreciate that computer system 1000 is merely illustrative and is not intended to limit the scope of methods as illustrated and described in the accompanying description. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including computers, network devices, internet appliances, PDAs, wireless phones, pagers, etc. Computer system 1000 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components.
Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available.
Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-accessible medium separate from computer system 1000 may be transmitted to computer system 1000 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations.
Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent examples of embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended that the invention embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
In an embodiment a program is written as a series of human understandable computer instructions that can be read by a compiler and linker, and translated into machine code so that a computer can understand and run it. A program is a list of instructions written in a programming language that is used to control the behavior of a machine, often a computer (in this case it is known as a computer program). A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program. In computer science, the syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be a correctly structured document or fragment in that language. This applies both to programming languages, where the document represents source code, and markup languages, where the document represents data. The syntax of a language defines its surface form. Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical or flowchart(s)). Documents that are syntactically invalid are said to have a syntax error. Syntax - the form - is contrasted with semantics - the meaning. In processing computer languages, semantic processing generally comes after syntactic processing, but in some cases semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently. In a compiler, the syntactic analysis comprises the frontend, while semantic analysis comprises the backend (and middle end, if this phase is distinguished). There are millions of possible combinations, sequences, ordering, permutations & formations of inputs, interpretations, and outputs or outcomes of set of instructions of standardized or specialized or generalized or structured or functional or object oriented programming language(s).
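As a small illustrative example only, the contrast between syntactic and semantic processing described above can be shown with Python's standard library; Python defers most semantic checks to run time, so a NameError stands in here for the semantic stage performed by a compiler back end:

```python
import ast

source_ok_syntax = "result = undefined_name + 1"
source_bad_syntax = "result = +"

# Syntactic analysis (the compiler front end): checks form only.
try:
    ast.parse(source_bad_syntax)
except SyntaxError as e:
    print("syntax error:", e.msg)          # the form is invalid; meaning is never considered

# Syntactically valid source can still fail at the semantic level.
tree = ast.parse(source_ok_syntax)
try:
    exec(compile(tree, "<example>", "exec"))
except NameError as e:
    print("semantic-level error:", e)       # 'undefined_name' has no meaning in this context
```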
The present invention has been described in particular detail with respect to a limited number of embodiments. Those of skill in the art will appreciate that the invention may additionally be practiced in other embodiments. First, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Furthermore, the system may be implemented via a combination of hardware and software, as described, or entirely in hardware elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component. Additionally, although the foregoing embodiments have been described in the context of a social network website, it will be apparent to one of ordinary skill in the art that the invention may be used with any social network service, even if it is not provided through a website. Any system that provides social networking functionality can be used in accordance with the present invention even if it relies, for example, on e-mail, instant messaging or any other form of peer-to-peer communications, or any other technique for communicating between users. Systems used to provide social networking functionality include a distributed computing system, client-side code modules or plug-ins, client-server architecture, a peer-to-peer communication system or other systems. The invention is thus not limited to any particular type of communication system, network, protocol, format or application.
The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a tangible computer readable storage medium or any type of media suitable for storing electronic instructions, and coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
Embodiments of the invention may also relate to a computer data signal embodied in a carrier wave, where the computer data signal includes any embodiment of a computer program product or other data combination described herein. The computer data signal is a product that is presented in a tangible medium or carrier wave and modulated or otherwise encoded in the carrier wave, which is tangible, and transmitted according to any suitable transmission method.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims

I claim:
1. An electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; monitoring geo-location or positions of user device; identify point of interest based on matching nearest point of interest with current geo-location or positions of user device; presenting information about identified point of interest on the display of user device; in response to capturing of photo or recording of video or taking visual media, post said captured or recorded visual media to point of interest associate gallery.
2. The electronic device of claim 1 wherein enabling user to pause or stop monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device.
3. The electronic device of claim 1 wherein enabling user to re-start monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device.
4. An electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access one or more galleries; identify user selected gallery; monitoring geo-location or positions of user device; identify point of interest based on matching nearest point of interest with current geo-location or positions of user device; presenting information about identified point of interest on the display of user device; in response to capturing of photo or recording of video or taking visual media, post said captured or recorded visual media to said identified user selected gallery.
5. The electronic device of claim 4 wherein enabling user to pause or stop monitoring of geolocation or positions of user device and presenting information about identified point of interest on the display of user device.
6. The electronic device of claim 4 wherein enabling user to re-start monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device.
7. The electronic device of claim 4 wherein enabling user to remove one or more selected galleries.
8. The electronic device of claim 4 wherein enabling user to provide access rights to view one or more galleries created and managed by user.
9. The electronic device of claim 4 wherein enabling user to provide filter preferences and privacy settings to receive information about certain types of points of interests.
10. The electronic device of claim 4 wherein enabling user to provide or apply rules and access criteria to enabling viewing of gallery for targeted viewers.
11. The electronic device of claim 4 wherein enabling user to invite one or more contacts to participate in gallery, and in response to acceptance of invitation or acceptance of request to join, publish or present or display said gallery and display gallery named visual media capture controller label and/or identified information about points of interest, based on matching monitored and identified member device's current geo-location with points of interests data, to each participant member for enabling to take visual media and auto post to said gallery.
12. An electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; receiving request to create gallery; creating of gallery; monitoring geo-location or positions of user device; identify point of interest based on matching nearest point of interest with current geo-location or positions of user device; presenting information about identified point of interest on the display of user device; in response to capturing of photo or recording of video or taking visual media, post said captured or recorded visual media to said created gallery.
13. An electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; monitoring geo-location or positions of user device; identify and present contextual point of interest; presenting information about identified point of interest on the display or the camera display screen of user device.
14. An electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access information about points of interests or places or locations or positions data, user data and rule base; monitors, tracks and identifies current geo-location or position of user device; identify contextual one or more points of interests or positions based on matching pre-stored points of interests or places or locations or positions data with user's current geo-location or position information, user data and contextually selected and executed rules from rules base; notify or provide indication or present information about contextual one or more points of interests or positions on the display of user device; present said identified one or more points of interest or position related corresponding visual media capture controller label(s) or icon(s) on the camera display screen of the user device; in response to access of a visual media capture controller, alternately capture a photo or start recording of a video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release; a non-transient computer readable storage medium, comprising executable instructions to: process haptic contact signals from a display; record a photo based upon a first haptic contact signal; and start recording of a video based upon a second haptic contact signal, wherein the second haptic contact signal is a haptic contact release signal that occurs after a specified period of time after the first haptic contact signal.
15. The electronic device of claim 14 wherein enable user to pause or stop or end or cancel recording of video.
16. The electronic device of claim 14 wherein enabling to auto end video based on expiry of set period of time.
17. A computer implemented method, comprising: accessing information about positions and digital items; identifying current position of user; identifying contextual position(s) based on matching information about positions data with user's current position, user data and executed rules; notifying or alerting or indicating or presenting to user information about contextual position(s); and presenting identified position specific contextual or associated or requested digital items.
18. The computer implemented method of claim 17 wherein positions include location, place, point of interest, sight-seeing, spot, entity, person, shop, product, showcase, showroom, department, mall, tourist place, favorite or identified selfie place, art, monuments, garden, tree, restaurant, resort, beach and the like.
19. The computer implemented method of claim 17 wherein digital items comprises an application including a camera application, camera display interface, form including review form, a web page, a web site, one or more controls, an object, a media, a database or data and one or more types of user actions includes Rate, Like, Dis-like, Comment.
20. The computer implemented method of claim 17 wherein auto send or post one or more types of content or media prepared by user by using one or more presented digital items, to said notification related or associated one or more destinations.
21. The computer implemented method of claim 17 wherein one or more types of content or media including captured or selected photo(s), recorded or selected video(s), product or service review(s), comments, ratings, suggestions, feedbacks, complaint, microblog, post, stored or auto generated or identified or provided user activities, actions, events, transactions, logs, status and one or more types of user generated contents or media.
22. The computer implemented method of claim 17 wherein destinations include one or more user contacts, connections, followers, groups, categories, hashtags, user profile(s), applications, interfaces, objects, processes, feeds, stories, folders, albums, galleries, web sites, web pages, web services, servers, networks, data storage mediums, devices, authorized persons or one or more types of entity including owner of shop, manufacturer, seller, retailer, salesperson, manager, distributor, government department.
23. The computer implemented method of claim 17 wherein rules includes pre-created, user provided and updated rules and contextually select and execute rule(s) from rule base based on one or more types of monitored or tracked or updated user data.
24. A computer implemented method, comprising: receiving a geo-location data for a device; determining whether the geo-location data corresponds to a geo-location fence associated with a particular position or place or spot or point of interest; supplying a notification or indication list to the device in response to the geo-location data corresponding to the geo-location fence associated with the particular position or place or spot or point of interest.
25. A computer implemented method, receiving a geo-location data from a user device; determining whether the geo-location data corresponds to a geo-location fence associated with stored or identified prospective or permitted position(s) including an advertised place or entity, curated place and an event; notify about identified prospective or permitted position(s); presenting contextual one or more digital items; and supplying one or more destinations to the device or determining one or more destinations for posting or sharing or sending or broadcasting user generated contents.
26. The computer implemented method of claim 25 wherein enabling to prepare, select, capture, record, augment or edit or update and post or send or broadcast one or more or series of one or more types of media or user generated contents including photos or videos or microblog or review.
27. The computer implemented method of claim 25 wherein identify prospective or permitted position(s) based on number of users at present taking photos or videos at that particular position or place or location, number of or rank of or number of reactions on posted media including photos or videos posted at particular place or location or associated geo-location information including geo-location fence, or based on information associated with said position or place or location, including that at present a fair is going on at that particular position or place or location, or a birthday or marriage event is happening, or a sunset, sunrise, or migration of birds is occurring.
28. The computer implemented method of claim 25 wherein store prospective or permitted position(s) based on curation, identified by server based on user data or 3rd parties' data, provided by user, provided by advertiser including advertised place, entity, shop, product, event and events created and posted by user.
29. The computer implemented method of claim 25 wherein based on identified notification presenting associated or contextual or requested or auto present associated or contextual one or more types of interfaces, applications, user actions, one or more or set of controls, one or more types of media, data, web services, objects, web page(s), forms, and any combination thereof.
30. The computer implemented method of claim 25 wherein determination based on whether the user data matched with data associated with prospective or permitted position(s) including an advertised place or entity, curated place and an event.
31. The computer implemented method of claim 25 wherein user data comprise user profile, user check-in place(s) or current or past location(s), user activities, actions, interactions, sense, status, connections or contacts, events, transactions, logs, behavior, posted or received contents and reactions on contents, preferences, privacy settings, notification settings.
32. The computer implemented method of claim 25 wherein data associated with prospective or permitted position(s) comprise details, rules, access rights, offers, photo filters, advertisement details including bids, description, media, offers, discount, gift, redeemable points, and rules or conditions to avail one or more types of benefits provided by advertisers, and created event associated settings, preferences, rules, access rights, privacy and selections.
33. The computer implemented method of claim 25 wherein based on receiving of notification automatically open camera display screen, to enabling user to capture photo or record video.
34. The computer implemented method of claim 25 wherein enable user to select or tap on notification.
35. The computer implemented method of claim 25 wherein in response of tapping on or selection of notification open camera display screen, to enabling user to capture photo or record video.
36. The computer implemented method of claim 25 wherein enable user to prepare one or more media item(s) or contents including a photo or a video.
37. The computer implemented method of claim 25 wherein based on the notification determine one or more destinations.
38. The computer implemented method of claim 25 wherein in response to tapping on or selection of the notification determine associated one or more destinations.
39. The computer implemented method of claim 25 wherein based on user data, user pre-set destinations, followers, user preferences & settings determine one or more destinations.
40. The computer implemented method of claim 25 wherein determine based on last selections.
41. The computer implemented method of claim 25 wherein determine based on user settings.
42. The computer implemented method of claim 25 wherein enable user to post one or more media item(s) to user selected one or more destinations.
43. The computer implemented method of claim 25 wherein destinations comprise user contacts or connections, groups, followers, networks, hashtags, categories, events, suggested, web sites, web pages, user profiles, applications, services or web services, servers, devices, networks, databases or storage medium, albums or galleries, stories, folders, and feeds.
44. The computer implemented method of claim 25 wherein enable user to post one or more media item(s) to user selected one or more destinations, wherein enable user to select from list of suggested destinations.
45. The computer implemented method of claim 25 wherein auto post one or more media item(s) to auto determined one or more destinations.
46. The computer implemented method of claim 25 wherein based on determining whether the geo-location data corresponds to a geo-location fence associated with prospective or permitted position(s) present one or more types of media including photo and video previously taken at said determined geo-location data specific corresponding geo-location fence associated prospective or permitted position(s).
47. A server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain a gallery comprising a plurality of messages posted by a user for viewing by one or more recipients, wherein each of the messages comprises a photograph or a video or one or more media type, the maintaining of the gallery comprising making the gallery available for viewing by users, upon request, via respective user devices associated with the one or more recipients; receiving request, by the server system, to create particular named gallery; creating, by the server system, said named gallery; receiving request, by the server system, to start or initiate said created or selected gallery, wherein in the event of starting or initiating gallery, by the server system, present on the display of the user device said created gallery specific visual media capture controller label or icon, for enabling user to take or capture or record said created named gallery specific one or more types of visual media and starts monitoring and identifying of geo-location or positions of user device; matching, by the server system, identified current geo-location or positions of user device with pre-stored points of interests or places or positions or spots data to identify nearest one or more point of interest; notify and/or present, by the server system, identified one or more point of interest related information on display of user device; enabling, by the server system, to take visual media including capturing of photo by tapping or clicking on said presented visual media capture controller label or icon or recording of video by tap and hold on said presented visual media capture controller label or icon and release to end or stop said video; store, by the server system, said visual media at server database; generating news item(s); and presenting news item(s) at viewing user's device.
48. The server of claim 47 wherein receiving request to pause or stop monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device.
49. The server of claim 47 wherein receiving request to re-start monitoring of geo-location or positions of user device and presenting information about identified point of interest on the display of user device.
50. The server of claim 47 wherein receiving request to remove one or more selected galleries.
51. The server of claim 47 wherein receiving request to provide access rights to view one or more galleries created and managed by user.
52. The server of claim 47 wherein receiving filter preferences and privacy settings to filter receiving of information about certain types of points of interests.
53. The server of claim 47 wherein receiving request to apply rules and access criteria to enabling viewing of gallery for targeted viewers.
54. The server of claim 47 wherein receiving request to invite one or more contacts of user to participate in selected gallery, and in response to acceptance of invitation or acceptance of request to join, publish or present or display said gallery and display gallery named visual media capture controller label and/or identified information about points of interest on the display of each member device based on matching monitored and identified member device's current geo-location with points of interests data to each participant members for enabling to take visual media and auto post to said gallery.
55. A computer-implemented notification method, comprising acts of: inputting a query to monitor a user computing device relative to a geographical point of interest; configuring preferences associated with the user computing device; monitoring the geographical location of the user computing device relative to the geographical point of interest; executing rules based on the user data and geographical location of the user computing device relative to the geographical point of interest; automatically communicating a notification to the user computing device based on the geographical location of the user computing device relative to the point of interest; and utilizing a processor that executes instructions stored in memory to perform at least one of the acts of configuring, monitoring, communicating, or processing.
56. A computer-implemented notification system, comprising: a storage medium to access point of interest, digital items, user data, and rule base; a notification component that monitors the geographic location of the user device and communicates a notification to a user device when the geographic location of the user device meets geo-location criteria related to the point of interest; a digital items component that presents contextual digital item(s); and a processor that executes computer-executable instructions associated with at least one of the digital items presentation component or the notification component.
57. A server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain a destinations comprising plurality of destinations, maintain rules base and maintain user data comprising plurality types of updated user data; receiving, by the server system, request to listing one or more destinations and associated details; store, by the server system, said one or more destinations and associated details at the storage medium; receiving, by the server system, posting request; identify, by the server system, one or more destinations based on user data and one or more rules; presenting, by the server system, one or more identified destinations on the display of the user device.
58. The server of claim 57 wherein destinations comprise one or more user phone contacts, social connections, email addresses, groups, followers, networks, hashtags, keywords, categories, events, web sites or addresses of web sites, web pages, point of web page, user profiles, applications, instant messenger services or web services, servers, devices, networks, databases or storage medium, albums, galleries, stories, folders, feeds and one or more types of digital identities or unique identity where server or storage medium can post or send or advertised or broadcast or update and destination(s) or recipient(s) can notify and/or receive, access and view said post(s).
59. The server of claim 57 wherein auto identifying destination(s) based on current location or position information of user device.
60. The server of claim 57 wherein auto identifying destination(s) based on user settings.
61. The server of claim 59 or 60 wherein auto post to auto identified destinations.
62. A server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain a destinations comprising plurality of destinations, maintain rules base and maintain user data comprising plurality types of updated user data; receiving, by the server system, request to listing one or more destinations and associated details; store, by the server system, said one or more destinations and associated details at the storage medium; identify, by the server system, one or more destinations based on user data and one or more rules; presenting, by the server system, one or more identified destinations on the display of the user device for user selection to select one or more destinations and post one or more types of user generated contents to selected one or more destinations.
63. A server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain a destinations comprising plurality of destinations, maintain rules base and maintain user data comprising plurality types of updated user data; receiving, by the server system, request to listing one or more destinations and associated details; store, by the server system, said one or more destinations and associated details at the storage medium; identify, by the server system, one or more destinations based on user data and one or more rules; presenting, by the server system, one or more identified destinations specific visual media capture controller on the display of the user device, for enabling user to one tap capture photo or tap and long press and release to start video and end after expiring of set period of pre-set duration or tap and hold to start video and release to end video and post said captured photo or recorded video to said accessed visual media capture controller associated destination(s).
64. The server of claim 63 wherein destinations comprise one or more user phone contacts, social connections, email addresses, groups, followers, networks, hashtags, keywords, categories, events, web sites or addresses of web sites, web pages, point of web page, user profiles, applications, instant messenger services or web services, servers, devices, networks, databases or storage medium, albums, galleries, stories, folders, feeds and one or more types of digital identities or unique identity where server or storage medium can post or send or advertised or broadcast or update and destination(s) or recipient(s) can notify and/or receive, access and view said post(s).
65. The server of claim 63 wherein auto identifying destination(s) based on current location or position information of user device.
66. The server of claim 63 wherein auto identifying destination(s) based on user settings.
67. The server of claim 65 or 66 wherein auto post to auto identified destinations.
68. The server of claim 63 wherein auto identifying destination(s) based on user data comprising user profile, user check-in place(s) or current or past location(s), user activities, actions, interactions, sense, status, connections or contacts, events, transactions, logs, behavior, posted or received contents and reactions on contents, preferences, privacy settings, notification settings.
69. The server of claim 63 wherein data associated with advertised or listed destination comprise details, associated rules, access rights, offers, photo filters, advertisement details including bids, description, media, offers, discount, gift, redeemable points, and rules or conditions to avail one or more types of benefits provided by advertisers in the event of user posts to advertised destination(s).
70. An electronic device, comprising: a Global Positioning System (GPS) sensor configured to generate the geolocation information of the mobile device; digital image sensors to capture visual media; a display to present the visual media from the digital image sensors; a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display; access guide system associated data and resources; monitoring geo-location or positions of user device; identify point of interest or place or spot or location based on matching nearest point of interest or place with current geo-location or positions of user device or check-in place associate position information; presenting information about identified point of interest on the display of user device; based on identified point of interest or place or spot or location or position, or in response to capturing of photo or recording of video, or in response to initiating capturing of photo or recording of video or taking of visual media via camera display screen or current scene displayed by camera display screen, providing contextual information, instruction, step-by-step guide or wizard.
71. The electronic device of claim 70 wherein contextually providing information, instruction, step-by-step guide or wizard for capturing photo or recording of video comprises provide one or more contextual techniques, angles, directions, styles, sequences, story or script type or idea or sequences, scenes, shot categories, types & ideas, transition, spots, motion or actions, costumes and makeup ideas, levels, effects, positions, focus, arrangement styles of group, step by step rehearsal to take visual media, identify required lights, identify flash range, contextual tips, tricks, concepts, expressions, poses, contextual suggested settings or modes or options of camera application including focus, resolution, brightness, shooting mode, contextual information or links to purchase accessories required to take particular style or quality of visual media including lenses, stickers & the like, guided turn-by-turn location route or start turn-by-turn voice guided navigation to reach at particular identified or selected place or POI or position, contextual or recognized objects inside media specific digital photo filter(s), lenses, editing tools, text, media, music, emoticons, stickers, background, location information, contextual or matched photos or videos previously taken by other users at same current location or POI or place or similar types of POI(s) or place(s) or location(s) based on matching recognized or scanned object(s) inside scene of camera display screen (before capturing photo) with stored media and provide contextual tips and tricks based on one or more types of sensor data, user data, recognized object(s) inside camera display screen (before capturing photo or recording video) and identified rules.
72. The electronic device of claim 70 wherein contextually providing information, instruction, step-by-step guide or wizard or instruction for capturing photo or recording of video based on geo-location or position information of current user device, identified POI or place associated information including previously taken photos or videos from similar POI or place, recognized object or entity inside photo or video or camera display screen and identified recognized object associated information, contextual rules, guide system resources data, object recognitions including face recognition, text recognition via Optical Character Recognition (OCR), one or more types of sensor data generated or acquired from one or more types of sensors and plurality types of user data.
73. A server, comprising: a processor; and a memory storing instructions executed by the processor to: maintain user data, point of interest data, rule base, hashtags data and monitored and identified user device current location or position information including coordinates; based on identified current location and/or user data and/or point of interest data and/or hashtags data and/or auto selected or identified or updated one or more executed rule(s), identifying by the server system, one or more contextual hashtag(s); presenting, by the server system, accessible link or control of contextual hashtag(s) on the display; in the event of access or tap on particular link or control of hashtag, presenting by the server system, associated or contextual or requested one or more digital items from one or more sources including present microblog application or present review application or present hashtag named visual media capture controller label or icon (see figure 6 - 610) for enabling user to one tap capture photo or tap & hold to start recording of video and end by user anytime or tap & hold to start video and stop when release and auto post within said one tap to said hashtag or associated one or more destination(s) or present application to preparing, editing one or more types of media or content or microblog and post to said hashtag or associated destination(s).
74. The method of claim 73 wherein display on the user interface auto presented hashtag(s) specific contents, wherein contents comprise contents posted by other users of networks.
75. The method of claim 73 wherein digital item including an application, and interface, a media, a web page, a web site, a set of controls or user actions including visual media capture controller, like button, chat button, rate control to provide ratings, order or buy or add to cart button or icon, an object and a form or dynamically generated form or structured form.
76. The method of claim 73 wherein destinations comprise one or more user phone contacts, social connections, email addresses, groups, followers, networks, hashtags, keywords, categories, events, web sites or addresses of web sites including 3rd parties social networks & microblogging sites including Twitter, Instagram, Facebook, Snapchat & the like, web pages, point of web page, user profiles, rich story, applications, instant messenger services or web services, servers, devices, networks, databases or storage medium, albums, galleries, stories, folders, feeds and one or more types of digital identities or unique identity where server or storage medium can post or send or advertise or broadcast or update and destination(s) or recipient(s) can notify and/or receive, access and view said post(s).
77. The method of claim 73 wherein auto present and auto hide hashtags, including verified, advertised, sponsored & user created hashtags, based on or as per context-aware data, sensor data from one or more types of sensors of user device, geo-location or positions data, user data, point of interest data, hashtags data and rules from rule base.
78. The method of claim 73 wherein metadata or data of hashtag comprise one or more keywords, categories, taxonomy, date & time of creation, creator identity, source name including user, advertiser, 3rd parties web sites or servers or storage medium or applications or web services or devices or networks, associate or define related rules, privacy settings, access rights & privileges, triggers and events, associated one or more digital item, number of followers and/or viewers, number of contents posted, verified or non-verified status or icon and provide descriptions.
79. The method of claim 73 wherein identify hashtag based on most used, most liked or ranked, most viewed or most viewed content of hashtags or most viewed within last particular days, most content provided on/related to hashtag.
80. The method of claim 73 wherein auto present hashtag(s) based on most ranked, most followers, current trend, most useful, most liked or ranked or viewed or present based on location, place, event, activity, action specific, status, contacts, date & time and any combination thereof.
81. The method of claim 73 wherein enable user to remove hashtag(s) presented on the display via swipe or tap on close icon.
82. The method of claim 73 wherein enable user to add one or more hashtags via search, match, browse directory, via more button or select from list of suggested hashtags.
83. The method of claim 73 wherein enable user to drag and drop hashtag link or icon or accessible control to reorder or change position anywhere on camera screen.
84. The method of claim 73 wherein enable user to follow hashtags.
85. The method of claim 73 wherein auto or manually verify hashtags to identify uniqueness, whether or not related to a brand, whether payment is made, whether spam or inappropriate, or duplicate in terms of keyword meaning, and making hashtag available for other users after one or more types of verification.
86. The method of claim 73 wherein enable user to use auto fill or suggest list for searching hashtags.
87. The method of claim 73 wherein user can use pre-created, created by user including verified or not-verified and created by brands including paid verified hashtags and server created hashtags.
88. The method of claim 73 wherein hashtag comprise keyword(s), key phrase(s), categories, taxonomy, keyword(s) icon, and allow space in hashtag keywords.
89. The method of claim 73 wherein present contextual menu on accessible hashtag control or link of hashtag to enable user to access one or more contextual menu items including take photo, record video, provide comments, provide structured review, provide microblog, like, dis-like, provide rating, like to buy, make order, buy, add to cart, share, and refer.
90. The method of claim 73 wherein enabling user to chat with followers of hashtag(s).
91. The method of claim 73 wherein enabling an advertiser to advertise hashtag(s) created or selected by the advertiser, based on a model of paying per automatic presentation of said hashtag on user devices or paying per use of said hashtag by users.
92. The method of claim 73 wherein hashtags are automatically presented based on physical-surrounding context awareness including the user's current location, surrounding events, information related to the current location or place or point of interest, weather, date & time, check-in place, user-selected or provided status, rules and any combination thereof.
93. The method of claim 73 wherein hashtags are automatically presented based on logical context awareness including user data, subscribed hashtags, searches, logs and any combination thereof.
94. The method of claim 73 or 93 wherein user data comprise user's detail structured profile, user check-in place(s) or current or past location(s), user activities, actions, interactions, sense(s) generated or provided by or acquired from one or more types of sensors, updated status, connections or contacts, events, transactions, logs, behavior, posted or received contents and reactions on contents, preferences, privacy settings, notification settings and any combination thereof.
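
Claim 78 above enumerates the fields that a hashtag's metadata may carry. The following is a minimal, purely illustrative sketch of such a record in Python; the field names, types and defaults are editorial assumptions and are not part of the claimed method.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional


@dataclass
class HashtagMetadata:
    """Illustrative hashtag record covering the fields listed in claim 78."""
    keywords: List[str]                                      # keyword(s)/key phrase(s); spaces allowed (claim 88)
    categories: List[str] = field(default_factory=list)
    taxonomy: Optional[str] = None
    created_at: datetime = field(default_factory=datetime.utcnow)   # date & time of creation
    creator_id: str = ""                                     # creator identity
    source: str = "user"                                     # user, advertiser, 3rd-party site, server, ...
    rules: List[str] = field(default_factory=list)           # associated or defined related rules
    privacy_settings: Dict[str, str] = field(default_factory=dict)
    access_rights: List[str] = field(default_factory=list)   # access rights & privileges
    triggers: List[str] = field(default_factory=list)        # triggers and events
    digital_items: List[str] = field(default_factory=list)   # associated digital item identifiers
    follower_count: int = 0                                   # number of followers
    viewer_count: int = 0                                     # number of viewers
    post_count: int = 0                                       # number of contents posted
    verified: bool = False                                    # verified / non-verified status (claim 85)
    description: str = ""
```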
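Claims 77, 79-80 and 92-93 describe choosing which hashtags to auto-present or auto-hide on the camera screen from ranking signals (followers, views, posted content) and from physical and logical context (current location, date & time, subscribed hashtags, user data). One possible way to combine those signals is sketched below; the scoring weights, the `location`/`like_count` attributes and the shape of the `context` dictionary are assumptions made only for illustration.

```python
from math import hypot


def select_hashtags(hashtags, context, max_shown=5):
    """Score candidate hashtags and split them into auto-presented and auto-hidden sets.

    `hashtags` is an iterable of HashtagMetadata-like objects assumed to also carry
    a `location` (lat, lon) tuple or None and a `like_count`; `context` is a dict
    with the device's current `location` and the user's `subscribed` hashtag keywords.
    """
    def score(tag):
        s = 0.0
        # Ranking signals (claims 79-80): followers, views, posted content, likes.
        s += 0.3 * tag.follower_count + 0.2 * tag.viewer_count
        s += 0.2 * tag.post_count + 0.1 * tag.like_count
        # Physical context (claim 92): prefer hashtags tied to a nearby place or point of interest.
        # Plain Euclidean distance on lat/lon is a deliberate simplification here.
        if tag.location and context.get("location"):
            dist = hypot(tag.location[0] - context["location"][0],
                         tag.location[1] - context["location"][1])
            s += 50.0 / (1.0 + dist)
        # Logical context (claim 93): boost hashtags the user already follows or has subscribed to.
        if any(k in context.get("subscribed", set()) for k in tag.keywords):
            s += 25.0
        return s

    ranked = sorted(hashtags, key=score, reverse=True)
    shown, hidden = ranked[:max_shown], ranked[max_shown:]   # auto-present vs auto-hide (claim 77)
    return shown, hidden
```

A caller would typically recompute this selection whenever the device reports a new location, check-in or status change, render the `shown` hashtags as tappable controls on the camera screen, and suppress the `hidden` ones.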
PCT/IB2017/057082 2016-11-19 2017-11-14 Providing location specific point of interest and guidance to create visual media rich story WO2018092016A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IBPCT/IB2016/056987 2016-11-19
IB2016056987 2016-11-19
IB2017050932 2017-02-18
IBPCT/IB2017/050932 2017-02-18

Publications (1)

Publication Number Publication Date
WO2018092016A1 true WO2018092016A1 (en) 2018-05-24

Family

ID=62146214

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2017/057082 WO2018092016A1 (en) 2016-11-19 2017-11-14 Providing location specific point of interest and guidance to create visual media rich story

Country Status (1)

Country Link
WO (1) WO2018092016A1 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245339A (en) * 2019-06-20 2019-09-17 北京百度网讯科技有限公司 Article generation method, device, equipment and storage medium
CN110248450A (en) * 2019-04-30 2019-09-17 广州富港万嘉智能科技有限公司 A kind of combination personage carries out the method and device of signal light control
WO2020032555A1 (en) * 2018-08-08 2020-02-13 Samsung Electronics Co., Ltd. Electronic device and method for providing notification related to image displayed through display and image stored in memory based on image analysis
CN110827099A (en) * 2018-08-07 2020-02-21 北京优酷科技有限公司 Household commodity recommendation method, client and server
CN111177544A (en) * 2019-12-24 2020-05-19 浙江禾连网络科技有限公司 Operation system and method based on user behavior data and user portrait data
CN111191151A (en) * 2019-12-20 2020-05-22 上海淇玥信息技术有限公司 Method and device for pushing information based on POI (Point of interest) tag and electronic equipment
US20200245096A1 (en) * 2015-11-04 2020-07-30 xAd, Inc. Systems and Methods for Creating and Using Geo-Blocks for Location-Based Information Service
CN111796754A (en) * 2020-06-30 2020-10-20 上海连尚网络科技有限公司 Method and device for providing electronic books
US10939233B2 (en) 2018-08-17 2021-03-02 xAd, Inc. System and method for real-time prediction of mobile device locations
CN112785353A (en) * 2021-03-04 2021-05-11 深圳大智软件技术有限公司 Method for guiding fans (followers) to be added to a sales WeChat account
CN113015018A (en) * 2021-02-26 2021-06-22 上海商汤智能科技有限公司 Bullet screen information display method, device and system, electronic equipment and storage medium
US11064175B2 (en) 2019-12-11 2021-07-13 At&T Intellectual Property I, L.P. Event-triggered video creation with data augmentation
CN113177054A (en) * 2021-05-28 2021-07-27 广州南方卫星导航仪器有限公司 Equipment position updating method and device, electronic equipment and storage medium
CN113221178A (en) * 2021-06-03 2021-08-06 河南科技大学 Interest point recommendation method based on location privacy protection in social networking service
CN113395462A (en) * 2021-08-17 2021-09-14 腾讯科技(深圳)有限公司 Navigation video generation method, navigation video acquisition method, navigation video generation device, navigation video acquisition device, server, equipment and medium
US11134359B2 (en) 2018-08-17 2021-09-28 xAd, Inc. Systems and methods for calibrated location prediction
US11172324B2 (en) 2018-08-17 2021-11-09 xAd, Inc. Systems and methods for predicting targeted location events
CN113888159A (en) * 2021-06-11 2022-01-04 荣耀终端有限公司 Opening method of function page of application and electronic equipment
CN114390214A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390215A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
CN115119004A (en) * 2019-05-13 2022-09-27 阿里巴巴集团控股有限公司 Data processing method, information display method, device, server and terminal equipment
WO2023072241A1 (en) * 2021-10-30 2023-05-04 花瓣云科技有限公司 Media file management method and related apparatus
WO2024061274A1 (en) * 2022-09-20 2024-03-28 成都光合信号科技有限公司 Method for filming and generating video, and related device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130066821A1 (en) * 2011-03-04 2013-03-14 Foursquare Labs, Inc. System and method for providing recommendations with a location-based service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MARIA DEL CARMEN ET AL., LOCATION-AWARE RECOMMENDATION SYSTEMS: WHERE WE ARE AND WHERE WE RECOMMEND TO GO, 19 September 2015 (2015-09-19), XP055502295, Retrieved from the Internet <URL:http://ceur-ws.org/Vol-1405/paper-Ol.pdf> [retrieved on 20180502] *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200245096A1 (en) * 2015-11-04 2020-07-30 xAd, Inc. Systems and Methods for Creating and Using Geo-Blocks for Location-Based Information Service
US20230413010A1 (en) * 2015-11-04 2023-12-21 xAd, Inc. Systems and Methods for Mobile Device Location Prediction
US11683655B2 (en) * 2015-11-04 2023-06-20 xAd, Inc. Systems and methods for predicting mobile device locations using processed mobile device signals
US20210195366A1 (en) * 2015-11-04 2021-06-24 xAd, Inc. Systems and Methods for Creating and Using Geo-Blocks for Location-Based Information Service
US10880682B2 (en) * 2015-11-04 2020-12-29 xAd, Inc. Systems and methods for creating and using geo-blocks for location-based information service
CN110827099B (en) * 2018-08-07 2022-05-13 阿里巴巴(中国)有限公司 Household commodity recommendation method, client and server
CN110827099A (en) * 2018-08-07 2020-02-21 北京优酷科技有限公司 Household commodity recommendation method, client and server
US10924677B2 (en) 2018-08-08 2021-02-16 Samsung Electronics Co., Ltd. Electronic device and method for providing notification related to image displayed through display and image stored in memory based on image analysis
KR20200017072A (en) * 2018-08-08 2020-02-18 삼성전자주식회사 Electronic device and method for providing notification relative to image displayed via display and image stored in memory based on image analysis
WO2020032555A1 (en) * 2018-08-08 2020-02-13 Samsung Electronics Co., Ltd. Electronic device and method for providing notification related to image displayed through display and image stored in memory based on image analysis
KR102598109B1 (en) * 2018-08-08 2023-11-06 삼성전자주식회사 Electronic device and method for providing notification relative to image displayed via display and image stored in memory based on image analysis
US10939233B2 (en) 2018-08-17 2021-03-02 xAd, Inc. System and method for real-time prediction of mobile device locations
US11134359B2 (en) 2018-08-17 2021-09-28 xAd, Inc. Systems and methods for calibrated location prediction
US11172324B2 (en) 2018-08-17 2021-11-09 xAd, Inc. Systems and methods for predicting targeted location events
CN110248450A (en) * 2019-04-30 2019-09-17 广州富港万嘉智能科技有限公司 A kind of combination personage carries out the method and device of signal light control
CN115119004B (en) * 2019-05-13 2024-03-29 阿里巴巴集团控股有限公司 Data processing method, information display device, server and terminal equipment
CN115119004A (en) * 2019-05-13 2022-09-27 阿里巴巴集团控股有限公司 Data processing method, information display method, device, server and terminal equipment
CN110245339A (en) * 2019-06-20 2019-09-17 北京百度网讯科技有限公司 Article generation method, device, equipment and storage medium
CN110245339B (en) * 2019-06-20 2023-04-18 北京百度网讯科技有限公司 Article generation method, article generation device, article generation equipment and storage medium
US11064175B2 (en) 2019-12-11 2021-07-13 At&T Intellectual Property I, L.P. Event-triggered video creation with data augmentation
US11575867B2 (en) 2019-12-11 2023-02-07 At&T Intellectual Property I, L.P. Event-triggered video creation with data augmentation
CN111191151B (en) * 2019-12-20 2023-08-25 上海淇玥信息技术有限公司 Method and device for pushing information based on POI (point of interest) tag and electronic equipment
CN111191151A (en) * 2019-12-20 2020-05-22 上海淇玥信息技术有限公司 Method and device for pushing information based on POI (Point of interest) tag and electronic equipment
CN111177544A (en) * 2019-12-24 2020-05-19 浙江禾连网络科技有限公司 Operation system and method based on user behavior data and user portrait data
CN111177544B (en) * 2019-12-24 2023-07-28 浙江禾连网络科技有限公司 Operation system and method based on user behavior data and user portrait data
CN111796754A (en) * 2020-06-30 2020-10-20 上海连尚网络科技有限公司 Method and device for providing electronic books
CN113015018B (en) * 2021-02-26 2023-12-19 上海商汤智能科技有限公司 Bullet screen information display method, bullet screen information display device, bullet screen information display system, electronic equipment and storage medium
CN113015018A (en) * 2021-02-26 2021-06-22 上海商汤智能科技有限公司 Bullet screen information display method, device and system, electronic equipment and storage medium
CN112785353B (en) * 2021-03-04 2024-03-22 深圳大智软件技术有限公司 Method for adding fans (followers) to a sales WeChat account in a guided way
CN112785353A (en) * 2021-03-04 2021-05-11 深圳大智软件技术有限公司 Method for guiding fans (followers) to be added to a sales WeChat account
CN113177054A (en) * 2021-05-28 2021-07-27 广州南方卫星导航仪器有限公司 Equipment position updating method and device, electronic equipment and storage medium
CN113221178A (en) * 2021-06-03 2021-08-06 河南科技大学 Interest point recommendation method based on location privacy protection in social networking service
CN113221178B (en) * 2021-06-03 2022-09-06 河南科技大学 Interest point recommendation method based on location privacy protection in social networking service
CN113888159A (en) * 2021-06-11 2022-01-04 荣耀终端有限公司 Opening method of function page of application and electronic equipment
CN113395462A (en) * 2021-08-17 2021-09-14 腾讯科技(深圳)有限公司 Navigation video generation method, navigation video acquisition method, navigation video generation device, navigation video acquisition device, server, equipment and medium
WO2023072241A1 (en) * 2021-10-30 2023-05-04 花瓣云科技有限公司 Media file management method and related apparatus
CN114390214B (en) * 2022-01-20 2023-10-31 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390215B (en) * 2022-01-20 2023-10-24 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390214A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
WO2023140782A3 (en) * 2022-01-20 2023-09-28 脸萌有限公司 Video generation method and apparatus, and device and storage medium
CN114390215A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
WO2024061274A1 (en) * 2022-09-20 2024-03-28 成都光合信号科技有限公司 Method for filming and generating video, and related device

Similar Documents

Publication Publication Date Title
WO2018092016A1 (en) Providing location specific point of interest and guidance to create visual media rich story
US20220179665A1 (en) Displaying user related contextual keywords and controls for user selection and storing and associating selected keywords and user interaction with controls data with user
US11973840B2 (en) Systems and methods for providing location-based cascading displays
WO2020148659A2 (en) Augmented reality based reactions, actions, call-to-actions, survey, accessing query specific cameras
Ghose TAP: Unlocking the mobile economy
US11589193B2 (en) Creating and utilizing services associated with maps
US10117074B2 (en) Systems and methods for establishing communications between mobile device users
US8559980B2 (en) Method and system for integrated messaging and location services
US9288079B2 (en) Virtual notes in a reality overlay
TWI612494B (en) Devices and methods for location based social network
US20180032997A1 (en) System, method, and computer program product for determining whether to prompt an action by a platform in connection with a mobile device
WO2018104834A1 (en) 2018-06-14 Real-time, ephemeral, single mode, group & auto taking visual media, stories, auto status, following feed types, mass actions, suggested activities, AR media & platform
US20130227017A1 (en) Location associated virtual interaction, virtual networking and virtual data management
US20100145947A1 (en) Method and apparatus for an inventive geo-network
US20150245168A1 (en) Systems, devices and methods for location-based social networks
JP6321035B2 (en) Battery and data usage savings
US20070161382A1 (en) System and method including asynchronous location-based messaging
US20110238762A1 (en) Geo-coded comments in a messaging service
US20140297617A1 (en) Method and system for supporting geo-augmentation via virtual tagging
JP2016507820A (en) Rerank article content
TW201237657A (en) Geo-location systems and methods
US20150261813A1 (en) Method to form a social group for a real time event
US9813861B2 (en) Media device that uses geolocated hotspots to deliver content data on a hyper-local basis
JP2016505983A (en) Social cover feed interface
US20140297669A1 (en) Attract mode operations associated with virtual tagging

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17872859

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17872859

Country of ref document: EP

Kind code of ref document: A1