WO2018104834A1 - Real-time, ephemeral, single-mode, group and auto taking of visual media, stories, auto status, following feed types, mass actions, suggested activities, AR media and platform
- Publication number
- WO2018104834A1 (PCT/IB2017/057578)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- visual media
- video
- photo
- ephemeral
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1686—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
Definitions
- The present invention enables: auto-opening or unlocking the user device; auto-opening the camera display screen; auto-capturing a photo or auto-starting video recording; auto-opening the media viewer when the user wants to view; applying ephemeral or non-ephemeral and real-time content access rules and settings for one or more destinations and/or sources; searching, matching, saving, bookmarking, subscribing to and viewing content specific to one or more object criteria; privacy settings for visual media of the user captured by other users; advertisements specific to supplied object models in visual media feeds; letting the sender of media access the media shared by the sender on the recipient device; various embodiments relating to the display of ephemeral messages and real-time ephemeral messages; a multi-tasking intelligent visual media capture controller, so the user can easily take a front or back photo, video or live stream, view received media items and/or access one or more pre-set interfaces or applications; and auto-generating the user's current status and current activities, and auto-generating emoticons, emoji and cartoons based on front and/or back camera photo(s).
- the computing system may generate a display of a moving object on the display screen of the computing system.
- An eye tracking system may be coupled to the computing system.
- the eye tracking system may track eye movement of the user.
- the computing system may determine that a path associated with the eye movement of the user substantially matches a path associated with the moving object on the display and switch to be in an unlocked mode of operation including unlocking the screen." All of the above prior art requires particular hardware or user intervention to unlock the device.
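The path-matching unlock described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the sampled gaze and object paths, the mean-deviation metric and the 40-pixel threshold are all assumptions for the example.

```python
import math

def path_distance(gaze_path, object_path):
    """Mean point-to-point distance (pixels) between two equal-length 2D paths."""
    assert len(gaze_path) == len(object_path)
    total = sum(math.dist(g, o) for g, o in zip(gaze_path, object_path))
    return total / len(gaze_path)

def should_unlock(gaze_path, object_path, threshold=40.0):
    """Unlock when the tracked gaze path substantially matches the
    on-screen moving-object path (mean deviation below a pixel threshold)."""
    return path_distance(gaze_path, object_path) < threshold

# Example: the gaze closely follows a diagonal object path.
obj = [(i * 10, i * 10) for i in range(10)]
gaze = [(x + 3, y - 2) for x, y in obj]   # small tracking error per sample
print(should_unlock(gaze, obj))           # True: mean deviation is a few pixels
```

In practice the comparison would run over time-aligned samples from the eye tracker, but the thresholded path-deviation test is the core of the "substantially matches" decision.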
- Most smart devices, including mobile devices, now enable the user to use the camera while the device is locked by tapping on the camera icon.
- The present invention enables the user either to unlock the device using an eye tracking system that employs the user device's image sensor, or to auto-open the camera display screen by identifying pre-defined types of device orientation and a pre-defined eye gaze via the eye tracking system. Because the present invention aims to auto-open the camera on a locked device, which at present requires the user to tap on the camera icon, a simple eye tracking system and orientation sensor(s) suffice to auto-open the camera; there is no privacy or security issue requiring advanced fingerprint hardware or a voice command each time.
- The present invention identifies the user's intention to take a photo or video based on one or more types of eye tracking system, detecting that the user wants to open the camera display screen to capture a photo, record a video, or invoke and access the camera, and on that basis automatically opens the camera display screen, application or interface, without the user needing to open it manually each time they want to capture a photo, record video/voice or access the device camera.
- The eye tracking system identifies eye positions and movement, measuring the point of gaze to determine the eye position.
- The present invention also identifies the user's intention to take a photo or video based on one or more types of sensors, for example by identifying the device position (i.e. held away from the body). Based on one or more types of eye tracking system and/or sensors, the present invention further detects the user's intention to view received or shared content or one or more types of media, including photos, videos or posts from one or more contacts or sources. For example, the eye tracking system measures the point of gaze to identify the eye position, and a proximity sensor identifies the distance of the user device (e.g. device in hand in viewing position); on that basis the system identifies the user's intention to read or view media.
- Snapchat™ and Instagram™ enable a user to view a received ephemeral message, or one or more types of visual media or content items from senders, for a pre-set view duration set by the sender; on expiration of that timer, the ephemeral message is removed from the recipient's device and/or server. Because a user has plural types of contacts, including friends, relatives, family and other types of contacts, there is a need to provide different ephemeral and/or non-ephemeral settings for different types of users. For example, for family members the user may want them to be able to save the user's posts or view them later; for some friends, the user may want them to view posted content items for a pre-set view duration only, with the items removed from their devices on expiry of that timer; and for some contacts, e.g. best friends, the user may want them to view and react in real time. So the present invention enables the sending and receiving user to select, input, apply and set one or more types of ephemeral and non-ephemeral settings for one or more contacts, senders, recipients, sources or destinations for sending or receiving content.
- U.S. Patent No. 9,148,569 teaches "according to one embodiment of the present invention, a check's image is automatically captured.
- a stabilization parameter of the mobile device is determined using a movement sensor. It is determined whether the stabilization parameter is greater than or equal to a stabilization threshold.
- An image of the check is captured using the mobile device if the stabilization parameter is greater than or equal to the stabilization threshold.”
- Said invention does not teach single-mode capture of a photo or video based on both the stabilization threshold and receipt of a haptic contact engagement, or a tap on the single-mode input icon, which determines whether a photograph or a video will be recorded.
- The present invention teaches that a stabilization parameter is monitored via a device sensor and compared against a stabilization threshold. If the stabilization parameter is greater than or equal to the stabilization threshold, then in response to haptic contact engagement a photo is captured and stored. If the stabilization parameter is less than the stabilization threshold, then in response to haptic contact engagement video recording starts and a timer starts; in an embodiment, on expiration of the pre-set timer the video is stopped and stored and the timer is stopped or re-initiated.
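The stability-gated decision above can be sketched as a single tap handler. This is only an illustrative sketch: the `Camera` stub, the normalized stability score, the 0.8 threshold and the 10-second video timer are hypothetical stand-ins for the device APIs and pre-set values the text leaves unspecified.

```python
class Camera:
    """Minimal stand-in for a device camera/storage API (hypothetical)."""
    def __init__(self):
        self.stored = []
        self.recording = False
        self.timer_seconds = None

    def capture_photo(self):
        return "photo-bytes"

    def store(self, media):
        self.stored.append(media)

    def start_video(self):
        self.recording = True

    def start_timer(self, seconds):
        # On expiry the device would stop the video, store it,
        # and stop or re-initiate the timer.
        self.timer_seconds = seconds

STABILIZATION_THRESHOLD = 0.8  # assumed normalized stability score
MAX_VIDEO_SECONDS = 10         # assumed pre-set video timer

def on_haptic_engagement(camera, stabilization):
    """On tap: capture a photo if the device is stable enough,
    otherwise start recording a timed video."""
    if stabilization >= STABILIZATION_THRESHOLD:
        camera.store(camera.capture_photo())
        return "photo"
    camera.start_video()
    camera.start_timer(MAX_VIDEO_SECONDS)
    return "video"

cam = Camera()
print(on_haptic_engagement(cam, 0.95))  # photo: device held steady
print(on_haptic_engagement(cam, 0.30))  # video: device shaking
```

The one tap thus selects between modes implicitly: the sensor reading, not a mode switch, decides whether a photograph or a video is recorded.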
- Snapchat™ and Instagram™ enable a user to add one or more photos or videos to "My Stories" or a feed for publishing, broadcasting or presenting the added photos or videos, or sequences thereof, to one or more or all friends, contacts, connections or followers, or to a particular category or type of user.
- Snapchat™ and Instagram™ enable a user to add one or more photos or videos to "Our Stories" or a feed, i.e. to add photos or videos to a particular event, place, location, activity or category and make them available to requesting, searching, connected or related users.
- Photo-sharing applications enable a user to prepare one or more types of media, including capturing a photo, recording a video or preparing text content, or any combination thereof, and add it to the user's stories, to feeds of a particular type or category, or to particular event(s), making it available to one or more or all friends, contacts, connections, networks, followers or groups, or to all or a particular type of user. None of the presently available feeds or stories enables the user to provide object criteria, i.e. a model of an object or a sample image or photo, together with one or more criteria, conditions, rules, preferences or settings, and based on the provided object model or sample image and criteria identify, recognize, track or match one or more objects, or full or partial images, inside captured, presented or live photos or videos, then merge all identified photos or videos and present them to the user.
- For example, a user can provide an object model or sample image of "coffee" and/or the keyword "coffee" and/or the location "Mumbai" to search all coffee-related photos and videos: the system recognizes the "coffee" object inside each photo or video, matches it against the provided object or sample image, and processes, merges, separates or sequences all identified photos and videos, presenting them to the searching, requesting, recipient or targeted user.
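The object-criteria story described above can be sketched as a filter-and-merge step over media metadata. This assumes a hypothetical upstream object recognizer has already attached `detected_objects` labels to each item; the field names and sample data are illustrative, not from the patent.

```python
def matches_criteria(item, object_label, keywords=(), location=None):
    """True when a media item's detected objects and metadata satisfy
    the search criteria (labels come from a hypothetical recognizer)."""
    return (object_label in item["detected_objects"]
            and all(k in item["tags"] for k in keywords)
            and (location is None or item["location"] == location))

def build_story(media, object_label, **criteria):
    """Merge all matching photos/videos into one presentable sequence."""
    return [m["id"] for m in media
            if matches_criteria(m, object_label, **criteria)]

media = [
    {"id": 1, "detected_objects": {"coffee"}, "tags": {"coffee"}, "location": "Mumbai"},
    {"id": 2, "detected_objects": {"beach"},  "tags": {"sea"},    "location": "Goa"},
    {"id": 3, "detected_objects": {"coffee"}, "tags": {"coffee"}, "location": "Pune"},
]
print(build_story(media, "coffee", location="Mumbai"))  # [1]
print(build_story(media, "coffee"))                     # [1, 3]
```

The heavy lifting (recognizing "coffee" inside pixels) sits in the recognizer; the feed itself is just criteria matching plus sequencing of the matched items.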
- Google glass TM or Snapchat spectacles TM enables user to capture photo or record video and send or post to one or more selected contacts or one or more types of stories or feeds.
- The present invention enables a user to allow or disallow: all or selected users; all or selected users at particular pre-defined location(s), place(s) or geo-fence boundaries; and all or selected users at such locations during particular pre-set schedule(s), to capture the user's photo or record video of the user. Likewise, capturing a photo or recording video can be allowed or disallowed at particular selected location(s) or place(s), or within pre-defined geo-fence boundaries, and/or at particular pre-set schedule(s), and/or for all or selected users (phone contacts, social contacts, clients, customers, guests, subscribers, attendees, ticket holders, etc.), or for pre-defined types of users or users with pre-defined characteristics.
- The Google™ search engine enables a user to search per a user-provided search query or keywords and presents search results.
- Advertisers can create and manage one or more campaigns, associate advertisement groups and associate advertisements. Advertisers can provide keywords, bids, advertisement text or description, image, video and settings. Based on the advertisements' keywords and bids, Google™ Search presents advertisements to a searching user by matching the user's search keywords with the advertisements' keywords, placing the highest-bid advertisement in the top position or in a prominent place on the search results page.
- Google Image Search™ searches for and presents matched or partially identical images based on a user-provided image.
- The present invention enables a user to provide, upload, set or apply an image, or an image of part of or a particular object, item, face, brand, logo, thing or product, or an object model, and/or provide textual description, keywords, tags, metadata, structured fields and associated values, templates, samples and requirement specifications, and/or provide one or more conditions, including similar, exact match, partial match, include and exclude, Boolean operators (AND/OR/NOT/+/−/phrases) and rules.
- Based on the provided object model, object type, metadata, object criteria and conditions, the server identifies, matches and recognizes photos or videos stored by, or accessed by, the server from one or more sources, databases, networks, applications, devices or storage mediums, and presents them to users, wherein the presented media includes series, collections, groups or slide shows of photos, or merged sequences or collections of videos or live streams.
- A plurality of merchants can upload videos of available products and/or associated details, which the server stores in the server database.
- A searching user can provide, input, select or upload one or more image(s) or an object model of a particular object or product, e.g. "mobile device", and provide a particular location or place name as the search query. Based on that query, the server matches the object, e.g. a mobile device, against objects recognized or detected inside videos, photos, live streams and/or one or more types of media stored or accessed from one or more sources, and presents the identified videos, photos, live streams and/or other media, individually or merged into a series, to the searching, contextual or requesting user or user(s) of the network.
- The present invention teaches various embodiments related to ephemeral content, including a rule-based ephemeral message system, enabling the sender to post content or media including photos, and to add, update, edit or delete one or more content items, including photos or videos, on one or more recipient devices to which the sender sends or adds, based on the connection with the recipient and/or the recipient's privacy settings.
- U.S. Patent No. 8,914,752 teaches "present on a display indicia of a set of ephemeral messages available for viewing; present on the display a first ephemeral message of the set of ephemeral messages for a first transitory period of time defined by a timer, wherein the first ephemeral message is deleted when the first transitory period of time expires; receive from a touch controller a haptic contact signal indicative of a gesture applied to the display during the first transitory period of time; wherein the ephemeral message controller deletes the first ephemeral message in response to the haptic contact signal and proceeds to present on the display a second ephemeral message of the set of ephemeral messages for a second transitory period of time defined by the timer, wherein the ephemeral message controller deletes the second ephemeral message upon the expiration of the second transitory period of time; wherein the second ephemeral …"
- a message sender may specify the length of viewing time for the message recipient.
- the set viewing period for a given piece of content can exceed the viewing period desired by the message recipient. That is, the message recipient may want to terminate the current piece of content to view the next piece of content.
- U.S. Patent No. 8,914,752 discloses an electronic device comprising a display and an ephemeral message controller to present on the display an ephemeral message for a transitory period of time.
- a touch controller identifies haptic contact on the display during the transitory period of time.
- the ephemeral message controller terminates the ephemeral message in response to the haptic contact.
- an electronic device comprises a display and an ephemeral message controller to present on the display an ephemeral message for a transitory period of time.
- a sensor controller identifies one or more types of user sensing on the display, application or device during the transitory period of time.
- the ephemeral message controller terminates the ephemeral message in response to receiving the identified user sense via the one or more types of sensors.
- The present invention also teaches multi-tab presentation of ephemeral messages: on switching tabs, the view timer associated with each message presented on the current tab is paused, the set of content items for the switched-to tab is presented, and the timers associated with each presented message of that tab are started; on expiry of a timer, or on haptic contact engagement, the ephemeral message is removed and the next one or more ephemeral messages (if any) are presented.
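The pause-on-switch behaviour above can be sketched with per-message remaining times that are only charged while a tab is active. This is an illustrative sketch, not the patented implementation; the class name, the uniform per-tab view duration and the use of a monotonic clock are assumptions.

```python
import time

class EphemeralTab:
    """One tab of ephemeral messages. Each message has remaining view
    time that counts down only while this tab is the active one."""
    def __init__(self, messages, view_seconds):
        self.remaining = {m: view_seconds for m in messages}
        self.activated_at = None

    def activate(self):
        # Switching to this tab (re)starts the per-message timers.
        self.activated_at = time.monotonic()

    def deactivate(self):
        # Switching away pauses the timers: charge the elapsed time
        # and remove any message whose view duration has expired.
        elapsed = time.monotonic() - self.activated_at
        self.remaining = {m: t - elapsed
                          for m, t in self.remaining.items() if t > elapsed}
        self.activated_at = None

tab_a = EphemeralTab(["msg1", "msg2"], view_seconds=0.05)
tab_b = EphemeralTab(["msg3"], view_seconds=60.0)

tab_a.activate()
time.sleep(0.1)          # user lingers past the view duration
tab_a.deactivate()       # switch to tab B: expired messages are removed
tab_b.activate()
print(sorted(tab_a.remaining))  # []: msg1/msg2 expired while tab A was active
print(sorted(tab_b.remaining))  # ['msg3']: tab B's timer has only just started
```

A real implementation would keep one timer per message and also handle haptic-contact removal, but the essential bookkeeping is the pause/resume of remaining time on tab switches.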
- At present, photo applications enable a user to capture a photo or record a video and send it to one or more contacts, feeds, stories or destinations, and the recipient or viewing user can view the posted content items at their own time and provide reactions, e.g. like, dislike, rating or emoticons, at any time.
- The present invention enables real-time, or near real-time, sharing, viewing and/or reacting on posted, broadcast or sent content items of one or more types, including visual media items, news items, posts or ephemeral message(s).
- Ephemeral messaging may rely on a timer to determine the length of viewing time for content.
- The present invention discloses various types of ephemeral stories, feeds, galleries or albums, including: view-and-scroll, where completely scrolling past a content item removes or hides it from the presentation interface, or removes it after a pre-set wait duration; load-more, pull-to-refresh or auto-refresh (after a pre-set interval), which removes the currently presented content and presents the next set of content (if any); sender-applied view duration for a set of content items, where the posted set is presented to the viewers or targeted or intended recipients for the pre-set timer duration, and on expiry of the timer the presented set of content items or visual media items is removed and the next set is displayed; and sender-applied view timers per posted content item, where more than one content item is presented, each with a different pre-set view duration.
- GroupOn™ and other group-deal sites enable group deals. Group buying, also known as collective buying, offers products and services at significantly reduced prices on the condition that a minimum number of buyers make the purchase.
- these websites feature a "deal of the day", with the deal kicking in once a set number of people agree to buy the product or service. Buyers then print off a voucher to claim their discount at the retailer.
- Many of the group-buying sites work by negotiating deals with local merchants and promising to deliver a higher foot count in exchange for better prices.
- The present invention enables one or more types of mass user action(s), including participating in mass deals; buying or ordering; viewing, liking and reviewing a movie trailer on first release; downloading, installing and registering an application; listening to newly launched music; buying curated products or latest-technology products; subscribing to services; liking and commenting on a movie, song or story; buying books; and viewing the latest or top-trending news, tweet, advertisement or announcement, at a specified date and time for a particular pre-set duration, with customized offers, e.g. earn points, get discounts, get free samples, view a movie trailer first, or refer friends for further discounts (like chain marketing). This mainly enables curated, verified or hand-picked but less-known or first-time-launched products, services and mobile applications to get substantial traction, sales or bookings at mass level.
- an electronic device includes digital image sensors to capture visual media, a display to present the visual media from the digital image sensors and a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display.
- a visual media capture controller alternately records the visual media as a photograph or a video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- a device may include a media application to capture digital photos or digital video. In many cases, the application needs to be configured into a photo-specific mode or video-specific mode. Switching between modes may cause delays in capturing a scene of interest. Further, multiple inputs may be needed thereby causing further delay. Improvements in media applications may therefore be needed.
- an apparatus may comprise a touch controller, a visual media capture component, and a storage component.
- the touch controller may be operative to receive a haptic engagement signal.
- the visual media capture component may be operative to be configured in a capture mode based on whether a haptic disengagement signal is received by the touch controller before expiration of a first timer, the capture mode one of a photo capture mode or video capture mode, the first timer started in response to receiving the haptic engagement signal, the first timer configured to expire after a first preset duration.
- the storage component may be operative to store visual media captured by the visual media capture component in the configured capture mode.
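The timer-based mode selection in the apparatus above can be sketched as follows. This is a hedged illustration of the described logic, not the claimed implementation: the 0.4-second first-timer duration and the event representation (engagement/release timestamps, `None` for "still held") are assumptions.

```python
PHOTO_TIMER_SECONDS = 0.4   # assumed duration of the first timer

def capture_mode(engage_t, release_t, threshold=PHOTO_TIMER_SECONDS):
    """Choose the capture mode from one haptic gesture: a release
    before the first timer expires selects photo capture mode;
    holding past it selects video capture mode. A release time of
    None means the finger is still down when the timer expires."""
    if release_t is not None and release_t - engage_t < threshold:
        return "photo"
    return "video"

print(capture_mode(0.0, 0.2))   # photo: quick tap
print(capture_mode(0.0, 1.5))   # video: long press
print(capture_mode(0.0, None))  # video: still holding at timer expiry
```

The first timer starts on the haptic engagement signal; whether a disengagement signal arrives before it expires is the sole input to the photo-versus-video decision, after which the storage component stores whatever the configured mode captured.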
- Users of client devices often use one or more messaging applications to send messages to other users associated with client devices.
- the messages include a variety of content ranging from text to images to videos.
- the messaging applications often provide the user with a cumbersome interface that requires users to perform multiple user interactions with multiple user interface elements or icons in order to capture images or videos and send the captured images or videos to a contact or connection associated with the user. If a user simply wishes to quickly capture a moment with an image or video and send to another user, typically the user must click through multiple interfaces to take the image/video, select the user to whom it will be sent, and initiate the sending process.
- Facebook's U.S. Patent Application No. 14/561733 discloses a user interacting with a messaging application on a client device to capture and send images to contacts or connections of the user with a single user interaction.
- The messaging application installed on the client device presents a user interface to the user.
- the user interface includes a camera view and a face tray including contact icons.
- the messaging application captures an image including the current camera view presented to the user, and sends the captured image to the contact represented by the contact icon.
- the messaging application may receive a single user interaction with a contact icon for a threshold period of time, and may capture a video for the threshold period of time, and send the captured video to the contact.
- U.S. Patent Application No. 15/079836 by Yogesh Rathod discloses devices configured to capture and share media based on user touch and other interaction.
- Functional labels show the user the operation being undertaken for any media captured.
- functional labels may indicate a group of receivers, type of media, media sending method, media capture or sending delay, media persistence time, discrimination type and threshold for capturing different types of media, etc., all customizable by the user or auto-generated. Media is selectively captured and broadcast to receivers in accordance with the configuration of the functional label.
- a user may engage the device and activate the functional label through a single haptic engagement, allowing highly-specific media capture and sharing through a single touch or other action, without having to execute several discrete actions for capture, sending, formatting, notifying, deleting, storing, etc.
- Some of the said prior art teaches single-mode capture of a photo or video, and some discloses presenting a contact- or group-specific visual media capture controller control, label and/or icon or image, with one-tap photo capture or video recording, optional preview, and auto-sending to the contact(s) or group(s) associated with that capture controller control, label, icon or image.
- The present invention enables a multi-tasking visual media capture controller control, label and/or icon or image, including: selecting front or back camera mode; capturing a photo, or starting video recording and stopping it with a further tap on the icon; recording a pre-set-duration video, stopping before the pre-set duration elapses, or removing the pre-set duration and capturing as the user needs; optionally previewing, or previewing for a pre-set duration; auto-sending to the contact(s), group(s) or destination(s) associated with or pre-configured for that capture controller; and viewing the received content items specific to those contact(s), group(s) or destination(s), or viewing one or more pre-configured interfaces, optionally via status (e.g. …).
- Neeraj Jhanji teaches systems for "sharing current location information among users by using relationship information stored in a database, the method comprising: a) receiving data sent from a sender's communication device, the data containing self-declared location information indicating a physical location of the sender at the time the sender sent the data determined without the aid of automatic location determination technology; b) determining from the data the sender's identity and based on the sender's identity and the relationship information stored in the database, determining a plurality of users associated with the sender and who have agreed to receive messages about the sender, each of the plurality of users having a communication device; c) wherein the data sent from the sender's communication device does not contain an indication of contact information of said plurality of users; and d) sending a notification message to the communication devices of, among the users, only the determined users, the notification message containing the sender's self-declared location information.
- Snapchat™ enables users to apply geo-location-based emoji and customized emoji or photo filters, but none of these teaches generating and presenting a cartoon, emoji or avatar based on the user's auto-generated status, i.e. based on a plurality of factors including scanning or providing an image via the camera display screen and/or the user's voice and/or the user's and connected users' data (current location, date and time), and identification of the user's related activities, actions, events, entities and transactions.
- At present, photo applications enable the user to capture and share visual media in a plurality of ways. But the user has to start the camera application each time and start recording video each time, which takes time.
- The present invention suggests an always-on camera (when the user's intention to take visual media is recognized based on a particular type of pre-defined user eye gaze) and auto-started video (even if the user is not yet ready to frame the proper scene), and then enables the user to trim the unnecessary part, mark the start and end of one or more videos, capture one or more photos, and share them with all or one or more or preset or default or updated contacts and/or one or more types of destination(s) (making them one or more types of ephemeral or non-ephemeral and/or real-time viewing based on setting(s)), and/or record front camera video (for providing video commentary) simultaneously with back camera video during the parent video recording session.
- So, like the eye (always on and always recording views), the user can instantly and in real time view and simultaneously record one or more videos, capture one or more photos, provide commentary over the back camera video, share with one or more contacts, and make the result one or more types of ephemeral or non-ephemeral and/or real-time viewing.
- Mobile devices such as smartphones are used to generate messages.
- The messages may be text messages, photographs (with or without augmenting text) and videos. Users can share such messages with individuals in their social network. However, there is no mechanism for sharing messages with strangers that are participating in a common event.
- U.S. Patent No. 9,113,301 to Spiegel et al., titled "Geo-location based event gallery":
- A computer implemented method includes receiving a message and geo-location data for a device sending the message. It is determined whether the geo-location data corresponds to a geo-location fence associated with an event. The message is posted to an event gallery associated with the event when the geo-location data corresponds to the geo-location fence associated with the event.
- The event gallery is supplied in response to a request from a user.
- a computer implemented method comprising: receiving a message and geo-location data for a device sending the message, wherein the message includes a photograph or a video; determining whether the geo-location data corresponds to a geo-location fence associated with an event; supplying a destination list to the device in response to the geo-location data corresponding to the geo-location fence associated with the event, wherein the destination list includes a user selectable event gallery indicium associated with the event and a user selectable entry for an individual in a social network; adding a user of the device as a follower of the event in response to the event gallery indicium being selected by the user; and supplying an event gallery in response to a request from the user, wherein the event gallery includes a sequence of photographs or videos and wherein the event gallery is available for a specified transitory period.
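The geo-fence test recited in the claim above can be illustrated with a minimal sketch. The `EventGallery` class and the haversine-based circular fence below are purely illustrative assumptions, not the cited patent's actual implementation (real fences may be arbitrary polygons):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class EventGallery:
    """Collects messages whose sending device was inside the event's geo-fence."""
    def __init__(self, center, radius_km):
        self.center = center          # (lat, lon) of the event
        self.radius_km = radius_km    # fence radius (circular fence assumed)
        self.messages = []

    def try_post(self, message, device_location):
        lat, lon = device_location
        inside = haversine_km(*self.center, lat, lon) <= self.radius_km
        if inside:
            self.messages.append(message)  # post only when inside the fence
        return inside
```

A message sent from inside the fence is posted to the gallery; one sent from outside is rejected.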
- The present invention discloses a user created gallery or event, including providing a name, category, icon or image, schedule(s), and location or place information of the event or pre-defined characteristics or type of location (via SQL or natural query or wizard interface); defining participant member criteria or characteristics, including invited or added members from contact list(s), request-accepted members, or members selected via SQL or natural query or wizard; defining viewer criteria or characteristics; and defining the presentation or feed type.
- Based on auto starting, or manual starting by the creator or authorized user(s), and said provided settings and information, in the event of matching of said defined target criteria, including schedule(s) and/or location(s) and/or authorization matching with the user device's current location or type or category of location and/or the user device's current date & time and/or the user identity or type of user based on user data or user profile, the system presents one or more visual media capture controller controls or icons and/or labels on the user device display or camera display screen, enabling the user to be alerted and notified and to capture, record, store, preview, and auto send to the one or more galleries, events, folders or visual stories and/or one or more types of destination(s) and/or contact(s) and/or group(s) associated with said visual media capture controller control or icon and/or label.
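The criteria matching described above can be sketched as a simple predicate over schedule, location, and participant authorization. The event dictionary shape and function name below are invented for illustration only:

```python
from datetime import datetime

def should_show_capture_control(event, user_id, device_location, now):
    """Decide whether the event's visual media capture control is shown
    on this user's camera/display screen (illustrative sketch only)."""
    in_schedule = event["start"] <= now <= event["end"]      # schedule match
    in_place = device_location in event["allowed_places"]    # location match
    authorized = user_id in event["participants"]            # participant match
    return in_schedule and in_place and authorized
```

Only when all defined target criteria match is the capture control presented on the participant's device.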
- U.S. Patent No. 8,099,332 discloses methods that include the actions of receiving a touch input to access an application management interface on a mobile device; presenting an application management interface; receiving one or more inputs within the application management interface including an input to install a particular application; installing the selected application; and presenting the installed application.
- Augmented reality applications are available at application stores (e.g. Google Play Store™ or Apple App Store™), e.g. Pokemon Go™, Google Translate™, and Wikitude World Browser™. However, no dedicated search engine, platform, or client application is available for augmented reality application functions, features, controls (e.g. buttons), and interfaces.
- The present invention enables registered developers of the network to register, verify, make listing payment (if paid), upload or list with details, and make searchable one or more augmented reality applications, functions, features, controls (e.g. buttons), and interfaces; enables the user to download and install an augmented reality client application; and enables searching users, including advertisers of the network, to search, match, select (including from one or more types of categories), make payment (if paid), download, update, upgrade, install or access from the server, or select link(s) of, one or more augmented reality applications, functions, features, controls (e.g. buttons), and interfaces, to customize or configure them and associate them with a defined or created named publication or advertisement, and to provide publication criteria, including: object criteria (including object model(s)), so that when the user scans said object, said one or more augmented reality applications, functions, features, controls (e.g. buttons), and interfaces related to said user or advertiser or publisher are auto presented; and/or target audience criteria, so that when user data matches said target criteria, the one or more augmented reality applications, functions, features, controls (e.g. buttons), and interfaces are presented; and/or target location(s) or place(s) or defined location(s) based on structured query language (SQL), natural query, or a step-by-step wizard to define location type(s), categories & filters (e.g. all shops related to a particular brand, or all flower sellers; the system identifies these based on pre-stored categories, types, tags, keywords, taxonomy, and information associated with each location, place, point of interest, spot or location point on the map of the world or in databases or storage mediums of locations or places), so that when the user device's monitored current location matches or is near said target location or place, said one or more augmented reality applications, functions, features, controls (e.g. buttons), and interfaces are auto presented; and any combination thereof.
- The user or developer or advertiser or server 110 may be the publisher of one or more augmented reality applications, functions, features, controls (e.g. buttons), and interfaces.
- Yahoo Answers™ enables a user to post a question and get answers from users of the network in exchange for points.
- The present invention enables the user to post a structured or freeform requirement specification and get responses from matched actual customers, contacts of the user, experts, users of the network, and sellers.
- The system maintains logs of who saved the user's time, money, and energy and who provided the best price, best match, and best quality of matched products and services to the user.
- The present invention provides a user-to-user money saving (best price, quality, and matched products and services) platform.
- At present, a plurality of web sites, social networks, search engines, and applications, including chatting, instant messaging & communication applications, accumulate user data, including user associated keywords, based on the user's search queries, search result item selection or access, sharing of content, viewing of posts, subscribing to or following of users or sources and viewing of messages posted by followed users, exchanging of messages, and logging of user activities, status, locations, checked-in places, and the like. All these web sites and applications accumulate user related keywords indirectly or automatically (without user intervention or user mediated action, editing, acceptance, permission, or verification that particular keyword(s) is/are useful and actually related to the user), without directly asking the user to provide user associated keywords.
- The present invention enables the user to provide, search, match, select, add, identify, recognize, be notified to add, update & remove keywords, key phrases, categories, tags and associated relationships, including type of one or more entities, activities, actions, events, transactions, connections, status, interactions, locations, communication, sharing, participation, expression, senses & behavior, and associated information, structured information (e.g. selected one or more fields and provided one or more data types or type-specific values), metadata & system data, Boolean operators, natural queries, and Structured Query Language (SQL).
- For each user's related, associated, provided, accumulated, identified, recognized, ranked, and updated keywords, key phrases, tags, and hashtags, and one or more types of identified, updated, created or defined relationships and ontologies among or between said keywords, key phrases, tags, hashtags and associated categories, sub-categories, and taxonomy, the system can utilize said user contextual relational keywords for a plurality of purposes, including searching and matching user specific contextual visual media items and content items to the user, presenting contextual advertisements, and enabling 3rd party enterprise subscribers to access or data mine said users' contextual and relational keywords for one or more types of purposes based on user permission, privacy settings & preferences.
- The present invention relieves users from said manual process and enables sending a request to auto select, or select from a map, the nearest available or ranked visual media taking service provider; in the event of acceptance of the request, it enables both to find each other, enables the visual media taking service provider or photographer to capture, preview, cancel, retake, and auto send visual media of the requestor user or his/her group from the provider's or photographer's smartphone camera or camera device to said visual media taking service consumer's or tourist's or requestor user's device, and enables him/her to preview, cancel, and accept said visual media, including one or more photos or videos, send a request to retake or take more, enable both to finish the current photo taking or shooting service session, and provide ratings and reviews of each other.
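The "nearest available provider" selection above can be sketched as a ranking over available providers by distance from the requestor. The data shape and function name are hypothetical; a real system would use a spatial index rather than a linear scan:

```python
import math

def pick_nearest_provider(requestor_loc, providers):
    """Select the nearest available visual-media-taking service provider.
    `providers` maps provider id -> (lat, lon, available). Illustrative only."""
    def dist2(loc):
        # Squared equirectangular distance is sufficient for ranking nearby points.
        dlat = loc[0] - requestor_loc[0]
        dlon = (loc[1] - requestor_loc[1]) * math.cos(math.radians(requestor_loc[0]))
        return dlat * dlat + dlon * dlon
    available = {pid: (lat, lon) for pid, (lat, lon, ok) in providers.items() if ok}
    if not available:
        return None  # no provider currently available to accept the request
    return min(available, key=lambda pid: dist2(available[pid]))
```

Unavailable providers are filtered out before ranking, so a busy photographer is never matched.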
- The present invention enables the user to indicate free or available time (and, optionally, the types of contextual activities for which the user wants suggestions) and suggests (by the server based on match making, by the user's contacts, or by 3rd parties) contextual activities, including shopping, viewing a movie or drama, tours & packages, playing games, eating food, and visiting places, based on one or more types of user and connected users' data, including the duration and date & time of free or available time for conducting one or more activities and the type of activity.
- The present invention enables the user to utilize available time for conducting various activities in the best possible manner by suggesting or updating various activities from a plurality of sources.
- Twitter™ enables a user to post a tweet or message and makes said posted tweets or messages available to the user's followers in each follower's feed, and enables a user to follow others via search, by selecting one or more users from a directory, or from a user's profile page.
- Each user can post directly and has one feed where all tweets or messages from all followed users are presented. But due to this, each post of a user is presented in each follower's feed, and each follower receives each posted message from each followed user. So there is a great possibility that the user receives irrelevant tweets or messages from followed users.
- The present invention enables the user to create one or more types of feeds, e.g. personal, relatives specific, best friends specific, friends specific, interest specific, professional type specific, and news specific, and enables providing configuration settings, privacy settings, presentation settings and preferences for one or more or each feed, including allowing contacts to follow or subscribe to said particular type of feed(s), allowing all users, allowing invited users, allowing request-accepted requestors, or allowing one or more types of created feeds to be followed or subscribed to only by followers with pre-defined characteristics based on user data and user profile.
- The personal feed type allows following or subscribing only by the user's contacts.
- The news feed type allows following or subscribing by all users of the network.
- The professional feed type allows subscribing or following by connected users only, or only by users of the network with pre-defined types of characteristics.
- A following user can provide a scale to indicate how much content the user would like to receive from all followed users, from a particular followed user, or from a particular feed of particular followed user(s), and/or can also provide one or more keywords, categories, or hashtags appearing inside posted messages, so that the user receives only said keyword-, category-, or hashtag-specific messages from the followed user(s).
- In another embodiment, a searching user can provide a search query to search users and their related one or more types of feeds, select users and/or feeds from the search results, and follow all or selected feed types of one or more selected users; or provide a search query to search posted contents or messages of users of the network, select the source(s), user(s), or related feed type(s) associated with a posted message or content item, and follow the source(s), the related feed type(s), or the user's all or selected feed types from the search result item, from the user's profile page, from a categories directory item, from a suggested list, from a user created list, or from 3rd parties' web sites or applications.
- The follower receives posted messages or one or more types of content items or visual media items from the followed user(s)' followed feed type(s), presented under said category or feed type.
- Server 110 presents said posted message related to the "Sports" type feed of user [Y] in following user [A]'s "Sports" category tab, so the receiving user can view all followed "Sports" type feed related messages from all followed users in said "Sports" category tab.
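The feed-type routing in the example above can be sketched as follows. The `deliver_post` function and the follower data shape are illustrative assumptions, not the claimed server's actual design:

```python
def deliver_post(post, followers):
    """Route a post made under a named feed type (e.g. "Sports") to the
    same-named category tab of each follower of that feed type. Sketch only."""
    deliveries = {}
    for follower, followed_feeds in followers.items():
        if post["feed_type"] in followed_feeds:
            # The post appears under the follower's matching category tab,
            # so unrelated feed types never reach this follower.
            tabs = deliveries.setdefault(follower, {})
            tabs.setdefault(post["feed_type"], []).append(post["text"])
    return deliveries
```

A follower who does not follow the "Sports" feed type receives nothing, which is the irrelevant-message filtering the invention aims for.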
- The present invention also enables a group of users to post under one or more created and selected types of feeds, making the posts available to common followers of the group.
- Google Search Engine TM enables user to search based on one or more keywords and presents search query specific search results.
- Google Maps™ enables the user to search, navigate and select particular location(s), place(s) or point(s) of interest, or particular type or category specific location(s), place(s) or spot(s), on the map, and enables viewing information, user posted photos, reviews, and nearby locations, and finding routes and directions.
- At present, applications enable users to provide a user status (online, busy, offline, away, etc.), a manual status ("I am watching movie", "I am at gym", etc.) and a structured status (e.g. selecting one or more types of user activities or actions, such as watching, reading, etc.).
- At present, some applications identify the user device's current location and enable the user to share it with other users or connected users, or enable the user to manually check in to a place and make it available to, or share it with, one or more friends, contacts or connected users.
- At present, messaging applications enable the user to exchange messages. All these websites and applications either indirectly identify keywords in the user's exchanged messages and search query keywords, or directly identify them based on user status and location or place sharing, which are very limited.
- The present invention enables the user to input (auto-fill from suggested keywords) or select keywords, key phrases, categories, and associated relationships or ontology(ies) from an auto presented suggested list, based on the user's voice or talks; the user scanning or providing object criteria or object model(s) (selecting or taking visual media including photo(s) or video(s)); the user device(s)' monitored current location; the user provided status; the user's domain specific profile; structured information provided via structured forms, templates and wizards related to the user's activities, actions, events, transactions, interests, hobbies, likes, dislikes, preferences, privacy, requirement specifications, and search queries; and by selecting, applying and executing one or more rules to identify contextual keywords, key phrases, categories, and associated relationships or ontology(ies).
- The present invention enables the user to search and present a location on a map, navigate the map to select a location, or search an address to find a particular location or place on the map, and further enables the user to provide, for said location or place, one or more searching keywords and/or object criteria or object model(s) and/or preferences and/or filter criteria to search, match, select, identify, recognize and present visual media items in sequence, displaying the next item based on haptic contact engagement on the display, tapping on the presentation interface or presented visual media item, or auto advancing to the next visual media item after a pre-set interval period of time. So the user can view visual media items specific to the location or place and/or the supplied object criteria and/or filter criteria.
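The tap-or-timeout advancing behaviour described above can be sketched with a small viewer class. The class name and the tick-based timing model are assumptions for illustration; a real client would drive this from touch and timer callbacks:

```python
class SequentialViewer:
    """Presents matched visual media items one at a time; advances on a tap
    (haptic contact engagement) or when the pre-set display interval elapses."""
    def __init__(self, items, interval_s=5.0):
        self.items = list(items)
        self.interval_s = interval_s  # pre-set auto-advance interval
        self.index = 0

    def current(self):
        return self.items[self.index] if self.index < len(self.items) else None

    def tap(self):
        # Haptic contact engagement: advance immediately to the next item.
        self.index += 1
        return self.current()

    def tick(self, elapsed_s):
        # Auto-advance only once the pre-set interval has elapsed.
        if elapsed_s >= self.interval_s:
            self.index += 1
        return self.current()
```

Either interaction path (tap or timeout) yields the same sequential walk through the matched items.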
- The search engine searches and matches user generated and posted, or administrator generated or posted, visual media items or content items related to said particular keywords and presents them to the user sequentially.
- So the user can view visual media items from a plurality of angles, attributes, properties, ontologies, and characteristics, including reviews, presentations, videos, menus, prices, descriptions, catalogues, photos, blogs, comments, feedback, complaints, suggestions, how to use or access, how it is manufactured or made, questions and answers, customer interviews or opinions, video surveys, particular product or object model specific or type-of-product related visual media, designs, particular designed clothes or particular types of clothes worn by customers, user experience videos, learning and educational or entertainment visual media, interiors, management or marketing style, and tips, tricks & live marketing of various products or services.
- The user can search based on defined type(s) or characteristics of location(s) or place(s) via a structured query language (SQL), natural query, or wizard interface, e.g. "gardens of the world", and then provide one or more object criteria (e.g. a sample image or object model of a Passiflora flower) and/or keywords such as "how to plant"; the system then identifies all gardens with Passiflora flowers and presents visual media related to planting, posted by visitors or users of the network or by 3rd parties, who associated or provided one or more flower related ontology(ies), keywords, tags, hashtags, categories, and information.
- At present, video calling applications enable the user to select one or more contacts or group(s) and initiate or start a video call; in the event of acceptance of the call by the called user, video communication or talking with each other starts, and in the event of the end of the call, the video communication between the calling and called user(s) is terminated or closed.
- The user has to open the video call application for each video call, and for each video call the user has to search & select, or select, the contact(s) and/or group(s) to call.
- Each calling user has to wait for call acceptance by the callee or called user(s), and each time a user (caller or callee) has to end the video call to finish it; if the user wants to video talk again, the same process happens again.
- The present invention enables the user to provide a voice command to start a video talk with the voice command related contact, auto turn on the user's and called user's devices, auto open the application and the front camera video interface of the camera display screen on both the caller's and called user's devices, and enable them to start video talking and/or chatting and/or voice talk and/or sharing one or more types of media, including photos, videos, video streaming, files, text, blogs, links, emoticons, and edited or augmented or photo filter(s) applied photos or videos, with each other.
- An object of the present invention is to identify the user's intention to take a photo or video and automatically invoke, open and show the camera display screen, so the user can capture a photo or video without manually opening the camera application each time.
- An object of the present invention is to identify the user's intention to view media and show an interface to view media.
- An object of the present invention is to auto capture photos or auto record videos.
- An object of the present invention is to provide a single-mode visual media capture that alternately produces photographs and videos.
- An object of the present invention is to enable the sender or source to select, input, update, apply and configure one or more types of ephemeral or non-ephemeral content access, privacy and presentation settings for one or more types of one or more destination(s) or recipient(s) or contact(s).
- An object of the present invention is to enable the content receiving user or destination(s) to select, input, update, apply and configure one or more types of privacy settings, presentation settings and ephemeral or non-ephemeral settings for receiving contents, and for making contents ephemeral or non-ephemeral when received from one or more types of one or more source(s) or sender(s) or contact(s).
- Another important object of the present invention is to enable the user to provide, select, input, and apply one or more criteria, including one or more keywords, preferences, settings, metadata, structured fields (including age, gender, education, school, college, company, place, location, activity or action or transaction or event name or type, and category), one or more rules or rules from a rule base, conditions (including level of matching, similar, exact match, include, exclude), Boolean operators (including AND, OR, NOT, and phrases), object criteria (i.e. providing an image, object model, sample image, photo, pattern, structure, or model for matching an object inside a photo or video with captured or stored photos or videos), matching text criteria (e.g. keywords with text content), or matching voice with voice content, for identifying, matching, processing, merging, separating, searching, subscribing, generating, storing or saving, viewing, bookmarking, sequencing, serving, and presenting one or more types of feeds or stories or sets of sequenced identified media, including one or more types of media, photos, images, videos, voice, sound, text and the like.
- An object of the present invention is to enable the user to select, input, update, apply and configure privacy settings for allowing or not allowing other users to capture or record visual media related to the user.
- An object of the present invention is to enable the advertiser to create visual media advertisements with target criteria, including object criteria or a supplied object model or sample image and/or target audience criteria, for presenting with, integrating in, or embedding within visual media stories related to said recognized target object model inside the presented matched visual media items, for presentation to requesting, searching, viewing or subscriber users of the network.
- Another object of the present invention is to enable the sender of media to access media shared by the sender at the recipient device, including adding, removing, editing & updating the shared media at the recipient's device or application or gallery or folder.
- Another important object of the present invention is to enable accelerated display of ephemeral messages based on a sender provided view or display timer, as well as on one or more types of pre-defined user senses detected via one or more types of user device sensor(s).
- Another important object of the present invention is real-time starting of a session of displaying or broadcasting ephemeral messages.
- Another important object of the present invention is to provide various types of ephemeral stories, feeds, galleries or albums, including: viewing and completely scrolling a content item to remove or hide it from the presentation interface, or removing it after a pre-set wait duration; loading more, pulling to refresh, or auto refreshing (after a pre-set interval duration) to remove currently presented content and present the next set of contents (if any) in the presentation or user interface; enabling the sender to apply or pre-set a view duration for a set of content items, presenting said posted set of one or more content items to viewers or target or intended recipients for the pre-set timer duration, and in the event of expiry of the timer, removing the presented set of content items or visual media items and displaying the next set; enabling the sender to apply a view timer for each posted content item and present more than one content item, each with a different pre-set view duration, and in the event of expiry of a view duration, removing each expired content item and presenting a new content item; enabling the receiver to pre-set a number of views and removing a content item after said pre-set number of views; or enabling the viewing user to mark a content item as ephemeral.
- Another important object of the present invention is to enable mass user actions at a particular date & time for a pre-set period of time, during which the user can take, on presented one or more types of content (group deal, application details, advertisement, news, movie trailer, etc.), one or more types of action(s), including buying or participating in group deals, buying or ordering a product, subscribing to a service, viewing news or a movie trailer, listening to music, registering on a web site, confirming participation in an event, liking, providing comments, reviews, feedback, complaints, suggestions, answers, ideas & ratings, filling a survey form, viewing visual media or content items, and booking tickets.
- Another important object of the present invention is to provide a multi-tasking visual media capture controller control or label and/or icon or image, including selecting front or back camera mode; capturing a photo, or starting video recording and stopping upon a further tap on the icon, or recording a pre-set duration video, or stopping before the pre-set duration, or removing the pre-set duration limit and capturing as per user need; optionally previewing, or previewing for a pre-set duration; auto sending to said controller's associated or pre-configured contact(s) or group(s) or destination(s); and viewing said associated or pre-configured contact(s)-, group(s)- or destination(s)-specific received content items, or viewing one or more pre-configured interfaces, optionally via status.
- Another important object of the present invention is to enable multi-tab accelerated display of ephemeral messages: on switching of tabs, pausing the timer associated with each message presented on the current tab, presenting the set of content items related to the switched-to tab, and starting the timers associated with each presented message of the switched-to tab; in the event of expiry of a timer or haptic contact engagement, removing the ephemeral message and presenting the next one or more (if any) ephemeral messages.
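The per-tab pause/resume behaviour above can be sketched with per-message countdown timers that only run while their tab is active. The class name and data shape below are invented for illustration, not the claimed implementation:

```python
class TabbedEphemeralView:
    """Per-message countdown timers that pause when the user switches away
    from a tab and resume when the tab is re-selected; a message is removed
    once its remaining time reaches zero. Illustrative sketch only."""
    def __init__(self, tabs):
        # tabs: {tab_name: {message: remaining_seconds}}
        self.tabs = {t: dict(msgs) for t, msgs in tabs.items()}
        self.active = None

    def switch_to(self, tab):
        # Timers on the previously active tab are implicitly paused, because
        # advance() only decrements timers belonging to the active tab.
        self.active = tab

    def advance(self, seconds):
        msgs = self.tabs[self.active]
        for m in list(msgs):
            msgs[m] -= seconds
            if msgs[m] <= 0:
                del msgs[m]   # expired: remove the ephemeral message
        return sorted(msgs)   # messages still visible on the active tab
```

Time spent viewing one tab does not count against messages on other tabs, which is the pausing behaviour the object describes.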
- Another important object of the present invention is to auto identify, prepare, generate and present a user status (a description of the user's current activities, actions, events, transactions, expressions, feelings, senses, places, accompanying persons or user contacts, purpose, requirements, date & time, etc.) based on real-time user supplied updated data and pre-stored user or connected user data.
- Another important object of the present invention is to auto generate or identify and present one or more cartoons, emoji, avatars, emoticons, photo filters or images based on said auto identified, prepared and generated user status, based on real-time user supplied updated data and pre-stored user or connected user data.
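The status-to-emoji step above can be sketched as a lookup from auto-identified context to a suggested status and emoji. The mapping table and rule below are entirely invented for illustration; the invention's actual status generation draws on far richer user and connected-user data:

```python
def auto_status_emoji(location_type, hour, activity=None):
    """Map auto-identified user context to a suggested status and emoji.
    Hypothetical mapping table, not the patent's rules."""
    table = {
        "gym": ("Working out", "🏋"),
        "cinema": ("Watching a movie", "🎬"),
        "restaurant": ("Eating out", "🍽"),
    }
    # An explicit activity or a late-night home location overrides the table.
    if activity == "sleeping" or (location_type == "home" and hour >= 23):
        return ("Sleeping", "😴")
    return table.get(location_type, ("Available", "🙂"))
```

The returned pair could then drive the presented cartoon, avatar, or photo filter suggestion.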
- Another important object of the present invention is to provide an always-on and always-started parent video session (while the user's intention to take visual media is detected, e.g. holding the device to take visual media) and, during that parent video session, to enable the user to multi-task (utilizing the user's time), including marking the start (via trimming) and end of one or more videos by tapping anywhere on the display or on a particular icon, capturing photo(s), and sharing to one or more contacts (all during the parent video recording session), i.e. instant, real-time, ephemeral, same-time sharing, which utilizes the user's time and provides instant gratification.
- Another important object of the present invention is to enable the user to create a gallery, story, or location-, place- or defined geo-fence boundary-specific scheduled event, and to define and invite participants.
- Based on the event location, date & time and participant data, auto generated visual media capture & view controller(s) are presented on the display screen of each authorized participant member's device, enabling one-tap front or back camera capturing of one or more photos or recording of one or more videos; a preview interface is presented for previewing said visual media for a pre-set duration, within which the user can remove said previewed photo or video and/or change or select destination(s), or the media is auto sent to the pre-set destination(s) after expiry of said pre-set period of time.
- The admin of the gallery, story or album (the event creator) is enabled to update the event, start it manually or auto-start it at the scheduled period, invite, update, change, remove and define participants of the event, accept users' requests to participate in the event, define target viewers, select one or more types of one or more destinations, and provide or define one or more types of presentation settings.
- Another important object of the present invention is an augmented reality platform comprising a network, application, web site, server, device, storage medium, store, search engine and developer client application for registering a developer, making payment for membership as per payment mode or models (if paid), registering, verifying, making payment for listing as per payment mode or models (if paid), listing, uploading with details (description, categories, keywords, help, configuration, customization, setup & integration manual, payment details, modes & models (fixed, monthly subscription, pay per presentation, use or access, action, transaction etc.)) or otherwise making available for searching by users of the network one or more augmented reality applications, functions, controls (e.g. buttons), interfaces, one or more types of media, data, application programming interfaces (APIs), software development toolkits (SDKs), web services, objects and any combination thereof (packaged).
- An advertiser or merchant or publisher's client application enables searching, matching, viewing details, selecting, adding a link to a list for selection while creating a publication or advertisement, downloading, installing, making payment as per selected payment modes and models (if paid), updating, upgrading or accessing from server 110 or from 3rd-party servers; creating a publication or advertisement, including providing publisher or advertiser or user details, object criteria, schedules of publication or presentation, target audience criteria and target location criteria; and searching, matching, selecting, configuring, customizing, adding, updating, removing or associating and publishing said one or more augmented reality applications, functions, controls, interfaces, media, data, APIs, SDKs, web services, objects and any combination thereof (packaged) as per said target criteria, including object criteria, target audience criteria, target locations or places (selected, current location as location, or defined location via SQL or natural query or wizard interface) and schedules.
- A user client application auto-presents, or allows the user to search, match and select, said configured and published augmented reality applications, functions, controls, interfaces, media, data, APIs, SDKs, web services, objects and any combination thereof (packaged) as per said target criteria.
- Said auto-presenting based on object criteria includes enabling the user to scan object(s) which is/are recognized by the server based on object recognition, optical character recognition and face recognition technologies (identification and matching of said scanned object or identified object or text or face with the provided object criteria associated with advertisements or publications of a plurality of advertisers or publishers, or with visual media items at server 110) and auto-presenting the matched or contextual augmented reality applications, functions, controls, interfaces, media, data, APIs, SDKs, web services, objects and any combination thereof (packaged).
- system presents visual media items related to said product of said advertiser (e.g. shop, manufacturer, seller, merchant, brand at particular location etc.).
- The user client application enables object scanning and object, face or text recognition, identification and matching via object recognition, machine vision, optical character recognition and face recognition technologies, including 3rd-party SDKs (e.g. the Wikitude™ Augmented Reality SDK, open source augmented reality SDKs etc.), matched with object criteria and/or visual media items at server 110, as well as object tracking, 3D object tracking, 3D model rendering, location-based augmented reality, content augmentation, and objects, media or information overlays or presentation on the scanned view.
- Another important object of the present invention is to enable auto capture or recording of visual media, including photo or video reactions, on one or more viewed or currently-viewing visual media items, content items, news items or feed items received or presented from connected or other users or sources of the network, and to auto-post said user reaction photo or video below, or at a prominent place of, said presented visual media item or content item in the feed of all receiving or viewing users or one or more selected users (like the likes, dislikes or comments associated with a content item).
- Another important object of the present invention is to enable the user to post a requirement specification and receive responses from matched or contextual users who help the user find the best match in terms of budget, price, quality and availability, saving the user's time, money and energy via a user-to-user money-saving platform.
- Another important object of the present invention is to enable the user to navigate a map, including selecting from the world map a country, state, city, area, place and point, or searching a particular place, spot, POI or point, or accessing the map to search, match, identify and find a location, place, spot, point or Point of Interest (POI) and associated, nearest, located, situated, advertised, listed, marked or suggested one or more types of entities, including a mall, shop, person, product, item, building, road, tourist place, river, forest, garden, mountain, hotel, restaurant, exhibition, event, fair, conference, structure, station, market, vendor, temple, apartment, society or house, and one or more types of one or more addresses; after identifying a particular entity or item on the map, the user is enabled to provide, search, input, update, add, remove, re-arrange or select (including via auto fill-up or auto-suggestion) one or more keywords, key phrases and Boolean operators, and optionally select and apply one or more conditions, rules, preferences and settings for identifying, matching, searching and presenting visual media or content items which were generated from that particular location or place.
- Another important object of the present invention is to enable user-to-user provision and consumption of on-demand services, including visual media taking services or photography services.
- Another important object of the present invention is to suggest contextual activities based on user-provided or auto-identified date & time range(s) and the duration within which the user wants to do activities and needs suggestions from the server, experts, 3rd parties or user contacts, and based on one or more types of user data.
- The system continuously presents and updates suggested and alternative one or more types of contextual activities (activity items with details including description, name, brand, links, one or more types of user actions comprising book, view, refer, buy, get directions, share, like, order, read, listen, install, play, register, presentation, and media) as per one or more types of user timeline (free, available, wants to do a collaborative activity, has a particular duration of free time, wants to do an activity with family or selected friends, scheduled events, requires suggestions from actual users or contacts) and based on one or more types of user data.
- Another object of the present invention is to facilitate a user timeline, including identifying & storing the user's available timings, durations, date(s) & time range(s) or schedules, the user's calendar entries and user data; suggesting or presenting contents, various activities or prospective activity items that the user can do, from one or more sources including contextual users of the network, advertisers, marketers, sellers and service providers, based on user data including user profile, user preferences, interests, privacy settings, past activities, actions, events, transactions, status, updates, locations & check-in places, the rank of the prospective activity and the rank of the provider of the activity item; and facilitating the user in planning, sharing, executing & conducting one or more activities including booking tickets, booking rooms, purchasing products, subscribing to services, participating in group deals, and asking queries of other users of the network who have already experienced or conducted a particular activity.
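The timeline-driven suggestion step above can be sketched as filtering and ranking a catalog of prospective activity items against a free time slot. This is an illustrative sketch only; the field names (`minutes`, `category`, `rank`) and the ranking rule are assumptions, not the specification's match-making algorithm.

```python
def suggest_activities(free_minutes, interests, catalog):
    """Return catalog items that fit within the user's free slot and match
    one of the user's interests, best-ranked first."""
    fits = [a for a in catalog
            if a["minutes"] <= free_minutes and a["category"] in interests]
    return sorted(fits, key=lambda a: -a["rank"])

# Hypothetical activity catalog from advertisers / service providers.
catalog = [
    {"name": "movie", "minutes": 150, "category": "entertainment", "rank": 8},
    {"name": "coffee meetup", "minutes": 60, "category": "social", "rank": 6},
    {"name": "city tour", "minutes": 240, "category": "travel", "rank": 9},
]
```

A 2-hour free slot with social/travel interests would then surface only the coffee meetup, since the tour does not fit the slot.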
- Another object of the present invention is to continuously update the timeline-specific presentation of activity items based on updated user data.
- Another important object of the present invention is to enable the user to create one or more types of feeds, post messages to said selected one or more types of feeds, and make them available to followers of the posting user's selected feed types; and to enable users to search and select users via a search engine, a directory, a user's profile page, or 3rd parties' web sites, web pages, applications, interfaces and devices, and to follow user(s), i.e. follow each selected user's all or selected one or more types of feeds.
- The present invention enables the user to input (auto-filled from suggested keywords) or select keywords, key phrases, categories, and associated relationships or ontology(ies) from an auto-presented suggested list, based on the user's voice or talks, user scanning, provided object criteria or object model(s) (selecting or taking visual media including photo(s) or video(s)), the user device's monitored current location, user-provided status, the user's domain-specific profile, structured information provided via structured forms, templates and wizards related to the user's activities, actions, events, transactions, interests, hobbies, likes, dislikes, preferences, privacy, requirement specifications and search queries, and the selecting, applying and executing of one or more rules to identify contextual keywords, key phrases, categories, and associated relationships or ontology(ies).
- Another important object of the present invention is to enable the user to start, stop, re-start and stop a video talk based on voice command, face expression detection or voice detection, without each time having to switch the device ON, open the video calling or video communication application, select contact(s), make the call, wait for call acceptance by the called user(s), and end the call (by the caller or called user).
- the term "receiving" posted or shared contents & communication and any types of multimedia contents from a device or component includes receiving the shared or posted contents & communication and any types of multimedia contents indirectly, such as when forwarded by one or more other devices or components.
- “sending" shared contents & communication and any types of multimedia contents to a device or component includes sending the shared contents & communication and any types of multimedia contents indirectly, such as when forwarded by one or more other devices or components.
- client application refers to an application that runs on a client computing device.
- a client application may be written in one or more of a variety of languages, such as C#, J2ME, Java, ASP.Net, VB.Net and the like. Browsers, email clients, text messaging clients, calendars, and games are examples of client applications.
- a mobile client application refers to a client application that runs on a mobile device.
- network application refers to a computer-based application that communicates, directly or indirectly, with at least one other component across a network.
- Web sites, email servers, messaging servers, and game servers are examples of network applications.
- The present invention identifies the user's intention to take a photo or video based on one or more types of eye tracking system, detecting that the user wants to start the camera display screen to capture a photo, record a video, or invoke & access the camera display screen, and based on that automatically opens the camera display screen, application or interface without the user needing to open it manually each time he or she wants to capture a photo, record video/voice or access the device camera.
- The eye tracking system identifies eye positions and movement by eye tracking, measuring the point of gaze to identify the eye position (e.g. looking straight at the screen to take a photo or video), and automatically invokes, opens and shows the camera display screen, so the user is enabled to capture a photo or video without manually opening the camera application each time.
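The gating decision above can be sketched as a small predicate. The gaze signal and device pose are hypothetical inputs that a real eye tracker and motion sensors would supply, and the 800 ms dwell threshold is an assumed value chosen so that brief glances do not trigger the camera.

```python
def should_open_camera(gaze_on_screen, gaze_duration_ms, device_upright,
                       dwell_threshold_ms=800):
    """Auto-open the camera only after a sustained gaze while the device is
    held in a capture pose; any missing condition keeps the camera closed."""
    return (gaze_on_screen
            and device_upright
            and gaze_duration_ms >= dwell_threshold_ms)
```

In a real client this predicate would be evaluated continuously against streaming eye-tracker and sensor readings, invoking the camera screen on the first True result.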
- The present invention also identifies the user's intention to take a photo or video based on one or more types of sensors, by identifying the device position.
- Based on one or more types of eye tracking system and/or sensors, the present invention also detects or identifies the user's intention to view received or shared contents or one or more types of media, including photos, videos or posts from one or more contacts or sources; e.g. the eye tracking system identifies eye positions and movement by eye tracking, measuring the point of gaze to identify the eye position, and the proximity sensor identifies the distance of the user device (e.g. device in hand in viewing position), so that based on these the system identifies the user's intention to read or view media.
- The present invention provides single-mode capturing of a photo or video, where the stabilization threshold and the receiving of haptic contact engagement (a tap) on the single-mode input icon determine whether a photograph or a video will be recorded.
- A stabilization parameter is compared to a stabilization threshold. If the stabilization parameter is greater than or equal to the stabilization threshold, then in response to haptic contact engagement a photo is captured and stored. If the stabilization parameter is less than the stabilization threshold, then in response to haptic contact engagement video recording starts and a timer starts; in an embodiment, in the event of expiration of the pre-set timer, the video is stopped and stored and the timer is stopped or re-initiated.
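The single-mode tap decision above can be sketched as follows. The normalized stability score and its threshold value are assumptions; a real implementation would derive stability from accelerometer/gyroscope data over a short window.

```python
STABILIZATION_THRESHOLD = 0.8  # assumed normalized stability score in [0, 1]

def on_haptic_engagement(stability, video_timer_seconds=10):
    """A tap captures a photo when the device is stable enough; otherwise it
    starts a timed video recording that auto-stops when the timer expires."""
    if stability >= STABILIZATION_THRESHOLD:
        return {"action": "capture_photo"}
    return {"action": "start_video", "timer_seconds": video_timer_seconds}
```

The single icon therefore never needs a mode switch: the same tap yields a photo on a steady device and a video on a moving one.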
- present invention enables sender user and receiving user to select, input, apply, and set one or more types of ephemeral and non-ephemeral settings for one or more contacts or senders or recipients or sources or destinations for sending or receiving of contents.
- The present invention enables the user to provide object criteria, i.e. a model of an object or a sample image or sample photo, together with one or more criteria, conditions, rules, preferences or settings; based on the provided object model or sample image and said criteria, conditions, rules, preferences or settings, the system identifies, recognizes, tracks or matches one or more matched objects, or full or partial images, inside captured, presented or live photos or videos, merges all identified photos or videos and presents them to the user.
- For example, the user can provide an object model or sample image of "coffee" and/or provide the "coffee" keyword and/or provide the location "Mumbai" for searching all coffee-related photos and videos, by identifying or recognizing the "coffee" object inside each photo or video, matching said provided "coffee" object or sample image with the identified "coffee" object or image inside the captured, selected or live photo or video, and processing, merging, separating or sequencing all identified photos and videos and presenting them to the searching, requesting, recipient or targeted user.
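The "coffee in Mumbai" style query can be sketched as below. Media items are assumed to already carry server-side recognition results (`detected_objects`), tags and a capture location; the filtering stands in for the real object-recognition and matching pipeline.

```python
def search_media(items, object_label=None, keyword=None, location=None):
    """Return media items whose recognized objects, tags and capture
    location satisfy every provided criterion (None means 'any')."""
    results = []
    for item in items:
        if object_label and object_label not in item.get("detected_objects", []):
            continue
        if keyword and keyword not in item.get("tags", []):
            continue
        if location and item.get("location") != location:
            continue
        results.append(item)
    return results

# Hypothetical stored media items with recognition metadata.
items = [
    {"id": 1, "detected_objects": ["coffee", "cup"], "tags": ["coffee"], "location": "Mumbai"},
    {"id": 2, "detected_objects": ["pizza"], "tags": ["food"], "location": "Mumbai"},
    {"id": 3, "detected_objects": ["coffee"], "tags": ["coffee"], "location": "Delhi"},
]
```

The matched items would then be merged or sequenced into a single presentation, as the passage describes.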
- The present invention enables the user to allow or not allow all or selected one or more users, and/or all or selected one or more users at particular pre-defined location(s) or place(s) or pre-defined geo-fence boundaries, and/or at particular pre-set schedule(s), to capture the user's photo or record video; and to allow or not allow the capturing of photos or recording of videos at particular selected location(s) or place(s) or within pre-defined geo-fence boundaries, and/or at particular pre-set schedule(s), and/or for all or one or more selected users (phone contacts, social contacts, clients, customers, guests, subscribers, attendees, ticket holders etc.) or one or more types of pre-defined users or users with pre-defined characteristics.
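One such allow/deny rule (user list, circular geo-fence, schedule window) can be evaluated as sketched below. The rule schema and the haversine-based fence test are illustrative assumptions, not the specification's data model.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def may_capture(requester, rule, now, lat, lon):
    """A requester may capture only if every constraint in the rule holds:
    allowed-user list, geo-fence membership, and schedule window."""
    if rule.get("users") and requester not in rule["users"]:
        return False
    fence = rule.get("geofence")
    if fence and distance_km(lat, lon, fence["lat"], fence["lon"]) > fence["radius_km"]:
        return False
    sched = rule.get("schedule")
    if sched and not (sched["start"] <= now <= sched["end"]):
        return False
    return rule.get("allow", True)

# Hypothetical rule: only "bob", inside a 1 km fence, during event hours.
rule = {
    "users": {"bob"},
    "geofence": {"lat": 19.07, "lon": 72.87, "radius_km": 1.0},
    "schedule": {"start": datetime(2017, 12, 1, 9), "end": datetime(2017, 12, 1, 18)},
    "allow": True,
}
```

A production system would evaluate a list of such rules, with explicit deny rules taking precedence.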
- The present invention enables the user to provide, upload, set or apply an image (or an image of part of) a particular object, item, face, brand, logo, thing or product, or an object model, and/or provide textual description, keywords, tags, metadata, structured fields and associated values, templates, samples & requirement specification, and/or provide one or more conditions including similar, exact match, partial match, include & exclude, Boolean operators including AND/OR/NOT/+/-/phrases, and rules.
- Based on the provided object model, object type, metadata, object criteria & conditions, the server identifies, matches and recognizes photos or videos stored by the server or accessed by the server from one or more sources, databases, networks, applications, devices or storage mediums, and presents them to users, wherein the presented media includes series, collections, groups or slide shows of photos, or merged, sequenced or collected videos or live streams.
- For example, a plurality of merchants can upload videos of available products and/or associated details, which the server stores in the server database.
- A searching user is enabled to provide, input, select or upload one or more image(s) or an object model of a particular object or product, e.g. "mobile device", and provide a particular location or place name as the search query. Based on said search query, the server matches or identifies said object (e.g. the mobile device) against objects recognized, detected or identified inside videos and/or photos and/or live streams and/or one or more types of media stored or accessed from one or more sources, and presents the identified, searched or matched videos and/or photos and/or live streams and/or media, individually or merged or as a series, to the searching user, contextual user, requesting user or user(s) of the network.
- The present invention teaches various embodiments related to ephemeral contents, including a rule-based ephemeral message system, enabling a sender to post content or media including photos, and to add, update, edit or delete one or more content items including photos or videos from one or more recipient devices, to which the sender sends or adds based on the connection with the recipient and/or the recipient's privacy settings.
- an electronic device comprises a display and an ephemeral message controller to present on the display an ephemeral message for a transitory period of time.
- a sensor controller identifies user sense on the display or application or device during the transitory period of time.
- the ephemeral message controller terminates the ephemeral message in response to the receiving of one or more type of identified user sense via one or more type of sensors.
- The present invention also teaches multi-tab presentation of ephemeral messages: on switching of a tab, the view timer associated with each message presented on the current tab is paused, the set of content items related to the switched tab is presented, and the timers associated with each presented message of the switched tab are started; in the event of expiry of a timer or haptic contact engagement, the ephemeral message is removed and the next one or more ephemeral messages (if any) are presented.
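The per-message view timer that pauses when its tab loses focus and resumes on return can be sketched as below. Time is passed in explicitly so the logic is deterministic; a real UI would read a monotonic clock instead.

```python
class ViewTimer:
    """Countdown for one ephemeral message; pausable across tab switches."""

    def __init__(self, duration_s):
        self.remaining = float(duration_s)
        self.started_at = None  # None while paused

    def start(self, now):
        # Tab gained focus: begin (or resume) counting down.
        self.started_at = now

    def pause(self, now):
        # Tab lost focus: bank the elapsed time, stop counting.
        if self.started_at is not None:
            self.remaining -= now - self.started_at
            self.started_at = None

    def expired(self, now):
        # When expired, the UI removes the message and shows the next one.
        if self.started_at is None:
            return self.remaining <= 0
        return self.remaining - (now - self.started_at) <= 0
```

Each message on a tab owns one such timer; switching tabs calls `pause` on the outgoing tab's timers and `start` on the incoming tab's.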
- present invention enables real-time or maximum possible near real-time sharing and/or viewing and/or reacting of/on posted or broadcasted or sent one or more types of one or more content items or visual media items or news items or posts or ephemeral message(s).
- The present invention discloses or teaches various types of ephemeral stories, feeds, galleries or albums, including: view and completely scroll a content item to remove or hide it from the presentation interface, or remove it after a pre-set wait duration; load more, pull to refresh, or auto refresh (after a pre-set interval duration) to remove the currently presented content and present the next set of contents (if any) in the presentation or user interface; enable the sender to apply or pre-set a view duration for a set of content items, present said posted set of one or more content items to viewers or target or intended recipients for the pre-set duration of the timer, and in the event of expiry of the timer remove the presented set of content items or visual media items and display the next set; enable the sender to apply a view timer for each posted content item, present more than one content item each having a different pre-set view duration, and in the event of expiry of a view duration remove each expired content item and present a new one; enable the receiver to pre-set a number of views and remove the item after said pre-set number of views; or enable the viewing user to mark a content item as ephemeral.
- The present invention enables one or more types of mass user action(s), including participating in mass deals, buying or ordering, viewing, liking & reviewing a movie trailer released for the first time, downloading, installing & registering an application, listening to first-time-launched music, buying curated products, buying latest technology products, subscribing to services, liking and commenting on a movie song or story, buying books, and viewing the latest or top trending news, tweet, advertisement or announcement at a specified date & time for a particular pre-set duration with a customized offer, e.g. get points, get discounts, get free samples, view a first-time movie trailer, refer friends and get more discounts (like chain marketing). This mainly enables curated, verified or picked-up but less-known or first-time-launched products, services and mobile applications to get mass-level traction and sales or bookings.
- The user can get preference-based or user-data-specific contextual push notifications or indications, or can directly view from the application date & time specific presented content (advertisements, news, group deals, trailers, music, customized offers etc.) with one or more types of mass action (install, sign up for mass or group deals, register, view trailer, fill survey form, like product, subscribe to service etc.).
- The present invention enables a multi-tasking visual media capture controller control or label and/or icon or image, including: selecting front or back camera mode; capturing a photo, or starting video recording and stopping it on a further tap on the icon, or recording a pre-set-duration video, or stopping before the pre-set duration, or removing the pre-set duration and capturing as per user need; optionally previewing, or previewing for a pre-set duration; auto-sending to the contact(s), group(s) or destination(s) associated with or pre-configured for said visual media capture controller control or label and/or icon or image; and viewing the content items received for said associated or pre-configured contact(s), group(s) or destination(s), or viewing one or more pre-configured interfaces, optionally via status.
- The present invention enables auto-identifying, preparing, generating and presenting the user's status based on user-supplied data including an image (via scanning, capturing or selecting), the user's voice, and/or user-related data including the user device's current location or place and the user profile (age, gender, various dates & times etc.).
- The present invention enables generating and presenting a cartoon, emoji or avatar based on the auto-generated user's status, i.e. based on a plurality of factors including scanning or providing an image via the camera display screen, and/or the user's voice, and/or user and connected users' data (current location, date & time), and the identification of the user's related activities, actions, events, entities and transactions.
- The present invention suggests an always-on camera (when the user's intention to take visual media is recognized based on a particular type of pre-defined user eye gaze) and auto-starts video (even if the user is not ready to take the proper scene), and then enables the user to trim it.
- So, like an eye (always ON and always recording views), the user can instantly, in real time, view and simultaneously record one or more videos and capture one or more photos, provide commentary with the back camera video, share with one or more contacts, and make it one or more types of ephemeral or non-ephemeral and/or real-time viewing.
- The present invention discloses a user-created gallery or event, including providing a name, category, icon or image, schedule(s), location or place information of the event or pre-defined characteristics or type of location (via SQL or natural query or wizard interface), defining participant member criteria or characteristics (including invited or added members from contact list(s), request-accepted members, or members based on SQL or natural query or wizard), defining viewer criteria or characteristics, and defining the presentation or feed type.
- Based on being auto-started, or manually started by the creator or authorized user(s), and on said provided settings and information, in the event of a match of said defined target criteria, including schedule(s) and/or location(s) and/or authorization, with the user device's current location or type or category of location and/or the user device's current date & time and/or the user's identity or type (based on user data or user profile), the system presents one or more visual media capture controller controls or icons and/or labels on the user device display or camera display screen, enabling the user in alerting, notifying, capturing, recording, storing, previewing, and auto-sending to the one or more galleries, events, folders or visual stories and/or one or more types of destination(s) and/or contact(s) and/or group(s) associated with the visual media capture controller control or icon and/or label.
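The matching step above (which event-specific capture controllers to show on a participant's camera screen) can be sketched as follows. The field names and the place-category check are assumptions used only to illustrate the schedule/location/participant matching.

```python
from datetime import datetime

def controllers_to_present(events, user, now, place_category):
    """Return labels of events whose schedule window, place type and
    participant list all match the device's current context."""
    shown = []
    for ev in events:
        if not (ev["start"] <= now <= ev["end"]):
            continue  # outside the event's scheduled period
        if ev.get("place_category") and ev["place_category"] != place_category:
            continue  # device is not at a matching type of place
        if ev.get("participants") and user not in ev["participants"]:
            continue  # user is not an authorized participant
        shown.append(ev["label"])
    return shown

# Hypothetical events created by gallery/event admins.
events = [
    {"label": "Wedding", "start": datetime(2017, 12, 9, 18), "end": datetime(2017, 12, 9, 23),
     "place_category": "banquet_hall", "participants": {"alice", "bob"}},
    {"label": "Conference", "start": datetime(2017, 12, 10, 9), "end": datetime(2017, 12, 10, 17),
     "place_category": "convention_center", "participants": {"alice"}},
]
```

Each returned label would be rendered as a one-tap capture icon whose output auto-routes to that event's gallery or story.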
- The present invention enables registered developers of the network to register, verify, make listing payment (if paid), upload or list with details, and make searchable one or more augmented reality applications, functions, features, controls (e.g. buttons) and interfaces; enables the user to download and install an augmented reality client application; and enables searching users, including advertisers of the network, to search, match, select (including from one or more types of categories), make payment (if paid), download, update, upgrade, install or access from the server, or select link(s) of, one or more augmented reality applications, functions, features, controls (e.g. buttons) and interfaces, and to customize or configure them and associate them with a defined or created named publication or advertisement together with target criteria (e.g. all shops related to a particular brand, or all flower sellers).
- The system identifies matches based on pre-stored categories, types, tags, keywords, taxonomy and associated information associated with each location, place, point of interest, spot or location point on the map of the world, or in databases or storage mediums of locations or places, so that when the user device's monitored current location matches or is near said target location or place, it auto-presents said one or more augmented reality applications, functions, features, controls (e.g. buttons), interfaces and any combination thereof.
- user or developer or advertiser or server 110 may be publisher of one or more augmented reality applications, functions, features, controls (e.g. button), and interfaces.
- present invention enables user to post structured or freeform requirement specification and get response from matched actual customers, contacts of user, experts, users of network and sellers.
- The system maintains logs of who saved the user's time, money and energy and provided the best price, best match and best quality of matched products and services to the user.
- The present invention thereby provides a user-to-user money-saving platform (best price, quality and matched products and services).
- The present invention enables auto-recording of a user's reaction to a viewed or currently-viewing content item, visual media item or message, in visual media format including photo or video, and auto-posting said auto-recorded photo or video reaction for viewing by all viewers or recipients at a prominent place, e.g. below said post, content item or visual media item.
- The present invention enables the user to provide, search, match, select, add, identify, recognize, be notified to add, update & remove keywords, key phrases, categories, tags and associated relationships, including types of one or more entities, activities, actions, events, transactions, connections, statuses, interactions, locations, communications, sharing, participation, expressions, senses & behavior, and associated information, structured information (e.g. one or more selected fields and one or more provided data-type-specific values), metadata & system data, Boolean operators, natural queries, and Structured Query Language (SQL).
- For each user's related, associated, provided, accumulated, identified, recognized, ranked and updated keywords, key phrases, tags and hashtags, and the one or more identified, updated, created or defined relationships and ontologies among them and their associated categories, sub-categories and taxonomy, the system can utilize said user contextual relational keywords for a plurality of purposes, including searching and matching user-specific contextual visual media items and content items to the user, presenting contextual advertisements, and enabling 3rd-party enterprise subscribers to access or data-mine said users' contextual and relational keywords for one or more types of purposes based on user permission, privacy settings & preferences.
- The present invention relieves users from said manual process by enabling them to send a request to auto-select, or select from a map, the nearest available or ranked visual-media-taking service provider. In the event of acceptance of the request, both parties can find each other; the service provider or photographer can capture, preview, cancel, retake, and auto-send visual media of the requestor user or his/her group from the provider's smartphone camera or camera device to the requestor's (the visual-media-taking service consumer's or tourist's) device; the requestor can preview, cancel, and accept said visual media (one or more photos or videos) or send a request to retake or take more; and both can finish the current photo-taking or shooting service session and provide ratings and reviews of each other.
- The present invention identifies a user's free or available time or date & time range(s) (i.e. when the user wants suggestions for types of contextual activities) and suggests (by the server based on matchmaking, by the user's contacts, or by 3rd parties) contextual activities, including shopping, viewing a movie or drama, tours & packages, playing games, eating food, and visiting places, based on one or more types of user and connected users' data, including the duration and date & time of free or available time for conducting one or more activities and the type of activity.
- The present invention enables a user to create one or more types of feeds, e.g. personal, relatives-specific, best-friends-specific, friends-specific, interest-specific, professional-type-specific, and news-specific, and to provide configuration settings, privacy settings, presentation settings, and preferences for one or more or each feed, including allowing contacts to follow or subscribe to said particular type of feed(s); allowing all users, allowing invited users, or allowing requestors whose requests were accepted; or allowing only followers with pre-defined characteristics, based on usage data and user profile, to follow or subscribe to one or more types of created feeds.
- The personal feed type allows following or subscribing only by the user's contacts.
- The news feed type allows following or subscribing by all users of the network.
- A user can make a particular feed real-time only, i.e. the receiving user must accept a push notification to view said message within a pre-set duration, else the receiving or following user is unable to view said message.
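The real-time-only behaviour described above can be sketched as a simple visibility check; the window length and field names below are illustrative assumptions, not part of the specification:

```python
VIEW_WINDOW_SECONDS = 60  # illustrative pre-set duration

def is_viewable(posted_at: float, accepted_push: bool, now: float) -> bool:
    """A message in a real-time-only feed is viewable only if the follower
    accepted the push notification and views it within the pre-set window."""
    if not accepted_push:
        return False
    return (now - posted_at) <= VIEW_WINDOW_SECONDS
```

A follower who ignores the push notification, or opens it after the window expires, never sees the message.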
- A following user can provide a scale to indicate how much content the user likes to receive from all or a particular followed user, or from a particular feed of particular followed user(s), and/or also provide one or more keywords, categories, or hashtags so that the user receives only those keyword-, category-, or hashtag-specific messages from followed user(s).
- In another embodiment, a searching user can provide a search query to search users and their related feed types, select users and/or feed types from the search results, and follow all or selected feed types of one or more selected users; or provide a search query to search posted contents or messages of users of the network, select the source(s), user(s), or related feed type(s) associated with the posted messages or content items, and follow them. A user's feed types can be followed from a search result item, from the user's profile page, from a categories directory item, from a suggested list, from a user-created list, or from 3rd parties' websites or applications.
- A follower receives posted messages, or one or more types of content items or visual media items, from the followed user(s)' followed feed type(s), presented within the corresponding category or feed-type view.
- For example, server 110 presents a posted message related to the "Sports" type feed of user [Y] in following user [A]'s "Sports" category tab, so the receiving user can view all followed "Sports" type feed messages from all followed users in said "Sports" category tab.
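The feed-type routing in the "Sports" example can be sketched as follows; all identifiers and data structures are illustrative, not the server's actual implementation:

```python
from collections import defaultdict

# follower -> set of (followed_user, feed_type) subscriptions
subscriptions = {
    "A": {("Y", "Sports"), ("Z", "Sports"), ("Y", "News")},
}

def route_post(author: str, feed_type: str, message: str, inboxes: dict) -> None:
    """Deliver a post made under a feed type to every follower of that
    (author, feed_type) pair, filed under the feed-type category tab."""
    for follower, subs in subscriptions.items():
        if (author, feed_type) in subs:
            inboxes[follower][feed_type].append((author, message))

inboxes = {"A": defaultdict(list)}
route_post("Y", "Sports", "match highlights", inboxes)
route_post("Z", "Sports", "score update", inboxes)
route_post("Y", "Food", "recipe", inboxes)  # feed not followed: not delivered
```

All "Sports" messages from all followed users land in follower [A]'s single "Sports" tab, as described above.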
- The present invention also enables a group of users to post under one or more created and selected types of feeds, making them available to common followers of the group.
- The present invention enables a user to input (auto-filled from suggested keywords) or select keywords, key phrases, categories, and associated relationships or ontology(ies) from an auto-presented suggestion list, based on: the user's voice or talk; the user scanning or providing object criteria or object model(s) (selecting or taking visual media including photo(s) or video(s)); the user device(s)' monitored current location; the user-provided status; the user's domain-specific profile; structured information provided via structured forms, templates, and wizards related to the user's activities, actions, events, transactions, interests, hobbies, likes, dislikes, preferences, privacy, requirement specifications, and search queries; and selecting, applying, and executing one or more rules to identify contextual keywords, key phrases, categories, and associated relationships or ontology(ies).
- The present invention enables a user to search for and present a location on a map, navigate the map to select a location, or search an address to find a particular location or place on the map, and further enables the user to provide one or more searching keywords and/or object criteria or object model(s) and/or preferences and/or filter criteria associated with said location or place, to search, match, select, identify, recognize, and present visual media items in sequence, displaying the next item on haptic contact engagement with the display, on tapping the presentation interface or the presented visual media item, or by auto-advancing to the next visual media item after a pre-set interval of time. So the user can view visual media items specific to the location or place and/or the supplied object criteria and/or filter criteria.
- The search engine searches and matches user-generated-and-posted or conference-administrator-generated-or-posted visual media items or content items related to said particular keywords and presents them to the user sequentially.
- So the user can view visual media items from a plurality of angles, attributes, properties, ontologies, and characteristics, including reviews, presentations, videos, menus, prices, descriptions, catalogues, photos, blogs, comments, feedback, complaints, suggestions, how a product is used, accessed, or made, manufacturing processes, questions and answers, customer interviews or opinions, video surveys, visual media specific to a particular product, object model, or product type, designs, particular designed clothes or types of clothes worn by customers, user-experience videos, learning, educational, or entertainment visual media, interiors, management or marketing style, tips, tricks & live marketing of various products or services.
- A user can search based on defined type(s) or characteristics of location(s) or place(s) via Structured Query Language (SQL), natural query, or a wizard interface, e.g. "gardens of the world", and then provide one or more object criteria (e.g. a sample image or object model of a Passiflora flower) and/or keywords such as "how to plant". The system then identifies all gardens with Passiflora flowers and presents visual media related to planting, posted by visitors or users of the network or by 3rd parties who associated or provided one or more flower-related ontology(ies), keywords, tags, hashtags, categories, and information.
- The present invention enables a user to issue a voice command to start a video talk with the contact named in the command: it auto-powers ON the user's and the called user's devices, auto-opens the application and the front-camera video interface on both the caller's and the called user's devices, and enables them to start video talking and/or chatting and/or voice talk and/or sharing one or more types of media, including photos, videos, video streaming, files, text, blogs, links, emoticons, and edited, augmented, or photo-filter-applied photos or videos, with each other.
- The video interface is closed or hidden and the user's device turned OFF afterwards, and is started again in the event of receiving a voice command instructing the start of a video talk with particular contact(s).
- The user doesn't have to, for each video call, open the device, open the application, select contacts, place the call, wait for call acceptance by the called user, and end the call; and in the event of further talk the user doesn't have to follow the same process again each time.
- Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of an eye relative to the head.
- An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in psycholinguistics, marketing, as an input device for human-computer interaction, and in product design. There are a number of methods for measuring eye movement. The most popular variant uses video images from which the eye position is extracted. Other methods use search coils or are based on the electrooculogram.
- Eye trackers measure rotations of the eye in one of several ways, but principally they fall into three categories: (i) measurement of the movement of an object (normally, a special contact lens) attached to the eye, (ii) optical tracking without direct contact to the eye, and (iii) measurement of electric potentials using electrodes placed around the eyes.
- Eye-attached tracking The first type uses an attachment to the eye, such as a special contact lens with an embedded mirror or magnetic field sensor, and the movement of the attachment is measured with the assumption that it does not slip significantly as the eye rotates. Measurements with tight fitting contact lenses have provided extremely sensitive recordings of eye movement, and magnetic search coils are the method of choice for researchers studying the dynamics and underlying physiology of eye movement. It allows the measurement of eye movement in horizontal, vertical and torsion directions.
- Optical tracking: in an eye-tracking head-mounted display, each eye has an LED light source (gold-colored metal) on the side of the display lens, and a camera under the display lens.
- The second broad category uses some non-contact, optical method for measuring eye motion. Light, typically infrared, is reflected from the eye and sensed by a video camera or some other specially designed optical sensor. The information is then analyzed to extract eye rotation from changes in reflections. Video-based eye trackers typically use the corneal reflection (the first Purkinje image) and the center of the pupil as features to track over time.
- a more sensitive type of eye tracker uses reflections from the front of the cornea (first Purkinje image) and the back of the lens (fourth Purkinje image) as features to track.
- a still more sensitive method of tracking is to image features from inside the eye, such as the retinal blood vessels, and follow these features as the eye rotates.
- Optical methods, particularly those based on video recording, are widely used for gaze tracking and are favored for being non-invasive and inexpensive.
- the most widely used current designs are video-based eye trackers.
- a camera focuses on one or both eyes and records their movement as the viewer looks at some kind of stimulus.
- Most modern eye-trackers use the center of the pupil and infrared / near-infrared non-collimated light to create corneal reflections (CR).
- the vector between the pupil center and the corneal reflections can be used to compute the point of regard on surface or the gaze direction.
- a simple calibration procedure of the individual is usually needed before using the eye tracker.
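The pupil-center/corneal-reflection approach above can be sketched as a simple per-user calibration, assuming the point of regard is roughly linear in the pupil-to-reflection vector; all names and numbers below are illustrative:

```python
def calibrate_axis(v1, s1, v2, s2):
    """Fit screen = a*v + b from two calibration samples
    (pupil-minus-reflection vector component v, known screen coordinate s)."""
    a = (s2 - s1) / (v2 - v1)
    b = s1 - a * v1
    return a, b

def gaze_point(pupil, reflection, cal_x, cal_y):
    """Estimate the point of regard from the vector between the pupil
    center and the corneal reflection, using per-user calibration."""
    vx = pupil[0] - reflection[0]
    vy = pupil[1] - reflection[1]
    ax, bx = cal_x
    ay, by = cal_y
    return (ax * vx + bx, ay * vy + by)

# Calibration: the user fixates two known screen targets per axis.
cal_x = calibrate_axis(-2.0, 0.0, 2.0, 1920.0)  # vector x -> screen x
cal_y = calibrate_axis(-1.5, 0.0, 1.5, 1080.0)  # vector y -> screen y
```

Real trackers fit richer (often polynomial) mappings from more calibration points, but the principle is the same.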
- The proximity sensor is common on most smartphones that have a touchscreen, because the primary function of a proximity sensor is to disable accidental touch events. The most common scenario is the ear coming in contact with the screen and generating touch events while on a call.
- The proximity sensor is interrupt-based (not polling): a proximity event is delivered only when the proximity changes (either NEAR to FAR or FAR to NEAR).
- The gyroscope sensor helps in identifying the rate of rotation around the x, y, and z axes; it is needed in VR (virtual reality). The accelerometer sensor identifies the acceleration force along the x, y, and z axes (including gravity), needed to measure motion inputs such as in games. The proximity sensor is used to disable accidental touch events; the most common scenario is the ear coming in contact with the screen while on a call. The compass sensor is a magnetometer which measures the strength and direction of magnetic fields. Accelerometers in mobile phones are used to detect the orientation of the phone. The gyroscope, or gyro for short, adds an additional dimension to the information supplied by the accelerometer by tracking rotation or twist.
- An accelerometer measures linear acceleration of movement, while a gyro on the other hand measures the angular rotational velocity. Both sensors measure rate of change; they just measure the rate of change for different things. In practice, that means that an accelerometer will measure the directional movement of a device but will not be able to resolve its lateral orientation or tilt during that movement accurately unless a gyro is there to fill in that info.
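The accelerometer/gyro fusion described above ("a gyro is there to fill in that info") is commonly sketched as a complementary filter; the following is a minimal illustration, not any particular device's implementation:

```python
import math

def accel_tilt(ax, ay, az):
    """Tilt (pitch) angle from the accelerometer's gravity vector, in degrees."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def complementary_filter(angle, gyro_rate, ax, ay, az, dt, alpha=0.98):
    """Fuse gyro (fast but drifting) and accelerometer (noisy but absolute):
    integrate angular velocity, then pull toward the accelerometer tilt."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_tilt(ax, ay, az)

# Device held still and level: the filter converges toward 0 degrees
# even if the initial angle estimate was wrong.
angle = 10.0
for _ in range(500):
    angle = complementary_filter(angle, 0.0, 0.0, 0.0, 9.81, dt=0.01)
```

The gyro term dominates short-term response while the accelerometer term removes long-term drift, which is exactly the complementarity the paragraph describes.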
- Object recognition is a technology in the field of computer vision for finding and identifying objects in an image or video sequence. Humans recognize a multitude of objects in images with little effort, despite the fact that the image of the objects may vary somewhat in different viewpoints, in many different sizes and scales or even when they are translated or rotated. Objects can even be recognized when they are partially obstructed from view. Object recognition is a process for identifying a specific object in a digital image or video. Object recognition algorithms rely on matching, learning, or pattern recognition algorithms using appearance-based or feature-based techniques. Face detection is a computer technology being used in a variety of applications that identifies human faces in digital images. Face detection also refers to the psychological process by which humans locate and attend to faces in a visual scene.
- Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos.
- Well-researched domains of object detection include face detection and pedestrian detection.
- Object detection has applications in many areas of computer vision, including image retrieval and video surveillance. It is used in face detection and face recognition. It is also used in tracking objects, for example tracking a ball during a football match, tracking movement of a cricket bat, tracking a person in a video.
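As a toy illustration of the appearance-based matching mentioned above, a template can be slid over an image to locate an object by minimizing the pixel difference; real detectors are far more robust, and the arrays below are illustrative:

```python
def match_template(image, template):
    """Slide the template over the image and return the top-left (x, y)
    with the smallest sum of absolute differences (appearance-based matching)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = sum(
                abs(image[y + j][x + i] - template[j][i])
                for j in range(th) for i in range(tw)
            )
            if best is None or sad < best:
                best, best_pos = sad, (x, y)
    return best_pos

# A bright 2x2 blob embedded in a dark 5x5 grayscale image.
image = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 9], [9, 9]]
```

Tracking an object across video frames, as in the ball-tracking example, amounts to repeating such a match frame by frame.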
- Optical character recognition is the mechanical or electronic conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example from a television broadcast). It is widely used as a form of information entry from printed paper data records, whether passport documents, invoices, bank statements, computerized receipts, business cards, mail, printouts of static-data, or any suitable documentation.
- OCR is a field of research in pattern recognition, artificial intelligence and computer vision.
- Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.
- Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and in general, deal with the extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions.
- Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action.
- This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
- computer vision is concerned with the theory behind artificial systems that extract information from images.
- the image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.
- computer vision seeks to apply its theories and models for the construction of computer vision systems.
- Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, object pose estimation, learning, indexing, motion estimation, and image restoration.
- Speech recognition (SR) is also referred to as automatic speech recognition (ASR) or speech-to-text (STT).
- Some SR systems use "training” (also called “enrollment") where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy.
- Systems that do not use training are called “speaker independent” systems. Systems that use training are called “speaker dependent”.
- Speech recognition applications include voice user interfaces such as voice dialing (e.g. "Call home"), call routing (e.g. "I would like to make a collect call"), domotic appliance control, search (e.g. find a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g. a radiology report), speech-to-text processing (e.g., word processors or emails), and aircraft (usually termed Direct Voice Input).
- voice recognition or speaker identification refers to identifying the speaker, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice or it can be used to authenticate or verify the identity of a speaker as part of a security process.
- a barcode is an optical, machine-readable, representation of data; the data usually describes something about the object that carries the barcode.
- Barcodes systematically represent data by varying the widths and spacings of parallel lines, and may be referred to as linear or one-dimensional (1D).
- 2D codes were developed, using rectangles, dots, hexagons and other geometric patterns in two dimensions, usually called barcodes although they do not use bars as such. Barcodes originally were scanned by special optical scanners called barcode readers. Later applications software became available for devices that could read images, such as smartphones with cameras.
- A QR code (abbreviated from Quick Response Code) uses four standardized encoding modes (numeric, alphanumeric, byte/binary, and kanji) to efficiently store data; extensions may also be used.
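The mode-selection rule implied above can be sketched as follows; this is an illustrative simplification of the QR specification (kanji detection is omitted for brevity):

```python
# The 45-character alphanumeric set defined by the QR code standard.
ALPHANUMERIC = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")

def qr_mode(data: str) -> str:
    """Pick the densest standard QR encoding mode that can represent the
    data: numeric packs tightest, then alphanumeric, with byte as fallback."""
    if data.isascii() and data.isdigit():
        return "numeric"
    if all(c in ALPHANUMERIC for c in data):
        return "alphanumeric"
    return "byte"
```

Choosing the densest applicable mode is what gives QR codes their greater storage capacity relative to 1D barcodes.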
- the QR code system became popular outside the automotive industry due to its fast readability and greater storage capacity compared to standard UPC barcodes.
- Applications include product tracking, item identification, time tracking, document management, and general marketing.
- a QR code consists of black squares arranged in a square grid on a white background, which can be read by an imaging device such as a camera, and processed using Reed-Solomon error correction until the image can be appropriately interpreted. The required data are then extracted from patterns that are present in both horizontal and vertical components of the image.
- ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse. It is thus a practical application of philosophical ontology, with taxonomy.
- the core meaning within computer science is a model for describing the world that consists of a set of types, properties, and relationship types. There is also generally an expectation that the features of the model in an ontology should closely resemble the real world (related to the object).
- Common components of ontologies include: individuals (instances or objects, the basic or "ground level" objects); classes (sets, collections, concepts, classes in programming, types of objects, or kinds of things); attributes (aspects, properties, features, characteristics, or parameters that objects and classes can have); relations (ways in which classes and individuals can be related to one another); function terms (complex structures formed from certain relations that can be used in place of an individual term in a statement); restrictions (formally stated descriptions of what must be true in order for some assertion to be accepted as input); rules (statements in the form of an if-then, antecedent-consequent, sentence that describe the logical inferences that can be drawn from an assertion in a particular form); axioms (assertions, including rules, in a logical form that together comprise the overall theory that the ontology describes in its domain of application); and events (the changing of attributes or relations).
- a domain ontology (or domain-specific ontology) represents concepts which belong to part of the world. Particular meanings of terms applied to that domain are provided by domain ontology. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card” meaning of the word, while an ontology about the domain of computer hardware would model the "punched card” and "video card” meanings.
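The "card" disambiguation example can be sketched with toy domain ontologies; the dictionary structures and class names below are illustrative only:

```python
# Two tiny domain ontologies: each models only the meanings of "card"
# that exist in its own domain of discourse.
poker_ontology = {
    "classes": {"PlayingCard": {"subclass_of": "GameObject"}},
    "individuals": {
        "ace_of_spades": {
            "instance_of": "PlayingCard",
            "attributes": {"suit": "spades", "rank": "ace"},
        }
    },
}
hardware_ontology = {
    "classes": {
        "VideoCard": {"subclass_of": "PeripheralDevice"},
        "PunchedCard": {"subclass_of": "StorageMedium"},
    },
    "individuals": {},
}

def meanings_of(term: str, ontology: dict) -> list:
    """Return the classes in a domain ontology whose name contains the term,
    i.e. the domain-specific meanings that ontology provides."""
    return [c for c in ontology["classes"] if term.lower() in c.lower()]
```

Querying each ontology for "card" yields only the meanings valid in that domain, which is the disambiguation role the paragraph describes.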
- a geo-fence is a virtual perimeter for a real-world geographic area. A geo-fence could be dynamically generated— as in a radius around a store or point location, or a geo-fence can be a predefined set of boundaries, like school attendance zones or neighborhood boundaries.
- The use of a geo-fence is called geo-fencing, and one example of usage involves a location-aware device of a location-based service (LBS) user entering or exiting a geo-fence. This activity could trigger an alert to the device's user as well as messaging to the geo-fence operator. This info, which could contain the location of the device, could be sent to a mobile telephone or an email account. Geo-fencing, used with child location services, can notify parents if a child leaves a designated area. Geo-fencing used with locationized firearms can allow those firearms to fire only in locations where their firing is permitted, thereby making them unable to be used elsewhere. Geo-fencing is critical to telematics.
- geo-fencing is used by the human resource department to monitor employees working in special locations especially those doing field works. Using a geofencing tool, an employee is allowed to log his attendance using a GPS-enabled device when within a designated perimeter. Other applications include sending an alert if a vehicle is stolen and notifying rangers when wildlife stray into farmland. Geofencing, in a security strategy model, provides security to wireless local area networks.
- Geo-fencing is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries.
- a geofence is a virtual barrier. Programs that incorporate geo-fencing allow an administrator to set up triggers so when a device enters (or exits) the boundaries defined by the administrator, a text message or email alert is sent.
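The enter/exit trigger for a radius-style geo-fence can be sketched with the haversine distance; the coordinates and radius below are illustrative:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(center, radius_m, was_inside, lat, lon):
    """Return ('enter'|'exit'|None, inside): an alert fires only when the
    device crosses the boundary, not on every position update."""
    inside = haversine_m(center[0], center[1], lat, lon) <= radius_m
    if inside and not was_inside:
        return "enter", inside
    if was_inside and not inside:
        return "exit", inside
    return None, inside
```

A real system would persist `was_inside` per device and dispatch the text message or email alert when an event is returned.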
- Many geo-fencing applications incorporate Google Earth, allowing administrators to define boundaries on top of a satellite view of a specific geographical area.
- Geo-fencing has many uses, including: Fleet management - e.g. when a truck driver breaks from his route, the dispatcher receives an alert.
- Human resource management - e.g. An employee smart card will send an alert to security if an employee attempts to enter an unauthorized area.
- Compliance management - e.g. Network logs record geo-fence crossings to document the proper use of devices and their compliance with established rules.
- Marketing - e.g. A restaurant can trigger a text message with the day's specials to an opt-in customer when the customer enters a defined geographical area.
- Asset management - e.g. An RFID tag on a pallet can send an alert if the pallet is removed from the warehouse without authorization.
- Law enforcement - e.g. An ankle bracelet can alert authorities if an individual under house arrest leaves the premises.
- Network-based geofencing uses carrier-grade location data to determine where SMS subscribers are located. If the user has opted in to receive SMS alerts, they will receive a text message alert as soon as they enter the geofence range. As always, users have the ability to opt out or stop the alerts at any time.
- Beacons can achieve the same goal as app-based geofencing without invading anyone's privacy or using a lot of data. They can't pinpoint the user's exact location on a map like a geofence can, but they can still send signals when it's triggered by certain events (like entering or exiting the beacon's signal, or getting within a certain distance of the beacon)— and they can determine approximately how close the user is to the beacon, down to a few inches. Best of all, because beacons rely on bluetooth technology, they hardly use any data and won't affect the user's battery life.
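The beacon's approximate-distance behaviour is commonly modelled with a log-distance path-loss formula; the calibration constants below are typical assumptions, not fixed values:

```python
def beacon_distance_m(rssi: float, tx_power: float = -59.0, n: float = 2.0) -> float:
    """Approximate distance to a Bluetooth beacon from its received signal
    strength. tx_power is the calibrated RSSI measured at 1 m and n is the
    path-loss exponent; both are environment-dependent assumptions."""
    return 10 ** ((tx_power - rssi) / (10 * n))
```

This is why a beacon can only say the user is "about a meter away" rather than pinpoint them on a map: distance is inferred from signal attenuation, not from coordinates.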
- Geo-location: identifying the real-world location of a user with GPS, Wi-Fi, and other sensors
- Geo-fencing: taking an action when a user enters or exits a geographic area
- Geo-awareness: customizing and localizing the user experience based on a rough approximation of user location, often used in browsers
- Augmented reality is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one's current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable.
- Information about the environment and its objects is overlaid on the real world.
- This information can be virtual or real, e.g. seeing other real sensed or measured information such as electromagnetic radio waves overlaid in exact alignment with where they actually are in space.
- Augmented reality brings out the components of the digital world into a person's perceived real world.
- One example is an AR Helmet for construction workers which displays information about the construction sites.
- Hardware components for augmented reality are: processor, display, sensors and input devices.
- Modern mobile computing devices like smartphones and tablet computers contain these elements which often include a camera and MEMS sensors such as accelerometer, GPS, and solid state compass, making them suitable AR platforms.
- Augmented reality rendering includes optical projection systems, monitors, handheld devices, and display systems worn on the human body.
- AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that employ cameras to intercept the real world view and re-display its augmented view through the eye pieces and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear lens pieces.
- Modern mobile augmented-reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors. These technologies offer varying levels of accuracy and precision. Most important is the position and orientation of the user's head. Tracking the user's hand(s) or a handheld input device can provide a 6DOF interaction technique.
- Techniques include speech recognition systems that translate a user's spoken words into computer instructions and gesture recognition systems that can interpret a user's body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear.
- the computer analyzes the sensed visual and other data to synthesize and position augmentations.
- a key measure of AR systems is how realistically they integrate augmentations with the real world.
- the software must derive real world coordinates, independent from the camera, from camera images. That process is called image registration which uses different methods of computer vision, mostly related to video tracking.
- Many computer vision methods of augmented reality are inherited from visual odometry. Usually those methods consist of two parts.
- The first stage can use feature detection methods such as corner detection, blob detection, edge detection, or thresholding, and/or other image processing methods.
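As a minimal example of the first-stage methods just listed, thresholding binarizes an image so later stages can work on the extracted features; the array is illustrative:

```python
def threshold(image, t):
    """Binarize a grayscale image: 1 where intensity exceeds t, else 0.
    One of the simplest first-stage feature extraction methods."""
    return [[1 if px > t else 0 for px in row] for row in image]

gray = [
    [ 10,  20, 200],
    [ 15, 220, 230],
    [240,  30,  25],
]
mask = threshold(gray, 128)
```

The resulting binary mask isolates bright regions that later stages (e.g. marker detection or pose recovery) can reason about.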
- the second stage restores a real world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) present in the scene. In some of those cases the scene 3D structure should be precalculated beforehand. If part of the scene is unknown simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure from motion methods like bundle adjustment are used.
- Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with exponential map, kalman and particle filters, nonlinear optimization, robust statistics.
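Of the mathematical methods listed above, a scalar Kalman filter is the simplest to illustrate; this one-dimensional sketch assumes a static state model, whereas real AR trackers filter full poses:

```python
def kalman_1d(z, x, p, q=1e-3, r=0.1):
    """One predict/update step of a scalar Kalman filter: state estimate x
    with variance p, process noise q, measurement z with noise variance r."""
    p = p + q                 # predict (static state model)
    k = p / (p + r)           # Kalman gain
    x = x + k * (z - x)       # update toward the measurement residual
    p = (1 - k) * p
    return x, p

# Noisy measurements of a constant value 5.0: the estimate converges
# and its variance shrinks.
x, p = 0.0, 1.0
for z in [5.2, 4.9, 5.1, 5.0, 4.8, 5.05, 5.0, 4.95]:
    x, p = kalman_1d(z, x, p)
```

In the second stage described above, the same predict/update cycle runs on camera pose parameters rather than a single scalar.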
- Augmented Reality Markup Language is a data standard developed within the Open Geospatial Consortium (OGC), which consists of an XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.
- SDK software development kits
- A few SDKs, such as CloudRidAR, leverage cloud computing for performance improvement.
- Some of the well-known AR SDKs are offered by Vuforia, ARToolKit, Catchoom CraftAR, Mobinett AR, Wikitude, Blippar, Layar, and Meta.
- Handheld displays employ a small display that fits in a user's hand. All handheld AR solutions to date opt for video see-through.
- Handheld AR first employed fiducial markers, and later GPS units and MEMS sensors such as digital compasses and six-degrees-of-freedom accelerometer-gyroscopes.
- Today SLAM markerless trackers such as PTAM are starting to come into use.
- Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquitous nature of camera phones.
- AR is used to integrate print and video marketing.
- Printed marketing material can be designed with certain "trigger" images that, when scanned by an AR enabled device using image recognition, activate a video version of the promotional material.
- The difference between augmented reality and straightforward image recognition is that multiple media can be overlaid at the same time in the view screen, such as social media share buttons, in-page video, even audio and 3D objects.
- Traditional print-only publications are using Augmented Reality to connect many different types of media.
- AR can enhance product previews such as allowing a customer to view what's inside a product's packaging without opening it.
- AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in its use.
- AR has been used to complement a standard curriculum.
- Text, graphics, video and audio were superimposed into a student's real time environment.
- Textbooks, flashcards and other educational reading material contained embedded "markers" or triggers that, when scanned by an AR device, produced supplementary information to the student rendered in a multimedia format.
- Augmented reality technology enhanced remote collaboration, allowing students and instructors in different locales to interact by sharing a common virtual learning environment populated by virtual objects and learning materials.
- the gaming industry embraced AR technology.
- a number of games were developed for prepared indoor environments, such as AR air hockey, collaborative combat against virtual enemies, and AR-enhanced pool-table games.
- Augmented reality allowed video game players to experience digital game play in a real world environment.
- Niantic is notable for releasing the record-breaking Pokemon Go game. Travelers used AR to access real time informational displays regarding a location, its features and comments or content provided by previous visitors. Advanced AR applications included simulations of historical events, places and objects rendered into the landscape. AR applications linked to geographic locations presented location information by audio, announcing features of interest at a particular site as they became visible to the user. AR systems can interpret foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words of a foreign language can be translated and displayed in a user's view as printed subtitles.
- The Augmented GeoTravel application displays information about users' surroundings in a mobile camera view.
- the application calculates users' current positions by using the Global Positioning System (GPS), a compass, and an accelerometer and accesses the Wikipedia data set to provide geographic information (e.g. longitude, latitude, distance), history, and contact details of points of interest.
- Augmented GeoTravel overlays the virtual 3-dimensional (3D) image and its information on the real-time view.
- An augmented reality development framework utilizes image recognition and tracking, and geolocation technologies.
- the position of objects on the screen of the mobile device is calculated using the user's position (by GPS or Wi-Fi), the direction in which the user is facing (by using the compass), and the device's orientation (by using the accelerometer).
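A minimal sketch of that calculation: the standard forward-azimuth formula gives the bearing from the user's GPS position to a point of interest, and the difference from the compass heading is mapped onto a horizontal screen coordinate. The field of view and screen width are illustrative assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to a point of interest,
    in degrees clockwise from north (standard forward-azimuth formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg=60, screen_width=1080):
    """Map the POI's bearing relative to the compass heading onto a
    horizontal pixel position; None when outside the camera's field of view."""
    rel = (poi_bearing - heading + 180) % 360 - 180   # wrap to -180..180
    if abs(rel) > fov_deg / 2:
        return None
    return round((rel / fov_deg + 0.5) * screen_width)

# POI due east of the user; with the user facing east (heading 90),
# the overlay lands at the horizontal centre of the screen.
b = bearing_deg(0.0, 0.0, 0.0, 0.001)
print(b, screen_x(b, heading=90.0))   # 90.0 540
```

The accelerometer would additionally supply the vertical (pitch) offset, handled the same way on the y axis.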
- Smartglasses or smart glasses are wearable computer glasses that add information alongside or to what the wearer sees. Typically this is achieved through an optical head-mounted display (OHMD) or embedded wireless glasses with transparent heads-up display (HUD) or augmented reality (AR) overlay that has the capability of reflecting projected digital images as well as allowing the user to see through it, or see better with it.
- OHMD optical head-mounted display
- HUD head-up display
- AR augmented reality
- early models can perform basic tasks, such as merely serving as a front-end display for a remote system, as in the case of smartglasses utilizing cellular technology or Wi-Fi
- modern smart glasses are effectively wearable computers which can run self-contained mobile apps. Some are hands-free, able to communicate with the Internet via natural language voice commands, while others use touch buttons.
- smartglasses may collect information from internal or external sensors, and may control or retrieve data from other instruments or computers. They may support wireless technologies like Bluetooth, Wi-Fi, and GPS. A smaller number of models run a mobile operating system and function as portable media players to send audio and video files to the user via a Bluetooth or Wi-Fi headset. Some smartglasses models also feature full lifelogging and activity tracker capability.
- Such smartglasses devices may also have all the features of a smartphone. Some also have activity tracker functionality features (also known as “fitness tracker”) as seen in some GPS watches.
- One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer- implemented method.
- Programmatically means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device.
- a programmatically performed step may or may not be automatic.
- a programmatic module, engine, or component can include a program, a sub-routine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions.
- a module or component can exist on a hardware component independently of other modules or components.
- a module or component can be a shared element or process of other modules, programs or machines.
- Some embodiments described herein can generally require the use of computing devices, including processing and memory resources.
- one or more embodiments described herein may be implemented, in whole or in part, on computing devices such as servers, desktop computers, cellular or smartphones, personal digital assistants (PDAs), laptop computers, printers, digital picture frames, network equipment (e.g., routers) and tablet devices.
- PDAs personal digital assistants
- Memory, processing, and network resources may all be used in connection with the establishment, use, or performance of any embodiment described herein (including with the performance of any method or with the implementation of any system).
- one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium.
- Machines shown or described with Figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing embodiments of the invention can be carried and/or executed.
- the numerous machines shown with embodiments of the invention include processor(s) and various forms of memory for holding data and instructions.
- Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers.
- Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash memory (such as carried on smartphones, multifunctional devices or tablets), and magnetic memory.
- Computers, terminals, network enabled devices are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums. Additionally, embodiments may be implemented in the form of computer-programs, or a computer usable carrier medium capable of carrying such a program.
- Figure 1 is a network diagram depicting a network system having a client-server architecture configured for exchanging data over a network, according to one embodiment.
- Figure 2 illustrates components of an electronic device implementing various embodiments of intelligent camera & story system including components of an electronic device implementing content sending and receiving privacy & presentation settings, auto present camera display screen or media view interface, various types of ephemeral feeds, galleries and stories, sender controlled shared media items at recipient device, real-time ephemeral messages, object criteria specific search, subscription, presentation of visual media, stories and visual media advertisements integration within story or sequences of visual media items, auto identified user reactions, scan to access various types of features, intelligent or multi-tasking visual media capture controller, accelerated display of ephemeral messages, single mode front or back photo capture, user to user on demand providing and consuming service(s), search keyword(s) specific visual media posted at particular place, provide user related keywords, augmented reality application, user reaction application, user's auto status application, mass user action application, user requirement specific responses application, suggested prospective activities application, and natural talking application in accordance with the invention.
- Figure 3 illustrates flowchart explaining eye tracking system to auto open one or more types of interfaces including camera display screen in the event of auto detection of user's intent to take photo or video or present album or gallery or inbox or received media items interface based on user's intent to view past or received media items, according to an embodiment.
- Figure 4 illustrates flowchart explaining auto capturing of one or more photos or auto recording of pre-set duration video(s) based on starting and expiration of pre-set timer duration and optionally provide auto preview and/or auto send to pre-set one or more destinations.
- Figure 5 illustrates flowchart explaining auto capturing of photo or auto recording of video, according to an embodiment.
- Figure 6 (C) & (D) illustrate processing operations associated with single mode visual media capture in accordance with the invention.
- Figure 6 (A) & (B) illustrate the exterior of an electronic device implementing auto mode turn-on of the user device or switch-on of the display screen and auto capture of a photo or auto start of recording of a video, discussed in detail in figures 3 and 4.
- Figure 7 illustrates exemplary graphical user interface, describing ephemeral or non-ephemeral content access privacy and presentation settings for one or more types of one or more destination(s) or recipient(s) or contact(s) by sender with examples.
- Figure 8 illustrates exemplary graphical user interface, describing privacy settings, presentation settings and ephemeral settings for receiving of contents and making of contents as ephemeral or non-ephemeral received from one or more types of one or more source(s) or sender(s) or contact(s).
- Figures 9-13 illustrate various embodiments of a system for searching, matching, presenting, subscribing to, and auto generating visual media stories.
- Figure 14 illustrates exemplary graphical user interface, describing providing of user preferences for consuming one or more types of series of visual media items or stories.
- Figure 15 illustrates exemplary graphical user interface, describing privacy settings for allowing or not-allowing other users to capture or record visual media related to user.
- Figures 16-17 illustrate exemplary graphical user interface, describing creating visual media advertisements with target criteria including object criteria or supplied object model or sample image for presenting with or integrating in or embedding within visual media stories for presenting to users of network.
- Figure 18 illustrates exemplary graphical user interface, describing a system based on access by the sender of shared content item(s) at recipient(s).
- Figure 19 illustrates processing operations associated with accelerated display of ephemeral messages in accordance with an embodiment of the invention and illustrates the exterior of an electronic device implementing accelerated display of ephemeral messages in accordance with the invention.
- Figures 20-24 illustrate processing operations associated with real-time display of ephemeral messages in accordance with various embodiments of the invention.
- Figures 25-27 illustrate exemplary graphical user interfaces, describing real-time display of ephemeral messages in accordance with various embodiments of the invention.
- Figures 28-29 illustrate processing operations associated with real-time starting of a session of displaying or broadcasting of ephemeral messages in accordance with an embodiment of the invention.
- Figure 30 illustrates processing operations associated with display of ephemeral messages and media items, enabling user to remove from first list and add to second list, enabling further moving to first list within life timer, and in the event of expiry of life timer removing from second list.
- Figure 31 illustrates processing operations associated with display of ephemeral messages and media item completely scroll-up to remove and append media item at the end of feed of or set of ephemeral messages in accordance with an embodiment of the invention.
- Figure 32 illustrates processing operations associated with display of ephemeral messages and based on load more user action remove currently presented ephemeral messages or media items and load or present next available (if any) set of ephemeral messages in accordance with an embodiment of the invention.
- Figure 33 illustrates processing operations associated with display of ephemeral messages and based on push to refresh user action remove currently presented ephemeral messages or media items and load or present next available (if any) set of ephemeral messages in accordance with an embodiment of the invention.
- Figure 34 illustrates processing operations associated with display of ephemeral messages and based on expiration of pre-set duration of timer auto refresh and remove currently presented ephemeral messages or media items and load or present next available (if any) set of ephemeral messages in accordance with an embodiment of the invention.
- Figure 35 illustrates processing operations associated with display of ephemeral messages and based on expiration of pre-set duration of timer associated with or correspond to presented each set of ephemeral messages, remove currently presented set of ephemeral messages or media items and load or present next available (if any) set of ephemeral messages in accordance with an embodiment of the invention.
- Figure 36 illustrates processing operations associated with display of ephemeral messages and based on expiration of pre-set duration of timer for scrolled-up ephemeral message (s) or media item(s), remove expired scrolled-up ephemeral message(s) or media item(s) from presented feed or set of ephemeral messages and in the event of removal of ephemeral message(s) or media item(s), load or present next available (if any) removed number equivalent or particular pre-set number of or available to present ephemeral messages in accordance with an embodiment of the invention.
- Figure 37 illustrates processing operations associated with display of ephemeral messages with no scrolling and based on expiration of pre-set duration of timer associated with each ephemeral message, remove expired ephemeral message(s) or media item(s) from presented feed or set of ephemeral messages and in the event of removal of ephemeral message(s) or media item(s), load or present next available (if any) removed number equivalent or available to present ephemeral messages in accordance with an embodiment of the invention.
- Figure 38 illustrates processing operations associated with display of ephemeral messages and based on expiration of pre-set duration of timer associated with each ephemeral message, remove expired ephemeral message(s) or media item(s) from presented feed or set of ephemeral messages and in the event of removal of ephemeral message(s) or media item(s), load or present next available (if any) removed number equivalent or particular pre-set number of or available to present ephemeral messages in accordance with an embodiment of the invention.
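The per-message expiry behaviour described for Figures 36-38 can be sketched as follows, using a hypothetical in-memory data model: messages whose timers have expired are removed from the presented set, and an equal number of pending messages is loaded in their place when available.

```python
def refresh_feed(presented, pending, now):
    """Drop messages whose expiry time has passed; backfill from the
    pending queue, one replacement per removed message (if available)."""
    kept = [m for m in presented if m["expires_at"] > now]
    removed = len(presented) - len(kept)
    kept.extend(pending[:removed])
    del pending[:removed]
    return kept

presented = [
    {"id": 1, "expires_at": 10},
    {"id": 2, "expires_at": 25},
    {"id": 3, "expires_at": 12},
]
pending = [{"id": 4, "expires_at": 40}, {"id": 5, "expires_at": 50}]

# At time 15, messages 1 and 3 have expired; 4 and 5 take their places.
feed = refresh_feed(presented, pending, now=15)
print([m["id"] for m in feed])   # [2, 4, 5]
```

The whole-set variants (Figures 32-35) differ only in the trigger: the same replacement step runs on load-more, pull-to-refresh, or a feed-level timer instead of per-message timers.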
- Figure 39 illustrates processing operations associated with accelerated display of ephemeral messages in accordance with an embodiment of the invention and illustrates interface and the exterior of an electronic device implementing accelerated display of ephemeral messages in accordance with the invention.
- Figure 40 illustrates processing operations associated with single mode front or back photo or live photo capture embodiment of the invention and illustrates the exterior of an electronic device implementing single mode front or back photo or live photo capture.
- Figure 41 illustrates an electronic device includes digital image sensors to capture visual media, a display to present the visual media from the digital image sensors and a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display.
- a visual media capture controller alternately records the visual media as a photograph or a video based on a pre-defined haptic contact engagement duration threshold. If the threshold is not exceeded, a photo is captured; if the threshold is exceeded, video recording starts. If the pre-set associated timer expires, the video is stopped and stored; if it has not expired and haptic contact engagement is received on the icon or display, said pre-defined video duration timer is stopped, enabling the user to record video up to a further haptic contact engagement on the icon or anywhere on the display.
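A simplified sketch of that dual-mode logic (the threshold, timer values, and the collapse of the cancel-timer flow into a single stop time are illustrative assumptions, not the patented implementation):

```python
PHOTO_THRESHOLD = 0.5    # seconds: a press shorter than this captures a photo
MAX_VIDEO = 10.0         # pre-set video duration timer, in seconds

def capture_action(press_duration, stop_at=None):
    """Return ('photo', None) or ('video', recorded_seconds).

    stop_at models a further haptic contact that cancelled the pre-set
    timer, with the recording then stopped at that moment.
    """
    if press_duration < PHOTO_THRESHOLD:
        return ("photo", None)
    if stop_at is not None:
        return ("video", stop_at)
    # Otherwise the pre-set timer caps the recording length.
    return ("video", min(press_duration, MAX_VIDEO))

print(capture_action(0.2))                    # ('photo', None)
print(capture_action(3.0))                    # ('video', 3.0)
print(capture_action(30.0))                   # ('video', 10.0)
print(capture_action(5.0, stop_at=2.5))       # ('video', 2.5)
```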
- Figure 42 illustrates processing operations associated with single mode front or back video recording embodiment of the invention and illustrates the exterior of an electronic device implementing single mode front or back video recording.
- Figure 43 illustrates an electronic device that includes digital image sensors to capture visual media, a display to present the visual media from the digital image sensors and a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display, and illustrates a visual media capture controller that enables slide or swipe or haptic contact swipe to change front or back camera or view one or more types of pre-set interfaces including view gallery or album or inbox or received media items mode, and alternately records the visual media as a photo or a video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- Figure 44 illustrates processing operations associated with slide or swipe or haptic contact swipe to change to front or back camera mode and based on selection of mode capture photo or record video embodiment of the invention and illustrates the exterior of an electronic device implementing the single mode invention slide to change front to back or back to front camera mode and alternately records the visual media as a photo or a video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- Figure 45 illustrates processing operations associated with slide or swipe or haptic contact swipe to change to front or back camera mode and based on selection of mode capture photo or start recording of video and auto stop and store after pre-set duration, stop and store on further haptic contact engagement, stop and store before expiration of pre-set duration, stop pre-set timer to record video up to next user haptic contact engagement embodiments of the invention.
- Figure 46 illustrates processing operations associated with slide or swipe or haptic contact swipe to change to front or back camera mode or view one or more types of pre-set interface(s) and based on selection of mode capture photo or start recording of video and auto stop after pre-set duration embodiment of the invention and illustrates the exterior of an electronic device implementing the single mode invention slide to change front to back or back to front camera mode or show one or more types of pre-set interface(s) and alternately records the visual media as a photo or a pre-set duration of video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- Figure 47 illustrates processing operations associated with slide or swipe or haptic contact swipe to change to front or back camera mode or view one or more types of pre-set interface(s) and based on selection of mode capture photo or start recording of video and auto stop and store after pre-set duration, stop and store on further haptic contact engagement, stop and store before expiration of pre-set duration, stop pre-set timer to record video up to next user haptic contact engagement embodiments of the invention.
- Figure 48 illustrates an electronic device that includes digital image sensors to capture visual media, a display to present the visual media from the digital image sensors and a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display, and illustrates a visual media capture controller that enables Haptic Contact Engagement on pre-defined area of visual media capture controller to change front or back camera or view one or more types of pre-set interfaces including view gallery or album or inbox or received media items mode, and alternately records the visual media as a photo or a video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- Figure 49 illustrates processing operations associated with Haptic Contact Engagement on predefined area of visual media capture controller to change to front or back camera mode and based on selection of mode capture photo or record video embodiment of the invention and illustrates the exterior of an electronic device implementing the single mode invention Haptic Contact Engagement on pre-defined area of visual media capture controller to change front to back or back to front camera mode and alternately records the visual media as a photo or a video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- Figure 50 illustrates processing operations associated with Haptic Contact Engagement on pre-defined area of visual media capture controller to change to front or back camera mode and based on selection of mode capture photo or start recording of video and auto stop and store after pre-set duration, stop and store on further haptic contact engagement, stop and store before expiration of pre-set duration, stop pre-set timer to record video up to next user haptic contact engagement embodiments of the invention.
- Figure 51 illustrates processing operations associated with Haptic Contact Engagement on predefined area of visual media capture controller to change to front or back camera mode or view one or more types of pre-set interface(s) and based on selection of mode capture photo or start recording of video and auto stop after pre-set duration embodiment of the invention and illustrates the exterior of an electronic device implementing the single mode invention Haptic Contact Engagement on pre-defined area of visual media capture controller to change front to back or back to front camera mode or show one or more types of pre-set interface(s) and alternately records the visual media as a photo or a pre-set duration of video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- Figure 52 illustrates processing operations associated with Haptic Contact Engagement on predefined area of visual media capture controller to change to front or back camera mode or view one or more types of pre-set interface(s) and based on selection of mode capture photo or start recording of video and auto stop and store after pre-set duration, stop and store on further haptic contact engagement, stop and store before expiration of pre-set duration, stop pre-set timer to record video up to next user haptic contact engagement embodiments of the invention.
- Figures 53-54 illustrate identifying, preparing, generating and presenting status based on user provided and user related data.
- Figure 55 illustrates processing operations associated with accelerated taking and sharing of visual media, including taking one or more video(s), trimming video(s), and capturing photo(s) during recording of a single video session.
- Figures 56-57 illustrate real-time sending and viewing of ephemeral messages in accordance with the invention.
- Figure 58 illustrates processing operations associated with multi-tabs accelerated display of ephemeral messages in accordance with an embodiment of the invention and illustrates the exterior of an electronic device implementing multi-tabs accelerated display of ephemeral messages in accordance with the invention.
- Figures 59-64 illustrate processing operations, flowcharts, exemplary interfaces and examples associated with creation of a gallery or story or location or place or defined geo-fence boundaries specific scheduled event, and defining and inviting participants.
- Based on event location, date & time and participant data, the system presents auto generated visual media capture & view controller(s) on the display screen of the device of each authorized participant member and enables them to one-tap front or back camera capturing of one or more photos or recording of one or more videos, and presents a preview interface for previewing said visual media for a pre-set duration, within which the user is enabled to remove said previewed photo or video and/or change or select destination(s), or auto send to pre-set destination(s) after expiry of said pre-set period of time.
- Admin of gallery or story or album or event creator is enabled to update event, start manually or auto start at scheduled period, invite, update or change, remove and define participants of event, accept requests of users to participate in the event, define target viewers, select one or more types of one or more destinations, and provide or define one or more types of presentation settings.
- Figures 65-66 illustrate exemplary interface of augmented reality application 280 and platform 180 wherein user can provide object model or object image or captured or selected image or photo or video (i.e. series of images), provide associated details and associate one or more user action controls with one or more said provided scannable objects, wherein when user scans or views via camera display screen a particular scene or object, or selects one or more objects from map, then system matches with said supplied object model(s) or object image(s) or sample photo or video related images based on employing of image recognition technologies and optical character recognition technologies, and presents said matched user provided object model(s) or object image(s) or sample photo or video related images' associated one or more user action control(s) on the user device, so scanning user can access or tap on preferred user action control.
- Figure 67 illustrates exemplary interface for auto recording video & audio or recording audio or auto capturing photo reaction on received and currently viewing media item(s) or feed item(s) or news feed item(s) or content item(s) or search result item(s) and auto sending said auto recorded or captured user reactions to sender or source of said media item(s) or feed item(s) or news feed item(s) or content item(s) or search result item(s).
- user can make reaction ephemeral based on set period of view duration and preview before send.
- Figure 68 illustrates user interface for enabling user to submit requirement specification and receive responses from contextual users or sources including actual users, contacts of users, experts, sellers and 3rd parties in exchange of points or one or more types of payment models & modes.
- System logs and presents information, statistics & analytics about which product or service the user bought, subscribed to or used with the help of which response from which user(s), and user provided related details including saved amount of money, ratings, quality, level of match making, experience, and updated details after purchase of products and services.
- Figures 69-70 illustrate exemplary interface for enabling user to navigate map including from world map select country, state, city, area, place and point or search particular place or spot or POI or point or access map to search, match, identify and find out location, place, spot, point, Points of Interest (POI) and associated or nearest or located or situated or advertised or listed or marked or suggested one or more types of entities including mall, shop, person, product, item, building, road, tourist place, river, forest, garden, mountain, hotel, restaurant, exhibition, events, fair, conference, structure, station, market, vendor, temple, apartment or society or house, and one or more types of one or more addresses, and after identifying particular entity or item on map, user is enabled to provide, search, input, update, add, remove, re-arrange, select, provide via auto fill-up or auto suggested one or more keywords, key phrases and Boolean operators and optionally selecting and applying of one or more conditions, rules, preferences, settings for identifying or matching or searching and presenting visual media or content items which were generated from or related to that place.
- So user is enabled to view contextual stories, i.e. contextual photos or videos previously taken at a particular geographic location (e.g. ) which are related to or filtered or contextually matched with said user provided one or more keywords, key phrases and/or associated Boolean operators, conditions, advanced search criteria, and rules.
- User can further sort as per date & time or ranges of date & time, types of sources or one or more particular sources or identified or selected or defined sources, type of visual media or content including photo and/or video, most reacted including most viewed, most rated, most liked, least disliked, most commented, most re-shared, apply safe search, apply omit duplicate visual media or content item, select presentation type, apply view interval duration between two visual media items in sequence of auto presenting of visual media items based on said pre-set interval duration.
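Applied to a hypothetical in-memory list of media items, the filter and "most reacted" sort preferences above might look like the following sketch (field names are illustrative assumptions):

```python
items = [
    {"type": "photo", "taken": 3, "views": 50,  "likes": 9},
    {"type": "video", "taken": 1, "views": 200, "likes": 40},
    {"type": "photo", "taken": 2, "views": 120, "likes": 15},
]

# Filter to photos only, then order by the "most viewed" preference.
photos_by_views = sorted(
    (i for i in items if i["type"] == "photo"),
    key=lambda i: i["views"], reverse=True)
print([i["views"] for i in photos_by_views])   # [120, 50]
```

The other orderings (most liked, most commented, date ranges) are the same pattern with a different filter predicate or sort key.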
- user can visually define ontology or semantic syntax including providing or selecting categories, sub-categories, taxonomy, keywords and visually defining or providing one or more types of one or more relationships.
- Figure 71 illustrates an example system for enabling a user to request on-demand services using a computing device, under an embodiment.
- Figure 72 illustrates an exemplary interface for enabling a user to provide and consume a visual media taker's service, or user-to-user photographer service.
- The user can select a particular visual media taker on the map and send a request, or send a request to identify and consume the nearest, matched or ranked visual media taker service provider(s), i.e. a user who captures photos or records videos of the requesting user; in the event the visual media taker accepts the request, the requestor user is notified about the visual media taker.
- The visual media taker starts capturing one or more photos or recording one or more videos and sends them to the visual media taker service consumer; in the event the received photo(s) or video(s) are accepted, the system adds points to the account of the visual media capturing service provider and deducts points from the account of the visual media capturing service consumer.
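The acceptance-driven points settlement just described can be sketched minimally (balances as plain integers keyed by user id is an assumption for illustration):

```python
def settle_media_service(accounts, provider, consumer, points, accepted):
    """On acceptance of the delivered photo/video, credit the visual media
    taker (provider) and debit the consumer; on rejection, leave both
    balances untouched."""
    if accepted:
        accounts[provider] = accounts.get(provider, 0) + points
        accounts[consumer] = accounts.get(consumer, 0) - points
    return accounts
```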
- Figure 73 illustrates processing operations associated with display of ephemeral messages and media items based on identification of read/unread or viewed/not-viewed status, or based on identification of read/unread or viewed/not-viewed status together with the associated lifetime of the message, in accordance with an embodiment of the invention.
- Figure 74 illustrates processing operations associated with display of ephemeral messages and media items based on identification of mark-as-ephemeral or mark-as-non-ephemeral message status and the message's associated pre-set timer duration, in accordance with an embodiment of the invention.
- Figure 75 illustrates processing operations associated with display of the next ephemeral message and media item based on identification of removal or saving of the presented message, or based on identification of removal, saving or expiration of the presented message's associated pre-set duration timer, in accordance with an embodiment of the invention.
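The visibility rule in Figures 73-75, combining viewed status with a per-message lifetime timer, might look like this (the dict keys `viewed`, `posted_at` and `lifetime` are assumptions for illustration):

```python
import time

def display_ephemeral(messages, now=None):
    """Return only messages that are both unread and still within their
    per-message lifetime; already-viewed or expired messages are dropped
    from the display."""
    now = time.time() if now is None else now
    visible = []
    for m in messages:
        if m["viewed"]:
            continue                            # already viewed: do not re-display
        if now - m["posted_at"] > m["lifetime"]:
            continue                            # lifetime timer expired: remove
        visible.append(m)
    return visible
```

Saving a message (Figure 75) would simply exempt it from the expiry check; that branch is omitted here for brevity.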
- Figure 76 illustrates a user interface for enabling a publisher, advertiser or user to create one or more types of mass-user-action campaign(s), including mobile application installation, deals, offers, advertisements etc., and select an available time slot (date & time and length of duration; in the event of expiry of said pre-set duration, the next one or more types of content item(s) and associated action(s), if any, are presented) for presenting said created mass user action and associated content: e.g. present group deal information with the associated user action (buy, participate in or sign group deals), present a movie trailer and enable viewing, liking and providing comments, reviews & ratings, present details of a mobile application and enable download, installation & registration, or present survey forms and enable the user to fill in the survey and get a gift within said pre-set duration of time.
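The time-slot selection above, where expiry of a slot's pre-set duration moves presentation on to the next content item, can be sketched as a simple lookup (the `(start, duration, content)` tuple shape is an assumption):

```python
def active_campaign(slots, now):
    """Return the campaign content whose time slot covers `now`; once a
    slot's pre-set duration expires, the next scheduled slot's content
    is returned instead. Each slot is (start_time, duration, content)."""
    for start, duration, content in slots:
        if start <= now < start + duration:
            return content
    return None  # no campaign scheduled for this moment
```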
- Figures 77-79 illustrate a user interface for enabling a user to provide details about the user's scheduled or day-to-day general activities, events, to-dos, meetings, appointments and tasks and available date & time range(s) for conducting other activities; and/or the system auto-identifies the user's available date & time range(s) based on the provided data and user-related data, and provides, for each available date & time range, a specific suggested list of contextual activities.
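The auto-identification of available date & time ranges from the user's scheduled activities reduces to finding gaps between busy intervals, which can be sketched as (hours-of-day integers are an illustrative simplification):

```python
def free_slots(busy, day_start, day_end):
    """Given the user's scheduled activities as sorted-or-unsorted
    (start, end) pairs, return the remaining gaps in the day -- the
    available ranges for which contextual activities could be suggested."""
    gaps, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            gaps.append((cursor, start))   # idle time before this activity
        cursor = max(cursor, end)
    if cursor < day_end:
        gaps.append((cursor, day_end))     # idle time after the last activity
    return gaps
```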
- Figure 80 illustrates an electronic device that includes digital image sensors to capture visual media, a display to present the visual media from the digital image sensors, and a touch controller to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display.
- A visual media capture controller alternately records the visual media as a photograph or a video based on a pre-defined haptic contact engagement duration threshold. If the threshold is not exceeded, a photo is captured; if the threshold is exceeded, video recording starts. If the pre-set associated timer expires, the video is stopped and stored; if the timer has not expired and haptic contact engagement is received on the icon or display, said pre-defined video duration timer is stopped, video recording stops, and the recorded video is saved.
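The threshold decision and the two video stop conditions just described can be sketched as two small functions (the 0.5 s threshold is an illustrative default, not a value from the specification):

```python
def capture_mode(press_duration, threshold=0.5):
    """Decide photo vs. video from the haptic contact engagement duration:
    releasing before the threshold captures a photo; holding past it
    starts video recording."""
    return "photo" if press_duration < threshold else "video"

def video_should_stop(elapsed, max_duration, tapped):
    """Video recording stops either when the user taps the icon/display
    again, or when the pre-set video duration timer expires."""
    return tapped or elapsed >= max_duration
```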
- Figure 81 illustrates processing operations associated with display of an index, indicia, list item(s), inbox list of items, search result item(s) or thumbnails of requested, searched, subscribed, auto-presented or received digital item(s), or thumbnails, thumbshots or small representations of ephemeral message(s) or visual media item(s) including photos or videos, content item(s), post(s), news item(s) or story item(s), for user selection based on type of feed (discussed throughout the specification). The user is presented with the selection-specific original version of the ephemeral message(s), content item(s) or visual media item(s), and a timer associated with one or more or a set of messages 8154 is started; in the event of expiry of the timer, or receipt of haptic contact engagement, or recognition or detection of one or more types of pre-defined user senses on the message, feed or set of message(s), the presented messages are removed from the display and presentation proceeds to the index, list item(s), thumbnails or thumbshots of ephemeral message(s).
- Figure 82 illustrates a user interface for creating one or more types of feeds, posting one or more types of content items or visual media items in selected ones of said created feeds, and also enabling a user to follow one or more types of feeds of one or more users of the network, so that messages posted to the followed users' followed feeds are received in the received message's associated type of feed, tab or category presentation interface.
- Figures 83-84 illustrate an exemplary interface for providing settings that allow the system to monitor, track, store, analyze, apply rules to, and extract, identify or recognize a plurality of keywords, key phrases, categories and ontology provided by the user and/or drawn from one or more types of user data, including: one or more types of detailed user profiles; monitored, tracked, detected, recognized, sensed, logged or stored activities, actions, status, manual status provided or updated by the user, locations or checked-in places, events, transactions, reactions (liked, disliked or commented contents), sharing (of one or more types of visual media or contents), viewing (of one or more types of visual media or contents), reading, listing, communications, collaborations, interactions, following, participations, behavior and senses from one or more sources; domain-, subject- or activity-specific contextual survey structured (fields and values) or unstructured forms; and devices, sensors, accounts, profiles, domains, storage mediums or databases, web sites, applications, services or web services, networks, servers, and user connections, contacts, groups, networks, relationships and followers.
- The user is also enabled to provide categories, sub-categories or taxonomy, provide one or more keywords, and mention relationships. Based on the plurality of accumulated categories, sub-categories, taxonomy and associated keywords or key phrases, or based on identified keywords or key phrases, the system can match said keywords with the recognized or identified keywords (or dictionary of keywords) stored with one or more types of visual media or content items, and select, apply and execute one or more rules from the rule base to search, match, recognize and identify user-related, relevant and contextual visual media or content item(s) for continuously creating, updating, generating and providing, presenting or serving one or more types of story, gallery, feed or series of sequences of visual media or content items.
- Based on monitoring, tracking and storing the user's viewing behavior, including liked, disliked, rated, commented, re-shared, bookmarked, number of times viewed, skipped, and most-liked sources, the system further identifies and filters the contextual visual media or content items subsequently provided to the user.
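The keyword-matched story generation with behavior-based filtering described in the last two passages might be sketched as (the `source`/`keywords` dict shape and the idea of a "muted sources" set derived from skips/dislikes are assumptions for illustration):

```python
def build_story(content_items, user_keywords, muted_sources=()):
    """Select content items whose recognized keywords intersect the user's
    accumulated keyword collection, skipping sources that the user's
    viewing behavior (e.g. repeated skips or dislikes) has filtered out."""
    return [c for c in content_items
            if c["source"] not in muted_sources
            and set(c["keywords"]) & set(user_keywords)]
```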
- Figure 85 (A) illustrates user interface for enabling user to scan and view suggested keywords based on scan for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 85 (B) illustrates user interface for enabling user to view suggested keywords based on recognition of keywords from recorded user's voice for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 85 (C) illustrates user interface for enabling user to view user's current location or checked-in place specific suggested keywords for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 86 (A) illustrates user interface for enabling user to scan one or more types of barcode or code including QRcode and view suggested keywords based on scan of code for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 86 (B) illustrates user interface for enabling user to view suggested keywords based on recognition of user's eye view of particular object or scene or code via one or more types of wearable device including eye glasses or digital spectacles equipped with video camera and connected with user device(s) including smart phone for enabling user to add selected keywords from suggested keywords to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 87 (A) illustrates user interface for enabling user to input keywords and/or associated one or more types of user actions, relationships, status, activities, actions, events, senses, interaction, status, location or place, connection, and communication by user and add to user related collection of keywords or add to user related collection of keywords and share with one or more types of one or more contacts and/or destinations.
- FIGURE 87 (B) illustrates user interface for enabling user to view user's current status (manual or auto identified) specific suggested keywords for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 87 (C) illustrates user interface for enabling user to select one or more categories, select or select from suggested keywords or input one or more keywords and/or associated one or more types of user actions, relationships, status, activities, properties, attributes, selected or added field(s) and associated value(s), actions, events, senses, interaction, status, location or place, connection, and communication by user and add to user related collection of keywords or add to user related collection of keywords and share with one or more types of one or more contacts and/or destinations.
- Figure 87 (D) illustrates user interface for enabling user to view suggested keywords including advertised keywords (discussed in detail in figures 91-98) based on one or more types of updated user data, for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 88 (A) illustrates user interface for enabling user to view user's current location or checked-in place related nearby places & locations and/or user data specific suggested keywords (e.g. brands, products, services, activities type specific etc.) for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 88 (B) illustrates user interface for enabling user to view suggested keywords provided by user's one or more contacts or suggested by contextual or related or interacted or liked or currently visited or visiting advertisers, sellers, merchants, places, shops, service providers, point of interest, hotels, restaurants etc. for enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 88 (C) illustrates user interface for enabling user to provide user details via one or more types of profiles or forms, search and select or use auto presented one or more categories templates or forms for providing domain or subject or field or category or activity specific keywords, relationships, type of user action, provide user preferences for suggesting keywords to user, and enable to create, update, add in selected domain or subject or activity type specific user ontology (ies).
- Figure 88 (D) illustrates user interface for enabling user to input multiple keywords including keywords and associated one or more types of user actions, relationships, status, activities, actions, events, senses, interaction, status, location or place, connection, and communication and add to user related collection of keywords or add to user related collection of keywords and share with one or more types of one or more contacts and/or destinations.
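The "add to collection" and "add & share with contact(s)" actions that recur across Figures 85-88 can be modeled minimally (class and attribute names are assumptions, not from the specification):

```python
class KeywordCollection:
    """Minimal model of a user's keyword collection with optional
    per-keyword sharing to contacts."""
    def __init__(self):
        self.keywords = set()
        self.shared_with = {}              # keyword -> set of contact ids

    def add(self, keyword, share_with=()):
        """Add a suggested or entered keyword; optionally share it."""
        self.keywords.add(keyword)
        if share_with:
            self.shared_with.setdefault(keyword, set()).update(share_with)
```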
- Figure 89 (A) illustrates user interface for enabling user to search and select location or place on map and search, select, input and add from suggested list of keywords.
- Figure 89 (B) illustrates user interface for enabling user to view suggested local keywords based on one or more types of user related addresses.
- Figure 89 (C) illustrates user interface for enabling user to view one or more types of received notifications.
- Figure 89 (D) illustrates user interface for enabling user to add keywords from 3rd parties' web sites and applications, integrated by 3rd parties' web sites and applications and provided by server 110, advertisers and 3rd parties' web sites and applications.
- Figure 90 (A) illustrates user interface for enabling user to view suggested keywords related to user selected keywords and enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 90 (B) illustrates user interface for enabling user to view user related keywords and take one or more user actions from provide contextual user actions.
- Figure 90 (C) illustrates user interface for enabling user to view suggested structured form(s) or template(s) or field(s) or questions based on user's scan of object or code, recording of voice, view of object via eye glasses, supplied object model, current location, checked-in place, and status, and to provide associated value(s) or answers or details, enabling user to add to user's collection of keywords or add to user's collection of keywords & share with contact(s).
- Figure 90 (D) illustrates user interface for enabling user to provide various settings.
- Figure 91 illustrates user interface for enabling advertiser or publisher user to create campaign(s), set budget, and provide target user criteria, location criteria, schedule and other settings.
- Figure 92 illustrates user interface for enabling advertiser or publisher user to create and manage campaign related advertisement group(s), advertisement (s), advertisement related advertised keywords, associate type of relationships, user actions, categories, & hashtags, associate one or more user action controls or links or applications or interfaces or media, provide target keywords, and object criteria.
- Figure 93 illustrates user interface for enabling advertiser or publisher user to show keyword advertisement(s) to/at/in one or more selected features.
- Figure 94 illustrates user interface for enabling user to search, match, select, create, update, suggest, generate one or more user related customized and configured categories templates for providing presented or selected one or more fields specific one or more types of values including brand name, product name, service name, one or more types of entity name, user actions or reactions or relationships.
- Figures 95-96 illustrate user interface for enabling user to provide one or more types of profile-related structured as well as un-structured details.
- Figure 97 illustrates user interface for enabling user to provide preferences for receiving suggested keywords.
- Figure 98 illustrates user interface for enabling user to search, browse categories directories and select one or more keywords and add to user related collections of keywords or add to user related collections of keywords and share with one or more contacts and/or destinations.
- Figure 99 illustrates user interface for enabling user to create, provide, update and suggest user-related simplified ontology(ies) or similar structures, wherein the system interprets said simplified ontology(ies) based on one or more keywords; structured details including auto-presented contextual, added or suggested field(s) (or sets of category- or activity-specific fields via forms, templates or questionnaires) with data-type-specific values, data or details; and associated types, categories, types & names of entities, activities, actions, events, transactions, status, locations, places, requirements, sharing, participations, reactions, and tasks.
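One plausible in-memory shape for such a simplified user ontology, with categories carrying keywords and typed relationships between terms, is sketched below (the structure and names are assumptions for illustration only):

```python
# A user's simplified ontology: nested categories with keywords, plus
# typed (subject, relation, object) relationships between terms.
ontology = {
    "categories": {
        "travel": {"keywords": ["hotel", "flight"]},
    },
    "relationships": [("hotel", "located_in", "city")],
}

def keywords_for(onto, category):
    """Look up the keyword list attached to a category, if any."""
    node = onto["categories"].get(category, {})
    return node.get("keywords", [])
```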
- Figure 100 illustrates user interface for enabling a user to conduct accelerate-mode video talking with one or more users or contacts.
- A user can instantly start, stop, restart and stop video talking again.
- Based on a voice command, if the user's device is OFF the system automatically turns it ON, and the user is automatically presented with the front-camera display screen, enabling the user to instantly start video talking.
- The server connects with the user's contact based on recognizing the contact named in said voice command, and stores the started video talk's incremental video stream at a relay server. In the event of a successful connection, both users can start video talking with each other. In the event of a delay in making the connection, the server presents said recorded video first.
- The server presents a system message or the called user's status to the caller and sends said recorded video message of the caller to the called user, so the called user can view it and issue a voice command to connect with said user.
- The system turns the caller's and called user's devices OFF and hides or closes the loaded & presented video interface to stop video talking between them.
- A user can talk for some time and stop, then talk again; or pause talking while busy and resume when both users are available. Hands-free starting, stopping and restarting of video talking thus makes the user feel like the conversation is natural.
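The hands-free session flow above, where a voice command starts recording immediately, the stream is buffered at the relay server until the callee connects, and the buffered video is then delivered first, can be sketched as a small state machine (state names and methods are assumptions for illustration):

```python
class VideoTalkSession:
    """Sketch of an accelerate-mode video talk session: start on voice
    command, buffer frames while awaiting connection, deliver the buffer
    once the callee joins, stop hands-free on a voice command."""
    def __init__(self):
        self.state = "idle"
        self.buffer = []

    def voice_start(self):
        self.state = "recording"        # camera on, capture begins at once

    def push_frame(self, frame):
        if self.state == "recording":
            self.buffer.append(frame)   # relay server stores incremental stream

    def callee_connected(self):
        delivered = list(self.buffer)   # buffered video is presented first
        self.buffer.clear()
        self.state = "live"
        return delivered

    def voice_stop(self):
        self.state = "idle"             # devices off, video interface hidden
```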
- Figure 101 is a block diagram that illustrates a mobile computing device upon which embodiments described herein may be implemented. While the invention is described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description.
- the word “may” is used in a permissive sense (e.g., meaning having the potential to), rather than the mandatory sense (e.g., meaning must).
- the words “include”, “including”, and “includes” mean including, but not limited to.
- Figure 1 is a network diagram depicting a network system 100 having a client-server architecture configured for exchanging data over a network, according to one embodiment.
- the network system 100 may be a messaging system where clients may communicate and exchange data within the network system 100.
- the data may pertain to various functions (e.g., sending and receiving ephemeral or non-ephemeral messages, logging, user activities, actions, events, transactions, senses, behavior, status, receiving user profile, privacy settings, preferences, access conditions & rules, ephemeral settings, rules & conditions, sensor data from one or more types of user device sensors, indications or notifications, text and media communication, media items, and receiving search query including keywords, rules, preferences & Boolean operators, object criteria including object models, keywords & conditions, search result, supplied object criteria and target criteria specific visual media advertisements, created configuration of gallery or story or event, configuration of visual media capture controller, scanned object, supplied scanned objects and associated user actions or controls or interfaces or applications, user actions or controls or interfaces or
- A platform, in an example, includes a server 110 which includes various applications described in detail at 236, and may provide server-side functionality via a network 125 (e.g., the Internet) to one or more clients.
- the one or more clients may include users that utilize the network system 100 and, more specifically, the server applications 236, to exchange data over the network 125.
- These operations may include transmitting, receiving (communicating), and processing data to, from, and regarding content and users of the network system 100.
- the data may include, but is not limited to, content and user data such as shared or broadcasted visual media, user profiles, user status, user location or checked-in place, search queries, saved search results or bookmarks, privacy settings, preferences, created events, feeds, stories related settings & preferences, user contacts, connections, groups, networks, opt-in contacts, followed feeds, stories & hashtags, following users & followers, user logs of the user's activities, actions, events, transactions, messaging content, shared or posted contents or one or more types of media including text, photo, video, edited photo or video (e.g. with one or more photo filters, lenses, emoticons, or overlay drawings or text applied), messaging attributes or properties, media attributes or properties, client device information, geolocation information, and social network information, among others.
- the data exchanges within the network system 100 may be dependent upon user-selected functions available through one or more client or user interfaces (UIs).
- the UIs may be associated with a client machine, such as mobile devices or one or more types of computing device 130, 135, 140.
- the mobile devices e.g. 130 and 135 may be in communication with the server application(s) 236 via an application server 199.
- the mobile devices e.g. 130, 135 include wireless communication components, and audio and optical components for capturing various forms of media including photos and videos as described with respect to Figure 2.
- For the server messaging application 236, an application program interface (API) server is coupled to, and provides a programmatic interface to, the application server 199.
- the application server 199 hosts the server application(s) 236.
- the application server 199 is, in turn, shown to be coupled to one or more database servers 198 that facilitate access to one or more databases 199.
- the Application Programming Interface (API) server 110 communicates and receives data pertaining to visual media, user profile, preferences, privacy settings, presentation settings, user data, search queries, user actions or controls from 3rd-party developers, providers, servers, networks, applications, devices & storage mediums, notifications, ephemeral or non-ephemeral messages, media items, and communications, among other things, via various user input tools.
- the API server 197 may send and receive data to and from an application running on another client machine (e.g., mobile devices 130, 135, 140 or one or more types of computing devices or a third party server).
- the server application(s) 236 provides messaging mechanisms for users of the mobile devices e.g. 130, 135 to send messages that include ephemeral or non-ephemeral messages or text and media items or contents such as pictures and video, and search requests, subscribe or follow requests, and requests to access search query based feeds and stories.
- the mobile devices 130, 135 can access and view the messages from the server application(s) 236.
- the server application(s) 236 may utilize any one of a number of message delivery networks and platforms to deliver messages to users.
- the messaging application(s) 236 may deliver messages using electronic mail (e-mail), instant message (IM), Push Notifications, Short Message Service (SMS), text, facsimile, or voice (e.g., Voice over IP (VoIP)) messages via wired (e.g., the Internet), plain old telephone service (POTS), or wireless networks (e.g., mobile, cellular, WiFi, Long Term Evolution (LTE), Bluetooth).
- Figure 1 illustrates an example platform, under an embodiment.
- system 100 can be implemented through software that operates on a portable computing device, such as a mobile computing device 110.
- System 100 can be configured to communicate with one or more network services, databases, objects that coordinate, orchestrate or otherwise provide advertised contents of each user to other users of network.
- the mobile computing device can integrate third-party services which enable further functionality through system 100.
- Various embodiments of the system also enable a user to create events or groups, so that invited participants or members present at a particular place or location can share media, including photos and videos, with each other.
- the system also enables the user to create, save, bookmark, subscribe to and view one or more object criteria, including provided object model(s) or sample image(s), identified object-related keywords, and object conditions (exact match, similar, pattern-matched) specific to searched or matched series of one or more types of media or contents including photo, video, voice, text & the like.
- the system also enables the user to display ephemeral messages in real-time, or via sensors and/or timers, or in tabs.
- the system also enables the sender of media to access media shared by the sender at the recipient device, including adding, removing, editing & updating shared media at the recipient's device, application, gallery or folder.
- Although FIG. 1 illustrates a gateway 120, a database 115 and a server 110 as separate entities, the illustration is provided for example purposes only and is not meant to limit the configuration of the system. Gateway 120, database 115 and server 110 may be implemented in the system as separate systems, a single system, or any combination of systems.
- the system may include a posting or sender user device or mobile devices 130/140 and viewing or receiving user device or mobile devices 135.
- Devices or mobile devices 130/140/135 may be a particular set number of, or an arbitrary number of, devices or mobile devices which may be capable of capturing, recording, previewing, posting, sharing, publishing, broadcasting, advertising, notifying, sensing, sending, presenting, searching, matching, accessing and managing shared contents or visual media or content items.
- Each device or mobile device in the set of posting or sending or broadcasting or advertising or sharing user device(s) 130/140 and viewing or receiving user device(s) 135/140 may be configured to communicate, via a wireless connection, with each one of the other mobile devices 130/140/135.
- Each one of the mobile devices 130/140/135 may also be configured to communicate, via a wireless connection, to a network 125, as illustrated in Figure 1.
- the wireless connections of mobile devices 130/140/135 may be implemented within a wireless network such as a Bluetooth network or a wireless LAN.
- the system may include gateway 120.
- Gateway 120 may be a web gateway which may be configured to communicate with other entities of the system via wired and/or wireless network connections.
- gateway 120 may communicate with mobile devices 130/140/135 via network 125.
- gateway 120 may be connected to network 125 via a wired and/or wireless network connection.
- gateway 120 may be connected to database 115 and server 110 of system.
- gateway 120 may be connected to database 115 and/or server 110 via a wired or a wireless network connection.
- Gateway 120 may be configured to send and receive user contents or posts or data to targeted or prospective, matched & contextual viewers based on preferences (wherein user data comprises user profile, user connections, connected users' data, user shared data or contents, user logs, activities, actions, events, senses, transactions, status, updates, presence information, locations, check-in places and the like) to/from mobile devices 130/140/135.
- gateway 120 may be configured to receive posted contents provided by posting users or publishers or content providers to database 115 for storage.
- gateway 120 may be configured to send or present posted contents to contextual viewers stored in database 115 to mobile devices 130/140/135.
- Gateway 120 may be configured to receive search requests from mobile devices 130/140/135 for searching and presenting posted contents.
- gateway 120 may receive a request from a mobile device and may query database 115 with the request for searching and matching request specific matched posted contents, sources, followers, following users and viewers. Gateway 120 may be configured to inform server 110 of updated data. For example, gateway 120 may be configured to notify server 110 when a new post has been received from a mobile device or device of posting or publishing or content broadcaster(s) or provider(s) stored on database 115.
- the system may include a database, such as database 115.
- Database 115 may be connected to gateway 120 and server 110 via wired and/or wireless connections.
- Database 115 may be configured to store a database of registered user's profile, accounts, posted or shared contents, followed updated keyword(s), key phrase(s), named entities, nodes, ontology, semantic syntax, categories & taxonomies, user data, payments information received from mobile devices 130/140/135 via network 125 and gateway 120.
- Database 115 may also be configured to receive and service requests from gateway 120.
- database 115 may receive, via gateway 120, a request from a mobile device and may service the request by providing, to gateway 120, user profile, user data, posted or shared contents, user followers, following users, viewers, contacts or connections, user or provider account's related data which meet the criteria specified in the request.
- Database 115 may be configured to communicate with server 110.
- the system may include a server, such as server 110.
- Server may be connected to database 115 and gateway 120 via wired and/or wireless connections.
- server 110 may be notified, by gateway 120, of new or updated user profile, user data, user posted or shared contents, user followed updated keyword(s), key phrase(s), named entities, nodes, ontology, semantic syntax, categories & taxonomies & various types of status stored in database 115.
- Figure 1 illustrates a block diagram of a system configured to implement the various embodiments described herein.
- the system identifies the user's intention to take a photo or video and automatically invokes, opens and shows the camera display screen, so the user can capture a photo or video without manually opening the camera application each time.
- the system identifies the user's intention to view media and shows an interface for viewing media.
- the system enables a user to create events so that invited participants or members present at a particular place or location can share media, including photos and videos, with each other.
- the system enables a user to create, save, bookmark, subscribe to and view one or more object criteria, including provided object model(s) or sample image(s), identified object-related keywords, and object conditions (including exact match, similar, and pattern matched), for specifically searched or matched series of one or more types of media or contents including photo, video, voice, text and the like.
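The object-criteria evaluation described above can be sketched as a simple predicate. This is an assumed illustration: `matches_criteria` and its condition semantics are placeholders, and a real system would compare recognized objects produced by an image-recognition stage rather than plain strings.

```python
def matches_criteria(recognized_objects, criteria_objects, condition="exact"):
    """Evaluate saved object criteria against objects recognized in a media item.

    condition="exact"   -> every wanted object must be present
    condition="similar" -> any overlap between wanted and recognized counts
    """
    recognized = {o.lower() for o in recognized_objects}
    wanted = {o.lower() for o in criteria_objects}
    if condition == "exact":
        return wanted <= recognized
    if condition == "similar":
        return bool(wanted & recognized)
    raise ValueError(f"unknown condition: {condition}")

# A photo recognized to contain a dog and a ball satisfies the "dog" criterion.
ok = matches_criteria(["Dog", "ball"], ["dog"], condition="exact")
```

A matched media item would then be surfaced to the user who saved or subscribed to the criteria.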
- the system also enables display of ephemeral messages in real time, via sensors and/or timers, or in tabs.
- the system enables the sender of media to access media shared by the sender at the recipient's device, including adding, removing, editing and updating shared media at the recipient's device, application, gallery or folder.
- Although Figure 1 illustrates a gateway 120, a database 115 and a server 110 as separate entities, the illustration is provided for example purposes only and is not meant to limit the configuration of the system.
- gateway 120, database 115 and server 110 may be implemented in the system as separate systems, a single system, or any combination of systems.
- the server 110 stores database server 198, API server 197 and application server 199, which stores Sender's Ephemeral/Non-Ephemeral Settings for Recipients Module 171, Recipient's Ephemeral/Non-Ephemeral Settings for Senders Module 172, Visual Media Search/Request Module 173, Visual Media Subscription Module 174, User's Visual Media Privacy Settings Module 175, Visual Media Advertisement Module 176, Sender's Shared Content Access
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an Auto Present Camera Display Screen 260 to implement operations of one of the embodiments of the invention.
- the Auto Present Camera Display Screen 260 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Auto Present Camera Display Screen 260 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores Auto Present on Camera Display Screen Visual Media Capture controller(s), icon(s), label(s), image(s) or control(s) 261, enabling the user to one-tap capture a photo or record a video, preview it for a pre-set duration, and manually select destination(s) and send, or auto-send to auto-determined destination(s), to implement operations of another embodiment of the invention.
- the Auto Present on Camera Display Screen Visual Media Capture controller(s), icon(s), label(s), image(s) or control(s) 261 may include executable instructions to access a server which coordinates operations disclosed herein. Alternately, the Auto Present on Camera Display Screen Visual Media Capture controller(s), icon(s), label(s), image(s) or control(s) 261 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores an Auto Present Media Viewer Application 262 to implement operations of one of the embodiments of the invention.
- the Auto Present Media Viewer Application 262 may include executable instructions to access a client device and/or server which coordinates operations disclosed herein. Alternately, the Auto Present Media Viewer Application 262 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores an Auto or Manually Capture Visual Media Application 263 to implement operations of one of the embodiments of the invention.
- the Auto or Manually Capture Visual Media Application 263 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Auto or Manually Capture Visual Media Application 263 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Preview or Auto Preview Visual Media Application 264 to implement operations of one of the embodiments of the invention.
- the Preview or Auto Preview Visual Media Application 264 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Preview or Auto Preview Visual Media Application 264 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Media Sharing Application (Send Visual Media Item(s) to user selected or Auto determined destination(s)) 265 to implement operations of one of the embodiments of the invention.
- the Media Sharing Application (Send Visual Media Item(s) to user selected or Auto determined destination(s)) 265 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Media Sharing Application (Send Visual Media Item(s) to user selected or Auto determined destination(s)) 265 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Send by User or Auto Send Visual Media Item(s) Application 266 to implement operations of one of the embodiments of the invention.
- the Send by User or Auto Send Visual Media Item(s) Application 266 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Send by User or Auto Send Visual Media Item(s) Application 266 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores an Ephemeral or Non-Ephemeral Content Access Rules & Settings for Recipient(s) by Sender Application 267 to implement operations of one of the embodiments of the invention.
- the Ephemeral or Non-Ephemeral Content Access Rules & Settings for Recipient(s) by Sender Application 267 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Ephemeral or Non-Ephemeral Content Access Rules & Settings for Recipient(s) by Sender Application 267 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores an Ephemeral or Non-Ephemeral Content Receiving Rules & Settings Application 268 to implement operations of one of the embodiments of the invention.
- the Ephemeral or Non-Ephemeral Content Receiving Rules & Settings Application 268 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Ephemeral or Non-Ephemeral Content Receiving Rules & Settings Application 268 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Search Query, Conditions, Object Criteria(s), Scan, Preferences, Directory, User Data & Clicked object(s) inside visual Media specific Searching, Following & Auto Presenting of Visual Media Items or Story Application 269 to implement operations of various embodiments of the invention.
- the Search Query, Conditions, Object Criteria(s), Scan, Preferences, Directory, User Data & Clicked object(s) inside visual Media specific Searching, Following & Auto Presenting of Visual Media Items or Story Application 269 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Search Query, Conditions, Object Criteria(s), Scan,
- Preferences, Directory, User Data & Clicked object(s) inside visual Media specific Searching, Following & Auto Presenting of Visual Media Items or Story Application 269 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores an Object Criteria & Target Criteria specific Visual Media Advertisement insertion Story Application 270 to implement operations of one of the embodiments of the invention.
- the Object Criteria & Target Criteria specific Visual Media Advertisement insertion Story Application 270 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Object Criteria & Target Criteria specific Visual Media Advertisement insertion Story Application 270 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Sender's Shared Content Access at Recipient's Device Application 271 to implement operations of one of the embodiments of the invention.
- the Sender's Shared Content Access at Recipient's Device Application 271 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Sender's Shared Content Access at Recipient's Device Application 271 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Capture User related Visual Media via other User's Device Application 272 to implement operations of one of the embodiments of the invention.
- the Capture User related Visual Media via other User's Device Application 272 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Capture User related Visual Media via other User's Device Application 272 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a User Privacy for others for taking user's related visual media Application 273 to implement operations of one of the embodiments of the invention.
- the User Privacy for others for taking user's related visual media Application 273 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the User Privacy for others for taking user's related visual media Application 273 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Multi Tabs or Multi Access Ephemeral Message Controller and Application 274 to implement operations of one of the embodiments of the invention.
- the Multi Tabs or Multi Access Ephemeral Message Controller and Application 274 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Multi Tabs or Multi Access Ephemeral Message Controller and Application 274 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores an Ephemeral Message Controller and Application 275 to implement operations of one of the embodiments of the invention.
- the Ephemeral Message Controller and Application 275 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Ephemeral Message Controller and Application 275 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Real-time Ephemeral Message Controller and Application 276 to implement operations of one of the embodiments of the invention.
- the Real-time Ephemeral Message Controller and Application 276 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Real-time Ephemeral Message Controller and Application 276 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Various Types of Ephemeral feed(s) Controller and Application 277 to implement operations of various embodiments of the invention.
- the Various Types of Ephemeral feed(s) Controller and Application 277 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Various Types of Ephemeral feed(s) Controller and Application 277 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores Various Types of Multi-tasking Visual Media Capture Controllers and Applications 278 to implement operations of various embodiments of the invention.
- the Various Types of Multi-tasking Visual Media Capture Controllers and Applications 278 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Various Types of Multi-tasking Visual Media Capture Controllers and Applications 278 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- FIG. 2 (278) illustrates components of an electronic device implementing single mode visual media capture in accordance with the invention.
- the memory 236 stores a User created event or gallery or story Application 279 to implement operations of one of the embodiments of the invention.
- the User created event or gallery or story Application 279 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the User created event or gallery or story Application 279 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Scan to Access Digital Items Application 280 to implement operations of one of the embodiments of the invention.
- the Scan to Access Digital Items Application 280 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Scan to Access Digital Items Application 280 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a User Reaction Application 281 to implement operations of one of the embodiments of the invention.
- the User Reaction Application 281 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the User Reaction Application 281 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a User's Auto Status Application 282 to implement operations of one of the embodiments of the invention.
- the User's Auto Status Application 282 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the User's Auto Status Application 282 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Mass User Action Application 286 to implement operations of one of the embodiments of the invention.
- the Mass User Action Application 286 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Mass User Action Application 286 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a User Requirement specific Responses Application 284 to implement operations of one of the embodiments of the invention.
- the User Requirement specific Responses Application 284 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the User Requirement specific Responses Application 284 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Suggested Prospective Activities Application 285 to implement operations of one of the embodiments of the invention.
- the Suggested Prospective Activities Application 285 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Suggested Prospective Activities Application 285 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the memory 236 stores a Natural Talking (e.g. video/voice) Application 287 to implement operations of one of the embodiments of the invention.
- the Natural talking (e.g. video/voice) application 287 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Natural talking (e.g. video/voice) application 287 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
- the processor 230 is also coupled to image sensors 238.
- the image sensors 238 may be known digital image sensors, such as charge coupled devices.
- the image sensors 238 capture visual media and present it on the display 210 so that a user can observe the captured visual media.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220 to provide connectivity to a wireless network.
- a power control circuit 225 and a global positioning system (GPS) processor 235 may also be utilized. While many of the components of Figure 2 are known in the art, novel functionality arises from the notification application 260 operating in conjunction with a server.
- FIG. 2 shows a block diagram illustrating one example embodiment of a mobile device 200.
- the mobile device 200 includes an optical sensor 244 or image sensor 238, a Global Positioning System (GPS) sensor 235, a position sensor 242, a processor 230, a storage 236, and a display 210.
- the optical sensor 244 includes an image sensor 238, such as, a charge-coupled device.
- the optical sensor 244 captures visual media.
- the optical sensor 244 can be used to capture media items such as pictures and videos.
- the GPS sensor 235 determines the geolocation of the mobile device 200 and generates geolocation information (e.g., coordinates including latitude, longitude and altitude).
- other sensors may be used to detect a geolocation of the mobile device 200.
- a WiFi sensor or Bluetooth sensor or Beacons including iBeacons or other accurate indoor or outdoor location determination and identification technologies can be used to determine the geolocation of the mobile device 200.
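The fallback among location technologies described above can be sketched as an ordered provider chain. This is an illustrative sketch only: the `locate` function and the provider callables are assumed stand-ins for platform location APIs (GPS, WiFi positioning, Bluetooth beacons).

```python
def locate(providers):
    """Try each location provider in order; return the first fix obtained.

    providers: ordered list of callables returning (lat, lon) or None.
    """
    for provider in providers:
        fix = provider()
        if fix is not None:
            return fix
    return None   # no provider could determine the geolocation

# Indoors, GPS often yields no fix, so WiFi positioning is tried next.
gps = lambda: None
wifi = lambda: (51.5074, -0.1278)
beacon = lambda: (51.5075, -0.1279)

fix = locate([gps, wifi, beacon])
```

Ordering providers from most to least accurate lets the device fall back gracefully when the preferred technology is unavailable.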
- the position sensor 242 measures a physical position of the mobile device relative to a frame of reference.
- the position sensor 242 may include a geomagnetic field sensor to determine the direction in which the optical sensor 240 or the image sensor 244 of the mobile device is pointed and an orientation sensor 237 to determine the orientation of the mobile device (e.g., horizontal, vertical etc.).
- the processor 230 may be a central processing unit that includes a media capture application 263, a media display application 262, and a media sharing application 265.
- the media capture application 263 includes executable instructions to generate media items such as pictures and videos using the optical sensor 240 or image sensor 244.
- the media capture application 263 also associates a media item with the geolocation and the position of the mobile device 200 at the time the media item is generated, using the GPS sensor 235 and the position sensor 242.
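The association of a captured media item with capture-time geolocation and device position can be sketched as below. The `MediaItem` record and its field names are illustrative assumptions, not the disclosed data model.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    kind: str                 # "photo" or "video"
    latitude: float
    longitude: float
    heading_degrees: float    # direction from the geomagnetic field sensor
    orientation: str          # device orientation, e.g. "horizontal"/"vertical"

def capture(kind, gps_fix, heading, orientation):
    """Stamp a newly generated media item with location and position metadata."""
    lat, lon = gps_fix
    return MediaItem(kind, lat, lon, heading, orientation)

item = capture("photo", (48.8584, 2.2945), heading=270.0, orientation="vertical")
```

Storage 236 would then persist the item together with this metadata, enabling later location- or direction-based search.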
- the storage 236 includes a memory that may be or include flash memory, random access memory, any other type of memory accessible by the processor 230, or any suitable combination thereof.
- the storage 236 stores the media items generated or shared or received by user and also store the corresponding geolocation information, auto identified system data including date & time, auto recognized keywords, metadata, and user provided information.
- the storage 236 also stores executable instructions corresponding to the Auto Present Camera Display Screen 260.
- the display 210 includes, for example, a touch screen display.
- the display 210 displays the media items generated by the media capture application 263.
- a user captures, records and selects media items for sending to one or more selected or auto-determined destinations, or for adding to one or more types of feeds, stories or galleries, by touching the corresponding media items on the display 210.
- a touch controller monitors signals applied to the display 210 to coordinate the capturing, recording, and selection of the media items.
- the mobile device 200 also includes a transceiver that interfaces with an antenna.
- the transceiver may be configured to both transmit and receive cellular network signals, wireless data signals, or other types of signals via the antenna, depending on the nature of the mobile device 200.
- the GPS sensor 235 may also make use of the antenna to receive GPS signals.
- Figure 3 illustrates an embodiment of a logic flow 300 for the visual media capture system 200 of Figure 2.
- the logic flow 300 may be representative of some or all of the operations executed by one or more embodiments described herein.
- Figure 3 (A) illustrates processing operations associated with the Auto Present Camera Display Screen Application and/or auto present visual media capture controller 260.
- Initially, an eye tracking system is loaded in memory or in background mode 303 so that it can continuously monitor the user's eyes using one or more types of image sensors 244 or optical sensors 240; alternatively, a user may access an application presented on display 210 to invoke, initiate or open the eye tracking system 303.
- the eye tracking system monitors, tracks and recognizes one or more types of the user's eye movement, eye status or eye position by using one or more optical sensors 240 or image sensors 244.
- the eye tracking system monitors, tracks and recognizes one or more types of the user's eye movement, eye status or eye position, relative to the display 210 or the user device 200 position, by using one or more optical sensors 240 or image sensors 244 and by employing the accelerometer sensor 248 and/or other device orientation sensors, including the gyroscope 247, of the user device 200.
- An accelerometer is a sensor which measures the tilting motion and orientation of a mobile phone.
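The tilt measurement attributed to the accelerometer above can be sketched from a three-axis reading. This is a generic sketch, not the disclosed method; the axis convention (z toward the user when the phone lies flat) is an assumption.

```python
import math

def tilt_angles(ax, ay, az):
    """Derive pitch and roll (in degrees) from raw accelerometer axes.

    With the device at rest, gravity is distributed across the three axes,
    which lets the tilt of the device be recovered.
    """
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

# Device lying flat: gravity is entirely on the z axis, so no tilt.
pitch, roll = tilt_angles(0.0, 0.0, 9.81)
```

Such angles, combined with eye-tracking state, would feed the "device held as if to take a photo" determination in the following steps.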
- At 313, based on determination or recognition of a particular type of the user's eye movement, eye status or eye position and the type of device orientation (for example, holding the device as if to see the camera display screen, i.e. the camera view or camera application view for capturing a photo or recording a video), the system automatically opens the camera application or camera display screen to capture a photo or record a video; i.e., if the mobile device is off or the screen is locked, the system automatically turns on the mobile device and opens the camera application, enabling the user to capture a photo or record a video without manually turning on the device and opening the camera application.
- the eye tracking system recognizes or detects another particular type of the user's eye movement, eye status or eye position and device orientation, for example holding the device as if to view photos from a gallery or album (e.g. 480 or 490), and then opens the photo or video viewing application, gallery, album, or one or more preconfigured or pre-set applications or interfaces.
- A pre-set timer is started and, upon its expiry, a photo is auto-captured or video recording auto-starts (until the user ends the video or a pre-set maximum duration is reached). Optionally, the user is auto-presented with one or more visual media capture controller labels or icons on the camera display screen to, within one tap, capture a photo or record a (pre-set duration) video and auto-send it to the contact(s), group of contacts or group(s) associated with said visual media capture controller, discussed in detail in Figures 44 and 48. Optionally, a photo preview interface or video preview interface is auto-presented for a pre-set duration to review, cancel or change destination(s); after expiry of said pre-set duration or preview timer, the captured media is auto-sent to the contact(s), group of contacts or group(s) associated with said visual media capture controller, discussed in detail in Figures 44 and 48.
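The auto-capture-then-auto-send flow described above can be sketched as follows. This is a hedged sketch under assumptions: `auto_capture_and_send` and its callbacks are hypothetical placeholders for the capture pipeline, preview window, and delivery mechanism.

```python
def auto_capture_and_send(capture, send, destinations, cancelled=lambda: False):
    """Capture when the pre-set timer fires, then send unless cancelled.

    capture      -> callable producing the captured media
    send         -> callable delivering media to destinations
    cancelled    -> callable reporting whether the user cancelled
                    during the preview window
    """
    photo = capture()            # fires when the pre-set timer expires
    if cancelled():              # user cancelled during the preview duration
        return None
    send(photo, destinations)    # auto-send to the controller's destinations
    return photo

sent = []
photo = auto_capture_and_send(
    capture=lambda: "photo-bytes",
    send=lambda p, d: sent.append((p, tuple(d))),
    destinations=["family-group"],
)
```

In the described system, `destinations` would be the contact(s) or group(s) pre-associated with the visual media capture controller the user tapped.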
- At 333, optionally, while hovering on the camera display (via a hover sensor), a show/hide menu of contacts/groups/destinations or menu items is shown on the camera display, so that while hovering on a preferred or particular menu item or visual media capture controller icon or label the user can automatically (1) view the camera screen scene, (2) capture a photo or start recording or record a particular pre-set duration of video, (3) store, (4) preview, (5) select, and (6) send to the contact(s), group(s) or destination(s) related to that menu item.
- Figure 3 (B) illustrates processing operations associated with the Auto Present Camera Display Screen Application and/or auto present visual media capture controller 260.
- Initially, an eye tracking system is loaded in memory or in background mode 346 so that it can continuously monitor the user's eyes using one or more types of image sensors 244 or optical sensors 240; alternatively, a user may access an application presented on display 210 to invoke, initiate or open the eye tracking system 346.
- the system automatically opens or closes the device (e.g. mobile device or digital television), automatically opens the camera display screen or camera application, or presents one or more types of digital items, e.g. a pre-set application, feature, interface or screen (e.g. view feed, stories, received or recently received photos or videos).
- Figure 4 illustrates processing operations associated with the Auto Present Camera Display Screen Application and/or auto present visual media capture controller 260.
- Initially, an eye tracking system is loaded in memory or in background mode 405 so that it can continuously monitor the user's eyes using one or more types of image sensors 244 or optical sensors 240; alternatively, a user may access an application presented on display 210 to invoke, initiate or open the eye tracking system 405.
- the eye tracking system monitors, tracks and recognizes one or more types of the user's eye movement, eye status or eye position by using one or more optical sensors 240 or image sensors 244.
- the eye tracking system monitors, tracks and recognizes one or more types of the user's eye movement, eye status or eye position, relative to the display 210 or the user device 200 position, by using one or more optical sensors 240 or image sensors 244 and by employing the accelerometer sensor 248 and/or other device orientation sensors, including the gyroscope 247, of the user device 200.
- An accelerometer is a sensor which measures the tilting motion and orientation of a mobile phone.
- At 415, based on determination or recognition of a particular type of the user's eye movement, eye status or eye position and the type of device orientation (for example, holding the device as if to see the camera display screen, i.e. the camera view or camera application view for capturing a photo or recording a video), the system automatically opens the camera application or camera display screen to capture a photo or record a video; i.e., if the mobile device is off or the screen is locked, the system automatically turns on the mobile device and opens the camera application, enabling the user to capture a photo or record a video without manually turning on the device and opening the camera application.
- After auto-opening the camera application to enable the user to take visual media, at 440 a pre-set duration timer is started.
- At 444, based on one or more types of sensors, the system determines whether the device is static or in movement; if the device is static or sufficiently static (e.g.
- At 446, the system auto-starts recording of video and, in the event of expiry of the pre-set maximum video duration, auto-stops the video. At 450, the system optionally stores the video and/or shows a pre-set duration video preview enabling the user to cancel or remove the video, review the video and select one or more contact(s), group(s) or one or more types of destinations, and/or auto-sends said recorded or saved video to the auto-determined, pre-set, default or user-selected contacts, groups and destinations.
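The motion-gated recording decision at steps 444-446 can be sketched as below. This is an assumed illustration: `record_if_static`, its motion-magnitude input, and the threshold value are placeholders for real sensor fusion output.

```python
def record_if_static(motion_magnitude, max_duration_s, threshold=0.2):
    """Auto-record video only while the device is held sufficiently still.

    motion_magnitude -> assumed scalar summarizing recent sensor motion
    max_duration_s   -> the pre-set maximum video duration
    """
    if motion_magnitude > threshold:
        return None   # device is moving: do not auto-start recording
    return {"action": "record", "stop_after_s": max_duration_s}

# Nearly still device: auto-recording starts with a 10-second cap.
decision = record_if_static(motion_magnitude=0.05, max_duration_s=10)
```

The returned cap corresponds to the pre-set maximum duration after which the system auto-stops the video.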
- the eye tracking system recognizes or detects another particular type of the user's eye movement, eye status or eye position and device orientation, for example holding the device as if to view photos from a gallery or album (e.g. 480 or 490), and then opens the photo or video viewing application, gallery, album, or one or more preconfigured or pre-set applications or interfaces.
- Figure 5 illustrates processing operations associated with the Auto Present Camera Display Screen Application and/or auto present visual media capture controller 260. Initially an eye tracking system is loaded in memory or loaded in background mode 505, so it can continuously monitor user's eye by using one or more types of image sensors 244 or optical sensors 240 or for example, a user may access an application presented on display 210 to invoke or initiate or open an eye tracking system 505.
- an eye tracking system monitors, tracks and recognizes the user's one or more types of eye movement or eye status or eye position, by using one or more types of one or more optical sensors 240 or image sensors 244, relative to or corresponding to the position of display 210 or user device 200, by employing accelerometer sensor 248 and/or other device orientation sensors including gyroscope 247 of user device 200.
- An accelerometer is a sensor which measures the tilting motion and orientation of a mobile phone.
- the system determines the device's horizontal or vertical orientation and, based on the pre-set orientation-associated visual media capture mode (including, for example, setting horizontal orientation to capture photo or record video, or setting vertical orientation to capture photo or record video), at 525 the system auto changes mode to, for example, photo mode or at 555 auto changes mode to video mode.
- a pre-set duration timer is started.
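The orientation-to-mode step above (525 / 555) amounts to a simple lookup. The sketch below uses assumed orientation labels and an assumed default mapping for illustration; per the description, the user may pre-set either orientation to either capture mode.

```python
# Assumed default mapping; the user may pre-set either orientation to
# either mode (e.g. horizontal -> video instead).
ORIENTATION_MODE = {
    "horizontal": "photo",   # 525: auto change to photo mode
    "vertical": "video",     # 555: auto change to video mode
}


def select_capture_mode(orientation, mapping=ORIENTATION_MODE):
    """Return the capture mode associated with the detected orientation."""
    return mapping.get(orientation, "photo")  # assumed fallback: photo mode
```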
- Figure 6 (A) illustrates user interface 662 on user device 660 wherein the user can select 610, search & select 612, capture a photo e.g. 606 via tapping or clicking on photo icon 616, record a video 606 via tapping or clicking on video icon 618, start broadcasting a live stream 606 via tapping or clicking on live streaming icon 622, edit said captured or selected or recorded visual media 601, switch front or back camera 602 to capture photo or record video, and select one or more destinations 626 including one or more or all contacts, groups, networks, contacts of contacts, follower(s) of contact(s), hashtags, categories, keywords, events or galleries, followers, save locally, broadcast in public, make it public, post to one or more types of feeds, post to one or more types of stories, post to one or more third-party web sites, web pages, applications, services, user profile pages, servers, storage mediums, databases, devices, and networks, and post via one or more channels or communication interfaces or mediums including email, instant messenger, phone contacts, social networks, clouds, Bluetooth, Wi-Fi and the like.
- Ephemeral / Non-Ephemeral Content Access Controller 608 or Ephemeral / Non-Ephemeral Content Access Settings 608 enables the user to make or pre-set said captured or selected or recorded visual media, including photo or video or stream or one or more types of one or more content items, as ephemeral, including: present ephemeral message to recipient(s) in the event of acceptance of push notification; present ephemeral message to recipient(s) in the event of acceptance of push notification within a pre-set accept-to-view timer; allow recipient(s) to view shared or sent message in real time only; remind recipient(s) a particular number of times to view shared or sent message(s); allow recipient(s) to view shared or sent message(s) for a particular pre-set duration and, in the event of expiry of the timer, remove message(s) from sender and/or recipient(s) and/or server and/or storage medium and/or anywhere it is stored in volatile or non-volatile memory; allow recipient
- the sender is enabled to select one or more types of feeds or stories (which are discussed throughout the specification).
- the recipient is also enabled to receive messages as per the message receiving settings discussed in detail in Figure 8.
- the user can apply pre-set settings, or can select or update settings in real time or after selecting or taking visual media (including selecting or capturing a photo, selecting or recording a video, or starting a live stream) and before sending said visual media to one or more destinations, via Ephemeral / Non-Ephemeral Content Access Controller 608 or Ephemeral / Non-Ephemeral Content Access Settings 608 (discussed in detail in Figure 7).
- Figure 6 (B) illustrates user interface 674 on user device 660 wherein user can auto switch on user device 660 and/or auto start visual media camera display screen or camera interface 674 to take visual media e.g. 628 and/or auto capture photo or auto start recording of video or auto start broadcasting of stream or auto record video e.g. 628 and/or as discussed in Figure 3-5 or throughout the specification.
- FIG. 6 (C) illustrates processing operations associated with single mode visual media capture embodiment of the invention.
- FIG. 6 (D) illustrates the exterior of an electronic device implementing single mode visual media capture.
- FIG. 2 (278) illustrates components of an electronic device implementing single mode visual media capture in accordance with the invention.
- a stabilization parameter of the mobile device is determined using a movement sensor. It is determined whether the stabilization parameter is greater than or equal to a stabilization threshold.
- An image of the scene in the camera display screen is captured using the mobile device if the stabilization parameter is greater than or equal to the stabilization threshold and a haptic contact engagement signal is received from touch controller 215, or recording of video starts if the stabilization parameter is less than the stabilization threshold and a haptic contact engagement signal is received from touch controller 215.
- FIG. 2 (278) illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a visual media capture controller 278 to implement operations of the invention.
- the visual media capture controller 278 includes executable instructions to alternately record a photograph or a video based upon the processing of device stability and haptic signals, as discussed below.
- the visual media capture controller 278 interacts with a photograph library controller 294, which includes executable instructions to store, organize and present photos 291.
- the photograph library controller may be a standard photograph library controller known in the art.
- the visual media controller 278 also interacts with a video library controller 296, which includes executable instructions to store, organize and present videos 292.
- the video library controller may also be a standard video library controller known in the art.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210. That is, in a visual media mode controlled by the visual media capture controller 278, the image sensors 244 capture visual media and present the visual media on the display 210 so that a user can observe the captured visual media.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the visual media capture controller 278 presents a single mode input icon on the display 210. That is, the visual media capture controller 278 includes executable instructions executed by the processor to present a single mode input icon on the display 210.
- the visual media capture controller 278 communicates with the processor 230 regarding haptic signals applied to the display 210, which are recorded by the touch controller 215.
- the visual media capture controller 278 processes haptic signals applied to the single mode input icon, as detailed in connection with the discussion of FIG. 6 (D), and determines whether to record a photograph or a video, as discussed below.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of FIG. 2 are known in the art, new functionality is achieved through the visual media capture controller 278.
- FIG. 6 (C) illustrates processing operations associated with the visual media capture controller 278.
- a visual media capture mode is invoked 630.
- a user may access an application presented on display 210 to invoke a visual media capture mode or auto switch on of closed mobile device and auto present visual media capture mode or open camera display screen or open camera application (as discussed in detail in figure 3 and 4).
- FIG. 6 (D) illustrates the exterior of electronic device 200. The figure also illustrates the display 210. The electronic device 200 is in a visual media capture mode and presents visual media 640. The display 210 also includes a single mode input icon 645.
- the stabilization threshold and the receipt of haptic contact engagement or a tap on the single mode input icon 645 determine whether a photograph or a video will be recorded. For example, if a user initially intends to take a photograph, the user has to hold the mobile device stable while the icon 645 is engaged with a haptic signal (or, in an embodiment, a tap anywhere on the camera display screen). If the user decides that the visual media should instead be a video, the user has to slightly move the device and engage the icon 645; once the video has started, the user can then move or keep the device stable while recording.
- If the device is stable for a specified period of time and haptic contact engagement is received on the icon or anywhere on the device, the output of the visual media capture is determined to be a photo; if the device is not stable for a specified period of time (e.g., 3 seconds) and haptic contact engagement is received on the icon or anywhere on the device, the output of the visual media capture is determined to be a video.
- the photo mode or video mode may be indicated on the display 210 with an icon 648.
- haptic contact engagement is identified 632.
- the haptic contact engagement may be at icon 645 on display 210.
- the touch controller 215 generates haptic contact engagement signals for processing by the visual media capture controller 278 in conjunction with the processor 230.
- the haptic contact may be at any location on the display 210.
- the stabilization parameter is based on a motion of the camera on the x-axis, a motion of the camera on the y-axis, and a motion of the camera on the z-axis.
- the movement sensor comprises an accelerometer.
- a stabilization threshold is identified. In the event the stabilization parameter is greater than or equal to the stabilization threshold (631-Yes) and in response to haptic contact engagement (632-Yes), a photo is captured 633 and the photo is stored 634. If the stabilization parameter is not greater than or equal to the stabilization threshold (631-No), i.e. the stabilization parameter is less than the stabilization threshold (635-Yes), and in response to haptic contact engagement (636-Yes), recording of video starts and a timer starts 637; in an embodiment the video is stopped and stored and the timer stopped or re-initiated 639 in the event of expiration of the pre-set timer (638-Yes).
- In an embodiment, in the event of further identification of haptic contact engagement during or before expiration of the timer, the timer is stopped. In an embodiment, further haptic contact engagement is identified to stop the video and store the video. In an embodiment, one or more types of user senses are identified via one or more types of user device sensor(s), including a voice command to stop and store the video, a hover on the camera display screen or a pre-defined area of the camera display screen to stop and store the video, or, based on the eye tracking system, a particular type of pre-defined eye gaze to stop and store the video. In an embodiment, upon receiving one or more types of pre-defined device orientation data via device orientation sensor(s), the video is stopped, the video part related to said changed device orientation is trimmed, and the video is then stored.
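The branch logic of FIG. 6 (C) as stated above (steps 631-637) can be condensed as follows. The threshold value and function parameters are illustrative assumptions; the sketch follows the comparison direction exactly as the figure description states it.

```python
STABILIZATION_THRESHOLD = 0.5   # illustrative value only


def on_haptic_contact(stabilization_parameter, capture_photo, start_video):
    """Steps 631-637: on haptic contact engagement, compare the
    stabilization parameter against the threshold to decide the output."""
    if stabilization_parameter >= STABILIZATION_THRESHOLD:
        return capture_photo()   # 633/634: capture and store a photo
    return start_video()         # 637: start recording video and start timer
```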
- the video is recorded by the processor 230 operating in conjunction with the memory 236.
- a still frame is taken from the video feed and is stored as a photograph in response to haptic contact engagement and then video is recorded.
- the timer is executed by the processor 230 under the control of the visual media capture controller 278.
- a video is sent to the video library controller 296 for handling.
- the visual media capture controller 278 includes executable instructions to prompt the video library controller to enter a video preview mode. Consequently, a user can conveniently review a recently recorded video.
- video is recorded and a frame of video is selected and is stored as a photograph 634.
- an alternate approach is to capture a still frame from the camera video feed as a photograph. Such a photograph is then passed to the photographic library controller 294 for storage.
- the visual media capture controller 278 may then invoke a photo preview mode to allow a user to easily view the new photograph.
- the visual media capture controller 278 includes executable instructions to prompt the photo library controller to enter a photo preview mode. Consequently, a user can conveniently review a recently captured photo.
- determining a stabilization parameter is based on a motion of the camera on the x-axis, a motion of the camera on the y-axis, and a motion of the camera on the z-axis 690, wherein the movement sensor comprises an accelerometer.
- motion-sensing technology such as an accelerometer or a gyroscope
- the stability or movement of the mobile device is determined.
- the camera automatically captures the image.
- the camera automatically starts recording of video. This eliminates a user action to capture the image or start recording of the video.
- the mobile device may include a stability meter to notify the user of the current stability of the mobile device and/or camera.
- Movement sensor 247 or 248 represents any suitable indicator used to determine a position and/or motion (e.g., velocity, acceleration, or any other type of motion) of one or more points of mobile device 200 and/or camera display screen e.g. 210. Movement sensor 247 or 248 may be communicatively coupled to processor 230 to communicate position and/or motion data to processor 230. Movement sensor 247 or 248 may comprise a single-axis accelerometer, a two-axis accelerometer, or a three-axis accelerometer. For example, a three-axis accelerometer measures linear acceleration in the x, y, and z directions.
- Movement sensor 247 or 248 may be any motion-sensing device, including a gyroscope, a global positioning system (GPS) unit 235, a digital compass, a magnetic compass, an orientation center, magnetometer, a motion sensor, rangefinder, any combination of the preceding, or any other type of device suitable to detect and/or transmit information regarding the position and/or motion of mobile device 200 and/or camera display screen e.g. 210.
- the stabilization parameter is a value determined from the data received from movement sensor 247 or 248 and stored in memory.
- the data represents a change in position and/or motion to mobile device 200.
- Stabilization parameter may be a dataset of values (e.g., position change in X-axis, position change in Y-axis, and position change in Z-axis) or a single value.
- the dataset of values in stabilization parameter may reflect the change in position and/or motion of mobile device 200 on the X, Y, and Z axes.
- Stabilization parameter may be a function of an algorithm, of a mean, of a standard deviation, or of a variance of the data received from movement sensor 247 or 248.
- Stabilization parameter may also be any other suitable type of value and/or data that represents the position and/or motion of mobile device 200 or camera display screen e.g. 210.
- application 278 receives the acceleration of mobile device 200 according to its X, Y, and Z axes.
- Single mode visual media capture controller application 278 stores these values as variables prevX, prevY, and prevZ.
- Application 278 waits a predetermined amount of time, and then receives an updated acceleration of device in the X, Y, and Z axes.
- Application 278 stores these values as curX, curY, and curZ.
- application 278 determines the change in acceleration in the X, Y, and Z axes by subtracting prevX from curX, prevY from curY, and prevZ from curZ and then stores these values as difX, difY, and difZ.
- stabilization parameter may be determined by taking the average of the absolute value of difX, difY, and difZ. Stabilization parameter may also be determined by taking the mean, median, standard deviation, variance, or function of an algorithm of difX, difY, and difZ.
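The computation described above (prevX..prevZ, curX..curZ, difX..difZ, then the average of the absolute values) can be transcribed directly. The tuple layout for the samples is an assumption for illustration:

```python
def stabilization_parameter(prev, cur):
    """Average absolute change in acceleration across the X, Y and Z axes.

    prev and cur are (x, y, z) acceleration samples taken a predetermined
    amount of time apart, as application 278 is described to do above.
    """
    dif_x = cur[0] - prev[0]   # curX - prevX
    dif_y = cur[1] - prev[1]   # curY - prevY
    dif_z = cur[2] - prev[2]   # curZ - prevZ
    return (abs(dif_x) + abs(dif_y) + abs(dif_z)) / 3.0
```

As the text notes, the mean, median, standard deviation, or variance of the three differences could be substituted for the average here.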
- the stabilization threshold is a value that represents the minimum stability required for application 278 to initiate capturing the image on mobile device 200 via camera display screen e.g.
- Stabilization threshold may be a single value or a dataset, and may be a fixed number or an adaptive number.
- Adaptive stabilization thresholds can be a function of an algorithm, of a mean, of a standard deviation, or of a variance of the data received from movement sensor 247 or 248.
- Adaptive stabilization threshold may also be based on previous stabilization parameter values.
- mobile device 200 records twenty iterations of stabilization parameter.
- Stabilization threshold may then be determined to be one standard deviation lower than the previous twenty stabilization parameter iterations. As a new stabilization parameter is recorded, stabilization threshold will adjust its value accordingly.
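The twenty-iteration example above can be sketched as a rolling computation. Note that "one standard deviation lower than the previous twenty stabilization parameter iterations" is ambiguous about the baseline; this sketch assumes one standard deviation below the window's mean, which is only one plausible reading.

```python
import statistics

WINDOW = 20  # the example above records twenty iterations


def adaptive_threshold(recent_parameters):
    """One plausible reading of the adaptive threshold described above:
    one standard deviation below the mean of the last WINDOW readings.
    The threshold adjusts as each new stabilization parameter arrives."""
    window = list(recent_parameters)[-WINDOW:]
    return statistics.mean(window) - statistics.pstdev(window)
```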
- Figure 7 illustrates user interface 267 wherein user can select, set, apply, save, save default various types of or combinations of ephemeral or non-ephemeral one or more types of content or visual media sharing or sending settings from one or more types of one or more selected destinations or provided access rights to one or more selected recipient(s) or destination(s).
- User is enabled to select all 705 or select, match, auto match, search 717, filter 720, import or install 722 and accept request or invite and add 726 one or more types of one or more destinations 707 including one or more phone contacts, unique user name or identities, social network
- Bluetooth, Wi-Fi or cellular network 716 and define, configure, apply, set, select, select group of or select default one or more ephemeral or non-ephemeral content or visual media item sharing or sending settings, so based on said applied or set or configured settings system send or share or present content items or visual media to/at/on said applied or configured or set settings associated destination(s) or recipient(s) interface(s) on recipient device(s), wherein Ephemeral / Non-Ephemeral Content Access Controller 608 ( Figure 7) or Ephemeral / Non-Ephemeral Content Access Settings 608 ( Figure 7) enables user to pre-set or set before sending said captured photo 616 or selected 610 or searched and selected 612 or recorded visual media including video 618 or stream 622 or one or more types of one or more content items or visual media as ephemeral 742 including present ephemeral message to recipient(s) in the event of acceptance of push notification or live only (present as and when it sent or shared or generated)
- recipient(s) when recipient(s) is/are online or not-mute or manually status of recipient(s) is "available" or as per do not disturb setting recipient is available 758 OR make said shared or sent message non-ephemeral 744 including allow to save 776, allow to re-share 778 and/or allow recipient(s) to view shared or sent message(s) in real-time 778 or non-real-time 754 viewable for said one or more selected or selected from suggested or auto determined or auto selected or preset or default destinations e.g. 707.
- the sender is enabled to select one or more types of presentation interface(s) or feeds or galleries or folders or stories 760 and/or view effect type or style (present shared visual media or content item(s) to recipient based on one or more effects, access logic and pre-presentation) 762 for presenting to one or more selected destination(s) or recipient(s) (which are discussed throughout the specification).
- in an embodiment the recipient is also enabled to receive messages as per the message receiving settings discussed in detail in Figure 8, in which the sender's settings are applied first and then the recipient's settings are applied.
- user can apply & save pre-set settings for/on selected destination(s) 730 at user device 200 via client-side module 267 and/or server 110 via server module 172 or user can select or update settings real-time at user device 200 via client-side module 267 and/or server 110 via server module 172 and send or share content items or visual media items 736 or user can set settings after selecting or taking visual media including selecting 610 or searching & selecting 612 or capturing photo 616 or selecting 610 or searching & selecting 612 or recording video 618 or starting live stream 622 and before sending of said visual media to one or more selected or auto determined destinations 626 via Ephemeral / Non- Ephemeral Content Access Controller 608 or Ephemeral / Non-Ephemeral Content Access Settings 608 ( Figure 7).
- the sender user is enabled to set a delay sending timer, wherein the delay timer starts after the user sends shared content or visual media items to the target or selected or auto-determined destination(s), and in the event of expiry of said pre-set delay timer, the system actually auto-sends said shared visual media or content item(s) to the destination(s) or recipient(s).
- sender can make shared content or visual media item(s) as free or paid or sponsored 780.
- the sender is enabled to access, including edit, update and remove, said shared or sent content or visual media items at/from/on the recipient's application, interface(s), storage medium, folder or gallery or device memory 782, where said visual media or content items shared by the sender are stored.
- sender can request or set required reaction(s) 785 on one or more or all sent or shared content or visual media items for selected destination(s) or target recipient(s).
- sender can select user action(s) 788 on one or more or all sent or shared content or visual media items for selected destination(s) or target recipient(s) which shows to said selected destination(s) or target recipient(s) on said shared content or visual media items shared by sender and enabling said selected destination(s) or target recipient(s) to optionally access said selected one or more user actions or controls.
- User can apply different settings for each or selected or different set of recipient(s) or destination(s).
- Figure 8 illustrates user interface 268 for applying one or more ephemeral or non- ephemeral settings on received content or visual media items from selected one or more senders or sources.
- Figure 8 illustrates user interface wherein user can select, set, apply, save, save default various types of or combinations of ephemeral or non-ephemeral one or more types of content or visual media receiving and/or viewing settings from one or more types of one or more selected senders or sources or contacts.
- User is enabled to select all 805 or select, match, auto match, search 817, filter 820, import or install 822 and add one or more types of one or more sources or senders 825 including one or more phone contacts, unique user name or identities, social network connections or accounts or contacts 809, groups & networks 812, email addresses, or one or more types of unique user or sender or source identities, locally saved, receiving from public sources, following users, contacts of contacts up to one or more depths, followers or following users of contacts, accessed or subscribed hashtags, categories, events or galleries, received on one or more types of feeds or stories or folders, interfaces, content items or visual media items received from one or more third-party web sites, web pages, applications, web services, servers, devices, networks, and databases or storage mediums, received on/via one or more communication channels or mediums or interfaces including share via third-party applications and web services, email application, social network web site or application, instant messenger application, Bluetooth, Wi-Fi or cellular network 716 and define, configure, apply
- Do not Disturb settings for receiving content items or visual media items from one or more sources or senders including receive when user is online, user's manual status is e.g. "Available” 876, user is not-mute 864, set scheduled to receive 862, while "Do Not Disturb” is on receive from all or selected or favorites contacts or senders or sources only 874, receive real-time only (as and when content shared) 865.
- recipient is enabled to mark content received from selected one or more senders or contacts or sources as ephemeral or non-ephemeral 875.
- receiving or viewing user can select one or more types of presentation style or feeds or stories or interfaces 868 to view received one or more content items or visual media items from one or more senders or sources or contacts.
- receiving or viewing user can select one or more types of view effect type or style (present shared visual media or content item(s) to recipient based on one or more effects, access logic and pre-presentation) 872.
- the system can implement sender-side settings only, as discussed in Figure 7; in another embodiment the system can implement recipient-side settings only, as discussed in Figure 8; and in another embodiment the system can implement both sender-side and recipient-side settings as discussed in Figures 7 and 8, with sender-side settings applied first and recipient-side settings applied second.
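The third embodiment's precedence rule (sender-side settings applied first, recipient-side settings applied afterwards) can be sketched as a simple layered merge. The setting keys below are illustrative assumptions, not names used by the system.

```python
def effective_settings(sender_settings, recipient_settings):
    """Sketch of the combined-settings embodiment: the sender's settings
    (Figure 7) form the base, then the recipient's settings (Figure 8)
    are applied on top of them."""
    merged = dict(sender_settings)       # sender-side settings applied first
    merged.update(recipient_settings)    # recipient-side settings applied second
    return merged
```

For example, a sender-set view timer would be refined by the recipient's own viewing preference where the recipient has one.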
- Figures 9-13 illustrate various embodiments of a system 269 for searching, matching, presenting, subscribing and auto generating visual media stories, wherein a searching request or a request to present or auto generate or auto present visual media is received and processed at server module 173 of server 110, and a request to subscribe to one or more types of subscription or following is processed and stored at server module 174 of server 110.
- a "story" as described herein is a set of one or more types of contents or visual media items. A story may be generated from pieces of content that are related in a variety of different ways, as described in more detail throughout the specification.
- Pieces of content comprise one or more types of content items including visual media, photo, video, video clip, voice, blog, text, emoticons, photo filter, object, application, interface, data, user action or control, form and the like, from one or more sources including user generated or user posted contents or contents posted by users of the network, from one or more servers, storage mediums, databases, web sites, applications, web services, networks, and devices.
- Figure 9 illustrates searching visual media items based on supplying or adding 964 one or more object criteria including object model or image 965 via selecting from pre-stored 970 or real-time capturing or recording via front camera 972 and/or back camera 965 a photo 968 or video and/or voice 969, or searching from one or more sources or websites or servers or search engines or storage mediums 975, or drag and drop or scan via camera or upload 978, and one or more object keywords (pre-recognized objects inside stored photos and videos associated with an identified or related keywords database 920) 960, and object conditions comprising similar object, include object, exclude object, exact match object, pattern match, attribute match including color, resolution, & quality, part match, and Boolean operators 961 between two or more supplied objects including AND, OR and NOT, wherein object criteria are matched with recognized or pre-recognized objects inside photos or images of videos to identify supplied object criteria including object model specific visual media items including photos and videos or clips or multimedia or voice.
- User can select, input or auto-fill one or more keywords 955 which are matched with visual media items including photo or video associated contents and metadata including date & time, location, comments, identified or recognized or supplied or associated information from one or more sources or users.
- User can employ advance search 982 or Figure 10 to provide one or more advance search criteria.
- server 110 via server module 173 searches and matches 985 visual media items from one or more sources including media items or visual media content including photos, videos, clips & voice storage medium or database 915, and presents sequences of searched and matched visual media items e.g. 996 to the user at user interface 997 at device 960.
- The user can view visual media items one by one, or they can be auto presented one by one based on a pre-set interval. The user can also view the total viewing duration of the visual media items 947. For example, 450 seconds of visual media items may include 300 seconds of videos and 50 photos each presented at 3-second intervals (i.e. 150 seconds), for a grand total of 450 seconds of viewing time.
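The worked example above is straightforward arithmetic: total viewing time is the summed video length plus each photo's pre-set display interval.

```python
def total_viewing_seconds(video_seconds, photo_count, interval_seconds):
    """Total viewing duration 947: video length plus one pre-set
    display interval per photo."""
    return video_seconds + photo_count * interval_seconds
```

For the example above, `total_viewing_seconds(300, 50, 3)` gives 450 seconds (300 s of video plus 50 photos at 3 s each, i.e. 150 s).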
- user can save or bookmark or share said searched or matched visual media items 990.
- user can create micro channels 988 related to particular keywords or key phrases and search, match and manually select or add or remove not related, duplicate & inappropriate or rank or order or edit or curate visual media items from said searched or matched visual media items.
- the search engine also uses one or more types of user data including user profile (age, gender, qualification, skill, interest etc.), locations or checked-in places, status, and activities to refine searched or matched visual media items.
- the user can subscribe to or follow sources 995, or receive matched updated contents or visual media items from sources 995, based on supplying or adding 964 one or more object criteria including object model or image 965 via selecting from pre-stored 970 or real-time capturing or recording photo or video 968, or searching from one or more sources or websites or servers or search engines or storage mediums 975, or drag and drop or scan via camera or upload 978, and one or more object keywords (pre-recognized objects inside stored photos and videos associated with an identified or related keywords database 920) 960, and object conditions comprising similar object, include object, exclude object, exact match object, pattern match, attribute match including color, resolution, & quality, part match, and Boolean operators 961 between two or more supplied objects including AND, OR and NOT, wherein object criteria are matched with recognized or pre-recognized objects inside photos or images of videos to identify supplied object criteria including object model specific visual media items including photos and videos or clips or multimedia or voice and said visual media items' associated sources.
- User can select, input or auto-fill one or more keywords 955, which are matched with visual media items, including photo or video associated contents and metadata (date & time, location, comments, and identified, recognized, supplied or associated information from one or more sources or users), to identify sources.
- User can employ advanced search 982 (Figure 10) to provide one or more advanced search criteria.
- server 110 via server module 173 searches and matches 985 visual media items from one or more sources and identifies the unique sources associated with said searched or matched visual media items, enabling user to subscribe to all or selected sources, or auto subscribe to or follow all matched sources, to continuously receive updated visual media items as and when they are posted, uploaded or updated at server 110, or to view them from an auto presented or auto updated feed or story at user interface 997 e.g.
- visual media item 996 from one of the followed or subscribed source(s). Based on settings, user can view updated visual media items received from followed sources only a pre-set number of times and/or within a pre-set period of time, after which they are removed or hidden from the user interface, user device or user device storage medium. Based on settings, user is notified via push notification or provided an indication of new visual media items from one or more followed sources.
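The view-count and lifetime limits described above might be modeled as in this sketch; the class, field names and defaults are illustrative assumptions:

```python
import time

class EphemeralItem:
    """Sketch of the per-item view rules above: a followed-source item is
    viewable a pre-set number of times and/or within a pre-set period,
    after which it is hidden or removed."""

    def __init__(self, max_views=3, lifetime_seconds=86400):
        self.max_views = max_views
        self.expires_at = time.time() + lifetime_seconds
        self.views = 0

    def view(self):
        """Return True and count the view if still viewable, else False."""
        if time.time() > self.expires_at or self.views >= self.max_views:
            return False                  # hidden: view limit or lifetime exceeded
        self.views += 1
        return True

item = EphemeralItem(max_views=2, lifetime_seconds=60)
assert item.view() and item.view()        # first two views succeed
assert not item.view()                    # third view is refused
```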
- Figure 10 illustrates user interface 269 for advance search for searching and viewing visual media items, search and following or subscribing sources of visual media items. User can provide various object criteria related to recognized object inside visual media items and criteria related to contents associated with visual media items.
- User can select a location, select current location, select from a map, select or define geo-boundaries, ranges or geo-fences, or provide location(s), place(s) or points of interest (POI(s)) 1019, 1021 & 1023 with intention to search or match visual media items created at said provided one or more types of locations 1019, 1021 & 1023, and/or provide one or more object models 965 & 964 with intention to match provided object models with recognized or identified objects inside stored visual media items, match provided location(s) with the capture, content creation or posting location of visual media items, or match provided object model(s) with recognized object(s) inside visual media associated with said location(s).
- User can provide locations 1004, 1010 & 1011 with intention to match provided location(s) with content associated with visual media items.
- User can provide object keywords or keywords including all these words/tags 1001 or 1006, this exact word or tag or phrase 1002 or 1007, any of these words 1003 or 1008, none of these words 1005 or 1009, and Categories, Hashtags & Tags 1010 or 1013 recognized via optical character recognition (OCR), wherein said provided keywords and associated conditions or types are matched with keywords related to, associated with or identified for recognized objects inside pre-stored visual media items including photos and videos.
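The four keyword conditions above (all these words, exact phrase, any of these words, none of these words) correspond to a standard query matcher; a minimal sketch, assuming the item's associated text is available as plain words:

```python
def keyword_match(text, all_words=(), exact_phrase=None, any_words=(), none_words=()):
    """Apply the four keyword conditions to a media item's associated text
    (comments, tags, OCR-recognized words). Illustrative sketch only."""
    lowered = text.lower()
    words = lowered.split()
    if not all(w.lower() in words for w in all_words):
        return False                       # "all these words"
    if exact_phrase and exact_phrase.lower() not in lowered:
        return False                       # "this exact word or phrase"
    if any_words and not any(w.lower() in words for w in any_words):
        return False                       # "any of these words"
    if any(w.lower() in words for w in none_words):
        return False                       # "none of these words"
    return True

assert keyword_match("red bag at beach party",
                     all_words=["bag"], any_words=["beach", "park"],
                     none_words=["indoor"])
```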
- User is enabled to search, select and provide one or more types or categories or user name(s) or contact(s) or group(s) or unique identities or name of sources of visual media items 1026.
- Sources comprise phone contacts, groups, social network identities or user names, followers, categories, locations or taxonomies of users or sources, one or more 3rd parties' servers, web sites, applications, web services, devices, networks and storage mediums 1026.
- User can provide or define the type of content or visual media item creator users or sources of visual media items, including one or more profile fields such as gender, age or age range, education, qualification, home or work locations, related entities including organization, school, college or company name(s), and Boolean operators 1035.
- User can add one or more fields 1038.
- User can provide criteria related to the most user-reacted visual media items to search visual media items, including most viewed 1055, most commented 1057, most ranked 1060, and most liked 1058.
- User can limit number of media items including photos and/or videos and/or content items 1065 of search results.
- User can limit length or duration of time of searched media items including photos and/or videos and/or content items 1067 or select unlimited or system default limit 1070 searched media items.
- User is enabled to select a type of presentation, including presenting searched or matched visual media items sequentially 1081 (i.e. showing consecutive media items based on a pre-set interval of time), in video format 1082, in list format 1083, in slide show format 1084, and in one or more types of feed format 1086.
- User can provide other presentation settings, including auto advance (auto show the next visual media item after expiry of a pre-set timer duration 1072), or provide to user next, previous, skip, play, pause, start and fast forward options, buttons or controls 1075 to manually show next or previous items, skip, play selected or all items, pause playing, or fast forward through the sequence or list of searched or matched visual media items.
- User can provide safe search setting including show most relevant results or filter explicit results 1090.
- User can limit search to user generated visual media items 1091 and/or free, sponsored or advertisement supported visual media items 1092 and/or paid visual media items or contents 1093 and/or 3rd parties' affiliated visual media items 1094.
- User can limit searching to visual media items created at a particular time or within a range of date & time or duration, including any time, last 24 hours, past number of seconds, minutes, hours, days, weeks, months or years, or a range of date & time 1015.
- User can provide language of creator or language associated with object inside visual media items or language associated with contents associated with visual media items 1017.
- After providing one or more advanced search criteria as discussed above, user can search and view 1095 with intention to view searched or matched visual media items provided by server 110 via server module 173, or can save, share or bookmark all or selected searched visual media items 1096, or can search, select, add to earlier saved search result items, remove one or more search result items, rank search result items and add them to one or more user created channels 1097 for making said curated visual media items available to subscribers of said user created channel(s), or can subscribe to or follow identified, searched or matched sources of visual media items 1098, or subscribe to or follow identified, searched or matched visual media items 1098, based on one or more advanced search criteria as discussed above; server 110 via server module 174 stores said user's one or more types of subscriptions or followings of searched, matched or selected sources. For example when user provides keyword
- Figure 11 (A) illustrates another example wherein user provides an object model 1165 of a human face, provides the object condition "similar" 1161 and instructs to execute search via button 1185; then server 110 via server module 173 searches said human face against photos and videos of user generated and user posted visual media items at server storage medium or database 115 or 915 (and searches at one or more 3rd parties' servers, databases, storage mediums, applications, web sites, networks and devices via web services or APIs), finds matched visual media items including photos, videos or clips which contain said human face by employing one or more face recognition technologies, systems, methods & algorithms, and presents them to user sequentially, or as per user's or server's presentation settings, at user interface 1107 on user device 1110. For example, one of the visual media items 1112 out of many 1113 which contain said user supplied similar face image or object model 1165 is presented. User is presented with matched visual media items sequentially, one by one, based on a pre-set interval time.
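The "similar" face condition could be implemented by comparing face embeddings; a minimal sketch using cosine similarity, assuming the embeddings have already been extracted by a face recognition model (the model itself is not shown, and all names and the threshold are illustrative assumptions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_similar_faces(query_embedding, stored, threshold=0.8):
    """Return IDs of stored media items whose pre-computed face embedding is
    similar enough to the supplied object model's face embedding."""
    return [item_id for item_id, emb in stored.items()
            if cosine_similarity(query_embedding, emb) >= threshold]

stored = {"photo_1": [0.9, 0.1, 0.0], "video_7": [0.0, 1.0, 0.0]}
assert find_similar_faces([1.0, 0.0, 0.0], stored) == ["photo_1"]
```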
- Figure 11 (B) illustrates auto presented contextual stories, generated by server 110 via server module 173 based on matching stored one or more types of contents or visual media items, e.g. 1140, from server storage medium 115 and/or one or more sources, storage mediums, servers, databases, web sites, networks (via web services & application programming interfaces (APIs)) and applications with one or more types of user data related to each user, a particular user, the requesting user or an identified user, wherein user data includes a detailed user profile (user gender, age, income, qualification, education, skills, home address, work address, interacted entities including schools, colleges, companies and organizations, user connections, interests & hobbies and the like), domain specific profiles (job profile, dating or matrimonial profile, travel profile, food profile, personality profile & the like), and one or more types of logged, stored, identified or recognized user activities, actions, events, logs, transactions, locations, checked-in places, statuses, interactions and communications,
- system displays the filtered visual media items, e.g. 1140, on a display, e.g. 1130, at user device 1190.
- each next visual media item in the sequence of visual media items is presented based on updated user context, including user's current location, checked-in place, current point of interest (POI) nearest to user's current location, updates in user's status, activity, action, movement, sense or behavior, identification of an event or transaction, identification of the presence of user's one or more connections, contacts, family members or friends, and updates or changes in one or more types of user data.
- Figure 11 (C): user is presented with accessible links, icons, images, video snippets or controls of one or more contextual stories based on one or more user context factors as discussed above, enabling user to play, view, fast view, fast forward, pause, cancel, start next and skip stories, and to view next, view previous or skip one or more visual media items within a particular story.
- user is enabled to like, dislike, select emoticon(s), comment on and rate one or more visual media items at the time of viewing.
- Figure 12 (A) illustrates a user interface wherein user is enabled to scan 1207 or view 1370 one or more scenes, objects, particular pre-defined objects, areas, spots, logos or QR codes 1203 via e.g.
- back camera display screen 1205 and/or provide additional visual instructions, searching requirements or search queries or preferences, commands and comments via front camera 1201 of user device 1290, i.e. camera view (without capturing photo or taking video or visual media), and based on user command or instruction to generate a story via button 1207 or after expiry of a pre-set timer duration 1205 (i.e. e.g. three...two... one...
- system auto recognizes and identifies object(s) or a pre-defined object, area, spot or logo 1203 inside the camera view. For example, when user is viewing a particular bag 1203 from a particular shop via camera view 1205 and taps button 1207, or, based on settings, after holding the camera reasonably steady on a particular object, pre-defined or pre-stored object, logo or QR code for a pre-set period of time 1205, system identifies or auto recognizes the object inside camera view 1203 and matches said identified one or more objects 1203 with recognized objects, e.g. 1216, inside visual media item(s), e.g. 1209, and/or pre-defined and pre-stored object(s), e.g.
- 1763 (discussed in detail in Figure 17) provided by an advertiser, user or merchant, and associated one or more types of data including said pre-provided or pre-defined object(s) or object model(s) provider's profile, object model(s) associated details, preferences, and target viewers' criteria (target viewer's pre-defined characteristics including gender, age, interest, education, qualification, skills, and interacted or related entities), and matches them with the viewing user's data including user's current location, checked-in place or nearest location, user profile (age, gender, interest & the like), user activities, actions, events, transactions, statuses, locations, behavior, senses, communications and sharing, and presents sequences of contextual visual media items, e.g. 1209, at user interface 1223 on user device 1290.
- for sequences or series of searched, matched or contextual visual media items, system can recognize, identify, search, match, serve, add to or remove from queue, select (including from curated items or by an editor or human), rank, load a particular number of visual media items at user device, add, remove, update, and present visual media items one by one based on updated one or more types of contextual factors related to each user, and can log user's each search query, request, subscription, scan request or voice request and each associated searched or matched visual media item's unique item number, along with the user's likes, dislikes, selected emoticons, comments & ratings.
- user can scan a particular object, product or logo via the camera display screen, or capture a photo or record a video (image(s) of video), and select one or more filter criteria including one or more keywords, key phrases, Boolean operators, and advanced search options including creation or posting date, source type(s), categories, names or groups, and most reacted (most liked, disliked, rated & commented) visual media items generated, created, updated & posted by users of the network.
- system matches said identified keyword(s) 1234 with keywords associated with contents associated with visual media items, and/or with object keywords pre-identified via image recognition and associated with visual media items, and presents sequences of visual media items, e.g. 1235, at user interface 1237 on user device 1290.
- user can select list format 1241 as the presentation style for search results or presented contextual visual media items, select one or more identified or preferred visual media items based on snippets, and play visual media items, i.e. view them one by one in the selected sequence, auto advancing based on a pre-set interval of time.
- User is enabled to select one or more visual media items e.g. 1261, 1266 & 1268, rank, rate, order, bookmark, save 1251, share via selecting one or more mediums or channels 1254 or select one or more destinations or contacts or group(s) and send 1255 to them.
- Figure 13 (A) illustrates one type of the user interface 1305 where e.g. user is viewing a particular image, photo or video; system identifies or recognizes an object inside said image, photo or video, matches said identified object or object model and its associated identified object details and object keywords with similar recognized objects inside visual media items, and presents the searched or matched series or sequences of visual media items, e.g. 1340, at user interface 1323 on user device 1390.
- User can view the number of searched or matched visual media items at a prominent place, view the number of presented visual media items pending to view, or view the total viewing duration of the presented searched or matched visual media items 1317.
- System can identify keyword(s) 1334 associated with the tapped object and present them to user at a prominent place.
- System can integrate with 3rd parties' web sites, web pages, web browsers, video or photo search engines or search results, presented visual media search item(s), applications, services, interfaces, servers and devices via web services and application programming interfaces (APIs).
- Figure 13 (B) illustrates a user interface wherein user is enabled to view, scan or identify items of interest by tapping a button via spectacles 1399 with associated or integrated video cameras 1350 and/or 1342 connected with device 1390, enabling user to view, scan, capture or record photo or video via spectacles 1399, which have an integrated wireless video camera 1350 and/or 1342 that enables user to view or scan, capture photos or record video clips and save them in spectacles 1399 and/or to user device 1390 connected with spectacles 1399 via one or more communication interfaces, or save them to server 110 database or storage medium 115.
- the glasses 1354 or 1355 enable user to view or begin to capture a photo or record a video after user 510 taps a small button near the left or right camera.
- the camera can scan, capture photos or record videos for a particular period of time or until user stops it.
- the snaps will live on user's Spectacles until user transfers them to smartphone 1390 and uploads them to server database or storage medium 115 via Bluetooth or Wi-Fi or any communication interface, channel, medium, application or service.
- Based on the object identified inside the real-time view or scan (by tapping on a button) or captured photo or recorded video (i.e. a particular image inside the video), e.g. 1370, system matches said identified object and its identified associated details with similar objects inside visual media items and presents searched or matched visual media items, e.g. 1335, at user interface 1383 on user device 1390.
- System can identify keyword(s) 1384 associated with the tapped object and present them to user at a prominent place.
- Figure 14 illustrates a user interface enabling user to select one or more categories 1410, subcategories 1422 and sub-sub-categories (not shown in Figure) and to follow or subscribe to stories related to said categories or taxonomy.
- Figure 15 illustrates interface 273 and examples explaining the provision of privacy settings, which are processed and stored at client device 200 and/or at server module 175 of server 110, for allowing or not allowing 3rd parties or device(s) of 3rd parties to capture or record user's photo, video or one or more types of visual media.
- User is enabled to allow or not allow 1505 other users to capture photo(s) or record video(s) of user. In the event of not allowing other users to capture photo(s) or record video(s) of user, if other users take visual media related to user, then based on face recognition system identifies or recognizes user's image
- user can apply default settings determining whether other users are allowed or not allowed to take visual media related to user 1507.
- user can enable other users to take visual media of user based on real-time or each time auto asking to user for user's permission while capturing of user's photo or video by other users 1509.
- other users of the network can request user to allow capturing user's visual media, and in the event of acceptance of the request, other users of the network can take visual media related to user 1512 as per one or more other settings including schedules, locations etc.
- user can set location(s) or place(s) (e.g. current location, checked-in place, defined place including work place, school place, shooting place, public place etc.), type of place, or geo-fence boundaries where other users of the network are allowed or not allowed to take user's photo or video or one or more types of visual media 1515.
- user can apply "Do Not Allow to Capture Visual Media" rules & settings, including enabled or disabled settings, allow or not allow anybody or contacts, allow or not allow one or more contacts, allow favorite contacts only, notify when somebody takes user's photo or video (even when allowed to them), not allow while muted, not allow blocked users or type(s) of user(s), allow or not allow based on schedules, allow or not allow at one or more location(s), place(s) or geo-location boundaries, and any combination thereof.
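The capture-permission rules listed above can be combined as in this hypothetical sketch; the settings dictionary and its keys are illustrative assumptions, not part of the specification:

```python
def may_capture(subject_settings, taker_id, location=None, now_hour=None):
    """Decide whether taker_id may capture the subject user's visual media,
    under an assumed dict of the subject's 'Do Not Allow to Capture' settings."""
    s = subject_settings
    if not s.get("enabled", True):
        return True                            # rules disabled: anyone may capture
    if taker_id in s.get("blocked", set()):
        return False                           # blocked users never allowed
    if s.get("favorites_only") and taker_id not in s.get("favorites", set()):
        return False                           # allow favorite contacts only
    if location and location in s.get("disallowed_places", set()):
        return False                           # place/geo-fence restriction
    schedule = s.get("allowed_hours")          # e.g. (9, 17), an assumed format
    if schedule and now_hour is not None and not (schedule[0] <= now_hour < schedule[1]):
        return False                           # schedule restriction
    return True

settings = {"enabled": True, "blocked": {"u9"}, "disallowed_places": {"school"}}
assert may_capture(settings, "u1", location="park")
assert not may_capture(settings, "u9")
assert not may_capture(settings, "u1", location="school")
```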
- user can select and apply settings for whether to allow or not allow storing user's one or more types of visual media at the visual media taker user's device 1592, including allow or not allow all other users of the network, allow or not allow selected users, contacts or pre-defined types of users, or allow to capture or record but not allow to store or access and/or auto send to user. For example, user [Yogesh] captures photo 1554 via video camera(s) 1550 and/or 1552 integrated with spectacles 1555, and based on settings user [Yogesh] can store, access or preview, or not store, not access or not preview, said captured visual media 1554, which is auto sent to user 1581 whose photo is recognized inside said captured photo 1554 or 1581 based on face recognition technologies (user's [Candice] digital spectacles 1555 are connected with user's [Candice] device 200, so user can preview for a set period of time 1543 before auto sending to said recognized face associated person 1581, enabling review, cancel 1544 or change of destination(s) or recipient(s) 1583).
- user is notified with various types of notifications, including receiving a request from other users to allow capturing or recording user's visual media or taking visual media at a particular place where user is administrator, enabling the notification receiving user to accept or reject said request 1571.
- user can send a request to other users to allow the requesting user to capture their photos or videos 1572.
- the invention discussed in Figure 15 can be implemented via an application programming interface (API) so that other camera applications and default device cameras can implement said invention.
- Figures 16-17 illustrate user interface 270 for advertiser to create one or more advertisement campaigns, including providing campaign name 1605, campaign categories 1607, and a budget for a particular duration including daily maximum advertisement spending budget,
- advertisement model, including pay per view of advertised visual media by viewer 1615; associated target criteria including add, include or exclude IP addresses; search, match, select, purchase, customize, apply privacy settings & add one or more user actions, controls, functions, objects, buttons, interfaces, links, contents, applications, forms and the like 1620; select one or more types of target destinations, applications or features where advertisements are presented to users or viewers 1625; provide advertisement group name, target keywords, linked advertisements, headline, description line 1 and description line 2, and links or Uniform Resource Locators (URLs) 1630; add 1641 (including capture photo 1642, record video 1644, select 1645, search 1647 & upload 1651 for verification), edit 1653 & remove 1643 advertisement related visual media items 1635, 1638 & 1640 which will be shown to viewing users matching the target criteria; one or more target criteria including provide or add keywords 1761; provide one or more object criteria including add 1777 (including capture photo 1764, select image or object model 1765, search image or object model 1766, and add and upload for verification 1767) or remove 1775 or 1776 one or more object models or
- Figure 17 illustrates user interface for advance search for providing target criteria for adding or integrating advertised visual media items with visual media stories which are presented to requestor or searching user or scanned user (discussed in detail in Figure 9-14) based on said advance target criteria specific viewing users including searchers and viewing users of visual media items, following or subscribing users of sources of visual media items.
- Advertiser user can provide various object criteria related to recognized objects inside visual media items and criteria related to contents associated with visual media items.
- Advertiser user can select a location, select current location, select from a map, select or define geo-boundaries, ranges or geo-fences, or provide location(s), place(s) or points of interest (POI(s)) 1719, 1721 & 1723 with intention to add advertised visual media item(s) for visual media viewing users at said provided one or more types of locations 1019, 1021 & 1023, and/or provide one or more object models 1763 or 1770 with intention to match provided object models with recognized or identified objects inside stored visual media items, match provided location(s) with locations of viewers of visual media items where visual media items are served, or match provided object model(s) with recognized object(s) inside visual media associated with said location(s).
- Advertiser user can provide locations 1719, 1721 & 1723 with intention to match provided location(s) with content associated with visual media items and add or integrate advertised visual media items with served contents or visual media items of various types of stories requested by searching users, scanned users and followers (discussed in detail in Figures 9-14).
- Advertiser can provide object keywords or keywords including all these words/tags 1701 or 1706, this exact word or tag or phrase 1702 or 1707, any of these words 1703 or 1708, none of these words 1705 or 1709, and Categories, Hashtags & Tags 1710 or 1713 recognized via optical character recognition (OCR), wherein said provided keywords and associated conditions or types are matched with keywords related to, associated with or identified for recognized objects inside pre-stored visual media items including photos and videos.
- Advertiser user is enabled to search, select and provide one or more types, categories, entities, user name(s), contact(s), group(s), unique identities or names of viewing users for targeting advertised visual media items 1726.
- Advertiser user can use structured query language (SQL) or a natural query to identify or define the type of viewing users of advertised visual media items for adding or integrating advertised visual media items to viewing users of stories.
- Sources comprise phone contacts, groups, social network identities or user names, followers, categories, locations or taxonomies of users or sources, one or more 3rd parties' servers, web sites, applications, web services, devices, networks and storage mediums 1726.
- Advertiser user can provide or define the type of content or visual media item searching users, requesting users, scanned users, following users & viewing users of visual media items or stories (discussed in detail in Figures 9-14), including one or more profile fields such as gender, age or age range, education, qualification, home or work locations, related entities including
- Advertiser user can add one or more fields 1738. Advertiser user can add or integrate advertised visual media item(s), e.g. 1635, 1638 & 1640, with visual media items which are most viewed 1755, most commented 1757, most ranked 1760, and most liked 1758 while serving and presenting to viewing users including searching users, requestors, followers or subscribers and scanned users (discussed in detail in Figures 9-14). Advertiser user can limit adding or integrating advertised visual media item(s) to visual media items created at a particular time or within a range of date & time or duration, including any time, last 24 hours, past number of seconds, minutes, hours, days, weeks, months or years, or a range of date & time 1715.
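The reaction-count and creation-time limits for ad integration described above might be checked as in this sketch; the field names, function name and thresholds are assumptions for illustration:

```python
import time

def eligible_for_ad(item, min_views=0, max_age_seconds=None, now=None):
    """Decide whether an advertised item may be integrated with this visual
    media item, based on a view-count threshold (e.g. 'most viewed') and a
    creation-time window (e.g. 'last 24 hours')."""
    now = now if now is not None else time.time()
    if item["views"] < min_views:
        return False                       # below the reaction threshold
    if max_age_seconds is not None and now - item["created_at"] > max_age_seconds:
        return False                       # created outside the allowed window
    return True

now = 1_000_000
items = [{"id": 1, "views": 500, "created_at": now - 3600},
         {"id": 2, "views": 10,  "created_at": now - 3600},
         {"id": 3, "views": 900, "created_at": now - 200_000}]
picked = [i["id"] for i in items
          if eligible_for_ad(i, min_views=100, max_age_seconds=86400, now=now)]
assert picked == [1]   # item 2 has too few views; item 3 is older than 24 h
```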
- Advertiser user can provide language of creator, language associated with objects inside visual media items, or language associated with contents associated with visual media items 1717. After providing one or more advanced search criteria as discussed above, advertiser user can save, save as draft or update 1786 or discard 1787 said settings (wherein settings are processed and saved at local storage of client device 200 and/or at server 110 via server module 176), target criteria and created advertisements, and can start 1788, pause 1789, stop or cancel 1790, or schedule to start 1791 the advertisement campaign.
- Advertiser user can create a new campaign 1782, view and manage existing campaign(s) 1793, add a new advertisement group 1795, view and manage existing advertisement group(s) 1796, create a new advertisement 1785, and view statistics & analytics 1798 for all or selected campaigns' advertisement performance, including number of viewers of visual media advertisements per each provided advertisement criterion, associated spending, number of users who accessed advertisement associated one or more types of user actions or controls 1620, number of visual media item(s) presented at particular types of applications, interfaces, features, feeds and stories 1625, and the like.
- 1635 or 1638 or 1640 at user interface e.g. 997 or 1107 or 1130 or 1135 or 1223 or 1237 or 1273 or 1323 or 1383 or 1965 or 2626 or 2644 or 2736 or 2744 or 3965 or 4413 or 4813 or 5438 or 5865 or 6305 or 6350 or 6372 or 6392 or 6683 on user device.
- Figure 18 illustrates user interface 271 for enabling user to add to selected or auto determined one or more recipient(s)' or destination(s)' local storage or sender-named folder, gallery, album or feed of a web page, application or interface, or to send, post, share or broadcast one or more types of content items or visual media items, including select 1884, search 1882 & capture 1886 photo(s); select 1884, search 1882 & record videos and/or voice 1888; augment, edit or apply one or more photo filters and overlays on visual media; broadcast live stream; prepare, edit & draft text contents 1890; or any combination thereof, from sender user device 1831 to one or more selected, pre-set, default or auto determined contacts or one or more types of selected, pre-set, default or auto determined destinations e.g.
- a server 110 via server module 177 comprising: a processor; and a memory storing instructions executed by the processor to: receive said posted content item(s) or visual media item(s) 1861-1869 from said sender or posting user or broadcaster user device, e.g. 1831, for sending to one or more sender selected or target destination(s) or intended recipient(s), e.g. recipient's device 1832 or local storage medium 1824 of recipient's device 1832.
- Server 110 presents, sends or stores with recipient's permission, or based on settings auto stores, at local storage or at a particular sender-named gallery, album, feed, web page, application, interface or folder of recipient user's device 1832.
- Recipient user 1852 can search, filter, sort 1836 and select one or more senders or sources or content items or set or group or categories of content items or album 1856 and can access or view received content item(s) or visual media item(s) 1871-1879 from said selected sender or source 1854 at user interface 1833 of user device 1832.
- Sender user 1842 can search, filter, sort 1847 & select one or more recipients or destinations 1844, and can search, select, view, access, add or post newly selected 1884 or searched & selected 1882 or captured photo 1886 or recorded video 1888 or visual media item(s), and update or remove 1845 from shared content item(s) or visual media item(s) via sender user interface 1834; after add, update & remove changes, synchronization (including employing pull replication, push replication, snapshot & merge replication) or updates will take effect at recipient's device and/or be accessible from server, or directly at said selected recipient's or destination's user interface 1834 at user device 1831 via e.g.
- emulator for enabling sending user 1842 to access said posted content items 1861-1869 at recipient device 1832.
- user can send invitations to add contacts or destinations.
- user can block or mute one or more senders or sources or contacts to stop receiving contents from them.
- user can schedule receiving of contents from one or more selected sources or senders.
- user can apply do-not-disturb settings: receive from all or selected or favorite contacts, receive when user is online, or receive at particular scheduled dates & times.
- receiver can apply content item or visual media receiving, presenting and access settings for one or more contact(s) or sender(s) or source(s), as discussed in detail in Figure 8.
- sender can view various statuses including: content or visual media sent, posted or newly added; received at server or by recipient at recipient's device; viewed or not viewed by recipient; recipient is online or offline; updated by sender; removed by sender; saved by recipient; screenshot taken by recipient; auto-removed from recipient's device based on ephemeral settings, as discussed in detail in Figure 7.
- sender can allow receiver to save, re-share, rate, make comment, like or dislike, update or edit and remove content items.
- sender can select one or more content items or visual media items, select one or more recipients, and select one or more user action(s) including: add new or updated; post new or updated; real-time view only; view within a pre-set duration, then remove; view a particular number of times within a particular life duration, then auto-remove; view within a particular life duration, then auto-remove (as discussed in Figure 7); and remove at/from said selected one or more recipient(s)' device(s).
- the server receives a selection of a content view setting(s) and rule(s) (as discussed in detail in Figure 7) to be associated with the destination(s) or recipient(s) e.g. 1832 from the user 1831, the content view setting(s) and rule(s) establishing one or more
- sender(s) or source(s) of content is/are enabled to send one or more types of one or more media with associated applied or pre-set view settings, rules and conditions and associated dynamic actions to one or more contacts, connections, followers, targeted recipients based on one or more target criteria or contextual users or network, destinations, groups, networks, web sites, devices, databases, servers, applications and services.
- sender(s) or source(s) 1831 or 1842 of content 1861-1869 is/are enabled to access shared contents or media 1861-1869 and update or apply view settings at one or more recipient's ends 1832 or 1852 or at one or more devices, applications, interfaces e.g. 1833, web page or profile page, and storage medium of recipients 1832 or 1852.
- view settings, rules and conditions include: remove after a set period of time; a set period of time to view each shared media item; and a particular number or type of reactions required, or required within a particular set period of time, for second-time receipt of shared content.
- content includes one or more types of media including photo, video, stream, voice, text, link, file, object, or one or more types of digital items.
- access rights include: add new or send one or more types of media; delete or remove; edit one or more types of media; update the recipient's associated viewing settings, including updating the set period of time to delete a message, allowing saving or not, and allowing re-sharing or not; sort, filter, search.
- sender can select one or more media items at sender's device or application or interface or storage medium or explorer or media gallery, select one or more contacts or user names or identities or destinations, and send, send updated, or update.
- sender can select one or more media items at sender's device or application or interface or storage medium or explorer or media gallery, select one or more contacts or user names or identities or destinations to whom sender sent said media item(s), and remove.
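The view settings, access rights, and ephemeral actions described above can be sketched as a simple per-item record; all class and field names below are hypothetical illustrations, not identifiers from the specification:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of a shared media item and its per-recipient
# view settings; names are illustrative, not from the source text.
@dataclass
class ViewSettings:
    remove_after_seconds: Optional[int] = None   # remove after set period of time
    view_seconds_per_item: Optional[int] = None  # time allowed to view each item
    max_views: Optional[int] = None              # allowed views within life duration
    allow_save: bool = False                     # recipient may save
    allow_reshare: bool = False                  # recipient may re-share

@dataclass
class SharedMediaItem:
    media_id: str
    media_type: str                              # "photo", "video", "stream", "voice", ...
    sender_id: str
    recipient_ids: List[str] = field(default_factory=list)
    settings: ViewSettings = field(default_factory=ViewSettings)

# Example: a photo shared once, viewable for 10 seconds, then removed.
item = SharedMediaItem("m1", "photo", "sender-1831",
                       ["recipient-1832"],
                       ViewSettings(remove_after_seconds=10, max_views=1))
```

The sender retains the record server-side, which is what lets it later update or remove the item at the recipient's end, as described above.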
- Figure 19 illustrates processing operations associated with accelerated display of ephemeral messages in accordance with an embodiment of the invention and illustrates the exterior of an electronic device implementing accelerated display of ephemeral messages in accordance with the invention.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 275 to implement operations of the invention.
- the ephemeral message controller 275 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the display time for the ephemeral message is typically set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- the processor 230 is also coupled to one or more sensors including Orientation Sensor 237, Position Sensor 242, GPS Sensor 238, Audio sensor 245, Proximity Sensor 246, Gyroscope 247, Accelerometer 248 and other one or more types of sensors 250 including hover sensor, eye tracking system via optical sensors 240 or image sensors 244.
- the ephemeral message controller 275 monitors signals, senses, or one or more types of pre-defined equivalents of generated or updated sensor data from the one or more sensors, including a voice command from audio sensor 245, a particular type of eye movement from the eye tracking system based on image sensors 244 or optical sensors 240, or a hover signal from e.g. a specific type of proximity sensor 246. If one of the pre-defined types of said senses is detected or observed by said one or more types of sensors during the display of an ephemeral message, then the display of the existing message is terminated and a subsequent ephemeral message, if any, is displayed. In one embodiment, two types of signals or senses from sensors may be monitored.
- a continuous signal or senses from one or more types of sensors may be required to display a message, while an additional sensor signal or sense may operate to terminate the display of the message.
- the viewer might instruct via voice command, hover on the display, or perform one or more types of pre-defined eye movement. This causes the screen to display the next piece of media in the set.
- the one or more types of pre-defined signals or senses provided by the user and detected or sensed by a sensor terminate a message while the media viewer application or interface is open or while viewing display 210.
- the sensor signal or sense is any sense applied on the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete" to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete" option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 242. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 275.
- Figure 19 (B) illustrates processing operations associated with the ephemeral message controller 275.
- an ephemeral message is displayed 1920 (in an embodiment the message can be served by server 110 via server module 178, from client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), providing of authentication information, one or more types of communication interfaces, or any combination thereof).
- One or more types of user sense is/are then monitored, tracked, detected and identified 1925. If a pre-defined user sense is identified or detected or recognized (1925—Yes), then the current message is deleted and the next message, if any, is displayed 1920. If no user sense is identified or detected or recognized (1925—No), then the timer is checked 1930. If the timer has expired (1930—Yes), then the current message is deleted and the next message, if any, is displayed 1920. If the timer has not expired (1930—No), then another user sense identification or detection or recognition check is made 1925. This sequence between blocks 1925 and 1930 is repeated until one or more types of pre-defined user sense is identified or detected or recognized or the timer expires.
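The loop between blocks 1920, 1925 and 1930 can be sketched as follows; the `show`, `remove`, and `sense_detected` callbacks are hypothetical stand-ins for the display and sensor machinery described above:

```python
import time

def display_messages(messages, view_seconds, sense_detected, show, remove):
    """Sketch of the Figure 19(B) flow: display each ephemeral message
    until a pre-defined user sense is detected or its timer expires,
    then delete it and display the next message, if any.
    All callbacks are hypothetical stand-ins."""
    for msg in messages:
        show(msg)                               # block 1920: display message
        deadline = time.monotonic() + view_seconds
        while time.monotonic() < deadline:      # block 1930: timer check
            if sense_detected():                # block 1925: user sense check
                break                           # sense ends display early
            time.sleep(0.05)
        remove(msg)                             # delete current; loop shows next
```

A user sense (voice command, hover, eye movement) short-circuits the timer, which is what "accelerated display" refers to here.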
- FIG. 19 (A) illustrates processing operations associated with the ephemeral message controller 275.
- an ephemeral message is displayed 1910 (in an embodiment the message can be served by server 110 via server module 178, from client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), providing of authentication information, one or more types of communication interfaces, or any combination thereof).
- If a pre-defined user sense is identified or detected or recognized (1915—Yes), then the current message is deleted and the next message, if any, is displayed 1910, and another user sense identification or detection or recognition check is made 1915. This sequence between blocks 1910 and 1915 is repeated until one or more types of pre-defined user sense is identified or detected or recognized or the timer expires.
- Figure 19 (C) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 1960 available for viewing.
- a first message 1971 may be displayed.
- a second message 1970 is displayed.
- Figure 20 illustrates processing operations associated with real-time display of ephemeral messages in accordance with an embodiment of the invention.
- Figure 20 illustrates a data structure for real-time ephemeral messages.
- Figure 25 (A) illustrates the exterior of an electronic device implementing real-time accelerated display of ephemeral messages in accordance with the invention.
- a column 2062 may have a recipient user's unique identity
- a column 2064 may have a sender user's unique identity
- a column 2066 may have a list of messages or media items.
- Another column 2068 may have a list of message accept-to-view duration parameters for individual messages, wherein the accept-to-view timer is a pre-set duration within which the user has to accept the notification or view indication, or tap on the notification, to open & view the message.
- Another column 2072 may have a list of message display or message view duration parameters for individual messages, wherein the message display or message view duration is a pre-set duration within which the user has to view said presented message; on expiry of said view duration timer, said message is removed and another message is presented (if any).
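One way to sketch columns 2062-2072 is as one record per real-time ephemeral message; the class and field names below are illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical sketch of one row of the Figure 20 data structure;
# each field corresponds to one of the columns described above.
@dataclass
class RealTimeEphemeralMessage:
    recipient_id: str             # column 2062: recipient user's unique identity
    sender_id: str                # column 2064: sender user's unique identity
    media_item: str               # column 2066: message or media item reference
    accept_to_view_seconds: int   # column 2068: window to accept the notification
    view_seconds: int             # column 2072: window to view once opened

row = RealTimeEphemeralMessage("recipient-1", "sender-1", "photo-P1",
                               accept_to_view_seconds=15, view_seconds=10)
```

Keeping both durations per message (rather than globally) matches the text's statement that each individual message carries its own parameters.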
- user "Cindy” selects or takes visual media including photo 2510 or video 2515 and select contacts 2520 e.g.
- recipient user in real-time, i.e. before expiry of view timer 2510, can provide one or more types of user reactions including like, dislike, comment, re-share or save (based on sender's permission or privacy settings), report, or rating on said visual media 2505.
- sender user can in real-time view said reactions 2552 from one or more recipient users (e.g. from user interface 2590 of user [Candice]'s device 2580) of said shared or sent visual media 2505.
- sender user can apply settings as discussed in Figure 7.
- recipient user can apply settings as discussed in Figure 8.
- After sending the first notification or indication about a received message, the system reminds or re-sends non-accepted or pending notifications a particular pre-set number of times only, based on: identification that the recipient is online; the recipient user's manual status being "available", "online" and the like; re-sending after a pre-set interval duration; re-sending when the sender is not muted; re-sending based on the sender's scheduled availability; re-sending based on the recipient's "Do not disturb" policy or settings, including whether the sender is allowed to send; re-sending when the recipient user has not blocked the sender and when a particular application or interface is open; and determining that the user device is open and whether the user is busy in pre-defined activities (including making or attending phone calls, or texting via instant messenger(s)) or is not busy and is currently doing non-busy pre-defined activities (including playing games, browsing social networks and the like). The system thus reminds or re-sends the notification when the user is not busy. The present invention thereby makes possible maximum real-time or near real-time sending and viewing.
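The re-send policy above reduces to a gating function; the boolean inputs below are hypothetical simplifications of the recipient-state checks described:

```python
def should_resend(resend_count, online, muted, blocked,
                  dnd_blocks_sender, busy, max_resends=3):
    """Sketch of the notification re-send policy above. Re-send a
    pending notification only while the recipient is reachable and
    not busy, up to a pre-set count. Parameter names are hypothetical
    simplifications of the checks listed in the text."""
    if resend_count >= max_resends:
        return False            # pre-set re-send limit reached
    if not online or muted or blocked:
        return False            # recipient offline, or has muted/blocked sender
    if dnd_blocks_sender:
        return False            # "Do not disturb" settings exclude this sender
    if busy:
        return False            # busy in pre-defined activities (call, texting)
    return True                 # non-busy activities (games, browsing): re-send now
```

In a real implementation the `busy` signal would itself be derived from device state (active call, foreground messenger), as the text describes.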
- Figure 21 illustrates processing operations associated with real-time display of ephemeral messages in accordance with an embodiment of the invention.
- A real-time ephemeral message controller 276 with instructions executed by a processor to: the first processing operation of Figure 21 is to maintain each real-time ephemeral message and its associated accept-to-view duration and view duration 2105; the next processing operation of Figure 21 is to serve or present notification(s) or indication regarding receipt of real-time ephemeral message(s), or present on a display indicia of one or more notification(s) of receipt of real-time ephemeral messages available for viewing 2110 (in an embodiment the message or notification can be served by server 110 via server module 178, from client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), or providing of authentication information or
- Figure 21 illustrates a data structure for real-time ephemeral messages.
- Figure 26 (A) illustrates the exterior of an electronic device implementing real-time accelerated display of ephemeral messages in accordance with the invention.
- a column 2162 may have a recipient user's unique identity
- a column 2164 may have a sender user's unique identity
- a column 2166 may have a list of messages or media items.
- Another column 2168 may have a list of message accept-to-view duration parameters for individual messages, wherein the accept-to-view timer is a pre-set duration within which the user has to accept the notification or view indication, or tap on the notification, to open & view the message.
- Another column 2172 may have a list of message display or message view duration parameters for individual messages, wherein the message display or message view duration is a pre-set duration within which the user has to view said presented message; on expiry of said view duration timer, said message is removed and another message is presented (if any).
- said presented message [P2] is removed. In an embodiment, in the event of receiving the next message [P3] notification not during viewing of the second message [P2] but after viewing and removal of the second message [P2], the user is further enabled to accept or tap on the next message [P3] notification within that message's associated accept-to-view timer; on acceptance or tapping within the accept-to-view time, the user is presented with the next message [P3] and the view timer associated with that message starts; on expiry of the view timer, message [P3] is removed and the user is presented with the next message (if any is received and pending to view), e.g. ephemeral message [P4].
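The two-timer flow above (accept-to-view first, then view) can be sketched as a single function; all callbacks are hypothetical stand-ins for the notification and display machinery:

```python
import time

def run_message(notify, accept_to_view_s, view_s, accepted, show, remove):
    """Sketch of the accept-to-view / view-timer flow described above:
    a notification must be tapped within its accept-to-view window;
    once opened, the message is removed when its view timer expires.
    `notify`, `accepted`, `show`, `remove` are hypothetical callbacks."""
    notify()                                    # present the notification
    deadline = time.monotonic() + accept_to_view_s
    while time.monotonic() < deadline:          # accept-to-view window
        if accepted():                          # user taps notification in time
            show()                              # present the message
            time.sleep(view_s)                  # view timer runs
            remove()                            # message removed on expiry
            return "viewed"
        time.sleep(0.05)
    return "expired"                            # accept-to-view timer expired
```

In the flow of Figure 21, `run_message` would be invoked for each pending message in turn (P2, P3, P4, ...), with "expired" triggering the re-send policy discussed elsewhere.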
- In the event of non-acceptance of a notification or indication, the system, based on settings, reminds or re-sends non-accepted or pending notifications a particular pre-set number of times only; after re-sending notifications said particular pre-set number of times, the system removes the message from the server 110.
- After sending the first notification or indication about a received message, the system reminds or re-sends non-accepted or pending notifications a particular pre-set number of times only, based on: identification that the recipient is online; the recipient user's manual status being "available", "online" and the like; re-sending after a pre-set interval duration; re-sending when the sender is not muted; re-sending based on the sender's scheduled availability; re-sending based on the recipient's "Do not disturb" policy or settings, including whether the sender is allowed to send; re-sending when the recipient user has not blocked the sender and when a particular application or interface is open; and determining that the user device is open and whether the user is busy in pre-defined activities (including making or attending phone calls, or texting via instant messenger(s)) or is not busy and is currently doing non-busy pre-defined activities (including playing games, browsing social networks and the like). The system thus reminds or re-sends the notification when the user is not busy. The present invention thereby makes possible maximum real-time or near real-time sending and viewing.
- FIG. 22 illustrates processing operations associated with real-time display of ephemeral messages in accordance with an embodiment of the invention.
- A real-time ephemeral message controller 276 with instructions executed by a processor to: maintain each Real-time Ephemeral Message and associated Accept-to-View Duration 2205; present
- the message can be served by server 110 via server module 178, from client device 200 storage medium, from a message queue, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), or providing of
- Notification(s) 2232: in the event of receiving an instruction to close or hide, an intention to view the next message (if any), or an instruction to remove the presented Real-time Ephemeral Message(s) 2238, discard or remove or hide the Real-time Ephemeral Message or content or media item associated with the selected or identified Notification, and start the Accept-to-View Timer associated with all other Received Notification(s) 2241.
- Figure 22 illustrates a data structure for real-time ephemeral messages.
- Figure 27 (D) illustrates the exterior of an electronic device implementing real-time accelerated display of ephemeral messages in accordance with the invention.
- a column 2262 may have a recipient user's unique identity
- a column 2264 may have a sender user's unique identity
- a column 2266 may have a list of messages or media items.
- Another column 2268 may have a list of message accept-to-view duration parameters for individual messages, wherein the accept-to-view timer is a pre-set duration within which the user has to accept the notification or view indication, or tap on the notification, to open & view the message.
- first received message 2705: when the user taps on the notification about receipt of the first message within the accept-to-view time, the user is presented with first received message 2705 and the accept-to-view timers of one or more received notification(s) (e.g. 2711, 2712 & 2713) about received messages are paused; the user is enabled to view said presented first message 2725 up to a user instruction to close or hide or remove it (e.g. tap on remove or hide or close icon 2720, or tap anywhere on user interface 2728 or display 210); in the event of such a user instruction e.g. 2720, first message [P1] 2725 is removed and the accept-to-view timer(s) of all paused notification(s) start (e.g. 2711,
- within the accept-to-view time of second notification 2716: present the message and pause the accept-to-view timer of all other received notification(s) (e.g. 2711 & 2712); in the event of a user instruction to close or hide or remove presented second message [P2], close or hide or remove presented second message [P2] and start the timer of all other paused notification(s).
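The pause-and-resume behavior of the accept-to-view timers described above (paused while another message is open, resumed when it is closed) can be sketched as a pausable countdown; the class and method names are assumptions:

```python
import time

class PausableTimer:
    """Sketch of a pausable accept-to-view timer: while one message is
    open, the timers of the other pending notifications (e.g. 2711-2713)
    are paused, and resume when that message is closed or removed."""
    def __init__(self, duration):
        self.remaining = duration    # seconds left in the accept-to-view window
        self.started_at = None       # None while paused

    def start(self):
        self.started_at = time.monotonic()

    def pause(self):
        if self.started_at is not None:
            # bank the elapsed time, then stop the clock
            self.remaining -= time.monotonic() - self.started_at
            self.started_at = None

    def expired(self):
        if self.started_at is None:          # paused: only banked time counts
            return self.remaining <= 0
        return time.monotonic() - self.started_at >= self.remaining
```

Opening a message would call `pause()` on every other notification's timer; closing it would call `start()` on each, matching the flow around notifications 2711-2713.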
- Figure 26 (C) illustrates the exterior of an electronic device implementing real-time accelerated display of ephemeral messages in accordance with the invention.
- about the received message; the user is enabled to view said presented first message up to expiry of the associated view timer and/or a user instruction to close or hide or remove said first message; in the event of expiry of the message's associated view timer and/or a user instruction to close or hide or remove said presented first message, first message [P1] is removed and the accept-to-view timer(s) of all paused notification(s) start, e.g. 2611, 2612 & 2613.
- second notification or any preferred notification e.g. 2612 from list 2615 within accept-to-view time 2616 of second notification or preferred or selected message notification 2612
- present message 2625 and pause accept-to-view timer of all other received notification(s) e.g. 2611 & 2613
- expiry of message 2625's associated view timer 2620 and/or user instruction to close or hide or remove (e.g.
- After sending the first notification or indication about a received message, the system reminds or re-sends non-accepted or pending notifications a particular pre-set number of times only, based on: identification that the recipient is online; the recipient user's manual status being "available", "online" and the like; re-sending after a pre-set interval duration; re-sending when the sender is not muted; re-sending based on the sender's scheduled availability; re-sending based on the recipient's "Do not disturb" policy or settings, including whether the sender is allowed to send; re-sending when the recipient user has not blocked the sender and when a particular application or interface is open; and determining that the user device is open and whether the user is busy in pre-defined activities (including making or attending phone calls, or texting via instant messenger(s)) or is not busy and is currently doing non-busy pre-defined activities (including playing games, browsing social networks and the like). The system thus reminds or re-sends the notification when the user is not busy. The present invention thereby makes possible maximum real-time or near real-time sending and viewing.
- FIG. 23 illustrates processing operations associated with real-time display of ephemeral messages in accordance with an embodiment of the invention.
- A real-time ephemeral message controller 276 with instructions executed by a processor to: present notification(s) or indication regarding receipt of Real-time Ephemeral Message(s) in chronological order 2305 (in an embodiment the message or notification can be served by server 110 via server module 178, from client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), providing of authentication information, one or more types of communication interfaces, or any combination thereof); in the event of receiving other notification(s) 2306, pause the Accept-to-View timer for a pre-set period of time of all other received Notification(s) 2311; start the Accept-to-View Timer of the first Notification of the chronological list of received Notifications (
- Figure 23 illustrates a data structure for real-time ephemeral messages.
- Figure 27 (E) illustrates the exterior of an electronic device implementing real-time accelerated display of ephemeral messages in accordance with the invention.
- a column 2362 may have a recipient user's unique identity, a column 2364 may have a sender user's unique identity, a column 2366 may have a list of messages or media items.
- Another column 2368 may have a list of message accept-to-view duration parameters for individual messages, and another column 2368 may have a remaining Accept-to-View Timer of next message 2370, wherein the accept-to-view timer is a pre-set duration within which the user has to accept the next notification or view indication, or tap on the next notification or on the timer icon, to open & view the next message.
- In the event of non-acceptance of a notification or indication, the system, based on settings, reminds or re-sends non-accepted or pending notifications a particular pre-set number of times only; after re-sending notifications said particular pre-set number of times, the system removes the message from the server 110.
- After sending the first notification or indication about a received message, the system reminds or re-sends non-accepted or pending notifications a particular pre-set number of times only, based on: identification that the recipient is online; the recipient user's manual status being "available", "online" and the like; re-sending after a pre-set interval duration; re-sending when the sender is not muted; re-sending based on the sender's scheduled availability; re-sending based on the recipient's "Do not disturb" policy or settings, including whether the sender is allowed to send; re-sending when the recipient user has not blocked the sender and when a particular application or interface is open; and determining that the user device is open and whether the user is busy in pre-defined activities (including making or attending phone calls, or texting via instant messenger(s)) or is not busy and is currently doing non-busy pre-defined activities (including playing games, browsing social networks and the like). The system thus reminds or re-sends the notification when the user is not busy. The present invention thereby makes possible maximum real-time or near real-time sending and viewing.
- FIG. 24 (A) illustrates processing operations associated with real-time display of ephemeral messages in accordance with an embodiment of the invention.
- A real-time ephemeral message controller with instructions executed by a processor to: maintain, by the server system, each real-time ephemeral message and associated accept-to-view duration; present, by the server system, on the display a first notification for providing indication of receipt of a first real-time ephemeral message from received or identified one or more ephemeral messages 2405
- the message or notification can be served by server 110 via server module 178, from client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), providing of authentication information, one or more types of communication interfaces, or any combination thereof); start the accept-to-view timer associated with the first notification of receipt of the first ephemer
- FIG. 24 (B) illustrates processing operations associated with real-time display of ephemeral messages in accordance with an embodiment of the invention.
- the message or notification can be served by server 110 via server module 178, from client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), providing of authentication information, one or more types of communication interfaces, or any combination thereof); start the accept-to-view timer associated with the first notification of receipt of the first set of ephemeral message(s) 2437; in response to expiry of the accept-to-view timer associated with the first notification of receipt of the first set of ephemeral message(s) 2440, remove or disable the first notification 2442; in response
- the message or notification can be served by server 110 via server module 178, from client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, an application programming interface (API) or software development toolkit (SDK), providing of authentication information, one or more types of communication interfaces, or any combination thereof); start accept-to-view timer 2409 associated with the first received or selected notification and/or real-time ephemeral message for a first transitory period of time defined by the associated accept-to-
- Figure 28 illustrates processing operations associated with real-time session specific display of ephemeral messages in accordance with an embodiment of the invention.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a Real-time or Live Ephemeral Message Session Controller and Application 283 to implement operations of the invention.
- the Real-time or Live Ephemeral Message Session Controller and Application includes executable instructions to accelerate real-time or live display of ephemeral messages.
- An ephemeral message may be a text, an image, a video, a voice recording, one or more types of multimedia, augmented or edited media, and the like.
- the display time for the ephemeral message is typically set by the message sender. However, the display time may be a default setting, a setting specified by the recipient, a setting specified by the server, or a setting auto-determined based on the type of sender, type of receiver, determined availability of the receiver, number of messages pending to view, number of messages sent by a particular sender, type of relationship with the sender, frequency of sharing between sender and receiver, and the like. Regardless of the setting technique, the message is transitory.
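By way of illustration only, the display-time determination described above may be sketched as follows; the precedence order (sender over recipient over server) and the 10-second default are assumptions for this sketch, not part of any claim.

```python
# Illustrative sketch only: resolving the display time of an ephemeral message.
# The precedence order and default value are assumptions, not claimed behaviour.
DEFAULT_DISPLAY_SECONDS = 10

def resolve_display_time(sender_setting=None, recipient_setting=None,
                         server_setting=None, pending_count=0):
    """Return the display duration (in seconds) for an ephemeral message."""
    for setting in (sender_setting, recipient_setting, server_setting):
        if setting is not None:
            return setting
    # Auto-determined fallback: shorten the display when many messages
    # are pending to view, one of the factors listed above.
    if pending_count > 10:
        return max(2, DEFAULT_DISPLAY_SECONDS - pending_count // 5)
    return DEFAULT_DISPLAY_SECONDS
```

Regardless of which setting wins, the resolved duration is finite, so the message remains transitory.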
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the Real-time or Live Ephemeral Message Session Controller 283 monitors signals from the touch controller 215. If haptic contact is observed by the touch controller 215 during the display of an ephemeral message, then the display of the existing message is terminated and a subsequent ephemeral message, if any, is displayed. In one embodiment, two haptic signals may be monitored.
- a continuous haptic signal may be required to display a message, while an additional haptic signal may operate to terminate the display of the message. For example, the viewer might tap the screen with a finger while maintaining haptic contact with another finger. This causes the screen to display the next piece of media in the set.
- the haptic contact to terminate a message is any gesture applied to any location on the display 210.
- the haptic contact is any gesture applied to the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete" to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of components of Figure 2 are known in the art, new functionality is achieved through the Real-time or Live Ephemeral Message Session Controller 283.
- Figure 28 (A) illustrates processing operations associated with the Real-time or Live Ephemeral Message Session Controller 283.
- a notification is displayed 2805.
- in the event of expiry of the accept-to-view timer associated with the notification; or rejection via tapping on the notification's associated "Reject" button, control, link or image; or receiving of a rejection signal or a pre-defined user-sense rejection command or instruction from the user via one or more types of sensors of the user device(s); or haptic contact engagement on a "Reject", "Cancel" or "End" button, link, image, control or pre-defined area; or identification of a block of the sender by the recipient; or identification of mute by the recipient; or
- identification of "Do Not Disturb" policies or settings by the recipient, including the recipient not allowing receipt or the message not falling within the recipient's schedule to receive, or an auto-determined busy status of the recipient, or identification of an offline status of the user 2808: reject, end, cancel or miss said notification-associated session 2810. OR, in the event of haptic contact engagement on the notification area, or identification or recognition of one or more types of pre-defined user senses via one or more sensors of the user device(s), or acceptance by tapping on the "Accept" button, icon, link or control associated with the notification, or auto-accept based on pre-set auto-accept settings, or auto-accept after expiry of a pre-set period of time, or acceptance within the pre-set accept-to-view duration timer: start the session (i.e. a real-time sharing or sending, receiving and viewing session) 2813.
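The accept/reject branch of blocks 2808-2813 can be summarized in a short sketch; the function name and the 30-second accept-to-view duration are hypothetical assumptions for illustration.

```python
ACCEPT_TO_VIEW_SECONDS = 30  # assumed accept-to-view timer duration

def handle_notification(response, responded_at, shown_at):
    """Resolve a session notification per blocks 2808/2810/2813.

    response: 'accept', 'reject', or None (no user action).
    Timestamps are in seconds; the notification was shown at shown_at.
    """
    timer_expired = (responded_at - shown_at) > ACCEPT_TO_VIEW_SECONDS
    if response == 'accept' and not timer_expired:
        return 'session_started'   # block 2813
    # Rejection, timer expiry, Do Not Disturb, busy or offline status
    # all resolve to the reject/end/cancel/miss branch.
    return 'session_rejected'      # block 2810
```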
- the receiver or sender can at any time, via haptic contact engagement or haptic swipe or tap on an "End" button, link, image, control or pre-defined full or partial display area, or via detection, recognition or receiving of one or more types of pre-defined user senses through one or more types of sensors of the user device(s), instruct the system to "end" the session 2815, whereupon said started session 2813 is ended 2820.
- select & send or auto-send one or more ephemeral messages, or the server sends received or stored ephemeral messages from one or more sources or senders to one or more target recipients, requesting users, searching users or auto-determined recipients, and adds them to the ephemeral message queue(s) at each intended or targeted recipient's device(s) or interface(s) for presenting said ephemeral message(s) to the recipient or viewer; an ephemeral message is then displayed 2828.
- a timer is then started 2830.
- the timer may be associated with the processor 230.
- message 2828 or notification can be served by server 110 via server module 178, or served from client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, application programming interfaces (API), software development toolkits (SDK), provision of authentication information, or one or more types of communication interfaces, and any combination thereof.
- Haptic contact is then monitored 2835. If haptic contact exists (2835— Yes), then the current message is deleted and the next message, if any, is displayed 2828. If haptic contact does not exist (2835— No), then the timer is checked 2840. If the timer has expired (2840— Yes), then the current message is deleted and the next message, if any, is displayed 2828. If the timer has not expired (2840— No), then another haptic contact check is made 2835. This sequence between blocks 2835 and 2840 is repeated until haptic contact is identified or the timer expires.
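The 2828-2840 loop amounts to: each message is shown until either haptic contact or timer expiry, whichever comes first. A deterministic sketch follows, with the touch controller and processor timer replaced by test inputs; the function and its parameters are hypothetical.

```python
def display_session(messages, display_seconds, haptic_at=None):
    """Simulate blocks 2828-2840: return (message, seconds shown) pairs.

    haptic_at: optional dict mapping message index to the second at
    which haptic contact terminates that message early (2835-Yes);
    otherwise the timer ends the display at display_seconds (2840-Yes).
    """
    haptic_at = haptic_at or {}
    shown = []
    for i, msg in enumerate(messages):
        duration = min(haptic_at.get(i, display_seconds), display_seconds)
        shown.append((msg, duration))  # message is deleted after this duration
    return shown
```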
- one or more types of pre-defined user sense(s) via one or more types of sensors (e.g. a voice sensor for detecting or recognizing a voice command, an image sensor for tracking eye movement, and a proximity sensor for recognizing hovering over a particular area of the display, as discussed in detail in Figure 19) is/are then monitored 2835. If a pre-defined user sense or signal is detected, recognized or identified (2835—Yes), then the current message is deleted and the next message, if any, is displayed 2828. If no pre-defined user sense or signal is detected, recognized or identified (2835—No), then the timer is checked 2840.
- Figure 28 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 2860 available for viewing.
- a first message 2871 may be displayed.
- upon expiry of the timer, a second message 2870 is displayed. Alternately, if haptic contact or one or more types of pre-defined user sense(s) is received before the timer expires, the second message 2870 is displayed.
- in an embodiment, an ephemeral message is conditional, including: enabling the recipient to view the message an unlimited number of times within a pre-set life duration, and removing the message after expiry of said pre-set life duration; or allowing the recipient or viewing user to view the message a pre-set number of times, and removing the message after said viewing limit is passed; or allowing the recipient or viewing user to view the message a pre-set number of times within a pre-set life duration, and removing the message after expiry of said life duration or after said viewing limit is passed, whichever is earlier.
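The conditional viewing policy above reduces to two independent limits, either of which (when set) removes the message; a minimal sketch follows, with hypothetical parameter names.

```python
def is_viewable(view_count, elapsed_seconds, max_views=None, life_seconds=None):
    """True while the message may still be viewed.

    max_views: pre-set number of allowed views (None = unlimited).
    life_seconds: pre-set life duration (None = no time limit).
    With both set, whichever limit is reached earlier removes the message.
    """
    if max_views is not None and view_count >= max_views:
        return False
    if life_seconds is not None and elapsed_seconds >= life_seconds:
        return False
    return True
```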
- Figure 29 illustrates processing operations associated with real-time session specific display of ephemeral messages in accordance with an embodiment of the invention.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a Real-time or Live Ephemeral Message Session Controller and Application 283 to implement operations of the invention.
- the Real-time or Live Ephemeral Message Session Controller and Application includes executable instructions to accelerate the real-time or live display of ephemeral messages.
- An ephemeral message may be a text, an image, a video, a voice recording, one or more types of multimedia, augmented or edited media, and the like.
- the display time for the ephemeral message is typically set by the message sender. However, the display time may be a default setting, a setting specified by the recipient, a setting specified by the server, or a setting auto-determined based on the type of sender, type of receiver, determined availability of the receiver, number of messages pending to view, number of messages sent by a particular sender, type of relationship with the sender, frequency of sharing between sender and receiver, and the like. Regardless of the setting technique, the message is transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the Real-time or Live Ephemeral Message Session Controller 283 monitors signals from the touch controller 215. If haptic contact is observed by the touch controller 215 during the display of an ephemeral message, then the display of the existing message is terminated and a subsequent ephemeral message, if any, is displayed. In one embodiment, two haptic signals may be monitored.
- a continuous haptic signal may be required to display a message, while an additional haptic signal may operate to terminate the display of the message. For example, the viewer might tap the screen with a finger while maintaining haptic contact with another finger. This causes the screen to display the next piece of media in the set.
- the haptic contact to terminate a message is any gesture applied to any location on the display 210.
- the haptic contact is any gesture applied to the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete" to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of components of Figure 2 are known in the art, new functionality is achieved through the Real-time or Live Ephemeral Message Session Controller 283.
- Figure 29 (A) illustrates processing operations associated with the Real-time or Live Ephemeral Message Session Controller 283.
- after starting of session 2913, the sender or broadcaster or session starter is allowed to capture one or more, or a series or sequence of, photos, videos, voice recordings, or one or more types of content items or visual media items, and augment or edit & send, select & send, search, select & send, or auto-send one or more ephemeral messages to one or more contacts and/or destinations.
- in the event of rejection, or identification of "Do Not Disturb" policies or settings by the recipient, including the recipient not allowing receipt or the message not falling within the recipient's schedule to receive, or an auto-determined busy status of the recipient, or identification of an offline status of the user 2908: do not show the shared, sent or broadcasted one or more types of visual media items or content items 2910. OR, in the event of haptic contact engagement on the notification area, or identification or recognition of one or more types of pre-defined user senses via one or more sensors of the user device(s), or acceptance by tapping on the "Accept" button, icon, link or control associated with the notification, or auto-accept based on pre-set auto-accept settings, or auto-accept after expiry of a pre-set period of time, or acceptance at any time during the session (i.e. acceptance any time before ending of the currently started session): start presenting the ephemeral message.
- the sender can at any time, via haptic contact engagement or haptic swipe or tap on an "End" button, link, image, control or pre-defined full or partial display area, or via detection, recognition or receiving of one or more types of pre-defined user senses through one or more types of sensors of the user device(s), instruct the system to "end" 2915 session 2913, whereupon said started session 2913 is ended 2920.
- the receiver can at any time, via haptic contact engagement or haptic swipe or tap on an "End" 2916 button, link, image, control or pre-defined full or partial display area, or via detection, recognition or receiving of one or more types of pre-defined user senses through one or more types of sensors of the user device(s), instruct the system to "end" the showing of ephemeral message(s) 2918.
- ephemeral message(s) posted by the sender from the start of session 2913 (and stored by server 110) is/are displayed 2928.
- a timer is then started 2930. The timer may be associated with the processor 230.
- ephemeral message(s) posted by the sender after acceptance of said notification or indication is/are displayed 2928 (i.e. the recipient is not presented with content posted by the sender before acceptance).
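The difference between the two delivery variants (each displayed at block 2928) is only the cut-off timestamp; a sketch follows, with assumed data shapes and a hypothetical function name.

```python
def messages_for_recipient(posted, accepted_at, include_pre_acceptance):
    """Select session messages for a recipient.

    posted: list of (timestamp, message) pairs posted by the sender.
    include_pre_acceptance: True replays messages from session start
    (as stored by the server); False shows only messages posted after
    the recipient accepted the notification.
    """
    if include_pre_acceptance:
        return [m for _, m in posted]
    return [m for t, m in posted if t >= accepted_at]
```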
- a timer is then started 2930. The timer may be associated with the processor 230.
- a message or notification can be served by server 110 via server module 178, or served from client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices via one or more web services, application programming interfaces (API), software development toolkits (SDK), provision of authentication information, or one or more types of communication interfaces, and any combination thereof.
- Haptic contact is then monitored 2935. If haptic contact exists (2935—Yes), then the current message is deleted and the next message, if any, is displayed 2928. If haptic contact does not exist (2935—No), then the timer is checked 2940. If the timer has expired (2940—Yes), then the current message is deleted and the next message, if any, is displayed 2928. If the timer has not expired (2940—No), then another haptic contact check is made 2935. This sequence between blocks 2935 and 2940 is repeated until haptic contact is identified or the timer expires.
- one or more types of pre-defined user sense(s) via one or more types of sensors (e.g. a voice sensor for detecting or recognizing a voice command, an image sensor for tracking eye movement, and a proximity sensor for recognizing hovering over a particular area of the display, as discussed in detail in Figure 19) is/are then monitored 2935. If a pre-defined user sense or signal is detected, recognized or identified (2935—Yes), then the current message is deleted and the next message, if any, is displayed 2928. If no pre-defined user sense or signal is detected, recognized or identified (2935—No), then the timer is checked 2940.
- Figure 29 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 2960 available for viewing.
- a first message 2971 may be displayed.
- upon expiry of the timer, a second message 2970 is displayed. Alternately, if haptic contact or one or more types of pre-defined user sense(s) is received before the timer expires, the second message 2970 is displayed.
- in an embodiment, an ephemeral message is conditional, including: enabling the recipient to view the message an unlimited number of times within a pre-set life duration, and removing the message after expiry of said pre-set life duration; or allowing the recipient or viewing user to view the message a pre-set number of times, and removing the message after said viewing limit is passed; or allowing the recipient or viewing user to view the message a pre-set number of times within a pre-set life duration, and removing the message after expiry of said life duration or after said viewing limit is passed, whichever is earlier.
- Figure 30 illustrates processing operations associated with display of ephemeral messages wherein, based on haptic contact engagement or a tap on a particular content item, e.g. 3017, from a presented list of one or more content items 3025, said ephemeral message or media item 3017 is hidden and the next available (if any) ephemeral message, e.g. 3027, is loaded or presented, and said hidden ephemeral message or media item 3017 is added or sent to another list illustrated in Figure 30 (C), e.g. 3019, wherein the user can further add or send said hidden ephemeral message or media item 3017 back to the list illustrated in Figure 30 (B), or remove it manually (by providing one or more types of remove instruction including e.g.
- the system removes said hidden ephemeral message or media item 3017 from the list illustrated in Figure 30 (C), in accordance with an embodiment of the invention.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: display a set of, or particular number of, content item(s) or visual media item(s) or ephemeral message(s) 3025 (e.g. 3027 and 3030); in response to receiving haptic contact engagement or a tap on a particular content item, e.g. 3017, hide said ephemeral message or media item 3017 and load or present the next available (if any) ephemeral message, e.g. 3027.
- receive from a touch controller a haptic contact signal indicative of a gesture applied on the particular content item, e.g. 3017.
- the ephemeral message controller hides the ephemeral content item(s), e.g. 3017, in response to the haptic contact signal 3007 and proceeds to present on the display 210 a second ephemeral content item, e.g. 3027, of the collection of ephemeral content item(s) 3028 (e.g. 3027 and 3030), wherein the system adds or sends said hidden ephemeral message or media item 3017 to another list illustrated in Figure 30 (C), e.g. 3019.
- Figure 31 illustrates processing operations associated with display of ephemeral messages and media item completely scroll-up to remove and append media item at the end of feed of or set of ephemeral messages in accordance with an embodiment of the invention.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: display a scrollable list of content items 3108 (e.g. 3113 & 3118); receive input associated with a scroll command 3105; based on the scroll command, identify complete scroll-up of one or more digital content items, e.g. 3103, out of a pre-defined boundary, e.g. 3104; and in response to identifying complete scroll-up of one or more digital content items, e.g. 3103, remove the completely scrolled-up one or more digital content items, e.g. 3103.
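The Figure 31 behaviour (remove items completely scrolled out of the boundary, append the next queued item to the end of the feed) may be sketched as follows; the function name and its in-place queue mutation are illustrative assumptions.

```python
def apply_scroll(feed, queue, scrolled_out_ids):
    """Remove fully scrolled-out items and append replacements.

    feed: currently displayed item ids (e.g. [3113, 3118]).
    queue: pending item ids (e.g. [3109]); consumed in place.
    scrolled_out_ids: ids identified as completely scrolled up
    beyond the pre-defined boundary (e.g. {3113}).
    """
    remaining = [item for item in feed if item not in scrolled_out_ids]
    removed = len(feed) - len(remaining)
    remaining.extend(queue[:removed])  # append next message(s), if any
    del queue[:removed]
    return remaining
```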
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215. If input associated with a scroll command is received, then based on the scroll command complete scroll-up of one or more digital content items is identified and, in response, the completely scrolled-up content items are removed; or, if a haptic swipe contact is observed by the touch controller 215 and the displayed visual media item is completely scrolled up beyond the pre-defined boundary during the display of an ephemeral message, then the display of the existing message is terminated and a subsequent ephemeral message, if any, is displayed.
- the haptic swipe is any gesture applied to the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- FIG. 31 (A) illustrates processing operations associated with the ephemeral message controller 277.
- an ephemeral message is displayed 3140 (e.g. 3113 and 3118)
- message or notification can serve by server 110 via server module 178 or serve from client device 200 storage medium or serve from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, devices via one or more web services, application programming interface (API) or software development toolkit (SDK) or providing of
- authentication information or one or more types of communication interfaces and any combination thereof
- Haptic swipe is then monitored 3145. If the haptic swipe leads to complete scrolling up of the displayed visual media item (e.g. 3113) out of the pre-defined boundary, e.g. 3104 (3145—Yes), then the completely scrolled-up visual media item or message (e.g. 3113) is deleted (e.g. 3103) and the next message (e.g. 3109) 3140, if any, is appended to the feed and displayed. This sequence is repeated each time a haptic swipe is identified that leads to complete scrolling up of a displayed visual media item out of the pre-defined boundaries.
- Figure 31 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3108 available for viewing.
- a first one or more or set of message(s) e.g. 3113 & 3118 may be displayed.
- a second message or subsequent message(s) in queue 3109 is displayed.
- Figure 32 illustrates processing operations associated with display of ephemeral messages wherein, based on a load-more user action, currently presented ephemeral messages or media items are removed and the next available (if any) set of ephemeral messages is loaded or presented, in accordance with an embodiment of the invention.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: display a set of, or particular number of, content item(s) or visual media item(s) or ephemeral message(s) 3220 (e.g. 3207 and 3209); in response to receiving an instruction to load more or load next 3211 (if any available), or a tap anywhere on the screen, or in an embodiment in the event of expiration of a pre-set default timer or a pre-set timer associated with the presented set of contents, remove the displayed list of content item(s) 3220 (e.g. 3207 and 3209) and display the next set or particular number of content item(s) or visual media item(s) or ephemeral message(s) 3228 (e.g. 3238 and 3239), if any available.
- receive input associated with a load-next command, or receive an instruction to load next based on user input: receive from a touch controller a haptic contact signal indicative of a gesture applied on the "Load More" icon or button or link or control 3211 of the display 210, wherein the ephemeral message controller deletes the first set of ephemeral content item(s) 3220 (e.g. 3207 and 3209) in response to the haptic contact signal and proceeds to present on the display a second set of ephemeral content item(s) of the collection.
- a non-transitory computer readable storage medium of claim 158, wherein the processor receives from a sensor controller a pre-defined user sense signal indicative of a user sense or gesture applied to the display, and wherein the ephemeral message controller deletes the first set of ephemeral content item(s) in response to the user sense or sensor signal and proceeds to present on the display a second set of ephemeral content item(s) of the collection of ephemeral content item(s).
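The load-more behaviour of Figure 32 is ordinary pagination with the previous page discarded rather than kept on screen; a minimal sketch follows, with assumed names and page layout.

```python
def load_next(all_items, page_size, page_index):
    """Return the set of items to display after a "Load More" gesture.

    The previously displayed set is simply discarded by the caller;
    an empty list signals that no further set is available.
    """
    start = page_index * page_size
    return all_items[start:start + page_size]
```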
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215.
- if haptic contact on, or a tap or click of, the load-more icon is observed by the touch controller 215 or keyboard during the display of an ephemeral message, then the display of the existing message is terminated and a subsequent ephemeral message(s), if any, is displayed; or, in response to receiving an instruction to load more or load next (if any available), the displayed list of content item(s) is removed and the next set or particular number of content item(s) or visual media item(s) or ephemeral message(s) is displayed.
- the haptic contact to terminate a message is any gesture applied to any location on the display 210. In another embodiment, the haptic contact is any gesture applied to the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user). In another embodiment, the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277. In an embodiment Figure 32 (A) illustrates processing operations associated with the ephemeral message controller 277.
- a set of visual media item(s) or content item(s) or ephemeral message(s) is/are displayed 3225 (e.g. 3207 and 3209)
- message or notification can serve by server 110 via server module 178 or serve from client device 200 storage medium or serve from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, devices via one or more web services, application programming interface (API) or software development toolkit (SDK) or providing of authentication information or one or more types of communication interfaces and any combination thereof).
- haptic contact on, or a tap or click of, the "Load More" or "Load Next" icon or button or link or control is then monitored, or a user instruction to "Load More" or "Load Next" the set of visual media item(s) or content item(s) or ephemeral message(s) is received (3230—Yes); then the current one or more or set of visual media item(s) or content item(s) or message(s) is/are deleted (e.g. 3207 and 3209) and the next set, if any, is displayed (e.g. 3239 and 3240).
- Figure 32 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3220 available for viewing.
- first one or more or set of message(s) e.g. 3207 and 3209 may be displayed.
- upon receiving haptic contact on, or a tap or click of, the "Load More" or "Load Next" icon or button or link or control, or a user instruction to "Load More" or "Load Next" the set of visual media item(s) or content item(s) or ephemeral message(s), a second set of message(s) 3238 (e.g. 3239 and 3240), or subsequent message(s) in queue 3238, is/are displayed.
- Figure 33 illustrates processing operations associated with display of ephemeral messages wherein, based on a push-to-refresh user instruction, command or action, currently presented ephemeral messages or media items are removed and the next available (if any) set of ephemeral messages is loaded or presented, in accordance with an embodiment of the invention.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: displaying a scrollable particular set number(s) of list of content item(s) 3325 (e.g. 3317 and 3319); receiving input associated with a scroll command; based on the scroll command, displaying a scrollable refresh trigger 3315; and in response to determining, based on the scroll command, that the scrollable refresh trigger has been activated 3315, removing one or more or all or particular number of ephemeral message(s) or visual media item(s) or content item(s) and adding or updating or displaying next particular set number(s) of list of ephemeral message(s) or visual media item(s) or content item(s) 3328 (e.g. 3327 and 3330).
- In one embodiment, the number of content items removed is based on or equal to the number of newly available content items; in another embodiment, the number of content items removed is equal to the number of updated content items available to the viewing user.
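The push-to-refresh behaviour can be sketched as follows; the `PullToRefreshFeed` class name and the pixel overscroll threshold are assumptions for illustration, standing in for the scrollable refresh trigger 3315.

```python
class PullToRefreshFeed:
    """Sketch of the scrollable refresh trigger: overscrolling past a
    threshold replaces the visible batch with the next one (class name
    and threshold are illustrative assumptions)."""

    TRIGGER_OFFSET = -60  # overscroll (px) needed to activate the trigger

    def __init__(self, batches):
        self.batches = list(batches)  # e.g. [["3317", "3319"], ["3327", "3330"]]
        self.index = 0

    @property
    def visible(self):
        return self.batches[self.index] if self.index < len(self.batches) else []

    def on_scroll(self, offset):
        # Activating the trigger removes the current items and adds the next set.
        if offset <= self.TRIGGER_OFFSET and self.index < len(self.batches):
            self.index += 1
        return self.visible
```

A scroll that does not reach the threshold leaves the current set displayed; a scroll past it deletes the set and presents the next one.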
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215.
- If haptic contact on, a haptic swipe on, or a tap or click on the push-to-refresh icon is observed by the touch controller 215 or keyboard during the display of an ephemeral message, then the display of the existing message is terminated and a subsequent ephemeral message(s), if any, is displayed. Alternately, in response to receiving input associated with a scroll command, and based on the scroll command, a scrollable refresh trigger 3315 is displayed; and in response to determining, based on the scroll command, that the scrollable refresh trigger 3315 has been activated, one or more or all or a particular number of ephemeral message(s) or visual media item(s) or content item(s) is/are removed and the next particular set number of ephemeral message(s) or visual media item(s) or content item(s) 3328 (e.g. 3327 and 3330) is/are added, updated or displayed.
- the haptic contact to terminate a message is any gesture applied to any location on the display 210. In another embodiment, the haptic contact is any gesture applied to the message itself. In one embodiment, the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user). In another embodiment, the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 33 (A) illustrates processing operations associated with the ephemeral message controller 277.
- a set of visual media item(s) or content item(s) or ephemeral message(s) is/are displayed 3325 (e.g. 3317 and 3319).
- Haptic contact on or tap on or click on a "Push to Refresh" icon or button or link or control is then monitored, or a user instruction to "Push to Refresh" to load the next set of visual media item(s) or content item(s) or ephemeral message(s) is/are received (3307—Yes); then the current one or more or set of visual media item(s) or content item(s) or message(s) is/are deleted (e.g.
- the message or notification can be served by server 110 via server module 178, or served from the client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices via one or more web services, application programming interface (API) or software development toolkit (SDK), or providing of authentication information, or one or more types of communication interfaces, and any
- Figure 33 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3325 available for viewing.
- a first one or more or set of message(s) e.g. 3317 and 3319 may be displayed.
- a second set of message(s) 3328 (e.g. 3327 and 3330) or subsequent message(s) in queue 3328 is/are displayed.
- Figure 34 illustrates processing operations associated with the display of ephemeral messages in which, based on expiration of a pre-set duration of a timer, the display auto-refreshes, removing the currently presented ephemeral messages or media items and loading or presenting the next available (if any) set of ephemeral messages, in accordance with an embodiment of the invention.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: present on the display a first set of ephemeral content item(s) or message(s) of the collection of ephemeral content item(s) or messages(s) 3420 (3410 - e.g. 3405 and 3407)
- the message or notification can be served by server 110 via server module 178, or served from the client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices via one or more web services, application programming interface (API) or software development toolkit (SDK), or providing of authentication
- a first transitory period of time defined by a timer 3422, wherein the first set of ephemeral content item(s) or messages(s) 3420 (3410 - e.g. 3405 and 3407) is/are deleted when the first transitory period of time expires 3430; and proceeds to present on the display a second set of ephemeral content item(s) or messages(s) of the collection of or identified or contextual ephemeral content item(s) or messages(s) 3420 (3480 - e.g.
- the ephemeral message controller deletes the second set of ephemeral content item(s) or messages(s) (3480 - e.g. 3432 and 3435) upon the expiration of the second transitory period of time 3430; and wherein the ephemeral content or message controller initiates the timer 3422 upon the display of the first set of ephemeral content item(s) or messages(s) (3410 - e.g. 3405 and 3407) and the display of the second set of ephemeral content item(s) or messages(s) (3480 - e.g. 3432 and 3435).
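The timer-driven rotation just described can be sketched as follows. A simulated clock replaces the real timer 3422 so the behaviour is deterministic; the `TimedFeed` class name and the default lifetime are assumptions for illustration.

```python
class TimedFeed:
    """Sketch of the Figure 34 auto-refresh: each batch lives for one
    transitory period (cf. timer 3422). Simulated clock; class name and
    lifetime are illustrative assumptions."""

    def __init__(self, batches, lifetime=10.0):
        self.batches = list(batches)
        self.lifetime = lifetime      # seconds per batch
        self.index = 0
        self.started_at = 0.0         # timer starts when a batch is shown

    def visible_at(self, now):
        # On expiry, delete the current batch and restart the timer
        # for the next available batch, if any.
        if now - self.started_at >= self.lifetime and self.index < len(self.batches):
            self.index += 1
            self.started_at = now
        return self.batches[self.index] if self.index < len(self.batches) else []
```

The timer restarts on each display, matching the statement that the controller initiates the timer upon the display of both the first and the second set.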
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the auto-refresh display time for the ephemeral message(s) is/are typically set by the server or set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message(s) is/are transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 34 (A) illustrates processing operations associated with the ephemeral message controller 277. Initially, one or more or set of ephemeral message(s) is/are displayed 3420 (3410 - e.g. 3405 and 3407). A timer is then started 3422. The timer may be associated with the processor 230.
- the timer is then checked 3430. If the timer has expired (3430—Yes), then the current one or more or set of message(s) is/are deleted and the next message(s), if any, is/are displayed 3420 (3480 - e.g. 3432 and 3435).
- Figure 34 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3410 available for viewing.
- a first set of message(s) 3410 may be displayed.
- a second set of message(s) 3480 is displayed.
- Figure 35 illustrates processing operations associated with the display of ephemeral messages in which, based on expiration of a pre-set duration of a timer associated with or corresponding to each presented set of ephemeral messages, the currently presented set of ephemeral messages or media items is removed and the next available (if any) set of ephemeral messages is loaded or presented, in accordance with an embodiment of the invention.
- an ephemeral message controller with instructions executed by a processor to: present on the display a first set of ephemeral content item(s) or message(s) of the collection of ephemeral content item(s) or messages(s) 3532 (3515 - e.g.
- the message or notification can be served by server 110 via server module 178, or served from the client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices via one or more web services, application programming interface (API) or software development toolkit (SDK), or providing of authentication information, or one or more types of communication interfaces, and any combination thereof) for a first transitory period of time defined by a timer 3534, wherein the first set of ephemeral content item(s) or messages(s) 3515 (e.g.
- 3509 and 3510) is/are deleted when the first transitory period of time expires 3540; receive from a touch controller a haptic contact signal 3537 indicative of a gesture applied to the display 210 during the first transitory period of time 3534; wherein the ephemeral message controller deletes the first set of ephemeral content item(s) or messages(s) 3515 (e.g. 3509 and 3510) in response to the haptic contact signal 3537 and proceeds to present on the display a second set of ephemeral content item(s) or messages(s) 3532 (3516 - e.g.
- 3513 and 3518) is/are deleted when the touch controller receives another haptic contact signal 3537 indicative of another gesture applied to the display during the second transitory period of time 3534; and wherein the ephemeral content or message controller initiates the timer 3534 upon the display of the first set of ephemeral content item(s) or messages(s) 3532 (3515 - e.g. 3509 and 3510) and the display of the second set of ephemeral content item(s) or messages(s) 3532 (3516 - e.g. 3513 and 3518).
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the display time for the one or more or set of ephemeral message is/are typically set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215. If haptic contact is observed by the touch controller 215 during the display of set of ephemeral message(s), then the display of the existing message(s) is/are terminated and a subsequent set of ephemeral message(s), if any, is displayed.
- two haptic signals may be monitored. A continuous haptic signal may be required to display a message(s), while an additional haptic signal may operate to terminate the display of the set of displayed message(s).
- the viewer might tap the screen with a finger while maintaining haptic contact with another finger. This causes the screen to display the next set of media in the collection.
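The two-signal scheme can be sketched as a small decision function; the function name and the returned labels are illustrative assumptions, not terms from the specification.

```python
def interpret_touch(hold_active, tapped):
    """Sketch of the two-signal scheme: a continuous haptic signal keeps the
    current set on screen, and a tap made while holding advances to the next
    set (function name and return labels are illustrative assumptions)."""
    if not hold_active:
        return "stop"                 # the continuous signal is required to display
    return "next" if tapped else "show"
```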
- the haptic contact to terminate a set of message(s) is any gesture applied to any location on the display 210.
- the haptic contact is any gesture applied to the message(s) itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 35 (B) illustrates processing operations associated with the ephemeral message controller 277.
- one or more or set of ephemeral message(s) is/are displayed 3532 (e.g. 3515 - 3509 and 3510).
- a timer associated with said displayed set of ephemeral message(s) is then started 3534.
- the timer may be associated with the processor 230.
- Haptic contact is then monitored 3537. If haptic contact exists (3537—Yes), then the current one or more or set of message(s) is/are deleted and the next message, if any, is displayed 3532. If haptic contact does not exist (3537—No), then the timer is checked 3540. If the timer has expired (3540—Yes), then the current one or more or set of message(s) is/are deleted and the next one or more or set of message(s), if any, is/are displayed 3532. If the timer has not expired (3540—No), then another haptic contact check is made 3537. This sequence between blocks 3537 and 3540 is repeated until haptic contact is identified or the timer expires.
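The loop between blocks 3537 and 3540 can be sketched as a deterministic simulation: each batch is dismissed by a tap or by timer expiry, whichever comes first. The function name and the encoding of haptic contact as a sorted list of tap timestamps are assumptions for illustration.

```python
def run_display_cycle(batches, taps, lifetime=10.0):
    """Sketch of the 3537/3540 loop: each batch is dismissed by haptic
    contact or by timer expiry, whichever comes first. `taps` is a sorted
    list of tap timestamps (an assumed encoding of haptic contact)."""
    clock = 0.0
    tap_iter = iter(taps)
    next_tap = next(tap_iter, None)
    shown = []
    for batch in batches:
        shown.append((batch, clock))  # the timer starts when the batch appears
        deadline = clock + lifetime
        if next_tap is not None and next_tap < deadline:
            clock = next_tap          # haptic contact ends the batch early
            next_tap = next(tap_iter, None)
        else:
            clock = deadline          # the timer expired
    return shown, clock
```

With one tap at t=3 and a 10-second lifetime, the first batch is cut short and the second runs to expiry, so the cycle ends at t=13.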
- Figure 35 (A) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3532 available for viewing.
- a first set of message(s) 3532 may be displayed.
- a second set of message(s) 3532 is displayed. Alternately, if haptic contact is received before the timer expires the second set of message(s) 3532 is displayed.
- an ephemeral message controller with instructions executed by a processor to: present on the display a first set of ephemeral content item(s) or message(s) 3552 (3535 - e.g. 3524 and 3526)
- the message or notification can be served by server 110 via server module 178, or served from the client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices via one or more web services, application programming interface (API) or software development toolkit (SDK), or providing of authentication information, or one or more types of communication interfaces, and any combination thereof) of the collection of ephemeral content item(s) or messages(s) for a first transitory period of time defined by a timer 3554, wherein the first set of ephemeral content item(s) or messages(s) 3552 (3535 - e.g.
- 3524 and 3526) is/are deleted when the first transitory period of time expires 3558; receive from a sensor controller a pre-defined user sense or sensor signal 3556 indicative of a gesture applied to the display during the first transitory period of time 3554; wherein the ephemeral message controller deletes the first set of ephemeral content item(s) or messages(s) 3552 (3535 - e.g. 3524 and 3526) in response to the pre-defined user sense or sensor signal 3556 and proceeds to present on the display a second set of ephemeral content item(s) or messages(s) 3552 (3525 - e.g.
- the ephemeral message controller deletes the second set of ephemeral content item(s) or messages(s) 3552 (3525 - e.g. 3523 and 3528) upon the expiration of the second transitory period of time 3558; wherein the second set of ephemeral content item(s) or messages(s) 3552 (3525 - e.g.
- 3523 and 3528) is/are deleted when the sensor controller receives another pre-defined user sense or sensor signal 3556 indicative of another gesture applied to the display during the second transitory period of time 3554; and wherein the ephemeral content or message controller initiates the timer 3554 upon the display of the first set of ephemeral content item(s) or messages(s) 3552 (3535 - e.g. 3524 and 3526) and the display of the second set of ephemeral content item(s) or messages(s) 3552 (3525 - e.g. 3523 and 3528).
- Figure 35 (D) illustrates processing operations associated with accelerated display of ephemeral messages in accordance with an embodiment of the invention
- Figure 35 (C) illustrates the exterior of an electronic device implementing accelerated display of ephemeral messages in accordance with the invention
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the display time for the ephemeral message is typically set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- the processor 230 is also coupled to one or more sensors including Orientation Sensor 237, Position Sensor 242, GPS Sensor 238, Audio Sensor 245, Proximity Sensor 246, Gyroscope 247, Accelerometer 248 and one or more other types of sensors 250, including a hover sensor and an eye-tracking system via optical sensors 240 or image sensors 244.
- the ephemeral message controller 277 monitors signals or senses, or one or more types of pre-defined equivalents of generated or updated sensor data, from the one or more sensors, including a voice command from the audio sensor 245, a particular type of eye movement from the eye-tracking system based on image sensors 244 or optical sensors 240, or a hover signal from, e.g., a specific type of proximity sensor 246. If one of the pre-defined types of said senses is detected or observed by the said one or more types of sensors during the display of a set of ephemeral message(s), then the display of the existing set of message(s) is/are terminated and a subsequent set of ephemeral message(s), if any, is displayed. In one embodiment, two types of signals or senses from sensors may be monitored.
- a continuous signal or sense from one or more types of sensors may be required to display a set of message(s), while an additional sensor signal or sense may operate to terminate the display of the set of message(s). For example, the viewer might instruct via a voice command, hover on the display, or perform one or more types of pre-defined eye movement. This causes the screen to display the next set of media in the collection.
- one or more types of pre-defined signals or senses are provided by the user and detected or sensed by a sensor to terminate a message while the media viewer application or interface is open or while viewing the display 210.
- the sensor signal or sense is any sense applied on the message area itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete" to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be sensed or touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 35 (D) illustrates processing operations associated with the ephemeral message controller 277.
- a set of ephemeral message(s) is/are displayed 3552 (e.g. 3535 - 3524 and 3526).
- a timer associated with displayed set of message(s) is then started 3554. The timer may be associated with the processor 230.
- One or more types of user sense is/are then monitored, tracked, detected and identified 3556. If a pre-defined user sense is identified or detected or recognized or exists (3556—Yes), then the current set of message(s) is/are deleted and the next set of message(s) 3552 (e.g. 3525 - 3523 and 3528), if any, is displayed 3552. If the user sense is not identified or detected or recognized or does not exist (3556—No), then the timer is checked 3558. If the timer has expired (3558—Yes), then the current set of message(s) (e.g. 3535 - 3524 and 3526) is/are deleted and the next set of message(s) (e.g. 3525 - 3523 and 3528), if any, is displayed 3552.
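The sensor-driven dismissal in blocks 3556/3558 can be sketched as a small transition function; the set of sense labels and the function name are assumptions for illustration.

```python
DISMISS_SENSES = {"voice_next", "hover", "eye_gesture"}  # assumed sense labels

def advance_on_sense(index, sense, timer_expired):
    """Sketch of blocks 3556/3558: a recognized pre-defined sense, or expiry
    of the per-set timer, deletes the current set and advances to the next."""
    if sense in DISMISS_SENSES or timer_expired:
        return index + 1          # current set deleted, next set displayed
    return index                  # keep showing the current set
```

An unrecognized sense leaves the current set displayed until the timer check fires, mirroring the repeated 3556/3558 sequence.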
- Figure 35 (C) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3552 (e.g. 3535 - 3524 and 3526) available for viewing.
- a first set of message(s) 3552 (e.g. 3535 - 3524 and 3526) may be displayed.
- a second set of messages 3552 is displayed.
- the second set of message(s) 3552 (e.g. 3525 - 3523 and 3528) is displayed.
- Figure 36 illustrates processing operations associated with the display of ephemeral messages in which, based on expiration of a pre-set duration of a timer for each scrolled-up ephemeral message(s) or media item(s), the expired scrolled-up ephemeral message(s) or media item(s) is/are removed from the presented feed or set of ephemeral messages and, in the event of removal of ephemeral message(s) or media item(s), the next available (if any) ephemeral messages, equivalent in number to those removed or a particular pre-set number or as available to present, are loaded or presented, in accordance with an embodiment of the invention.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: displaying a scrollable list of content items 3650 (3630 - e.g. 3620 & 3622)
- the message or notification can be served by server 110 via server module 178, or served from the client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices via one or more web services, application programming interface (API) or software development toolkit (SDK), or providing of authentication
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215. Upon receiving input associated with a scroll command, and based on the scroll command, the controller identifies the complete scroll-up of one or more digital content items, starts a timer associated with each scrolled-up visual media item or content item and, in the event of expiry of the timer associated with each scrolled-up visual media item or content item, removes the expired-timer-associated, completely scrolled-up one or more digital content items. Alternately, if a haptic swipe contact is observed by the touch controller 215 and a displayed visual media item is completely scrolled up over the pre-defined boundaries during the display of an ephemeral message, then a timer associated with each scrolled-up message or visual media item or content item is started and, in the event of expiration of said each timer, the display of the said timer-related existing message is terminated and a subsequent ephemeral message, if any, is displayed.
- the haptic swipe is any gesture applied to the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277. In an embodiment, Figure 36 (A) illustrates processing operations associated with the ephemeral message controller 277.
- an ephemeral message is displayed 3650 (e.g. 3630 - 3620 and 3622). Haptic swipe is then monitored 3618. If the haptic swipe leads to complete scrolling up of a displayed visual media item (e.g. 3605 and 3615) out of the pre-defined boundary 3616 of the display 210 or container 3630 (3655—Yes), then the timer associated with each scrolled-up message(s) or visual media item(s) or content item(s) starts 3657 (e.g. timer 3608 associated with completely scrolled-up message or visual media item 3605 starts and timer 3610 associated with completely scrolled-up message or visual media item 3615 starts) and in the event of expiry of said timer 3660 (e.g.
- Figure 36 (B) illustrates processing operations associated with the ephemeral message controller 277.
- an ephemeral message is displayed 3665 (e.g. 3630 - 3620 and 3622)
- the message or notification can be served by server 110 via server module 178, or served from the client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices via one or more web services, application programming interface (API) or software development toolkit (SDK), or providing of authentication information, or one or more types of communication interfaces, and any combination thereof).
- Haptic swipe is then monitored 3618.
- haptic swipe leads to complete scrolling up of displayed visual media item (e.g. 3605 and 3615) out of pre-defined boundary 3616 of display 210 or container 3630 (3675—Yes)
- the timer associated with each scrolled-up message(s) or visual media item(s) or content item(s) starts 3677 (e.g. timer 3608 associated with completely scrolled-up message or visual media item 3605 starts and timer 3610 associated with completely scrolled-up message or visual media item 3615 starts) and before expiry of timer 3677 (e.g. 3608 and 3610) enabling the user to scroll down said previously scrolled-up message(s) (e.g.
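The per-item scroll-out timers of Figure 36(B) can be sketched as follows. A simulated clock replaces the real timers (e.g. 3608, 3610); the `ScrollFeed` class name and lifetime are assumptions for illustration.

```python
class ScrollFeed:
    """Sketch of Figure 36(B): fully scrolling an item past the boundary
    starts its own timer; scrolling it back before expiry cancels the timer.
    Simulated clock; class name and lifetime are illustrative assumptions."""

    def __init__(self, items, lifetime=10.0):
        self.items = list(items)
        self.lifetime = lifetime
        self.timers = {}          # item -> clock time it was scrolled out

    def on_scrolled_out(self, item, now):
        self.timers.setdefault(item, now)   # per-item timer starts

    def on_scrolled_back(self, item, now):
        # Before expiry, the user may scroll the item back into view.
        if item in self.timers and now - self.timers[item] < self.lifetime:
            del self.timers[item]

    def expire(self, now):
        # Remove items whose scrolled-out timer has run out.
        expired = [i for i, t0 in self.timers.items() if now - t0 >= self.lifetime]
        for i in expired:
            self.items.remove(i)
            del self.timers[i]
        return self.items
```

An item scrolled back before its timer expires keeps its place in the feed; only items whose timers run out are removed.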
- FIG. 36 (C) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3630 available for viewing. A first one or more or set of message(s) e.g. 3620 and 3622 may be displayed.
- a second message(s) 3645 or subsequent message(s) in queue 3645 is/are displayed.
- Figure 37 illustrates processing operations associated with the display of ephemeral messages with no scrolling in which, based on expiration of a pre-set duration of a timer associated with each ephemeral message, the expired ephemeral message(s) or media item(s) is/are removed from the presented feed or set of ephemeral messages and, in the event of removal of ephemeral message(s) or media item(s), the next available (if any) ephemeral messages, equivalent in number to those removed or as available to present, are loaded or presented, in accordance with an embodiment of the invention.
- an ephemeral message controller 277 with instructions executed by a processor 230 to: present on the display 210 a one or more ephemeral content item(s) or message(s) 3725 (e.g.
- the message or notification can be served by server 110 via server module 178, or served from the client device 200 storage medium, or served from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices via one or more web services, application programming interface (API) or software development toolkit (SDK), or providing of authentication information, or one or more types of communication interfaces, and any combination thereof) of the collection of ephemeral content item(s) or messages(s) 3710 for a corresponding associated transitory period of time defined by a timer 3727 (e.g. 3702 and 3704) for each presented ephemeral content item(s) or messages(s) (e.g.
- a second set of ephemeral content item(s) or messages(s) 3725 (e.g. 3710 - 3712 and 3713) of the collection of or identified or contextual ephemeral content item(s) or messages(s) 3710 for a corresponding associated transitory period of time defined by the timer 3727 for each presented ephemeral content item(s) or messages(s) 3725, wherein the ephemeral message controller 277 deletes the presented ephemeral content item(s) or messages(s) upon the expiration of the corresponding associated transitory period of time 3734 defined by a timer for each presented ephemeral content item(s) or messages(s); wherein the second set of ephemeral content item(s) or messages(s) 3710 is deleted when the touch controller receives another haptic contact signal 3732 indicative of another gesture
- the ephemeral message controller 277 in response to deletion of ephemeral message(s) e.g. 3705 and 3707, add to display or present to display 210 another available ephemeral message(s) e.g. 3712 and 3713 on the display 210.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the display time for the one or more or set of ephemeral message is/are typically set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.
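The precedence just described (a sender-set duration wins, otherwise the recipient's setting, otherwise a default) can be sketched as follows. This is an illustrative sketch only; the function name and the 10-second default are assumptions, not part of the specification:

```python
# Hypothetical resolution of an ephemeral message's display time.
# Precedence: sender setting > recipient setting > system default.

DEFAULT_DISPLAY_SECONDS = 10  # assumed system default, not from the specification

def resolve_display_time(sender_seconds=None, recipient_seconds=None,
                         default_seconds=DEFAULT_DISPLAY_SECONDS):
    """Return the transitory display period for one ephemeral message."""
    if sender_seconds is not None:
        return sender_seconds      # sender-specified duration takes priority
    if recipient_seconds is not None:
        return recipient_seconds   # otherwise honor the recipient's setting
    return default_seconds         # regardless of technique, the message is transitory
```

Whichever branch applies, the resolved period is what the timer (e.g. 3702, 3704) counts down before deletion.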
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215. If haptic contact is observed by the touch controller 215 during the display of set of ephemeral message(s), then the display of the existing message(s) is/are terminated and a subsequent set of ephemeral message(s), if any, is displayed.
- two haptic signals may be monitored. A continuous haptic signal may be required to display a message(s), while an additional haptic signal may operate to terminate the display of the set of displayed message(s).
- the viewer might tap the screen with a finger while maintaining haptic contact with another finger. This causes the screen to display the next set of media in the collection.
- the haptic contact to terminate a set of message(s) is any gesture applied to any location on the display 210.
- the haptic contact is any gesture applied to the message(s) itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 37 (B) illustrates processing operations associated with the ephemeral message controller 277.
- one or more or set of ephemeral message(s) is/are displayed 3725 (e.g. 3703 - 3705 and 3707).
- a timer associated with each ephemeral message (e.g. timer 3702 for media item 3701 and timer 3704 for media item 3707) is/are then started 3725.
- the timer may be associated with the processor 230.
- Haptic contact is then monitored 3732. If haptic contact exists (3732—Yes), then the current one or more or set of message(s) (e.g. 3703 - 3705 and 3707) is/are deleted and the next message(s) (e.g. 3710 - 3712 and 3713), if any, is/are displayed 3725. If haptic contact does not exist (3732—No), then the timer is checked 3734. If the timer has expired (3734—Yes), then the current one or more or set of message(s) (e.g. 3703 - 3705 and 3707) is/are deleted and the next one or more or set of message(s) 3725 (e.g.
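The 3725/3732/3734 sequence can be modeled as a small loop: the displayed set advances either on haptic contact or on timer expiry. The class and method names below are illustrative assumptions, not the claimed implementation:

```python
from collections import deque

class EphemeralFeed:
    """Sketch (assumed names) of the Figure 37(B) loop: a displayed message
    set is replaced either when a haptic gesture is observed (block 3732)
    or when its display timer expires (block 3734)."""

    def __init__(self, collection, display_seconds):
        self.pending = deque(collection)   # messages not yet shown
        self.display_seconds = display_seconds
        self.current = None
        self.started_at = None

    def show_next(self, now):
        """Present the next available message, if any, and start its timer (3725)."""
        self.current = self.pending.popleft() if self.pending else None
        self.started_at = now

    def tick(self, now, haptic_contact=False):
        """One pass of the monitor loop: check haptic contact, then the timer."""
        if self.current is None:
            return
        expired = (now - self.started_at) >= self.display_seconds
        if haptic_contact or expired:      # 3732—Yes or 3734—Yes
            self.current = None            # delete the current message
            self.show_next(now)            # present the next, if any
```

Once the pending queue is exhausted, `current` simply stays `None`, mirroring the "if any" qualifier in the flow.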
- Figure 37 (A) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages (3703) available for viewing.
- a first set of message(s) 3725 (3703) may be displayed.
- a second message e.g. 3712 is displayed.
- If haptic contact 3732 is received before the timer expires 3734 (e.g. 3702, 3704), the second set of message(s) 3725 (3710) is displayed.
- Figure 37 (D) illustrates processing operations associated with accelerated display of ephemeral messages in accordance with an embodiment of the invention
- Figure 37 (C) illustrates the exterior of an electronic device implementing accelerated display of ephemeral messages in accordance with the invention
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the display time for the ephemeral message is typically set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices. The image sensors capture visual media, which is presented on display 210.
- the processor 230 is also coupled to one or more sensors including Orientation Sensor 237, Position Sensor 242, GPS Sensor 238, Audio sensor 245, Proximity Sensor 246, Gyroscope 247, Accelerometer 248 and other one or more types of sensors 250 including hover sensor, eye tracking system via optical sensors 240 or image sensors 244.
- the ephemeral message controller 277 monitors signals or senses or one or more types of pre-defined equivalents of generated or updated sensor data from the one or more sensors including voice command from audio sensor 245 or particular type of eye movement from eye tracking system based on image sensors 244 or optical sensors 240 or hover signal from e.g. specific type of proximity sensor 246.
- the display of the existing set of message(s) is/are terminated and a subsequent set of ephemeral message(s), if any, is displayed.
- two types of signals or senses from sensors may be monitored.
- a continuous signal or senses from one or more types of sensors may be required to display a set of message(s), while an additional sensor signal or sense may operate to terminate the display of the set of message(s).
- the viewer might instruct via voice command, hover on the display, or perform one or more types of pre-defined eye movements. This causes the screen to display the next set of media in the collection.
- one or more types of pre-defined signals or senses provided by the user and detected or sensed by a sensor terminate a message while the media viewer application or interface is open or while viewing the display 210.
- the sensor signal or sense is any sense applied on the message area itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be sensed or touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 37 (D) illustrates processing operations associated with the ephemeral message controller 277.
- a set of ephemeral message(s) is/are displayed 3738 (e.g. 3719 - 3715, 3717, 3721 and 3723)
- messages or notifications can be served by server 110 via server module 178, or from the client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices, via one or more web services, application programming interfaces (API), software development toolkits (SDK), provision of authentication information, or one or more types of communication interfaces, and any combination thereof).
- a timer 3740 (3716, 3718, 3720 and 3724) associated with displayed set of message(s) (e.g. 3719 - 3715, 3717, 3721 and 3723) is/are then started 3740.
- the timer may be associated with the processor 230.
- One or more types of user sense is/are then monitored, tracked, detected and identified 3743. If a pre-defined user sense is identified, detected, or recognized (3743—Yes), then the current set of message(s) (e.g. 3719 - 3715, 3717, 3721 and 3723) is/are deleted and the next set of message(s) 3738 (e.g. 3750 - 3752 and 3754), if any, is/are displayed 3738. If no user sense is identified, detected, or recognized (3743—No), then the timer is checked 3746. If the timer has expired (3746—Yes), then each expired timer associated message (e.g.
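The recognition step at block 3743 amounts to matching incoming sensor events against a set of pre-defined senses. A minimal sketch follows; the event strings and the `ADVANCE_SENSES` set are assumptions for illustration, not values from the specification:

```python
# Illustrative mapping of raw sensor events to the pre-defined "senses"
# that advance the ephemeral feed (block 3743). Event names are assumed.

ADVANCE_SENSES = {
    "voice:next",         # voice command via audio sensor 245
    "eyes:double-blink",  # eye movement via eye tracking (image/optical sensors)
    "hover:swipe",        # hover gesture via proximity sensor 246
}

def should_advance(sensor_events):
    """Return True if any monitored event matches a pre-defined sense
    that terminates the currently displayed set of messages."""
    return any(event in ADVANCE_SENSES for event in sensor_events)
```

If `should_advance` returns False, control falls through to the timer check (block 3746), exactly as in the touch-based variant.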
- Figure 37 (C) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3738 (e.g. 3719 - 3715, 3717, 3721 and 3723) available for viewing.
- a first set of message 3738 (e.g. 3719 - 3715, 3717, 3721 and 3723) may be displayed.
- a second set of message(s) 3738 (e.g. 3750 - 3752 and 3754) is/are displayed.
- Figure 38 illustrates processing operations associated with display of ephemeral messages and based on expiration of pre-set duration of timer associated with each ephemeral message, remove expired ephemeral message(s) or media item(s) from presented feed or set of ephemeral messages and in the event of removal of ephemeral message(s) or media item(s), load or present next available (if any) removed number equivalent or particular pre-set number of or available to present ephemeral messages in accordance with an embodiment of the invention.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: present on the display a first set of ephemeral content item(s) or message(s) of the collection of ephemeral content item(s) or message(s) 3860 (3820 - e.g. 3822 and 3825) for a first transitory period of time defined by a timer (e.g. 3802 and 3803) associated with each message or visual media item or content item 3820 (3822 and 3825), wherein the first one or more or set of ephemeral content item(s) or message(s) (3820 - e.g.
- 3822 and 3825 is/are deleted when the first transitory period of time associated with each message expires 3864; and proceeds to present on the display a second one or more or set of ephemeral content item(s) or messages(s) of the collection of or identified or contextual ephemeral content item(s) or messages(s) 3860 (3830 - e.g. 3831 and 3832) for a second transitory period of time defined by the timer associated with each message(3802 and 3803), wherein the ephemeral message controller 277 deletes the second set of ephemeral content item(s) or messages(s) (3830 - e.g. 3831 and 3832) upon the expiration of the second transitory period of time associated with each message 3864; and wherein the ephemeral content or message controller initiates the timer 3862 associated with each next displayed message.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: present on the display a scrollable list of ephemeral content item(s) or message(s) 3833 (e.g.
- messages or notifications can be served by server 110 via server module 178, or from the client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices, via one or more web services, application programming interfaces (API), software development toolkits (SDK), provision of authentication information, or one or more types of communication interfaces, and any combination thereof) for a corresponding associated transitory period of time defined by a timer 3834 (e.g. 3802 and 3803) for each presented ephemeral content item(s) or message(s) (e.g.
- the presented set of ephemeral content item(s) or messages(s) (3820 - e.g. 3822 and 3825) is/are deleted when the corresponding associated transitory period of time expires 3840 (e.g. 3802); receive from a touch controller a haptic contact signal indicative of a gesture applied on the particular ephemeral content item or message area 3836; wherein the ephemeral message controller 277 deletes first presented ephemeral content item(s) or messages(s) (e.g.
- ephemeral message controller 277 deletes the presented ephemeral content item(s) or messages(s) upon the expiration of the corresponding associated transitory period of time defined by a timer 3840 for each presented ephemeral content item(s) or messages(s); wherein the second ephemeral content item(s) or messages(s) (e.g.
- a non-transitory computer readable storage medium comprising instructions executed by a processor to: present on the display a scrollable list of ephemeral content item(s) or message(s) 3842 (e.g.
- messages or notifications can be served by server 110 via server module 178, or from the client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices, via one or more web services, application programming interfaces (API), software development toolkits (SDK), provision of authentication information, or one or more types of communication interfaces, and any combination thereof) for a corresponding associated transitory period of time defined by a timer 3845 (e.g. 3802 and 3803) for each presented ephemeral content item(s) or message(s) (e.g.
- the presented set of ephemeral content item(s) or message(s) (3820 - e.g. 3822 and 3825) is/are deleted when the corresponding associated transitory period of time expires 3853 (e.g. 3802); receive, from one or more types of sensors of the user device(s), a pre-defined user sense or sensor data or sensor signal indicative of a gesture applied on the particular ephemeral content item or message area 3848; wherein the ephemeral message controller 277 deletes the first presented ephemeral content item(s) or message(s) (e.g.
- a second ephemeral content item(s) or messages(s) 3842 e.g.
- 3803 is deleted when the one or more types of one or more sensors of the user device(s) receive another pre-defined user sense or sensor data or sensor signal indicative of another gesture applied on the particular ephemeral content item or message area; and wherein the ephemeral content or message controller 277 initiates the corresponding timer associated with each next displayed ephemeral content item or message.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the auto-refresh and display time for the ephemeral message(s) is/are typically set by the server or by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message(s) is/are transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 38 (B) illustrates processing operations associated with the ephemeral message controller 277.
- messages or notifications can be served by server 110 via server module 178, or from the client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks, or devices, via one or more web services, application programming interfaces (API), software development toolkits (SDK), provision of authentication information, or one or more types of communication interfaces, and any combination thereof).
- a timer associated with each presented message is then started 3862. The timer may be associated with the processor 230.
- the timer associated with each displayed message is then checked 3864. If the timer associated with one or more messages has expired (3864—Yes), then the expired timer 3864 associated one or more or set of message(s) (e.g. 3822) is/are deleted and the next message(s), if any, is/are displayed 3860 (e.g. 3831).
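The Figure 38(B) flow differs from Figure 37(B) in that each displayed message carries its own timer, and expired messages are replaced by an equal number of next available messages. A hedged sketch, with all names assumed:

```python
class AutoRefreshFeed:
    """Sketch (assumed names) of the Figure 38(B) flow: each displayed
    message has its own timer (started at block 3862); when a timer expires
    (block 3864) that message is removed from the feed and the same number
    of next available messages, if any, are loaded in its place."""

    def __init__(self, pending, display_seconds):
        self.pending = list(pending)   # messages not yet shown
        self.display_seconds = display_seconds
        self.displayed = {}            # message -> expiry deadline

    def fill(self, now, slots):
        """Load up to `slots` next available messages, starting their timers."""
        for _ in range(slots):
            if not self.pending:
                break
            msg = self.pending.pop(0)
            self.displayed[msg] = now + self.display_seconds

    def tick(self, now):
        """Check every per-message timer; replace expired messages."""
        expired = [m for m, deadline in self.displayed.items() if now >= deadline]
        for m in expired:              # 3864—Yes: delete each expired message
            del self.displayed[m]
        self.fill(now, len(expired))   # load the same number of next messages
```

Because replacement is per message rather than per set, unexpired messages remain on screen while only the expired ones are swapped out.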
- Figure 38 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3860 (e.g. 3822 and 3825) available for viewing.
- a first set of message(s) 3860 (e.g. 3822 and 3825) may be displayed.
- a second set of message(s) 3860 (e.g. 3830 - 3831 and 3832) is/are displayed.
- Figure 38 illustrates processing operations associated with display of ephemeral messages with scrolling and based on expiration of pre-set duration of timer associated with each ephemeral message, remove expired ephemeral message(s) or media item(s) from presented feed or set of ephemeral messages and in the event of removal of ephemeral message(s) or media item(s), load or present next available (if any) removed number equivalent or available to present ephemeral messages in accordance with an embodiment of the invention.
- an ephemeral message controller 277 with instructions executed by a processor 230 to: present on the display 210 a one or more ephemeral content item(s) or message(s) 3833 (e.g. 3822 and 3825) of the collection of ephemeral content item(s) or messages(s) 3820 for a corresponding associated transitory period of time defined by a timer 3834 (e.g. 3802 and 3803) for each presented ephemeral content item(s) or messages(s) (e.g.
- timer 3802 for media item 3822 and timer 3803 for media item 3825 wherein the first ephemeral content item(s) or messages(s) 3822 from the presented set of ephemeral content item(s) or messages(s) 3833 (e.g. 3822 and 3825) is/are deleted when the corresponding associated transitory period of time expires 3840 (e.g. 3802); receive from a touch controller a haptic contact signal 3836 indicative of a gesture applied on the particular message area (e.g.
- the ephemeral message controller 277 deletes first set of presented ephemeral content item(s) or messages(s) (e.g. 3820 - 3822 and 3825) in response to the haptic contact signal (3836) on the particular message area (e.g. 3822) and proceeds to present on the display 210 a second set of ephemeral content item(s) or messages(s) 3833 (e.g.
- the second set of ephemeral content item(s) or messages(s) (e.g. 3830 - 3831 and 3832) is deleted when the touch controller receives another haptic contact signal 3836 indicative of another gesture applied on the particular message area (e.g. 3825) of the display during the second transitory period of time 3840; and wherein the ephemeral content or message controller 277 initiates the timer upon the display of the first set of ephemeral content item(s) or messages(s) 3833 (e.g. 3820 - 3822 and 3825) and the display of the second set of ephemeral content item(s) or messages(s) 3833 (e.g.
- the ephemeral message controller 277, in response to deletion of ephemeral message(s) e.g. 3822 and 3825, adds or appends to the display 210, in place of the deleted message(s), other available ephemeral message(s) e.g. 3831 and 3832.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the display time for the one or more or set of ephemeral message is/are typically set by the message sender.
- the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied on the particular message area (e.g. 3822 or 3825) of the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215 on the particular message area (e.g. 3822). If haptic contact is observed by the touch controller 215 on the particular message area (e.g. 3822) during the display of set of ephemeral message(s), then the display of the existing message(s) (e.g. 3822) is/are terminated and a subsequent set of ephemeral message(s) (e.g. 3831), if any, is displayed.
- two haptic signals on the particular message area may be monitored.
- a continuous haptic signal on the particular message area may be required to display a message(s), while an additional haptic signal on the particular message area may operate to terminate the display of the one or more or set of displayed message(s).
- the viewer might tap the screen with a finger while maintaining haptic contact on the particular message area with another finger. This causes the screen to display the next set of media in the collection.
- the haptic contact to terminate a message(s) is any gesture applied to any location on the particular message area (e.g. 3822) on the display 210.
- the haptic contact is any gesture applied to the message(s) area itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 37 (C) illustrates processing operations associated with the ephemeral message controller 277.
- one or more or set of ephemeral message(s) is/are displayed 3833 (e.g. 3820 - 3822 and 3825).
- a timer associated with each ephemeral message (e.g. timer 3802 for media item 3822 and timer 3803 for media item 3825) is/are then started.
- the timer may be associated with the processor 230.
- Haptic contact on each message area is then monitored 3836. If haptic contact on a particular message area (e.g. 3822) exists (3836—Yes), then said message (e.g. 3822) is deleted and the next message (e.g. 3831), if any, is displayed 3833. If haptic contact on the particular message area (e.g. 3822) does not exist (3836—No), then the timer is checked 3840 (e.g. timer 3802 of message 3822). If the timer has expired (3840—Yes) (e.g. timer 3802 of message 3822 expired), then the message (e.g. 3822) is deleted and the next message 3833 (e.g. 3831), if any, is displayed 3833. If the timer has not expired (3840—No), then another haptic contact check is made 3836. This sequence between blocks 3836 and 3840 is repeated until haptic contact on a particular message area is identified or the timer associated with a particular message expires.
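The per-message-area flow of blocks 3836/3840 might be sketched as follows; the slot indexing and all names are assumptions for illustration:

```python
class PerMessageFeed:
    """Sketch of the per-message gesture flow (blocks 3836/3840): a gesture
    on a particular message area deletes only that message, which is then
    replaced by the next available one; timers are checked per message."""

    def __init__(self, pending, display_seconds):
        self.pending = list(pending)   # messages not yet shown
        self.display_seconds = display_seconds
        self.slots = {}                # slot index -> (message, deadline)

    def load(self, slot, now):
        """Fill one display slot with the next message, or clear it if none remain."""
        if self.pending:
            self.slots[slot] = (self.pending.pop(0), now + self.display_seconds)
        else:
            self.slots.pop(slot, None)

    def gesture_on(self, slot, now):
        """Haptic contact on one message area (3836—Yes): delete that message only."""
        if slot in self.slots:
            self.load(slot, now)

    def tick(self, now):
        """Timer check (3840): replace any message whose own timer expired."""
        for slot, (_, deadline) in list(self.slots.items()):
            if now >= deadline:        # 3840—Yes
                self.load(slot, now)
```

Unlike the whole-set variant, a gesture here affects a single slot; the other displayed messages and their timers are untouched.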
- Figure 38 (A) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages (3820) available for viewing.
- a first set of message(s) 3833 (3820) may be displayed.
- a second message e.g. 3831 is displayed.
- If haptic contact 3836 on the message area (e.g. 3822) is received before the timer (e.g. 3802) expires 3840, the second set of message(s) (3831) is displayed 3833.
- Figure 38 (D) illustrates processing operations associated with accelerated display of ephemeral messages in accordance with an embodiment of the invention
- Figure 38 (A) illustrates the exterior of an electronic device implementing accelerated display of ephemeral messages in accordance with the invention
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the display time for the ephemeral message is typically set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- the processor 230 is also coupled to one or more sensors including Orientation Sensor 237, Position Sensor 242, GPS Sensor 238, Audio sensor 245, Proximity Sensor 246, Gyroscope 247, Accelerometer 248 and other one or more types of sensors 250 including hover sensor, eye tracking system via optical sensors 240 or image sensors 244.
- the ephemeral message controller 277 monitors signals or senses or one or more types of pre-defined equivalents of generated or updated sensor data from the one or more sensors 3848 including user voice command via audio sensor 245 or particular type of user's eye movement via eye tracking system based on image sensors 244 or optical sensors 240 or hover signal from user via e.g. specific type of proximity sensor 246. If one of the pre-defined types of said senses detected or is observed 3848 on particular or selected or identified message area (e.g. 3822) by the said one or more types of sensors 3848 during the display of a set of ephemeral message(s) 3842, then the display of the particular or selected or identified message (e.g.
- 3822) is/are terminated and a subsequent ephemeral message (e.g. 3831), if any, is displayed.
- two types of signals or senses from sensors may be monitored 3848.
- a continuous signal or senses from one or more types of sensors may be required 3848 to display a one or more or set of message(s), while an additional sensor signal or sense 3848 may operate to terminate the display of the set of message(s).
- the viewer might instruct via voice command, hover on the display, or perform one or more types of pre-defined eye movements. This causes the screen to display the next set of media (e.g. 3831 and 3832) in the collection (e.g. 3830).
- one or more types of pre-defined signals or senses provided by the user and detected or sensed by a sensor 3848 terminate a message while the media viewer application or interface is open or while viewing the display 210.
- the sensor signal or sense is any sense applied on the message area itself.
- the gesture is unprompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be sensed or touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 38 (D) illustrates processing operations associated with the ephemeral message controller 277.
- a set of ephemeral message(s) is/are displayed 3842 (e.g. 3820 - 3822 and 3825).
- a timer 3845 (timer 3802 associated with message 3822 and timer 3803 associated with message 3825) corresponding to each displayed message is then started 3845.
- the timer may be associated with the processor 230.
- One or more types of user sense on the particular or selected or identified message area is then monitored, tracked, detected and identified 3848. If a pre-defined user sense is identified, detected, or recognized on the particular or selected or identified message area (e.g. 3822) (3848—Yes), then said message (e.g. 3822) is deleted and the next message (e.g. 3831), if any, is displayed 3842. If no user sense is identified, detected, or recognized on the particular or selected or identified message area (e.g. 3822) (3848—No), then the timer associated with each displayed message is checked 3853. If the timer associated with a displayed message has expired (3853—Yes), then each expired timer associated message (e.g.
- Figure 38 (A) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3842 (e.g. 3820 - 3822 and 3825) available for viewing.
- a first set of message(s) 3842 (e.g. 3820 - 3822 and 3825) may be displayed.
- a second message (e.g. 3831) is displayed 3842.
- If one or more types of pre-defined user sense or user sense data or signal via one or more types of sensors is received (3848) before the timer expires 3853, the second message (e.g. 3831) is displayed 3842.
- Figure 39 illustrates processing operations associated with accelerated display of ephemeral messages in accordance with an embodiment of the invention and illustrates interface and the exterior of an electronic device implementing accelerated display of ephemeral messages in accordance with the invention.
- an ephemeral message controller with instructions executed by a processor to: present on a display indicia of a set of ephemeral messages available for viewing ; present on the display a first ephemeral message 3971 of the set of ephemeral messages 3960; receive from a touch controller a haptic contact signal 3933 indicative of a gesture applied to the display 210; wherein the ephemeral message controller 277 deletes the first ephemeral message 3971 in response to the haptic contact signal 3933 and proceeds to present on the display a second ephemeral message 3970 of the set of ephemeral messages 3960; wherein the second ephemeral message 3970 is deleted when the touch controller receives another haptic contact signal 3933 indicative of another gesture applied to the display 210.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215. If haptic contact is observed by the touch controller 215 during the display of an ephemeral message, then the display of the existing message is terminated and a subsequent ephemeral message, if any, is displayed.
- two haptic signals may be monitored. A continuous haptic signal may be required to display a message, while an additional haptic signal may operate to terminate the display of the message.
- the viewer might tap the screen with a finger while maintaining haptic contact with another finger. This causes the screen to display the next piece of media in the set.
- the haptic contact to terminate a message is any gesture applied to any location on the display 210.
- the haptic contact is any gesture applied to the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- FIG. 2 illustrates processing operations associated with the ephemeral message controller 277.
- an ephemeral message is displayed 3931 (e.g. 3971)
- a message or notification can be served by server 110 via server module 178, or from the client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices, via one or more web services, application programming interfaces (APIs) or software development toolkits (SDKs), provision of authentication information, one or more types of communication interfaces, or any combination thereof).
- Haptic contact is then monitored 3933. If haptic contact exists (3933—Yes), then the current message (e.g. 3971) is deleted and the next message (e.g. 3970), if any, is displayed 3931. Another haptic contact check is then made 3933. If haptic contact exists (3933—Yes), then the current message (e.g. 3970) is deleted and the next message (e.g. 3969), if any, is displayed 3931. If haptic contact does not exist (3933—No), then the next message is not shown.
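The 3931/3933 loop reduces to a simple queue that advances on each haptic contact: the current message is deleted and the next, if any, is shown. A minimal Python sketch, not part of the patent; the class and method names are illustrative assumptions.

```python
class HapticAdvanceController:
    """Sketch of the 3931/3933 loop: each haptic contact deletes the
    current ephemeral message and displays the next one, if any."""

    def __init__(self, messages):
        self.queue = list(messages)   # e.g. [3971, 3970, 3969]
        # Display the first message (3931).
        self.current = self.queue.pop(0) if self.queue else None

    def on_haptic_contact(self):
        """3933 — Yes: delete the current message and show the next;
        with no contact (3933 — No) nothing changes."""
        self.current = self.queue.pop(0) if self.queue else None
        return self.current
```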
- Figure 39 (C) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3960 available for viewing.
- a first message 3971 may be displayed 3931. If haptic contact 3933 is received then the second message 3970 is displayed 3931.
- an ephemeral message controller 277 with instructions executed by a processor to: present on a display indicia of a set of ephemeral messages available for viewing; present on the display 210 a first ephemeral message 3920 (e.g.
- a message or notification can be served by server 110 via server module 178, or from the client device 200 storage medium, or from one or more sources including one or more contacts, connections, users, domains, servers, web sites, applications, user accounts, storage mediums, databases, networks or devices, via one or more web services, application programming interfaces (APIs) or software development toolkits (SDKs), provision of authentication information, one or more types of communication interfaces, or any combination thereof) for a first pre-set number of times of views 3922 defined by a sender or server or receiver or as per default settings, wherein the first ephemeral message (e.g.
- ephemeral message controller 277 hides the first ephemeral message (e.g. 3971) in response to the haptic contact signal 3927 and proceeds to present on the display 210 a second ephemeral message (e.g.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores an ephemeral message controller 277 to implement operations of the invention.
- the ephemeral message controller 277 includes executable instructions to accelerate display of ephemeral messages.
- An ephemeral message may be a text, an image, a video and the like.
- the pre-set number of times of views or displays for the ephemeral message is typically set by the message sender. However, the display time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory, i.e. after the pre-set number of views it will be removed by the system.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the ephemeral message controller 277 monitors signals from the touch controller 215. If haptic contact is observed by the touch controller 215 during the display of an ephemeral message, then the subsequent ephemeral message, if any, is displayed. In one embodiment, two haptic signals may be monitored. A continuous haptic signal may be required to display a message, while an additional haptic signal may operate to trigger the display of the next message. For example, the viewer might tap the screen with a finger while maintaining haptic contact with another finger. This causes the screen to display the next piece of media in the set.
- the haptic contact to display a next message is any gesture applied to any location on the display 210.
- the haptic contact is any gesture applied to the message itself.
- the gesture is un-prompted (i.e., there is no prompt such as "Delete” to solicit an action from the user).
- the gesture may be prompted (e.g., by supplying a "Delete” option on the display, which must be touched to effectuate deletion).
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the ephemeral message controller 277.
- Figure 39 (B) illustrates processing operations associated with the ephemeral message controller 277. Initially, an ephemeral message is displayed 3971. A starting of counting of number of times of views or displays associated with each ephemeral message is then started 3922.
- Haptic contact is then monitored 3927. If haptic contact exists (3927—Yes), then the current message is hidden and the next message, if any, is displayed 3920. If haptic contact does not exist (3927—No), then the counter is checked 3925. If the counter threshold is exceeded (3925—Yes), then the current message is deleted and the next message, if any, is displayed 3920. If the counter threshold is not exceeded (3925—No), then another haptic contact check is made 3927. This sequence between blocks 3925 and 3927 is repeated until haptic contact 3927 is identified or the pre-set number of times of views or displays counter is exceeded (3925—Yes).
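The Figure 39 (B) logic — a per-message view counter (3922), a threshold check (3925) that deletes, and a haptic contact (3927) that merely hides — can be sketched as follows. This is an illustrative Python model, not part of the patent; names and the threshold semantics ("exceeded" meaning strictly greater than the pre-set count) are assumptions.

```python
class ViewCountController:
    """Sketch of Figure 39 (B): each message carries a view counter (3922).
    Haptic contact (3927) hides the current message and shows the next;
    exceeding the pre-set number of views (3925) deletes it instead."""

    def __init__(self, messages, max_views):
        self.max_views = max_views
        self.queue = list(messages)
        self.views = {m: 0 for m in messages}
        self.hidden = []      # hidden messages are only hidden, not deleted
        self.deleted = []
        self.current = None
        self._show_next()

    def _show_next(self):
        self.current = self.queue.pop(0) if self.queue else None
        if self.current is not None:
            self.views[self.current] += 1   # start/advance count 3922
        return self.current

    def on_haptic_contact(self):
        """3927 — Yes: hide the current message, display the next."""
        if self.current is not None:
            self.hidden.append(self.current)
        return self._show_next()

    def on_view(self):
        """Re-display the current message; delete it once the counter
        threshold is exceeded (3925 — Yes)."""
        if self.current is None:
            return None
        self.views[self.current] += 1
        if self.views[self.current] > self.max_views:
            self.deleted.append(self.current)
            return self._show_next()
        return self.current
```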
- Figure 39 (C) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a set of ephemeral messages 3960 available for viewing.
- a first message 3971 may be displayed.
- a second message 3970 is displayed.
- the second message 3970 is displayed 3920.
- an ephemeral message controller with instructions executed by a processor to: present on a display indicia of a set of ephemeral messages available for viewing; present on the display a first ephemeral message of the set of ephemeral messages for a pre-set interval period of time defined by a timer, wherein the first ephemeral message is deleted when the pre-set interval period of time expires, and proceed to present on the display a second ephemeral message of the set of ephemeral messages for a pre-set interval period of time defined by the timer, wherein the ephemeral message controller deletes the second ephemeral message upon the expiration of the pre-set interval period of time; and wherein the viewer is enabled to change the pre-set interval period of time (e.g. via slider 3915), and based on the change the ephemeral message controller initiates the interval timer for display.
- Figure 39 (E) illustrates the user interface of electronic device 200.
- the Figure also illustrates the display 210.
- the display 210 presents a first ephemeral message 3910 available for viewing.
- a first message 3910 may be displayed.
- a second message is displayed.
- The user is enabled to change the pre-set interval period of time (e.g. via slider 3915); based on the change, the ephemeral message controller initiates the interval timer for display of the next ephemeral message, so the user can slow down or speed up the automatic presenting and removing of message(s) as per their dynamic need.
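The slider-driven auto-advance can be modelled as an interval timer whose period is changed on the fly. A minimal Python sketch under stated assumptions (class and method names are illustrative, and `tick` stands in for a periodic UI callback):

```python
class IntervalSlideshow:
    """Sketch of the slider 3915 behaviour: messages auto-advance on a
    pre-set interval, and the viewer may change that interval at any
    time to speed up or slow down the presentation."""

    def __init__(self, messages, interval_seconds):
        self.queue = list(messages)
        self.interval = interval_seconds
        self.current = self.queue.pop(0) if self.queue else None
        self.shown_at = 0.0

    def set_interval(self, seconds):
        """Slider 3915: apply the new interval to subsequent advances."""
        self.interval = seconds

    def tick(self, now):
        """Delete the current message and show the next once the
        interval has elapsed."""
        if self.current is not None and now - self.shown_at >= self.interval:
            self.current = self.queue.pop(0) if self.queue else None
            self.shown_at = now
        return self.current
```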
- the user is enabled to pause, play, re-start or stop 3955 the presenting of visual media items or content items or a particular story or set of visual media items or content items.
- the user can view the previous visual media item or content item via swipe right, or the next via swipe left, for a pre-set number of times; in the event said pre-set number of views is exceeded, the visual media items or content items are removed.
- Figure 40 illustrates logic flow for the visual media capture system. Techniques to selectively capture front camera or back camera photo using a single user interface element are described.
- an apparatus may comprise a touch controller 215, a visual media capture controller 278, and a storage 236.
- the touch controller 215 may be operative to receive a haptic engagement signal.
- the visual media capture controller 278 may be operative to be configured in a capture mode based on whether a haptic disengagement signal is received by the touch controller 215 before expiration of a pre-set threshold of a first timer, the capture mode one of a front camera photo capture mode or a back camera photo capture mode, the first timer 4020 started in response to receiving the haptic engagement signal 4015, the first timer 4020 maximum threshold configured to expire after a first preset duration.
- the storage component may be operative to store visual media captured by the visual media capture controller in the configured capture mode. Other embodiments are described and claimed.
- an electronic device 200 comprising: digital image sensors 244 to capture visual media; a display 210 to present the visual media from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release or haptic contact disengagement on the display 210; and a visual media capture controller 278 to alternately record the visual media as a back camera photo or a front camera photo based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- the visual media capture controller 278 presents a single mode input icon on the display to receive the haptic contact engagement, haptic contact persistence and haptic content release.
- the visual media capture controller 278 selectively stores the photo in a photo library. After capturing back camera photo or front camera photo, the visual media capture controller invokes a photo preview mode.
- the visual media capture controller selects a frame of the video to form the photo.
- the visual media capture controller stores the photo upon haptic contact engagement.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a visual media capture controller 278 to implement operations of the invention.
- the visual media capture controller 278 includes executable instructions to alternately record a front camera photo or a back camera photo based upon the processing of haptic signals, as discussed below.
- the visual media controller 278 interacts with a photo library controller 294, which includes executable instructions to store, organize and present photos 291.
- the photo library controller may be a standard photo library controller known in the art.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210. That is, in a visual media mode controlled by the visual media capture controller 278, the image sensors 244 capture visual media and present the visual media on the display 210 so that a user can observe the captured visual media.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the visual media capture controller 278 presents a single mode input icon on the display 210. That is, the visual media capture controller 278 includes executable instructions executed by the processor to present a single mode input icon on the display 210.
- the visual media capture controller 278 communicates with the processor 230 regarding haptic signals applied to the display 210, which are recorded by the touch controller 215. In one configuration, the visual media capture controller 278 processes haptic signals applied to the single mode input icon, as detailed in connection with the discussion of figure 40 (B), and determines whether to record a front camera photo or a back camera photo, as discussed below.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the visual media capture controller 278.
- Figure 40 (A) illustrates processing operations associated with the visual media capture controller 278.
- a visual media capture mode is invoked 4005.
- a user may access an application presented on display 210 to invoke a visual media capture mode.
- Figure 40 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the electronic device 200 is in a visual media capture mode and presents visual media 4007.
- the display 210 also includes a single mode input icon 4008.
- the amount of time that a user presses the single mode input icon 4008 determines whether a capture photo will be a front camera photo or a back camera photo. For example, if a user initially intends to take a back camera photo, then the icon 4008 is engaged with a haptic signal.
- If the user decides that the visual media should instead be a front camera photo, the user continues to engage the icon 4008. If the engagement persists for a specified period of time (e.g., 3 seconds), then the output of the visual media capture is determined to be a front camera photo.
- the back or front camera mode may be indicated on the display 210 with an icon 4010.
- haptic contact engagement is identified 4015.
- the haptic contact engagement may be at icon 4008 on display 210.
- the touch controller 215 generates haptic contact engagement signals for processing by the visual media capture controller 278 in conjunction with the processor 230.
- the haptic contact may be at any location on the display 210.
- Video is recorded and a timer is started 4020 in response to haptic contact engagement 4015.
- the video is recorded by the processor 230 operating in conjunction with the memory 236.
- a still frame is taken from the video feed and is stored as a photo in response to haptic contact engagement.
- the timer is executed by the processor 230 under the control of the visual media capture controller 278.
- Video continues to record and the timer continues to run in response to persistent haptic contact on the display.
- the elapsed time recorded by the timer is then evaluated by the visual media capture controller 278 against a specified threshold (e.g., 3 seconds). If the threshold is exceeded (4035—Yes), then the back camera mode changes to front camera mode, or the front camera mode changes to back camera mode (e.g. front camera mode) 4036. The time of loading, showing or switching of the front camera or back camera (e.g. front camera) is then saved or identified 4038.
- Haptic contact release is identified 4040.
- the timer is then stopped, the video is stored 4042, and a frame of the video is selected after the loading time of the front camera 4047 and is stored as a photo 4055.
- an alternate approach is to capture a still frame from the camera video feed as a photo upon haptic engagement. Such a photo is then passed to the photo library controller 294 for storage 291.
- the visual media capture controller 278 may then invoke a photo preview mode 4057 to allow a user to easily view the new photo.
- If the threshold is not exceeded (4035—No), haptic contact release is identified 4025.
- the timer is then stopped, the video is stored 4030, and a frame of the video is selected 4047 and is stored as a photo 4058.
- an alternate approach is to capture a still frame from the camera video feed as a photo upon haptic engagement. Such a photo is then passed to the photo library controller 294 for storage.
- the visual media capture controller 278 may then invoke a photo preview mode 4057 to allow a user to easily view the new photo.
- the foregoing embodiment relies upon evaluating haptic contact engagement, haptic contact persistence and haptic contact release. Based upon the elapsed time, either a front camera photo or a back camera photo is preserved. Thus, a single recording mode allows one to seamlessly transition between front camera photo and back camera photo capturing or recording.
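The press-duration decision of Figure 40 reduces to comparing the engagement-to-release time against the threshold. A one-function Python sketch, illustrative only; the 3-second default is taken from the example in the text, and the function name is an assumption:

```python
def select_capture_mode(press_duration, threshold=3.0):
    """Sketch of the Figure 40 decision: a short press of the single mode
    input icon keeps the back camera photo mode, while holding past the
    threshold (e.g. 3 seconds) switches to the front camera photo mode."""
    if press_duration >= threshold:
        return "front_camera_photo"   # 4035 — Yes: mode switched (4036)
    return "back_camera_photo"        # 4035 — No
```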
- the visual media capture mode is responsive to a first haptic contact signal (e.g., one tap) to record a front camera photo and a second haptic contact signal (e.g., two taps) to record a back camera photo.
- the visual media capture mode is responsive to a first haptic contact signal (e.g., one tap) to record a back camera photo and a second haptic contact signal (e.g., two taps) to record a front camera photo.
- the visual media capture controller 278 may be configured to interpret two taps within a specified period as an invocation of the front or back camera photo capture mode. This allows a user to smoothly transition from intent to take a front camera picture to a desire to take a back camera picture or allows a user to smoothly transition from intent to take a back camera picture to a desire to take a front camera picture.
- Figure 41 illustrates, in an embodiment, the visual media capture controller 278, which enables single mode visual media capture that alternately produces photos and pre-set duration videos, and, in the event of a further haptic contact engagement, enables the user to cancel said pre-set video duration limitation and record video until a further haptic contact engagement manually stops the video.
- Figure 41 explains a computer-implemented method, comprising: receiving a haptic engagement signal; in the event the pre-set maximum duration of the timer expires (e.g. at the pre-set maximum of 10 seconds of video), stopping the timer and stopping the video; in the event the pre-set maximum duration of the timer has not expired or the pre-set maximum duration of video has not yet been recorded (4125—No) (e.g. less than the pre-set maximum of 10 seconds of video has been recorded) and a haptic engagement signal 4135 is received, stopping the timer (to enable the user to take more than the pre-set duration of video, i.e. more than the pre-set 10 seconds) 4138; and, in the event of identifying haptic contact engagement & release 4140, stopping the video and storing the video 4142.
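The Figure 41 method can be sketched as a small recorder state machine: a short press yields a photo, a long press records video that auto-stops at the pre-set maximum unless a second engagement cancels the limit. This Python model is illustrative only; the class name, the 3-second photo threshold and the 10-second maximum are assumptions drawn from the examples in the text.

```python
class SingleModeRecorder:
    """Sketch of the Figure 41 flow: a short press stores a photo (4121),
    a long press records video that auto-stops at a pre-set maximum
    (4125) unless a further haptic engagement cancels the limit (4138),
    after which the video runs until the user stops it manually (4142)."""

    def __init__(self, photo_threshold=3.0, max_video=10.0):
        self.photo_threshold = photo_threshold
        self.max_video = max_video
        self.limit_cancelled = False
        self.recording = False
        self.result = None

    def on_engage(self, now):
        if self.recording:
            # Second engagement while recording: cancel the limit (4138).
            self.limit_cancelled = True
        else:
            self.start = now
            self.recording = True

    def on_release(self, now):
        elapsed = now - self.start
        if elapsed < self.photo_threshold:
            self.result = "photo"             # frame stored as photo (4121)
        else:
            self.result = ("video", elapsed)  # stop and store video (4142)
        self.recording = False
        return self.result

    def tick(self, now):
        """Auto-stop at the pre-set maximum unless cancelled (4125)."""
        if (self.recording and not self.limit_cancelled
                and now - self.start >= self.max_video):
            self.result = ("video", self.max_video)
            self.recording = False
        return self.result
```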
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a visual media capture controller 278 to implement operations of the invention.
- the visual media capture controller 278 includes executable instructions to alternately record a photo, or a pre-set duration recording of video, or cancel said pre-set video duration and record a video whose length is based on the user's need, based upon the processing of haptic signals, as discussed below.
- the visual media controller 278 interacts with a photo library controller 294, which includes executable instructions to store, organize and present photos 291.
- the photo library controller may be a standard photo library controller known in the art.
- the visual media controller 278 also interacts with a video library controller 296, which includes executable instructions to store, organize and present videos 292.
- the video library controller may also be a standard video library controller known in the art.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210. That is, in a visual media mode controlled by the visual media capture controller 278, the image sensors 244 capture visual media and present the visual media on the display 210 so that a user can observe the captured visual media.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the visual media capture controller 278 presents a single mode input icon on the display 210. That is, the visual media capture controller 278 includes executable instructions executed by the processor to present a single mode input icon on the display 210.
- the visual media capture controller 278 communicates with the processor 230 regarding haptic signals applied to the display 210, which are recorded by the touch controller 215.
- the visual media capture controller 278 processes haptic signals applied to the single mode input icon, as detailed in connection with the discussion of Figure 41 (A), and determines whether to record a photo, to auto-stop and save a pre-set duration of video, or, based on haptic contact engagement & release, to record a video of a length or duration as per the user's need, as discussed below.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the visual media capture controller 278.
- Figure 41 (A) illustrates processing operations associated with the visual media capture controller 278. Initially, a visual media capture mode is invoked 4105. For example, a user may access an application presented on display 210 to invoke a visual media capture mode.
- Figure 41 (B) illustrates the exterior of electronic device 200. The Figure also illustrates the display 210. The electronic device 200 is in a visual media capture mode and presents visual media 4170.
- the display 210 also includes a single mode input icon 4180.
- the amount of time that a user presses the single mode input icon 4180 determines whether a photo will be recorded or a pre-set duration of video; a further haptic contact engagement & release enables the user to cancel the auto-stopping of the video after the pre-set duration and continue recording as per need, until stopped by the user via a further haptic contact engagement & release. For example, if a user initially intends to take a photo, then the icon 4180 is engaged with a haptic signal. If the user decides that the visual media should instead be a pre-set duration video (which, on expiry of the pre-set duration timer, auto-stops and auto-saves), the user continues to engage the icon 4180 to start recording of the video.
- If the engagement persists for a specified period of time (e.g., 3 seconds), the output of the visual media capture is determined to be video and recording of the video starts.
- the video mode may be indicated on the display 210 with an icon.
- haptic contact engagement is identified 4107.
- the haptic contact engagement may be at icon 4180 on display 210.
- the touch controller 215 generates haptic contact engagement signals for processing by the visual media capture controller 278 in conjunction with the processor 230.
- the haptic contact may be at any location on the display 210.
- Video is recorded and a timer is started 4109 in response to haptic contact engagement 4107.
- the video is recorded by the processor 230 operating in conjunction with the memory 236. Alternately, a still frame is taken from the video feed 4117 and is stored as a photo 4121 in response to haptic contact engagement and then video is recorded.
- the timer is executed by the processor 230 under the control of the visual media capture controller 278. Video continues to record until the pre-set duration of the timer expires 4125. Haptic contact release is subsequently identified 4111. The elapsed time recorded by the timer is then evaluated by the visual media capture controller 278 against a specified threshold (e.g., 3 seconds).
- a video is sent to the video library controller 296 for handling.
- the visual media capture controller 278 includes executable instructions to prompt the video library controller to enter a video preview mode 4130 or 4144. Consequently, a user can conveniently review a recently recorded video.
- a frame of video is selected 4117 and is stored as a photo 4121.
- an alternate approach is to capture a still frame from the camera video feed as a photo upon haptic engagement. Such a photo is then passed to the photo library controller 294 for storage 291.
- the visual media capture controller 278 may then invoke a photo preview mode 4123 to allow a user to easily view the new photo.
- the user is informed about the remaining time of the pre-set duration of video via a text status, icon or visual presentation, e.g. 4175.
- Figure 42 illustrates logic flow for the visual media capture system. Techniques to selectively capture front camera or back camera video using a single user interface element are described.
- an apparatus may comprise a touch controller 215, a visual media capture controller 278, and a storage 236.
- the touch controller 215 may be operative to receive a haptic engagement signal.
- the visual media capture controller 278 may be operative to be configured in a capture mode based on whether a haptic disengagement signal is received by the touch controller 215 before expiration of a pre-set threshold of a first timer, the capture mode one of a front camera video record mode or a back camera video record mode, the first timer 4220 started in response to receiving the haptic engagement signal 4215, the first timer 4220 maximum threshold configured to expire after a first preset duration.
- the storage component may be operative to store visual media captured by the visual media capture controller in the configured capture mode. Other embodiments are described and claimed.
- an electronic device 200 comprising: digital image sensors 244 to capture visual media; a display 210 to present the visual media from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release or haptic contact disengagement on the display 210; and a visual media capture controller 278 to alternately record the visual media as a back camera video or a front camera video based upon an evaluation of the time period between the haptic contact engagement and the haptic contact release.
- the visual media capture controller 278 presents a single mode input icon on the display to receive the haptic contact engagement, haptic contact persistence and haptic content release.
- the visual media capture controller 278 selectively stores the video in a video library. After capturing back camera video or front camera video, the visual media capture controller invokes a video preview mode. The visual media capture controller stores the video upon haptic contact engagement.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a visual media capture controller 278 to implement operations of the invention.
- the visual media capture controller 278 includes executable instructions to alternately record a front camera video or a back camera video based upon the processing of haptic signals, as discussed below.
- the visual media controller 278 also interacts with a video library controller 296, which includes executable instructions to store, organize and present videos 292.
- the video library controller may also be a standard video library controller known in the art.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210. That is, in a visual media mode controlled by the visual media capture controller 278, the image sensors 244 capture visual media and present the visual media on the display 210 so that a user can observe the captured visual media.
- a touch controller 215 is connected to the display 210 and the processor 230. The touch controller 215 is responsive to haptic signals applied to the display 210.
- the visual media capture controller 278 presents a single mode input icon on the display 210. That is, the visual media capture controller 278 includes executable instructions executed by the processor to present a single mode input icon on the display 210.
- the visual media capture controller 278 communicates with the processor 230 regarding haptic signals applied to the display 210, which are recorded by the touch controller 215.
- the visual media capture controller 278 processes haptic signals applied to the single mode input icon, as detailed in connection with the discussion of figure 42 (B), and determines whether to record a front camera video or a back camera video, as discussed below.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the visual media capture controller 278.
- Figure 42 (A) illustrates processing operations associated with the visual media capture controller 278.
- a visual media capture mode is invoked 4205.
- a user may access an application presented on display 210 to invoke a visual media capture mode.
- Figure 42 (B) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the electronic device 200 is in a visual media capture mode and presents visual media 4207.
- the display 210 also includes a single mode input icon 4208.
- the amount of time that a user presses the single mode input icon 4208 determines whether a captured video will be a front camera video or a back camera video. For example, if a user initially intends to take a back camera video, then the icon 4208 is engaged with a haptic signal.
- haptic contact engagement is identified 4215.
- the haptic contact engagement may be at icon 4208 on display 210.
- the touch controller 215 generates haptic contact engagement signals for processing by the visual media capture controller 278 in conjunction with the processor 230. Alternately, the haptic contact may be at any location on the display 210.
- Video is recorded and a timer is started 4220 in response to haptic contact engagement 4215.
- the video is recorded by the processor 230 operating in conjunction with the memory 236.
- the timer is executed by the processor 230 under the control of the visual media capture controller 278.
- Video continues to record and the timer continues to run in response to persistent haptic contact on the display.
- the elapsed time recorded by the timer is then evaluated by the visual media capture controller 278 against a specified threshold (e.g., 3 seconds). Haptic contact release is identified 4222 and the timer is stopped 4224. If the threshold is exceeded (4235—Yes), then the back camera mode changes to the front camera mode, or the front camera mode changes to the back camera mode (e.g. the front camera mode) 4235. The time of loading, showing or switching of the front or back camera (e.g. the front camera) is then saved or identified 4238. In an embodiment, a further haptic contact engagement and release is identified 4276.
- the timer is then stopped, the video is stopped and the video is stored 4242; in another embodiment, the video is stopped automatically after expiry of a pre-set duration and then stored. The video is then trimmed before the identified time of loading or showing of the front camera 4245 and is stored as a trimmed video 4255.
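The trim step (4245) can be sketched as follows. This is a minimal illustration only, assuming the recorded video is available as a list of (timestamp, frame) pairs and that the camera-switch time saved at step 4238 is known; all names are hypothetical.

```python
def trim_before(frames, switch_time):
    """Drop frames captured before the newly switched camera was shown.

    `frames` is a list of (timestamp, frame) pairs for the recorded video;
    `switch_time` is the time saved at step 4238 when the new camera
    finished loading. Frames at or after the switch are kept.
    """
    return [(t, f) for (t, f) in frames if t >= switch_time]
```

A call such as `trim_before(frames, 1.0)` would keep only the portion of the video recorded with the camera the user switched to.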
- the visual media capture controller 278 may then invoke a video preview mode 4257 to allow a user to easily view the new video.
- the timer is then stopped then video is stopped 4230, and video is stored 4258.
- the visual media capture controller 278 may then invoke a video preview mode 4257 to allow a user to easily view the new video.
- the foregoing embodiment relies upon evaluating haptic contact engagement, haptic contact persistence and haptic contact release. Based upon the elapsed time, either a front camera video or a back camera video is preserved. Thus, a single recording mode allows one to seamlessly transition between front camera and back camera video recording.
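The timer-based decision this embodiment describes can be sketched as a small helper. This is a hedged sketch, not the claimed implementation; the 3-second threshold is taken from the example above, and the mode names are illustrative.

```python
THRESHOLD_S = 3.0  # example threshold from the text; an assumption here

def resolve_camera_mode(initial_mode, hold_seconds, threshold=THRESHOLD_S):
    """Return which camera's video is preserved (steps 4235/4238).

    Holding the single mode input icon past the threshold toggles the
    camera (back -> front or front -> back); releasing earlier keeps
    the camera that was active at engagement.
    """
    if hold_seconds > threshold:
        return "front" if initial_mode == "back" else "back"
    return initial_mode
```

For example, engaging in back camera mode and holding for 4 seconds would preserve a front camera video, while a 1.5-second press would preserve the back camera video.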
- the visual media capture mode is responsive to a first haptic contact signal (e.g., one tap) to record a front camera video and a second haptic contact signal (e.g., two taps) to record a back camera video.
- the visual media capture mode is responsive to a first haptic contact signal (e.g., one tap) to record a back camera video and a second haptic contact signal (e.g., two taps) to record a front camera video.
- the visual media capture controller 278 may be configured to interpret two taps within a specified period as an invocation of the front or back camera video capture mode. This allows a user to smoothly transition from intent to take a front camera video to a desire to take a back camera video or allows a user to smoothly transition from intent to take a back camera video to a desire to take a front camera video.
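The tap-count variant above can be sketched as a classifier over engagement timestamps. A minimal sketch under assumed names: the double-tap window of 0.3 seconds is an assumption (the text only says "within a specified period"), and the tap-to-mode mapping shown is one of the two mappings the text describes.

```python
def classify_taps(tap_times, double_tap_window=0.3):
    """Classify a haptic signal as a single or double tap.

    `tap_times` are engagement timestamps in seconds; two taps whose
    spacing falls inside `double_tap_window` count as a double tap.
    """
    if len(tap_times) >= 2 and (tap_times[1] - tap_times[0]) <= double_tap_window:
        return "double"
    return "single"

# One mapping described in the text: one tap -> front camera video,
# two taps -> back camera video (the opposite mapping is also described).
TAP_TO_MODE = {"single": "front camera video", "double": "back camera video"}
```

This lets the user smoothly move from the intent to take a front camera video to a back camera video (or vice versa) without a separate mode switch.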
- Figures 43-47 illustrate various embodiments of an intelligent multi-tasking visual media capture controller 278.
- Figure 44 illustrates processing operations associated with an embodiment of the invention.
- Figure 43 (A) illustrates the exterior of an electronic device implementing multi-tasking single mode visual media capture.
- Figure 44, together with Figure 43 (A), illustrates an electronic device 200 comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4320, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release on the display 210; and a user interface or tray 4330 presented on the display 210 to a user of a client device 200, the user interface 4330 comprising visual media capture controller controls, labels, images or icons 4406 (e.g. 4321-4328) that include a plurality of contact icons or labels, e.g.
- each contact icon or label representing one or more contacts of the user; the device receives a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4330, via a multi-tasking, pre-configured, auto-generated or auto-presented visual media capture controller control, label, image or icon 4406 (e.g. 4321-4328); in response to receiving the single user interaction, or to identification of a haptic swipe or a particular type of swipe on a particular visual media capture controller control (e.g. 4322), including e.g. a swipe from right (4329) to left (4331) for changing from the front camera to the back camera (4409—Yes), the device shows the back camera 4428, or e.g.
- a swipe from left (4331) to right (4329) for changing from the back camera to the front camera (4407—Yes), the device shows the front camera 4424; OR, in response to not changing from the front camera to the back camera or vice versa, i.e. keeping the current mode as is (4407—No or 4409—No), the device receives haptic contact engagement 4431 on the left side 4331 or the right side 4329 of the particular visual media capture controller control, e.g. 4322, from the set of presented visual media capture controller controls, icons or labels 4330 (4321-4328); after changing mode via a swipe from left to right or from right to left, and while maintaining persistent haptic contact on the left-side (e.g. 4331) or right-side (e.g. 4329) area of the visual media capture controller control (e.g.
- 4322), video is recorded and a timer is started. If the threshold is not exceeded (4444—No), then one or more frames or images are selected or extracted 4455 from the recorded video or series of images 4440 and stored as a photo 4460; optionally, a photo preview mode 4468 is invoked for a pre-set duration of time (optionally enabling the user to review the photo, remove the photo, edit or augment the photo, and change the destination(s) for sending) and, in the event of expiry of said pre-set preview timer, the photo preview interface is hidden or closed, the contact represented by the selected contact icon (e.g. 4322) or associated with the selected visual media capture controller control or label (e.g. 4322) is identified, and the captured photo is sent to the identified contact 4470, or in another embodiment to a user-selected contact; if the threshold is exceeded (e.g.
- in another embodiment, the user is enabled to execute, provide or instruct a cancel command from a particular visual media capture controller control, to cancel the capturing or recording of a front or back camera photo or video, via one or more types of haptic contact including a swipe up, or via one or more types of predefined user senses detected via one or more types of one or more sensors of the user device(s), including a voice command or a particular type of eye movement.
- Figure 2 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a visual media capture controller 278 to implement operations of the invention.
- the visual media capture controller 278 includes executable instructions to alternately record a front camera or back camera photo or video, or to conduct one or more pre-configured tasks, activities, processing or execution of functions, including cancelling the capturing of a photo or the recording of a video, viewing received contents from the contact(s), group(s) or source(s) associated with a visual media capture controller label, broadcasting live video streaming and the like, based upon the processing of haptic signals, as discussed below.
- the visual media controller 278 interacts with a photo library controller 294, which includes executable instructions to store, organize and present photos 291.
- the photo library controller may be a standard photo library controller known in the art.
- the visual media controller 278 also interacts with a video library controller 296, which includes executable instructions to store, organize and present videos 292.
- the video library controller may also be a standard video library controller known in the art.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210. That is, in a visual media mode controlled by the visual media capture controller 278, the image sensors 244 capture visual media and present the visual media on the display 210 so that a user can observe the captured visual media.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the visual media capture controller 278 presents a single mode input icon e.g. 4322 on the display 210. That is, the visual media capture controller 278 includes executable instructions executed by the processor to present a single mode input icon on the display 210.
- the visual media capture controller 278 communicates with the processor 230 regarding haptic signals applied to the display 210, which are recorded by the touch controller 215.
- the visual media capture controller 278 processes haptic signals applied to the single mode input icon or multi-tasking visual media capture controller label and/or icon or control e.g.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the visual media capture controller 278.
- Figure 44 illustrates processing operations associated with the visual media capture controller 278.
- a visual media capture mode is invoked 4405.
- a user may access an application presented on display 210 to invoke a visual media capture mode.
- Figure 43 (A) illustrates the exterior of electronic device 200.
- the Figure also illustrates the display 210.
- the electronic device 200 is in a visual media capture mode and presents visual media 4320.
- the display 210 also includes one or more visual media capture controller controls or single mode input icons 4330 (e.g. 4321-4328).
- the single mode input icon e.g. 4322 determines whether a front or back camera photo will be recorded or a front camera or back camera video.
- the amount of time that a user presses the single mode input icon, e.g. 4322, determines whether a photo or a video will be recorded. For example, if a user initially intends to take a photo, then the icon (4330 - front camera photo or 4329 - back camera photo) 4322 is engaged with a haptic signal. If the user decides that the visual media should instead be a video, the user continues to engage the icon (4330 - front camera video or 4329 - back camera video) 4322. If the engagement persists for a specified period of time (e.g., 3 seconds), then the output of the visual media capture is determined to be video.
- the video mode may be indicated on the display 210 (at prominent place-not shown in Figure) with an icon or label or animated or visual presentation.
- a single gesture allows the user to seamlessly transition from a front camera to back camera mode or from a back camera to a front camera mode and from a photo mode to a video mode and therefore control the media output during the recording process. This is accomplished without entering one mode or another prior to the capture sequence.
- a haptic contact swipe, including a swipe or slide from left to right or from right to left, is identified 4407 or 4409.
- the haptic contact engagement may be at icon or pre-defined area 4331 or icon or pre-defined area 4329 on the visual media capture controller control or label e.g., 4322 on display 210.
- the touch controller 215 generates haptic contact swipe signals for processing by the visual media capture controller 278 in conjunction with the processor 230.
- the haptic contact may be at any location on the display 210 for a single visual media capture controller for capturing or recording a front camera or back camera photo or video, e.g.
- swipe left to switch to the back camera and swipe right to switch to the front camera; based on how long the haptic contact persists, the capture mode is decided as photo (e.g. up to 3 seconds) or video (if more than 3 seconds), and video is recorded up to the haptic contact release, at which point the video is stopped and stored.
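The combined swipe-and-hold behaviour of one contact-icon control can be sketched as a small state holder. This is an illustrative sketch only, not the claimed implementation: the class, its method names and the 3-second photo threshold are assumptions for the example.

```python
class CaptureControl:
    """Sketch of one contact-icon capture control (Figures 43-44).

    A swipe switches the camera; the hold duration at release decides
    photo (short press) versus video (long press).
    """

    def __init__(self, photo_threshold=3.0):
        self.camera = "front"          # assumed default camera
        self.photo_threshold = photo_threshold
        self._engaged_at = None
        self.result = None

    def on_swipe(self, direction):
        # Per the text: swipe left -> back camera, swipe right -> front camera.
        self.camera = "back" if direction == "left" else "front"

    def on_engage(self, t):
        # Haptic contact engagement starts the timer (step 4431).
        self._engaged_at = t

    def on_release(self, t):
        # Haptic contact release stops the timer; elapsed time picks the mode.
        held = t - self._engaged_at
        kind = "photo" if held <= self.photo_threshold else "video"
        self.result = (self.camera, kind)
        return self.result
```

For example, swiping left and pressing briefly would yield a back camera photo, while a long press on the same control would yield a back camera video.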
- based on the haptic contact persisting after switching from the front camera mode to the back camera mode (or vice versa), or on direct haptic contact engagement on the current default mode icon or area (e.g. the left side 4331 or right side 4329 of the visual media capture controller label, e.g. 4322), video is recorded and a timer is started in response to the haptic contact engagement or persistence 4431.
- the video is recorded by the processor 230 operating in conjunction with the memory 236. Alternately, a still frame is taken from the video feed and is stored as a photo in response to haptic contact engagement and then video is recorded.
- the timer is executed by the processor 230 under the control of the visual media capture controller 278.
- Video continues to record 4432 and the timer continues to run 4432 in response to persistent haptic contact 4431 on the display 210. Haptic contact release is subsequently identified 4435.
- the timer is then stopped 4440, as is the recording of video 4440.
- the elapsed time recorded by the timer is then evaluated by the visual media capture controller 278 against a specified threshold (e.g., 3 seconds). If the threshold is exceeded (4444 ⁇ Yes), then video is stored 4450.
- a video is sent to the video library controller 296 for handling.
- the visual media capture controller 278 includes executable instructions to prompt the video library controller to enter a video preview mode 4958. Consequently, a user can conveniently review a recently recorded video.
- a frame of video is selected 4455 and is stored as a photo 4460.
- an alternate approach is to capture a still frame from the camera video feed as a photo upon haptic engagement. Such a photo is then passed to the photo library controller 294 for storage 291.
- the visual media capture controller 278 may then invoke a photo preview mode 4468 to allow a user to easily view the new photo.
- the foregoing embodiment relies upon evaluating haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release. Based upon the elapsed time, either a front camera or back camera photo or a video is preserved. Thus, a single recording mode allows one to seamlessly transition between the front and back cameras and between photo and video recording.
- a photo is taken upon haptic contact engagement and a timer is started (but video is not recorded). If persistent haptic contact exists, as measured by the timer, for a specified period of time, then video is recorded. In this case, the user may then access both the photo and the video. Indeed, an option to choose both a photo and video may be supplied in accordance with different embodiments of the invention.
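The alternate embodiment above (photo at engagement, video only if the press persists) can be sketched as follows. A hedged sketch under assumed names; the 3-second video threshold is the example value from the text.

```python
def capture_sequence(hold_seconds, video_threshold=3.0):
    """Alternate embodiment: a photo is taken at haptic engagement and,
    if the press persists past the threshold, a video is recorded as
    well, so the user may access both outputs.
    """
    outputs = ["photo"]          # still frame captured at engagement
    if hold_seconds >= video_threshold:
        outputs.append("video")  # recording started once the timer fires
    return outputs
```

A short press yields just the photo; a long press yields both the photo and the video, matching the option to keep both that the text describes.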
- in various embodiments, while recording, the camera can be switched via: (1) persistent haptic contact, e.g. on 4331 (to take video up to haptic release), with a swipe on the left side (e.g. 4331) or right side (e.g. 4329) of a particular visual media capture controller, e.g. 4322; (2) haptic engagement and a swipe left (e.g. 4331) or swipe right (e.g. 4329) on a particular visual media capture controller, e.g. 4322; or (3) direct haptic contact engagement on the left-side or right-side area or icon of a particular visual media capture controller, e.g. 4322, for switching from the front camera to the back camera or from the back camera to the front camera while recording video, so that the user can record a single video in both front camera and back camera modes.
- Figure 45 illustrates various embodiments of Figure 44.
- an electronic device 200 comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4320, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release on the display 210; and a user interface or tray 4330 presented on the display 210 to a user of a client device 200, the user interface 4330 comprising visual media capture controller controls, labels, images or icons 4406 (e.g. 4321-4328) that include a plurality of contact icons or labels,
- each contact icon or label representing one or more contacts of the user; the device receives a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4330, via a multi-tasking, pre-configured, auto-generated or auto-presented visual media capture controller control, label, image or icon 4406 (e.g. 4321-4328); in response to receiving the single user interaction, or to identification of a haptic swipe or a particular type of swipe on a particular visual media capture controller control (e.g. 4322), including e.g. a swipe from right (4329) to left (4331) for changing from the front camera to the back camera (4409—Yes), the device shows the back camera 4428, or e.g.
- a swipe from left (4331) to right (4329) for changing from the back camera to the front camera (4407—Yes), the device shows the front camera 4424; OR, in response to not changing from the front camera to the back camera or vice versa, i.e. keeping the current mode as is (4407—No or 4409—No), the device receives haptic contact engagement 4431 on the left side 4331 or the right side 4329 of the particular visual media capture controller control, e.g. 4322, from the set of presented visual media capture controller controls, icons or labels 4330 (4321-4328); after changing mode via a swipe from left to right or from right to left, video is recorded while maintaining persistent haptic contact on the left-side (e.g. 4331) or right-side (e.g. 4329) area of the visual media capture controller control (e.g. 4322).
- Figure 43 (A) and Figure 44, together with Figure 45 (C), illustrate an electronic device 200 comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4320;
- a touch controller 215 to identify haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release on the display 210; and a user interface or tray 4330 presented on the display 210 to a user of a client device 200, the user interface 4330 comprising visual media capture controller controls, labels, images or icons 4406 (e.g. 4321-4328) that include a plurality of contact icons or labels,
- each contact icon or label representing one or more contacts of the user; the device receives a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4330, via a multi-tasking, pre-configured, auto-generated or auto-presented visual media capture controller control, label, image or icon 4406 (e.g. 4321-4328); in response to receiving the single user interaction, or
- to identification of a haptic swipe or a particular type of swipe on a particular visual media capture controller control, including e.g. a swipe from right (4329) to left (4331) for changing from the front camera to the back camera (4409—Yes), the device shows the back camera 4428, or e.g. a swipe from left (4331) to right (4329) for changing from the back camera to the front camera (4407—Yes), the device shows the front camera 4424; OR, in response to not changing cameras, i.e. keeping the current mode as is (4407—No or 4409—No), the device receives haptic contact engagement 4431 on the left side 4331 or the right side 4329 of the particular visual media capture controller control, e.g. 4322.
- Figure 45 illustrates various embodiments of Figure 44.
- an electronic device 200 comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4320, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release on the display 210; and a user interface or tray 4330 presented on the display 210 to a user of a client device 200, the user interface 4330 comprising visual media capture controller controls, labels, images or icons 4406 (e.g. 4321-4328) that include a plurality of contact icons or labels,
- each contact icon or label representing one or more contacts of the user; the device receives a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4330, via a multi-tasking, pre-configured, auto-generated or auto-presented visual media capture controller control, label, image or icon 4406 (e.g. 4321-4328); in response to receiving the single user interaction, or to identification of a haptic swipe or a particular type of swipe on a particular visual media capture controller control (e.g. 4322), including e.g. a swipe from right (4329) to left (4331) for changing from the front camera to the back camera (4409—Yes), the device shows the back camera 4428, or e.g.
- a swipe from left (4331) to right (4329) for changing from the back camera to the front camera (4407—Yes), the device shows the front camera 4424; OR, in response to not changing cameras, i.e. keeping the current mode as is (4407—No or 4409—No), the device receives haptic contact engagement 4431 on the left side 4331 or the right side 4329 of the particular visual media capture controller control, e.g. 4322, from the set of presented visual media capture controller controls, icons or labels 4330 (4321-4328); after changing mode via a swipe from left to right or from right to left, and while maintaining persistent haptic contact on the left-side (e.g. 4331) or right-side (e.g. 4329) area of the visual media capture controller control (e.g. 4322), video is recorded and stored, and a
- video preview mode 4544 is invoked for a pre-set duration of time (optionally enabling the user to review the video, delete the video, edit or augment the video, and change the destination(s) for sending); in the event of expiry of said pre-set preview timer, the video preview interface is hidden or closed, the contact represented by the selected contact icon or associated with the selected visual media capture controller control or label (e.g. 4322) is identified, and the recorded video is sent to the identified contact 4552, or in another embodiment to a selected contact.
- Figure 45 illustrates various embodiments of Figure 44.
- Figure 43 (A) and Figure 44, together with Figure 45 (E), illustrate an electronic device 200 comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4320;
- a touch controller 215 to identify haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release on the display 210; and a user interface or tray 4330 presented on the display 210 to a user of a client device 200, the user interface 4330 comprising visual media capture controller controls, labels, images or icons 4406 (e.g. 4321-4328) that include a plurality of contact icons or labels,
- each contact icon or label representing one or more contacts of the user; the device receives a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4330, via a multi-tasking, pre-configured, auto-generated or auto-presented visual media capture controller control, label, image or icon 4406 (e.g. 4321-4328); in response to receiving the single user interaction, or to identification of a haptic swipe or a particular type of swipe on a particular visual media capture controller control (e.g. 4322), including e.g. a swipe from right (4329) to left (4331) for changing from the front camera to the back camera (4409—Yes), the device shows the back camera 4428, or e.g.
- a swipe from left (4331) to right (4329) for changing from the back camera to the front camera (4407—Yes), the device shows the front camera 4424; OR, in response to not changing cameras, i.e. keeping the current mode as is (4407—No or 4409—No), the device receives haptic contact engagement 4431 on the left side 4331 or the right side 4329 of the particular visual media capture controller control, e.g. 4322, from the set of presented visual media capture controller controls, icons or labels 4330 (4321-4328); after changing mode via a swipe from left to right or from right to left, and while maintaining persistent haptic contact on the left-side (e.g. 4331) or right-side (e.g. 4329) area of the visual media capture controller control, e.g. 4322, via one of:
- (1) persistent haptic contact, e.g. on 4331 (to take video up to haptic release), with a swipe on the left side (e.g. 4331) or right side (e.g. 4329) of a particular visual media capture controller, e.g. 4322; or (2) haptic engagement and a swipe left (e.g. 4331) or swipe right (e.g. 4329);
- a photo or video preview mode is invoked for a pre-set duration and, in the event of expiration of said pre-set preview timer, the recorded video, as well as one or more photo(s) captured while recording the video, are auto-sent to the one or more contacts, groups or one or more types of one or more destinations associated with said particular, accessed or selected visual media capture controller control, icon or label, e.g.
- 4322 (e.g. send to contact [Yogesh]). While capturing photo(s) during recording of video, the system identifies, saves or marks time(s) in the video and extracts, takes or selects the frame(s), screenshot(s) or image(s) inside the video associated with said marked time(s); in another embodiment, the system simultaneously records video and captures photo(s) while recording the video.
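The mark-and-extract step described above, where the system saves timestamps during recording and later pulls the associated frames out of the video, can be sketched as follows. A minimal illustration under assumed names, with the video represented as a list of (timestamp, frame) pairs.

```python
def extract_marked_frames(video_frames, marked_times):
    """Pull the photo(s) the user captured during recording by selecting,
    for each saved mark, the recorded frame nearest to that timestamp.

    `video_frames` is a list of (timestamp, frame) pairs;
    `marked_times` are the times saved when the user pressed the photo
    control while the video was recording.
    """
    photos = []
    for mark in marked_times:
        nearest = min(video_frames, key=lambda tf: abs(tf[0] - mark))
        photos.append(nearest[1])
    return photos
```

After recording finishes, both the full video and the extracted photos would be available for preview and sending, as the text describes.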
- Figure 43 (A) and Figure 44 illustrate an electronic device for taking a still picture while recording a moving picture, including video, the electronic device 200 comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4320, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release on the display 210; and a user interface or tray 4330 presented on the display 210 to a user of a client device 200, the user interface 4330 comprising visual media capture controller controls, labels, images or icons 4406 (e.g. 4321-4328) that include a plurality of contact icons or labels, e.g.
- each contact icon or label representing one or more contacts of the user; the device receives a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4330; a multi-tasking, pre-configured default, user-selected, auto-generated or auto-presented visual media capture controller control, label, image or icon 4406 (e.g. 4321-4328) enables capturing a photo while recording front camera or back camera video, based on a user input instruction to take the still picture while recording the moving picture; in response to receiving the single user interaction or identification of a haptic swipe or a particular type of swipe on a particular visual media capture controller control (e.g. 4322), including e.g.
- a sequence-of-photo(s) or video preview mode is invoked for a pre-set interval of duration; in the event of expiration of said pre-set preview timer, the next photo or video is presented and, after expiry of the preview timer associated with the last presented visual media, the recorded video as well as one or more captured photo(s) are auto-sent to the one or more contacts, groups or one or more types of one or more destinations associated with said particular, accessed or selected visual media capture controller control, icon or label, e.g. 4322 (e.g. send to contact [Yogesh]).
- in a case where the picture being recorded has the same size as the previously set size of the still picture, the controller generates a control signal instructing the camera picture capturing unit to capture the still picture and memorizes a position where the captured picture is recorded as a recorded picture in the moving picture recorder, wherein information on the memorized position of the recorded picture is used to search the moving picture recorder for the corresponding picture; the corresponding picture is then decoded and displayed so that, after recording is finished, a user can view the still picture and determine whether to store it.
- the controller reads and transmits the captured picture from the memory to the image signal processor, and the image signal processor adjusts the captured picture to have a size of the moving recorded picture and transmits the adjusted captured picture to the moving picture recorder.
- after capturing a photo and invoking photo preview mode, or after recording a video and invoking video preview mode, during the pre-set preview duration the user can tap on the left or right side or an enabled cross icon to cancel and remove the photo or video and stop sending it to the destination(s) or contact(s), or the user can tap on the left or right side or an enabled edit icon on the visual media capture controller, e.g. 4322, to edit or augment the recorded visual media, including selecting overlays, writing text and applying photo filters.
- after the start of video recording, the user can swipe left or right to stop the video.
- swipe left for photo or swipe right for video; in the event the swipe does not exceed the threshold, use the default, currently available, presently viewed, front or back camera, and in the event the swipe exceeds the threshold, change the default or currently enabled mode, e.g. if the current mode is front camera mode then change to back camera mode, and if the current mode is back camera mode then change to front camera mode.
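The threshold-based camera toggle described above can be sketched as a small pure function. The 80-pixel threshold and the function name `resolve_camera` are assumed example values for illustration; the specification does not fix a threshold magnitude.

```python
def resolve_camera(current, swipe_distance, threshold=80):
    """Resolve which camera to use after a swipe on the capture
    control: a swipe that does not exceed the threshold keeps the
    current (front or back) camera, while a longer swipe toggles
    the mode, per the rule described in the text."""
    if abs(swipe_distance) <= threshold:
        return current                              # keep current mode
    return "back" if current == "front" else "front"  # toggle mode
```

The same decision applies symmetrically in both directions, since only the magnitude of the swipe is compared against the threshold.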
- FIGS. 46 and 47 are slight variations of FIGS. 44 and 45; the additional component is a third or further customized button(s) 4695 in the multi-tasking visual media capture control, as explained at 4695 and 4685 and in Figure 43 (B), which adds an additional pre-defined area or button 4354 and enables the user to customize said third button or pre-defined area, e.g. 4354.
- function(s), e.g. show the album, gallery or received media items (e.g. Inbox) from the contact(s), group(s) or destination(s) associated with the visual media capture controller control, or start live broadcasting, etc.
- FIG. 46, together with FIG. 43 (B), illustrates an electronic device 200, comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4340, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic swipe, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4355 on the display 210 to a user of a client device 200, the user interface 4355 comprising visual media capture controller controls or labels and/or images or icons 4606 (e.g. 4341-4348) including a plurality of contact icons or labels e.g.
- each contact icon or label representing one or more contacts of the user; receive a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4355; a multi-tasking pre-configured, auto-generated or auto-presented visual media capture controller controls or labels and/or images or icons 4606 (e.g. 4341-4348); in response to receiving the single user interaction or identification of a haptic swipe or particular type of swipe from a particular visual media capture controller control (e.g. 4341) including e.g.
- swipe from right to left (to the second, middle or center button or pre-defined area (4331)) for changing from front camera to back camera (4409 - Yes), then show the Back Camera 4428; or e.g. swipe from left (4331) to right (4329) for changing from back camera to front camera (4407 - Yes), then show the Front Camera 4424; and swipe to the end or to the third button or pre-defined area 4354 to access, execute, open, invoke or present one or more types of pre-configured or pre-set interfaces, applications, features (e.g. show all or newly received media items from the contact associated with said visual media capture controller, or show inbox 4315), media items, functions (e.g.
- In another embodiment, the user is enabled to execute, provide or instruct a cancel command from a particular visual media capture controller control to cancel the capturing or recording of a front or back camera photo or video, via one or more types of haptic contact including a swipe up, or via one or more types of pre-defined user input sensed by one or more types of sensors of the user device(s), including a voice command or a particular type of eye movement.
- Figure 46 is similar to Figure 44 and the details of Figure 46 are the same as discussed for Figure 44; the only addition is the third button or third pre-defined area (e.g. 4354), 4695 and 4685 (as described above).
- Figure 47 is similar to figure 45 and details of figure 47 are same as discussed in figure 45.
- after selecting back camera mode and after starting a back camera video, the user can swipe to the third button or pre-defined area 4354 and is able to start a front camera selfie video 4349 to provide commentary on the video 4340 being recorded via the back camera.
- the user is recording a natural scenery video 4340 at a particular tourist place and is also enabled to concurrently record a front camera video to provide video comments, reviews, description or commentary, related to the current scene viewed by the recorder, on said video currently being recorded via the back camera.
- after selecting back camera mode and after starting a back camera video, the user can swipe to the third button or pre-defined area 4354 and is able to start capturing one or more front camera selfie photo(s) 4349, via tapping on or swiping to a particular pre-defined area of the visual media capture controller label, e.g. 4341, to provide the user's expressions during the recording of the video 4340 via the back camera.
- the user is recording a natural scenery video 4340 at a particular tourist place and is also enabled to concurrently capture one or more photo(s), via tapping on 4341, to provide the user's facial expressions, related to the current scene viewed by the recorder, on said video currently being recorded via the back camera.
- Figure 43, element 4309, illustrates a single multi-tasking visual media capture controller with 2 buttons or 2 pre-defined areas as discussed above; the difference is that it is not associated with a contact, and it enables switching from front camera 4302 to back camera 4301 or from back camera 4305 to front camera 4308 and capturing a photo or recording a video based on the duration of haptic contact persistence and engagement as discussed above.
- instead of auto-sending to the contact associated with the visual media capture controller, the user has to manually select one or more contact(s) and/or destinations, or auto-send to the pre-set one or more types of contact(s)/group(s) and/or destination(s) associated with said single visual media capture controller. It is an alternative to the currently presented front or back camera mode button or icon, photo capture icon and video record icon of a standard smartphone camera application or interface.
- The user does not have to first change the mode by tapping on a camera mode change icon, then tap on the photo icon to load the photo capture interface and capture a photo, or tap on the video icon to load video mode, start recording video and then tap on a stop icon to stop the video. Instead, the user can swipe left to change the mode to the back camera or swipe right to change the mode to the front camera, and based on the duration of haptic contact engagement and persistence on the single-mode multi-tasking visual media capture controller control or label and/or icon, the user can take a photo or start recording a video; in an embodiment the user further taps to stop and store the video, or in an embodiment the video auto-stops after a pre-set duration of time and is auto-stored (or the user taps before expiry of said pre-set maximum video duration to stop and store the video), or in an embodiment the video is stopped and stored in the event of haptic release from said single-mode multi-tasking visual media capture controller control or label and/or icon.
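The press-duration rule above (short contact yields a photo, longer contact yields a video, optionally capped at a pre-set maximum) can be sketched as a single decision function. The 3-second threshold follows the example in the text; the 10-second maximum and the name `decide_capture` are assumptions for illustration.

```python
def decide_capture(press_seconds, threshold=3.0, max_video=10.0):
    """Decide the outcome of a press on the single-mode capture
    control: haptic contact shorter than the threshold yields a
    photo; longer contact yields a video that stops on release or
    auto-stops at an assumed pre-set maximum duration."""
    if press_seconds <= threshold:
        return ("photo", None)                       # short press: photo
    return ("video", min(press_seconds, max_video))  # long press: video
```

Returning the clipped duration models the auto-stop-and-store behaviour when the pre-set maximum video length expires before haptic release.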
- Figure 43, element 4392, illustrates a single multi-tasking visual media capture controller control with 3 buttons or 3 pre-defined areas as discussed above; the difference is that it is not associated with a contact. It enables switching from front camera 4390 to back camera 4393 or from back camera 4393 to front camera 4390 and capturing a photo or recording a video based on the duration of haptic contact persistence and engagement as discussed above, and it also provides a third button or pre-defined area on the single multi-tasking visual media capture controller control 4392 and enables configuring or associating one or more applications, features, interfaces and functions; in the event of a swipe to the third button or third end-side pre-defined area, the associated, pre-set or pre-configured applications, features and interfaces are presented or functions are executed.
- the user can swipe left to change the mode to the back camera or swipe right to change the mode to the front camera and, based on the duration of haptic contact engagement and persistence on the single-mode multi-tasking visual media capture controller control or label and/or icon, the user can take a photo or start recording a video; in an embodiment further tap to stop and store the video, or in an embodiment auto-stop after a pre-set duration of time and auto-store (or tap before expiry of said pre-set maximum video duration to stop and store the video), or in an embodiment stop and store the video in the event of haptic release from said single-mode multi-tasking visual media capture controller control or label and/or icon.
- The user can swipe to the end or swipe to the pre-defined area of said single-mode multi-tasking visual media capture controller control or label and/or icon to view or access the presented pre-set or pre-configured one or more applications, interfaces and features and/or execute functions.
- Figures 48-52 illustrate various embodiments of the intelligent multi-tasking visual media capture controller 278.
- Figure 49 illustrates processing operations associated with an embodiment of the invention.
- Figure 48 (A) illustrates the exterior of an electronic device implementing multi-tasking single-mode visual media capture.
- Figure 49, together with Figure 48 (A), illustrates an electronic device 200, comprising: digital image sensors 244 to capture visual media, e.g. 4320; a display 210 to present the visual media, e.g. 4320, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4830 on the display 210 to a user of a client device 200, the user interface 4830 comprising visual media capture controller controls or labels and/or images or icons 4906 (e.g. 4821-4828) including a plurality of contact icons or labels e.g.
- each contact icon or label representing one or more contacts of the user; receive a single user interaction with a selected contact icon of the plurality of contact icons included in the user interface or tray 4830; a multi-tasking pre-configured, auto-generated or auto-presented visual media capture controller controls or labels and/or images or icons 4906 (e.g. 4821-4828); in response to receiving the single user interaction or identification of haptic contact engagement on a particular visual media capture controller control (e.g.
- 4822) from the set of presented visual media capture controller controls, icons or labels 4830 (4821-4828); after haptic contact engagement on the pre-defined area and maintenance of persistent haptic contact on the left side (e.g. 4831) or right side (e.g. 4829) area of the visual media capture controller control (e.g. 4822), start recording video and start the timer 4932; in response to identification or receipt of haptic contact release 4935, stop the video and stop the timer; if the threshold is not exceeded (e.g. 4944 - No), then select or extract one or more frames or images 4955 from the recorded video or series of images 4940 and store the photo 4960; optionally invoke photo preview mode 4968 for a pre-set duration of time (optionally enabling the user to review the photo, remove the photo, edit or augment the photo and change the destination(s) for sending) and, in the event of expiry of said pre-set preview timer, hide or close the photo preview interface, identify the contact represented by the selected contact icon e.g. 4822 or associated with the selected visual media capture controller control or label e.g. 4822, and send the captured photo to the identified contact 4970 or, in another embodiment, send to a user-selected contact; if the threshold is exceeded (e.g.
- In another embodiment, the user is enabled to execute, provide or instruct a cancel command from a particular visual media capture controller control to cancel the capturing or recording of a front or back camera photo or video, via one or more types of haptic contact including a swipe up, or via one or more types of pre-defined user input sensed by one or more types of sensors of the user device(s), including a voice command or a particular type of eye movement.
- Figure 1 illustrates an electronic device 200 implementing operations of the invention.
- the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
- the processor 230 may be a central processing unit and/or a graphics processing unit.
- the memory 236 is a combination of flash memory and random access memory.
- the memory 236 stores a visual media capture controller 278 to implement operations of the invention.
- the visual media capture controller 278 includes executable instructions to alternately record a front camera or back camera photo or a video, or to conduct one or more pre-configured tasks, activities or processing, or execute functions, including cancelling the capturing of a photo or recording of a video, viewing received contents from the contact(s), group(s) or source(s) associated with the visual media capture controller label, broadcasting live video streaming and the like, based upon the processing of haptic signals, as discussed below.
- the visual media controller 278 interacts with a photo library controller 294, which includes executable instructions to store, organize and present photos 291.
- the photo library controller may be a standard photo library controller known in the art.
- the visual media controller 278 also interacts with a video library controller 296, which includes executable instructions to store, organize and present videos 292.
- the video library controller may also be a standard video library controller known in the art.
- the processor 230 is also coupled to image sensors 244.
- the image sensors 244 may be known digital image sensors, such as charge coupled devices.
- the image sensors capture visual media, which is presented on display 210. That is, in a visual media mode controlled by the visual media capture controller 278, the image sensors 244 capture visual media and present the visual media on the display 210 so that a user can observe the captured visual media.
- a touch controller 215 is connected to the display 210 and the processor 230.
- the touch controller 215 is responsive to haptic signals applied to the display 210.
- the visual media capture controller 278 presents a single mode input icon e.g. 4822 on the display 210. That is, the visual media capture controller 278 includes executable instructions executed by the processor to present a single mode input icon on the display 210.
- the visual media capture controller 278 communicates with the processor 230 regarding haptic signals applied to the display 210, which are recorded by the touch controller 215.
- the visual media capture controller 278 processes haptic signals applied to the single mode input icon or multi-tasking visual media capture controller label and/or icon or control e.g.
- the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220, a power control circuit 225 and a global positioning system processor 235. While many of the components of Figure 2 are known in the art, new functionality is achieved through the visual media capture controller 278.
- Figure 49 illustrates processing operations associated with the visual media capture controller 278. Initially, a visual media capture mode is invoked 4905. For example, a user may access an application presented on display 210 to invoke a visual media capture mode.
- Figure 48 (A) illustrates the exterior of electronic device 200. The Figure also illustrates the display 210. The electronic device 200 is in a visual media capture mode and presents visual media 4820.
- the display 210 also includes one or more visual media capture controller controls or single mode input icons 4830 (e.g. 4821-4828).
- the single mode input icon e.g. 4822 determines whether a front or back camera photo will be recorded or a front camera or back camera video.
- the amount of time that a user presses the single mode input icon e.g. 4822 determines whether a photo will be recorded or a video. For example, if a user initially intends to take a photo, then the icon (4831 - front camera photo or 4829 - back camera photo) 4822 is engaged with a haptic signal.
- if the user decides that the visual media should instead be a video, the user continues to engage the icon (4831 - front camera video or 4829 - back camera video) 4822. If the engagement persists for a specified period of time (e.g., 3 seconds), then the output of the visual media capture is determined to be video.
- the video mode may be indicated on the display 210 (at a prominent place; not shown in the figure) with an icon, label, or animated or visual presentation.
- the haptic contact engagement may be at icon or pre-defined area 4831 or icon or predefined area 4829 on the visual media capture controller control or label e.g., 4822 on display 210.
- the touch controller 215 generates haptic contact engagement and persist signals for processing by the visual media capture controller 278 in conjunction with the processor 230.
- the haptic contact may be at any location on the display 210 for a single visual media capture controller for capturing or recording a front camera or back camera photo or video, e.g. haptic contact engagement on a pre-defined area on the left side to switch to the back camera and haptic contact engagement on a pre-defined area on the right side to switch to the front camera; based on how long the haptic contact persists, the capture mode is decided as photo (e.g. up to 3 seconds) or video (if more than 3 seconds), and video is recorded until haptic contact release, at which point the video is stopped and stored.
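The combined rule above (touch position selects the camera, contact duration selects photo vs. video) can be sketched as one mapping function. The half-width split of the control and the name `interpret_touch` are assumed layout details; only the left/right areas and the 3-second example come from the text.

```python
def interpret_touch(x, control_width, press_seconds, threshold=3.0):
    """Map a touch on the capture control to (camera, mode): the
    left pre-defined area selects the back camera, the right area
    selects the front camera, and the duration of haptic contact
    persistence decides photo vs. video."""
    camera = "back" if x < control_width / 2 else "front"
    mode = "photo" if press_seconds <= threshold else "video"
    return camera, mode
```

In a real controller the video branch would keep recording until haptic contact release before storing, as described in the surrounding text.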
- when haptic contact persists after haptic contact engagement on the current default mode icon or area, e.g. the left side 4831 for back camera mode or the right side 4829 for front camera mode of the visual media capture controller label e.g. 4822, video is recorded and a timer is started 4932 in response to haptic contact engagement or persistence 4931.
- the video is recorded by the processor 230 operating in conjunction with the memory 236. Alternately, a still frame is taken from the video feed and is stored as a photo in response to haptic contact engagement and then video is recorded.
- the timer is executed by the processor 230 under the control of the visual media capture controller 278.
- Video continues to record 4932 and the timer continues to run 4932 in response to persistent haptic contact 4931 on the display 210. Haptic contact release is subsequently identified 4935.
- the timer is then stopped 4940, as is the recording of video 4940.
- the elapsed time recorded by the timer is then evaluated by the visual media capture controller 278 against a specified threshold (e.g., 3 seconds). If the threshold is exceeded (4944 - Yes), then the video is stored 4950.
- a video is sent to the video library controller 296 for handling.
- the visual media capture controller 278 includes executable instructions to prompt the video library controller to enter a video preview mode 4958. Consequently, a user can conveniently review a recently recorded video.
- a frame of video is selected 4955 and is stored as a photo 4960.
- an alternate approach is to capture a still frame from the camera video feed as a photo upon haptic engagement. Such a photo is then passed to the photo library controller 294 for storage 291.
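The frame-selection step, where a press too short for a video yields a photo extracted from the already-recorded frames, can be sketched as below. Choosing the middle frame is an assumed heuristic; the text only requires that one or more frames be selected.

```python
def extract_photo(frames):
    """When haptic contact is released before the video threshold,
    select a still frame from the recorded frames to store as the
    photo. The middle-frame choice is illustrative only."""
    if not frames:
        return None          # nothing was recorded
    return frames[len(frames) // 2]
```

The selected frame would then be handed to the photo library controller for storage, as described above.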
- the visual media capture controller 278 may then invoke a photo preview mode 4968 to allow a user to easily view the new photo.
- the foregoing embodiment relies upon evaluating haptic contact engagement, haptic contact persistence and haptic contact release. Based upon the expired time, either a front camera or back camera photo or a video is preserved. Thus, a single recording mode allows one to seamlessly transition between front camera and back camera and between photo capturing and video recording.
- a photo is taken upon haptic contact engagement and a timer is started (but video is not recorded). If persistent haptic contact exists, as measured by the timer, for a specified period of time, then video is recorded. In this case, the user may then access both the photo and the video. Indeed, an option to choose both a photo and video may be supplied in accordance with different embodiments of the invention.
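This alternate embodiment, where a photo is taken immediately on engagement and a video is additionally recorded if contact persists, can be sketched as an event interpreter. The event names "engage" and "release" and the function name `capture_on_gesture` are assumptions for illustration.

```python
def capture_on_gesture(events, threshold=3.0):
    """Alternate embodiment: take a photo on haptic engagement and
    start a timer; if contact persists past the threshold, also
    record a video until release, so the user may end up with both
    outputs. `events` is a list of (timestamp, name) pairs."""
    outputs, engage_t, release_t = [], None, None
    for t, name in events:
        if name == "engage":
            engage_t = t
            outputs.append("photo")    # photo captured immediately
        elif name == "release":
            release_t = t
    if engage_t is not None and release_t is not None \
            and release_t - engage_t > threshold:
        outputs.append("video")        # contact persisted past threshold
    return outputs
```

A short press therefore yields only the photo, while a long press yields the photo plus the video, matching the option of supplying both.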
- the user is enabled to: (1) while haptic contact persists, e.g. at 4831 (to take video until haptic release), swipe to the left side e.g. 4831 or swipe to the right side 4829 on a particular visual media capture controller e.g. 4822; (2) perform haptic engagement and swipe left e.g. 4831 or swipe right e.g. 4829 on a particular visual media capture controller e.g. 4822; or (3) directly perform haptic contact engagement on the left or right side area or icon of a particular visual media capture controller e.g. 4822, for switching from the front camera to the back camera or from the back camera to the front camera while recording video, so the user can record a single video in both front camera and back camera modes.
- Figure 50 illustrates various embodiments of Figure 49.
- Figure 48 (A) and Figure 49, together with Figure 50 (B), illustrate an electronic device 200, comprising: digital image sensors 244 to capture visual media, e.g. 4820; a display 210 to present the visual media, e.g. 4820, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4830 on the display 210 to a user of a client device 200, the user interface 4830 comprising visual media capture controller controls or labels and/or images or icons 4906 (4830, e.g. 4821-4828) including a plurality of contact icons or labels e.g.
- 4822 from the set of presented visual media capture controller controls, icons or labels 4830 (4821-4828); after haptic contact engagement on the left side or right side area and maintenance of persistent haptic contact on the left side (e.g. 4831) or right side (e.g. 4829) area of the visual media capture controller control (e.g. 4822), or, after not changing the mode, upon receiving direct haptic contact engagement on the left side e.g. 4831 or right side e.g. 4829 of the particular visual media capture controller control (e.g. 4822), start recording video and start the timer 4932; receive haptic contact release 4935; if the threshold is not exceeded (e.g. 5005 - No), then stop the video and stop the timer 5007; select or extract one or more frames or images 5009 from the recorded video or series of images and store the photo 5011; optionally invoke photo preview mode 5013 for a pre-set duration of time (optionally enabling the user to review the photo, remove the photo, edit or augment the photo and change the destination(s) for sending) and, in the event of expiry of said pre-set preview timer, hide or close the photo preview interface, identify the contact represented by the selected contact icon e.g. 4822 or associated with the selected visual media capture controller control or label e.g. 4822, and send the captured photo to the identified contact 5022 or, in another embodiment, send to a user-selected contact; if the threshold is exceeded (e.g.
- In another embodiment, the user is enabled to execute, provide or instruct a cancel command from a particular visual media capture controller control to cancel the capturing or recording of a front or back camera photo or video, via one or more types of haptic contact including a swipe up, or via one or more types of pre-defined user input sensed by one or more types of sensors of the user device(s), including a voice command or a particular type of eye movement.
- Figure 48 (A) and Figure 49, together with Figure 50 (C), illustrate an electronic device 200, comprising: digital image sensors 244 to capture visual media, e.g. 4820; a display 210 to present the visual media, e.g. 4820, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4830 on the display 210 to a user of a client device 200, the user interface 4830 comprising visual media capture controller controls or labels and/or images or icons 4906 (e.g. 4821-4828) including a plurality of contact icons or labels e.g.
- 4822 from the set of presented visual media capture controller controls, icons or labels 4830 (4821-4828); after haptic contact engagement on the left side or right side area and maintenance of persistent haptic contact on the left side (e.g. 4831) or right side (e.g. 4829) area of the visual media capture controller control (e.g. 4822), or, after not changing the mode, upon receiving direct haptic contact engagement on the left side e.g. 4831 or right side e.g. 4829 of the particular visual media capture controller control (e.g. 4822), start recording video and start the timer 4932; receive haptic contact release 4935; if the threshold is not exceeded (e.g. 5055 - No), then stop the video and stop the timer 5057; select or extract one or more frames or images 5059 from the recorded video or series of images and store the photo 5062; optionally invoke photo preview mode 5063 for a pre-set duration of time (optionally enabling the user to review the photo, remove the photo, edit or augment the photo and change the destination(s) for sending) and, in the event of expiry of said pre-set preview timer, hide or close the photo preview interface, identify the contact represented by the selected contact icon e.g. 4822 or associated with the selected visual media capture controller control or label e.g. 4822, and send the captured photo to the identified contact 5072 or, in another embodiment, send to a user-selected contact; if the threshold is exceeded (e.g.
- video preview mode 5070 for a pre-set duration of time (optionally enabling the user to review the video, delete the video, edit or augment the video and change the destination(s) for sending) and, in the event of expiry of said pre-set preview timer, hide or close the video preview interface, identify the contact represented by the selected contact icon or associated with the selected visual media capture controller control or label e.g. 4822, and send the recorded video to the identified contact 5072 or in another embodiment
- In another embodiment, the user is enabled to execute, provide or instruct a cancel command from a particular visual media capture controller control to cancel the capturing or recording of a front or back camera photo or video, via one or more types of haptic contact including a swipe up, or via one or more types of pre-defined user input sensed by one or more types of sensors of the user device(s), including a voice command or a particular type of eye movement.
- Figure 50 illustrates various embodiments of Figure 49.
- Figure 48 (A) and Figure 49, together with Figure 50 (D), illustrate an electronic device 200, comprising: digital image sensors 244 to capture visual media, e.g. 4820; a display 210 to present the visual media, e.g. 4820, from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4830 on the display 210 to a user of a client device 200, the user interface 4830 comprising visual media capture controller controls or labels and/or images or icons 4906 (e.g. 4821-4828) including a plurality of contact icons or labels e.g.
- 4822 from the set of presented visual media capture controller controls, icons or labels 4830 (4821-4828); after haptic contact engagement on the left side or right side area and maintenance of persistent haptic contact on the left side (e.g. 4831) or right side (e.g. 4829) area of the visual media capture controller control (e.g. 4822), or, after not changing the mode, upon receiving direct haptic contact engagement on the left side e.g. 4831 or right side e.g. 4829 of the particular visual media capture controller control (e.g. 4822), start recording video and start the timer 4932; receive haptic contact release 4935; if the threshold is not exceeded (e.g. 5025 - No), then stop the video and stop the timer 5027; select or extract one or more frames or images 5029 from the recorded video or series of images and store the photo 5031; optionally invoke photo preview mode 5033 for a pre-set duration of time (optionally enabling the user to review the photo, remove the photo, edit or augment the photo and change the destination(s) for sending) and, in the event of expiry of said pre-set preview timer, hide or close the photo preview interface, identify the contact represented by the selected contact icon e.g. 4822 or associated with the selected visual media capture controller control or label e.g. 4822, and send the captured photo to the identified contact 5052 or, in another embodiment, send to a user-selected contact; if the threshold is exceeded (e.g.
- video preview mode 5044 for a pre-set duration of time (optionally enabling the user to review the video, delete the video, edit or augment the video and change the destination(s) for sending) and, in the event of expiry of said pre-set preview timer, hide or close the video preview interface, identify the contact represented by the selected contact icon or associated with the selected visual media capture controller control or label e.g. 4822, and send the recorded video to the identified contact 5052 or, in another embodiment, send to a selected contact.
- Figure 50 illustrates various embodiments of Figure 49.
- Figure 48 (A) and Figure 49, together with Figure 50 (E), illustrate an electronic device 200, comprising: digital image sensors 244 to capture visual media, e.g. 4820; a display 210 to present the visual media, e.g.
- a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4830 on the display 210 to a user of a client device 200, the user interface 4830 comprising visual media capture controller controls or labels and/or images or icons 4906 (e.g. 4821-4828) includes a plurality of contact icons or labels e.g.
- 4822 from set of presented visual media capture controller controls or icons or labels 4830 (4821-4828); after haptic contact engagement or left side area or right side area and maintaining of haptic contact persistent on left side (e.g. 4831) or right side (e.g. 4829) area of visual media capture controller control (e.g. 4822) or after not changing mode and receiving of direct haptic contact engagement on left side e.g. 4831 or right side e.g. 4829 of particular visual media capture controller control (e.g. 4822), start recording of video and start timer 4932; receive haptic contact release 4935; if threshold not exceeded (e.g.
- 5075 No) then stop the video and stop the timer 5077; select or extract one or more frames or images 5079 from the recorded video or series of images and store the photo 5081; optionally invoke photo preview mode 5083 for a pre-set duration (optionally enabling the user to review the photo, remove it, edit or augment it, and change the destination(s) for sending), and upon expiry of said pre-set preview timer, hide or close the photo preview interface, identify the contact represented by the selected contact icon (e.g. 4822) or associated with the selected visual media capture controller control or label (e.g. 4822), and send the captured photo to the identified contact 5099 or, in another embodiment, to a user-selected contact; if the threshold is exceeded (e.g.
- pre-set duration of the video timer, 5075 Yes), then when the pre-set video timer expires (5085 Yes), store the video 5095; optionally invoke video preview mode 5098 for a pre-set duration (optionally enabling the user to review the video, delete it, edit or augment it, and change the destination(s) for sending), and upon expiry of said pre-set preview timer, hide or close the video preview interface, identify the contact represented by the selected contact icon or associated with the selected visual media capture controller control or label (e.g. 4822), and send the recorded video to the identified contact 5099 or, in another embodiment, to a selected contact. If the threshold is exceeded (e.g.
- haptic contact persists, e.g. 4831 (to take video up to haptic release), swipe to the left side (e.g. 4831) or right side (4829) on a particular visual media capture controller (e.g. 4822); (2) haptic engagement and swipe left (e.g. 4831) or swipe right (e.g. 4829) on a particular visual media capture controller (e.g. 4822); (3) direct haptic contact engagement on the left side or right side area or icon or anywhere on a particular visual media capture controller (e.g. 4822) to capture a photo simultaneously while recording video; and after the video ends via (1) haptic release 4935 (49A) or (2) expiry of the maximum pre-set video duration,
- photo or video preview mode is invoked for a pre-set duration, and upon expiration of said pre-set preview timer, auto-send said recorded video, as well as one or more photo(s) captured while recording the video, to the one or more contacts or groups or one or more types of destinations associated with said particular accessed or selected visual media capture controller control or icon or label (e.g. 4822) (e.g. send to contact [Yogesh]).
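The capture-photo-while-recording flow with auto-send on preview expiry can be modeled as a small state holder; the `Recorder` class and its method names are assumptions for illustration only:

```python
class Recorder:
    """Sketch: record video, grab stills mid-recording, and auto-send
    everything to the controller's associated destination when the
    preview timer expires (all names are illustrative)."""

    def __init__(self, destination):
        self.destination = destination   # e.g. the contact "Yogesh"
        self.frames = []                 # video frames recorded so far
        self.photos = []                 # stills grabbed during recording
        self.sent = []                   # (kind, media, destination) tuples

    def record_frame(self, frame):
        self.frames.append(frame)

    def capture_photo(self):
        # grab the most recent video frame as a still, without stopping video
        self.photos.append(self.frames[-1])

    def on_preview_expired(self):
        # preview timer ran out: auto-send the recorded video and every
        # mid-recording photo to the associated destination
        self.sent.append(("video", list(self.frames), self.destination))
        for photo in self.photos:
            self.sent.append(("photo", photo, self.destination))
```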
- Figure 48 (A) and Figure 49 illustrate an electronic device 200 for taking a still picture while recording a moving picture including video, the electronic device 200 comprising: digital image sensors 244 to capture visual media (e.g. 4820); a display 210 to present the visual media, e.g.
- a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4830 on the display 210 to a user of a client device 200, the user interface 4830 comprising visual media capture controller controls or labels and/or images or icons 4906 (e.g. 4821-4828), including a plurality of contact icons or labels, e.g.
- 4822 from the set of presented visual media capture controller controls or icons or labels 4830 (4821-4828); after haptic contact engagement on the left side area (e.g. 4831) or right side area (e.g. 4829) of a visual media capture controller control (e.g. 4822) and maintaining persistent haptic contact there, or, without changing mode, after receiving direct haptic contact engagement on the left side (e.g. 4831) or right side (e.g. 4829) of a particular visual media capture controller control (e.g. 4822), start recording video and start a timer 4932; in response to the threshold being exceeded (4944 Yes) for starting video recording, the user is enabled to: (1) while haptic contact persists, e.g.
- a sequence-of-photo(s) or video preview mode is invoked for a pre-set interval, and upon expiration of said pre-set preview timer the next photo or video is presented; after expiry of the preview timer of the last presented visual media, auto-send said recorded video, as well as one or more captured photo(s), to the one or more contacts or groups or one or more types of destinations associated with said particular accessed or selected visual media capture controller control or icon or label (e.g. 4822) (e.g. send to contact [Yogesh]).
- in a case where the picture being recorded has the same size as the previously set size of the still picture, the controller generates a control signal instructing the camera picture capturing unit to capture the still picture and memorizes the position where the captured picture is recorded as a recorded picture in the moving picture recorder, wherein the memorized position of the recorded picture is used to search the moving picture recorder for the corresponding picture, which is then decoded and displayed so that, after recording is finished, the user can view the still picture and decide whether to store it.
- the controller reads the captured picture from the memory and transmits it to the image signal processor, which adjusts the captured picture to the size of the recorded moving picture and transmits the adjusted picture to the moving picture recorder.
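Memorizing the position of a still inside the moving-picture stream, rather than storing a copy, can be sketched as follows (class and method names are illustrative, not from the specification):

```python
class MovingPictureRecorder:
    """Sketch: stills captured mid-recording are stored as positions into
    the recorded picture sequence and looked up again after recording."""

    def __init__(self):
        self.pictures = []         # recorded moving-picture frames, in order
        self.still_positions = []  # memorized indices of captured stills

    def record(self, picture):
        self.pictures.append(picture)

    def capture_still(self):
        # the still has the same size as the recorded picture, so only its
        # position in the recording is memorized instead of a copy
        self.still_positions.append(len(self.pictures) - 1)

    def review_stills(self):
        # after recording finishes, search the recorder for each memorized
        # position and return the corresponding picture for display
        return [self.pictures[i] for i in self.still_positions]
```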
- haptic release from visual media capture controller e.g. 4822
- the user can engage haptic contact or tap on the left side icon or left side pre-defined area to capture a photo, or on the right side icon or right side pre-defined area to record video, the video stopping upon (1) expiration of a pre-set timer, (2) a manual tap by the user on the video icon or further haptic contact engagement on the pre-defined area of the visual media capture controller control (e.g. 4822), (3) one or more types of user input sensed via one or more types of sensor(s), or (4) in hold-to-record mode, release of the haptic contact.
- the user can tap on the left or right side or an enabled cross icon to cancel and remove the photo or video and stop sending to the destination(s) or contact(s), or tap on the left or right side or an enabled edit icon on the visual media capture controller (e.g. 4822) to edit or augment the recorded visual media, including selecting overlays, writing text, and applying photo filters.
- haptic contact engagement on the left side pre-defined area, or swipe left, for photo; haptic contact engagement on the right side pre-defined area, or swipe right, for video; if the threshold is not exceeded, use the default or currently enabled (front or back) camera, and if the threshold is exceeded, change the default or currently enabled mode, e.g. if the current mode is front camera mode then change to back camera mode, and vice versa.
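The left/right engagement rule with threshold-based camera toggling might look like this; the threshold constant and function name are assumptions:

```python
HOLD_THRESHOLD_S = 0.5  # assumed hold duration that triggers a camera flip

def resolve_capture(side, hold_s, current_camera):
    """Map a left/right haptic engagement to (mode, camera).

    Left selects photo, right selects video; exceeding the hold
    threshold toggles the currently enabled camera before capturing.
    """
    mode = "photo" if side == "left" else "video"
    camera = current_camera
    if hold_s > HOLD_THRESHOLD_S:
        # threshold exceeded: flip front <-> back before capturing
        camera = "back" if current_camera == "front" else "front"
    return mode, camera
```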
- FIGS. 51 and 52 are slight variations of Figures 49 and 50; the additional component is a 3rd (or further) customized button(s) 5195 in the multi-tasking visual media capture control, explained at 5195 and 5185 and in Figure 48 (B), which adds an additional pre-defined area or button 4854 and enables the user to customize said 3rd button or pre-defined area (e.g. 4854).
- pre-defined area e.g. end side of visual media capture controller control or label and/or icon
- pre-set interface e.g. end side of visual media capture controller control or label and/or icon
- present said pre-set interface(s) or execute function(s), e.g. show the album or gallery or received media items (e.g. Inbox) from the contact(s) or group(s) or destination(s) associated with the visual media capture controller control, or start live broadcasting, etc.
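A configurable third button amounts to a small action registry dispatched on engagement; the registry and the example actions below are purely illustrative:

```python
class ThirdButton:
    """Sketch of a user-configurable 3rd button / pre-defined area."""

    def __init__(self):
        self._actions = {}

    def configure(self, name, fn):
        # associate a named interface or function with the button
        self._actions[name] = fn

    def on_engage(self, name, *args):
        # on haptic engagement, present the pre-set interface or run
        # the pre-set function
        return self._actions[name](*args)

# hypothetical configuration: gallery view and live broadcast actions
btn = ThirdButton()
btn.configure("gallery", lambda contact: f"album of {contact}")
btn.configure("live", lambda: "broadcast started")
```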
- FIG. 51 with 48(B) illustrates an electronic device 200, comprising: digital image sensors 244 to capture visual media (e.g. 4820); a display 210 to present the visual media (e.g. 4840) from the digital image sensors 244; a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210; present a user interface or tray 4855 on the display 210 to a user of a client device 200, the user interface 4855 comprising visual media capture controller controls or labels and/or images or icons 5106 (e.g. 4841-4848), including a plurality of contact icons or labels, e.g.
- digital image sensors 244 to capture visual media e.g. 4820
- a display 210 to present the visual media e.g. 4840 from the digital image sensors 244
- a touch controller 215 to identify haptic contact engagement, haptic contact persistence and haptic contact release on the display 210
- 5144 No) then select or extract one or more frames or images 5155 from the recorded video or series of images 5140 and store the photo 5160; optionally invoke photo preview mode 5168 for a pre-set duration (optionally enabling the user to review the photo, remove it, edit or augment it, and change the destination(s) for sending), and upon expiry of said pre-set preview timer, hide or close the photo preview interface, identify the contact represented by the selected contact icon (e.g. 4841) or associated with the selected visual media capture controller control or label (e.g. 4841), and send the captured photo to the identified contact 5170 or, in another embodiment, to a user-selected contact; if the threshold is exceeded (e.g.
- Figure 51 is similar to Figure 49 and its details are the same as discussed for Figure 49; the only addition is the 3rd button or 3rd pre-defined area (e.g. 4854), 5195 and 5185 (as described above).
- Figure 52 is similar to Figure 50 and its details are the same as discussed for Figure 50.
- after selecting back camera mode and starting the back camera video, the user can engage haptic contact on the 3rd button or pre-defined area 4854 to start a front camera selfie video 4849 and provide commentary on the video 4840 being recorded via the back camera.
- e.g. the user is recording a natural scenery video 4840 at a particular tourist place and can concurrently record a front camera video to provide video comments, reviews, descriptions, or commentary on the scene currently being recorded via the back camera.
- after selecting back camera mode and starting the back camera video, the user can engage haptic contact on the 3rd button or pre-defined area 4854 and capture one or more front camera selfie photo(s) 4849 by tapping or swiping on a particular pre-defined area of the visual media capture controller label (e.g. 4841), to capture the user's expressions during recording of the video 4840 via the back camera.
- e.g. the user is recording a natural scenery video 4840 at a particular tourist place and can concurrently capture one or more photo(s) by tapping on 4841 to capture the user's facial expressions related to the scene currently being recorded via the back camera.
- Figure 4809 illustrates a single multi-tasking visual media capture controller with 2 buttons or 2 pre-defined areas, as discussed above; the difference is that it is not associated with a contact: it enables switching from front camera 4802 to back camera 4801, or from back camera 4805 to front camera 4808, via haptic contact engagement on a pre-defined area, and captures a photo or records video based on the duration of haptic contact engagement and persistence, as discussed above.
- instead of auto-sending to a contact associated with the visual media capture controller, the user has to manually select one or more contact(s) and/or destination(s), or auto-send to pre-set one or more types of contact(s)/group(s) and/or destination(s) associated with said single visual media capture controller.
- photo capture icon and video record icon of standard smartphone camera application or interface.
- The user doesn't have to first change mode by tapping the camera mode change icon, then tap the photo icon to load the photo capture interface and capture a photo, or tap the video icon to load video mode, start recording, and then tap the stop icon to stop the video.
- the user can engage haptic contact on a pre-defined area, e.g. the left side, to change mode to the back camera, or engage haptic contact on a pre-defined area, e.g.
- Figure 4892 illustrates a single multi-tasking visual media capture controller control with 3 buttons or 3 pre-defined areas, as discussed above; the difference is that it is not associated with a contact: it enables switching from front camera 4890 to back camera 4893, or from back camera 4893 to front camera 4890, via haptic contact engagement on a pre-defined area, captures a photo or records video based on the duration of haptic contact engagement and persistence as discussed above, and also provides a 3rd button or pre-defined area on the single multi-tasking visual media capture controller control 4892 that the user can configure or associate with one or more applications, features, interfaces, or functions; upon haptic contact engagement on the 3rd button or 3rd end-side pre-defined area, the associated or pre-set or pre-configured applications, features, or interfaces are presented, or the functions are executed.
- the user can engage haptic contact on the left side pre-defined area to change mode to the back camera, or on the right side pre-defined area to change mode to the front camera; based on the duration of haptic contact engagement and persistence on the single-mode multi-tasking visual media capture controller control or label and/or icon, the user can take a photo or start recording video; in one embodiment a further tap stops and stores the video, in another embodiment the video auto-stops and is auto-stored after a pre-set duration (or the user taps before expiry of said pre-set maximum video duration to stop and store it), and in another embodiment haptic release from said single-mode multi-tasking visual media capture controller control or label and/or icon stops and stores the video.
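The single-mode controller lifecycle above (engage to start, photo below a press threshold, video otherwise, auto-stop at a pre-set maximum) can be condensed into one decision function; both constants are assumed values:

```python
PHOTO_THRESHOLD_S = 0.75  # assumed: shorter presses yield a photo
MAX_VIDEO_S = 10.0        # assumed pre-set maximum video duration

def finish_capture(engaged_for_s):
    """Return what the controller stores when contact ends or the timer expires."""
    if engaged_for_s < PHOTO_THRESHOLD_S:
        # press shorter than the threshold: store a photo
        return "photo"
    # video auto-stops and is stored at the pre-set maximum even if the
    # haptic contact persists longer
    return ("video", min(engaged_for_s, MAX_VIDEO_S))
```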
- User can haptic contact engagement on pre-defined area e.g.
- multi-tasking visual media capture controller discussed in figures 43 to 52 or anywhere else in specification (e.g.
- the user can create or add one or more contact-, connection-, or group-specific MVMCCs by automatically or manually importing phone book contacts; importing from device storage; importing one or more contacts, connections, social connections, or followers from one or more 3rd-party applications, web sites, servers, databases, devices, networks, user accounts, or user profiles by providing login information or via web services, APIs, and SDKs; and adding one or more types of destinations, including sharing, posting, publishing, or storage destinations such as one or more applications, web sites, web pages, interfaces, servers, databases, devices, and networks.
- based on the contacts and/or connections and/or destinations, or on the creation of named group(s) of contacts and/or connections and/or destinations, the system 261 auto-creates contact-name-specific or group-name-specific
- MVMCC labels and/or images, e.g. auto-importing the profile picture of the contact,
- icons presented on the user device 200 display 210 (e.g. 4330), enabling the user to capture or record a front or back camera photo or video and send said visual media to the contact(s) and/or group(s) and/or destination(s) associated with or added to said MVMCC control or label and/or image or icon.
- the user can edit or update the MVMCC label name or change the icon or image (the contact's profile picture).
- the user is enabled to add or remove contacts to/from an existing MVMCC control or label and/or image or icon for sending visual media to the added contacts and/or users and/or one or more types of destinations.
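Auto-creating one MVMCC control per imported contact or group reduces to mapping contact records onto controller entries; the field names used here are assumptions about the contact record shape:

```python
def build_mvmccs(contacts):
    """Create one capture-controller entry per contact or group, labeled
    with its name and profile picture and targeting it as destination."""
    return [
        {
            "label": c["name"],
            # fall back to a default icon when no profile picture exists
            "icon": c.get("profile_picture", "default.png"),
            "destinations": [c["id"]],
        }
        for c in contacts
    ]
```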
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Various embodiments of a system, methods, a platform, a database, a search engine and a device are disclosed, enabling: automatically opening or unlocking a user device, automatically opening a camera display screen, automatically capturing a photo or automatically starting the recording of a video, and automatically opening a media viewer when the user wants to view; applying content access rules and ephemeral or non-ephemeral and real-time settings for one or more destinations and/or sources; enabling the user to search, match, save, bookmark, subscribe to and view one or more object-criteria-specific contents, user-related visual media captured by others subject to the user's privacy settings, and object-model-specific advertisements served in visual media feeds; enabling the media sender to access sender-shared media on the recipient device; enabling various embodiments relating to the display of ephemeral messages and real-time ephemeral messages; enabling an intelligent multi-tasking visual media capture control so that the user can easily capture a front or back camera photo or video or broadcast live, view received media items, and/or access one or more pre-defined interfaces or applications; enabling automatic generation of the user's current status and current activities, and automatic generation of emoticons, emojis and cartoons from front and/or back camera photo(s) and/or video(s) and user data; automatically opening or unlocking a camera device and automatically starting the recording of a parent video and, during recording of said parent video, enabling storing one or more trimmed video(s) and/or back camera and front camera video(s), and/or capturing one or more photo(s), and/or sharing with one or more contact(s) and/or group(s) and/or destination(s); enabling the creation of events such that invited or criteria-specific targeted participants (based, for example, on profile data, a provided object, voice, position or place, status, or members present at the particular place or position) can capture, share and view one or more types of media; enabling an augmented reality platform; enabling a new type of media comprising an augmented reality photo or video that the user can share with others; enabling automated recording and viewing of user reactions; enabling need-characteristics-specific responses and real-time communications for improved matching, quality and services and monetary savings; enabling user-to-user provision and consumption of visual media capture service(s); enabling start/end date- and time-specific presentation of one or more types of contents to enable one or more types of mass user actions (view, buy, participate, react, etc.); enabling user-availability-specific presentation of suggested activities; enabling user following of multiple feed types; enabling identification of user-related keywords; and enabling natural conversation as a communication application.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/336,346 US20220179665A1 (en) | 2017-01-29 | 2021-06-02 | Displaying user related contextual keywords and controls for user selection and storing and associating selected keywords and user interaction with controls data with user |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IB2016057398 | 2016-12-07 | ||
IBPCT/IB2016/057398 | 2016-12-07 | ||
IBPCT/IB2017/050468 | 2017-01-29 | ||
IBPCT/IB2017/050468 | 2017-01-29 |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
IBPCT/IB2017/050468 Continuation | 2016-12-07 | 2017-01-29 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/336,346 Continuation US20220179665A1 (en) | 2017-01-29 | 2021-06-02 | Displaying user related contextual keywords and controls for user selection and storing and associating selected keywords and user interaction with controls data with user |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018104834A1 true WO2018104834A1 (fr) | 2018-06-14 |
Family
ID=62490812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2017/057578 WO2018104834A1 (fr) | 2016-12-07 | 2017-12-01 | Real-time, ephemeral, single-mode, group and automatic capture of visual media, stories, auto status, followed feed types, mass actions, suggested activities, AR media and platform |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018104834A1 (fr) |
Cited By (63)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109101109A (zh) * | 2018-08-03 | 2018-12-28 | 百度在线网络技术(北京)有限公司 | 基于用户动作的ar设备的控制方法、装置 |
CN109177577A (zh) * | 2018-09-20 | 2019-01-11 | 京东方科技集团股份有限公司 | 智能书签、移动终端、交互方法及其装置和存储介质 |
CN109543516A (zh) * | 2018-10-16 | 2019-03-29 | 深圳壹账通智能科技有限公司 | 签约意向判断方法、装置、计算机设备和存储介质 |
CN109920065A (zh) * | 2019-03-18 | 2019-06-21 | 腾讯科技(深圳)有限公司 | 资讯的展示方法、装置、设备及存储介质 |
CN109979462A (zh) * | 2019-03-21 | 2019-07-05 | 广东小天才科技有限公司 | 一种结合上下文语境获取意图的方法和系统 |
US10375009B1 (en) | 2018-10-11 | 2019-08-06 | Richard Fishman | Augmented reality based social network with time limited posting |
CN110213667A (zh) * | 2019-04-16 | 2019-09-06 | 威比网络科技(上海)有限公司 | 在线视频交互的网络保障方法、系统、设备及存储介质 |
CN110534094A (zh) * | 2019-07-31 | 2019-12-03 | 大众问问(北京)信息科技有限公司 | 一种语音交互方法、装置及设备 |
WO2020023563A1 (fr) * | 2018-07-24 | 2020-01-30 | John Bruno | Système de commerce en consortium géographique de commerçants pour commerce contextuel |
CN110753102A (zh) * | 2019-10-15 | 2020-02-04 | 浙江口碑网络技术有限公司 | 基于地磁的服务信息推送方法及装置 |
CN110933508A (zh) * | 2019-12-09 | 2020-03-27 | 北京奇艺世纪科技有限公司 | 一种视频播放方法、装置及电子设备 |
CN111163274A (zh) * | 2020-01-21 | 2020-05-15 | 海信视像科技股份有限公司 | 一种视频录制方法及显示设备 |
WO2020101984A1 (fr) * | 2018-11-14 | 2020-05-22 | Microsoft Technology Licensing, Llc | Compilateurs vidéo logiciels mis en œuvre dans des systèmes informatiques |
CN111447468A (zh) * | 2019-09-25 | 2020-07-24 | 来享享网络科技股份有限公司 | 一种信息共享系统、方法及非暂时性机器可读媒体 |
CN111782878A (zh) * | 2020-07-06 | 2020-10-16 | 聚好看科技股份有限公司 | 服务器、显示设备及其视频搜索排序方法 |
CN111833864A (zh) * | 2019-04-22 | 2020-10-27 | 北京京东尚科信息技术有限公司 | 信息处理方法、装置、系统和可读介质 |
CN111918016A (zh) * | 2020-07-24 | 2020-11-10 | 武汉烽火众智数字技术有限责任公司 | 一种视频通话中高效的实时画面标注方法 |
US10873558B2 (en) | 2017-12-14 | 2020-12-22 | Facebook, Inc. | Systems and methods for sharing content |
US20200410548A1 (en) * | 2019-06-26 | 2020-12-31 | Interactive Offers Llc | Method and system for commerce and advertising |
CN112508717A (zh) * | 2020-12-01 | 2021-03-16 | 中国人寿保险股份有限公司 | 一种影像信息的审核方法、装置、电子设备及存储介质 |
CN112507220A (zh) * | 2020-12-07 | 2021-03-16 | 中国平安人寿保险股份有限公司 | 信息推送方法、装置及介质 |
US10949671B2 (en) * | 2019-08-03 | 2021-03-16 | VIRNECT inc. | Augmented reality system capable of manipulating an augmented reality object and an augmented reality method using the same |
CN112569607A (zh) * | 2020-12-11 | 2021-03-30 | 腾讯科技(深圳)有限公司 | 预购道具的显示方法、装置、设备及介质 |
CN112639682A (zh) * | 2018-08-24 | 2021-04-09 | 脸谱公司 | 在增强现实环境中的多设备地图构建和协作 |
CN112703711A (zh) * | 2018-09-10 | 2021-04-23 | 天梭股份有限公司 | 用于在移动设备之间进行监视或跟踪的方法 |
CN112717404A (zh) * | 2021-01-25 | 2021-04-30 | 腾讯科技(深圳)有限公司 | 虚拟对象的移动处理方法、装置、电子设备及存储介质 |
CN113312266A (zh) * | 2021-06-11 | 2021-08-27 | 成都精灵云科技有限公司 | 基于自动化测试快速生成测试拓扑结构图的系统及其方法 |
CN113359986A (zh) * | 2021-06-03 | 2021-09-07 | 北京市商汤科技开发有限公司 | 增强现实数据展示方法、装置、电子设备及存储介质 |
WO2021195404A1 (fr) * | 2020-03-26 | 2021-09-30 | Snap Inc. | Sélection, basée sur la voix, d'un contenu de réalité augmentée pour des objets détectés |
US11153665B2 (en) | 2020-02-26 | 2021-10-19 | The Toronto-Dominion Bank | Systems and methods for controlling display of supplementary data for video content |
CN113535299A (zh) * | 2021-07-08 | 2021-10-22 | 聚好看科技股份有限公司 | 服务器、显示设备以及健康管理方法 |
US11157558B2 (en) | 2020-02-26 | 2021-10-26 | The Toronto-Dominion Bank | Systems and methods for controlling display of video content in an online media platform |
WO2022005841A1 (fr) * | 2020-06-29 | 2022-01-06 | Snap Inc. | Contenu de réalité augmentée basé sur un déplacement pour des commentaires |
CN113900609A (zh) * | 2021-09-24 | 2022-01-07 | 当趣网络科技(杭州)有限公司 | 大屏终端交互方法、大屏终端及计算机可读存储介质 |
CN113986805A (zh) * | 2021-10-26 | 2022-01-28 | 北京小米移动软件有限公司 | 计时方法、装置和计算机可读存储介质 |
US20220036079A1 (en) * | 2019-03-29 | 2022-02-03 | Snap Inc. | Context based media curation |
CN114041283A (zh) * | 2019-02-20 | 2022-02-11 | 谷歌有限责任公司 | 利用事件前和事件后输入流来接洽自动化助理 |
CN114155477A (zh) * | 2022-02-08 | 2022-03-08 | 成都考拉悠然科技有限公司 | 一种基于平均教师模型的半监督视频段落定位方法 |
US11303601B2 (en) | 2017-12-14 | 2022-04-12 | Meta Platforms, Inc. | Systems and methods for sharing content |
US11314943B2 (en) * | 2018-10-05 | 2022-04-26 | Capital One Services, Llc | Typifying emotional indicators for digital messaging |
US20220164774A1 (en) * | 2020-11-23 | 2022-05-26 | C2 Monster Co., Ltd. | Project management system with capture review transmission function and method thereof |
US20220182457A1 (en) * | 2019-11-28 | 2022-06-09 | Ricoh Company, Ltd. | Information processing system, information processing apparatus, and information processing method |
CN114666625A (zh) * | 2022-04-08 | 2022-06-24 | 海南车智易通信息技术有限公司 | 一种热门主播列表的生成方法、直播系统及计算设备 |
US11416360B2 (en) | 2019-10-09 | 2022-08-16 | Fujifilm Medical Systems U.S.A., Inc. | Systems and methods for detecting errors in artificial intelligence engines |
US11509621B2 (en) * | 2018-12-05 | 2022-11-22 | Snap Inc. | UI and devices for ranking user generated content |
US11516270B1 (en) | 2021-08-20 | 2022-11-29 | T-Mobile Usa, Inc. | Network protocol for enabling enhanced features for media content |
US20220391915A1 (en) * | 2021-06-07 | 2022-12-08 | Toshiba Tec Kabushiki Kaisha | Information processing system, information processing device, and control method thereof |
US20220404863A1 (en) * | 2018-01-12 | 2022-12-22 | Julio Cesar Castañeda | Eyewear device with fingerprint sensor for user input |
US20230017181A1 (en) * | 2019-08-29 | 2023-01-19 | Rovi Guides, Inc. | Systems and methods for generating personalized content |
WO2023024880A1 (fr) * | 2021-08-25 | 2023-03-02 | 腾讯科技(深圳)有限公司 | Procédé et appareil d'affichage d'expression dans un scénario virtuel, dispositif et support |
CN116134797A (zh) * | 2020-09-16 | 2023-05-16 | 斯纳普公司 | 增强现实自动反应 |
US20230196419A1 (en) * | 2020-06-22 | 2023-06-22 | Karya Property Management, Llc | Review and ticket management system and method |
CN116450919A (zh) * | 2022-01-06 | 2023-07-18 | 腾讯科技(深圳)有限公司 | 信息获取方法、图形码生成方法、装置、终端及介质 |
US11711493B1 (en) * | 2021-03-04 | 2023-07-25 | Meta Platforms, Inc. | Systems and methods for ephemeral streaming spaces |
US11798550B2 (en) | 2020-03-26 | 2023-10-24 | Snap Inc. | Speech-based selection of augmented reality content |
CN117830910A (zh) * | 2024-03-05 | 2024-04-05 | 沈阳云翠通讯科技有限公司 | 一种用于视频检索的自动混剪视频方法、系统及存储介质 |
CN118013090A (zh) * | 2024-02-04 | 2024-05-10 | 国网经济技术研究院有限公司 | 针对电网工程勘测数据的快速检索方法及系统 |
US11983461B2 (en) | 2020-03-26 | 2024-05-14 | Snap Inc. | Speech-based selection of augmented reality content for detected objects |
EP4369266A1 (fr) * | 2022-11-10 | 2024-05-15 | Canon Kabushiki Kaisha | Appareil de traitement d'image, procédé de traitement d'image et programme |
US12028300B2 (en) | 2020-05-29 | 2024-07-02 | Huawei Technologies Co., Ltd. | Method, apparatus, and system for sending pictures after thumbnail selections |
EP4172792A4 (fr) * | 2020-06-25 | 2024-07-03 | Snap Inc | Mise à jour d'un état d'avatar dans un système de messagerie |
US12050717B1 (en) * | 2022-09-13 | 2024-07-30 | CAPEIT.ai, inc. | Method and system for mapping knowledge objects for data compliance |
US12095846B2 (en) | 2019-07-19 | 2024-09-17 | Snap Inc. | On-demand camera sharing over a network |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140096077A1 (en) * | 2012-09-28 | 2014-04-03 | Michal Jacob | System and method for inferring user intent based on eye movement during observation of a display screen |
US20140145935A1 (en) * | 2012-11-27 | 2014-05-29 | Sebastian Sztuk | Systems and methods of eye tracking control on mobile device |
2017
- 2017-12-01 WO PCT/IB2017/057578 patent/WO2018104834A1/fr active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140096077A1 (en) * | 2012-09-28 | 2014-04-03 | Michal Jacob | System and method for inferring user intent based on eye movement during observation of a display screen |
US20140145935A1 (en) * | 2012-11-27 | 2014-05-29 | Sebastian Sztuk | Systems and methods of eye tracking control on mobile device |
Non-Patent Citations (1)
Title |
---|
CARMELO PINO ET AL.: "Improving Mobile Device Interaction by Eye Tracking Analysis", PROCEEDINGS OF THE FEDERATED CONFERENCE ON COMPUTER SCIENCE AND INFORMATION SYSTEMS, 9 September 2012 (2012-09-09), pages 1199 - 1202, XP032267249, ISBN: 978-83-60810-48-4 * |
Cited By (95)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11303601B2 (en) | 2017-12-14 | 2022-04-12 | Meta Platforms, Inc. | Systems and methods for sharing content |
US10873558B2 (en) | 2017-12-14 | 2020-12-22 | Facebook, Inc. | Systems and methods for sharing content |
US11743223B2 (en) | 2017-12-14 | 2023-08-29 | Meta Platforms, Inc. | Systems and methods for sharing content |
US11892710B2 (en) * | 2018-01-12 | 2024-02-06 | Snap Inc. | Eyewear device with fingerprint sensor for user input |
US20220404863A1 (en) * | 2018-01-12 | 2022-12-22 | Julio Cesar Castañeda | Eyewear device with fingerprint sensor for user input |
WO2020023563A1 (fr) * | 2018-07-24 | 2020-01-30 | John Bruno | Système de commerce en consortium géographique de commerçants pour commerce contextuel |
US11151625B2 (en) | 2018-07-24 | 2021-10-19 | John Bruno | Geographical merchant consortium commerce system for contextual commerce |
CN109101109A (zh) * | 2018-08-03 | 2018-12-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Control method and apparatus for an AR device based on user actions |
CN112639682A (zh) * | 2018-08-24 | 2021-04-09 | Facebook, Inc. | Multi-device map building and collaboration in augmented reality environments |
CN112703711B (zh) * | 2018-09-10 | 2023-11-28 | Tissot SA | Method for monitoring or tracking between mobile devices |
CN112703711A (zh) * | 2018-09-10 | 2021-04-23 | Tissot SA | Method for monitoring or tracking between mobile devices |
CN109177577A (zh) * | 2018-09-20 | 2019-01-11 | BOE Technology Group Co., Ltd. | Smart bookmark, mobile terminal, interaction method and apparatus therefor, and storage medium |
US11314943B2 (en) * | 2018-10-05 | 2022-04-26 | Capital One Services, Llc | Typifying emotional indicators for digital messaging |
US10375009B1 (en) | 2018-10-11 | 2019-08-06 | Richard Fishman | Augmented reality based social network with time limited posting |
CN109543516A (zh) * | 2018-10-16 | 2019-03-29 | OneConnect Smart Technology Co., Ltd. (Shenzhen) | Signing intention determination method and apparatus, computer device, and storage medium |
WO2020101984A1 (fr) * | 2018-11-14 | 2020-05-22 | Microsoft Technology Licensing, Llc | Software video compilers implemented in computing systems |
US11876770B2 (en) | 2018-12-05 | 2024-01-16 | Snap Inc. | UI and devices for ranking user generated content |
US11509621B2 (en) * | 2018-12-05 | 2022-11-22 | Snap Inc. | UI and devices for ranking user generated content |
US11575639B2 (en) | 2018-12-05 | 2023-02-07 | Snap Inc. | UI and devices for incenting user contribution to social network content |
CN114041283A (zh) * | 2019-02-20 | 2022-02-11 | Google LLC | Utilizing pre-event and post-event input streams to engage an automated assistant |
CN114041283B (zh) * | 2019-02-20 | 2024-06-07 | Google LLC | Utilizing pre-event and post-event input streams to engage an automated assistant |
CN109920065B (zh) * | 2019-03-18 | 2023-05-30 | Tencent Technology (Shenzhen) Co., Ltd. | Information display method, apparatus, device, and storage medium |
CN109920065A (zh) * | 2019-03-18 | 2019-06-21 | Tencent Technology (Shenzhen) Co., Ltd. | Information display method, apparatus, device, and storage medium |
CN109979462A (zh) * | 2019-03-21 | 2019-07-05 | Guangdong Genius Technology Co., Ltd. | Method and system for acquiring intent by combining context |
US20220036079A1 (en) * | 2019-03-29 | 2022-02-03 | Snap Inc. | Context based media curation |
CN110213667A (zh) * | 2019-04-16 | 2019-09-06 | Weibi Network Technology (Shanghai) Co., Ltd. | Network assurance method, system, device, and storage medium for online video interaction |
CN111833864A (zh) * | 2019-04-22 | 2020-10-27 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Information processing method, apparatus, system, and readable medium |
CN111833864B (zh) * | 2019-04-22 | 2024-04-16 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Information processing method, apparatus, system, and readable medium |
US20200410548A1 (en) * | 2019-06-26 | 2020-12-31 | Interactive Offers Llc | Method and system for commerce and advertising |
US12095846B2 (en) | 2019-07-19 | 2024-09-17 | Snap Inc. | On-demand camera sharing over a network |
CN110534094B (zh) * | 2019-07-31 | 2022-05-31 | Dazhong Wenwen (Beijing) Information Technology Co., Ltd. | Voice interaction method, apparatus, and device |
CN110534094A (zh) * | 2019-07-31 | 2019-12-03 | Dazhong Wenwen (Beijing) Information Technology Co., Ltd. | Voice interaction method, apparatus, and device |
US10949671B2 (en) * | 2019-08-03 | 2021-03-16 | VIRNECT inc. | Augmented reality system capable of manipulating an augmented reality object and an augmented reality method using the same |
US20230017181A1 (en) * | 2019-08-29 | 2023-01-19 | Rovi Guides, Inc. | Systems and methods for generating personalized content |
US11922112B2 (en) * | 2019-08-29 | 2024-03-05 | Rovi Guides, Inc. | Systems and methods for generating personalized content |
CN111447468B (zh) * | 2019-09-25 | 2023-04-25 | Laixiangxiang Network Technology Co., Ltd. | Information sharing system and method, and non-transitory machine-readable medium |
CN111447468A (zh) * | 2019-09-25 | 2020-07-24 | Laixiangxiang Network Technology Co., Ltd. | Information sharing system and method, and non-transitory machine-readable medium |
US11416360B2 (en) | 2019-10-09 | 2022-08-16 | Fujifilm Medical Systems U.S.A., Inc. | Systems and methods for detecting errors in artificial intelligence engines |
CN110753102B (zh) * | 2019-10-15 | 2021-01-05 | Zhejiang Koubei Network Technology Co., Ltd. | Geomagnetism-based service information pushing method and apparatus |
CN110753102A (zh) * | 2019-10-15 | 2020-02-04 | Zhejiang Koubei Network Technology Co., Ltd. | Geomagnetism-based service information pushing method and apparatus |
US20220182457A1 (en) * | 2019-11-28 | 2022-06-09 | Ricoh Company, Ltd. | Information processing system, information processing apparatus, and information processing method |
US11924291B2 (en) * | 2019-11-28 | 2024-03-05 | Ricoh Company, Ltd. | Information processing system, information processing apparatus, and information processing method |
CN110933508B (zh) * | 2019-12-09 | 2022-02-01 | Beijing QIYI Century Science & Technology Co., Ltd. | Video playback method and apparatus, and electronic device |
CN110933508A (zh) * | 2019-12-09 | 2020-03-27 | Beijing QIYI Century Science & Technology Co., Ltd. | Video playback method and apparatus, and electronic device |
CN111163274B (zh) * | 2020-01-21 | 2022-04-22 | Hisense Visual Technology Co., Ltd. | Video recording method and display device |
CN111163274A (zh) * | 2020-01-21 | 2020-05-15 | Hisense Visual Technology Co., Ltd. | Video recording method and display device |
US11157558B2 (en) | 2020-02-26 | 2021-10-26 | The Toronto-Dominion Bank | Systems and methods for controlling display of video content in an online media platform |
US11886501B2 (en) | 2020-02-26 | 2024-01-30 | The Toronto-Dominion Bank | Systems and methods for controlling display of video content in an online media platform |
US12096092B2 (en) | 2020-02-26 | 2024-09-17 | The Toronto-Dominion Bank | Systems and methods for controlling display of supplementary data for video content |
US11716518B2 (en) | 2020-02-26 | 2023-08-01 | The Toronto-Dominion Bank | Systems and methods for controlling display of supplementary data for video content |
US11153665B2 (en) | 2020-02-26 | 2021-10-19 | The Toronto-Dominion Bank | Systems and methods for controlling display of supplementary data for video content |
WO2021195404A1 (fr) * | 2020-03-26 | 2021-09-30 | Snap Inc. | Speech-based selection of augmented reality content for detected objects |
US11798550B2 (en) | 2020-03-26 | 2023-10-24 | Snap Inc. | Speech-based selection of augmented reality content |
US11983461B2 (en) | 2020-03-26 | 2024-05-14 | Snap Inc. | Speech-based selection of augmented reality content for detected objects |
US12028300B2 (en) | 2020-05-29 | 2024-07-02 | Huawei Technologies Co., Ltd. | Method, apparatus, and system for sending pictures after thumbnail selections |
US11954712B2 (en) * | 2020-06-22 | 2024-04-09 | Karya Property Management, Llc | Review and ticket management system and method |
US20230196419A1 (en) * | 2020-06-22 | 2023-06-22 | Karya Property Management, Llc | Review and ticket management system and method |
EP4172792A4 (fr) * | 2020-06-25 | 2024-07-03 | Snap Inc | Updating an avatar status in a messaging system |
WO2022005841A1 (fr) * | 2020-06-29 | 2022-01-06 | Snap Inc. | Travel-based augmented reality content for reviews |
CN111782878A (zh) * | 2020-07-06 | 2020-10-16 | Juhaokan Technology Co., Ltd. | Server, display device, and video search sorting method therefor |
CN111782878B (zh) * | 2020-07-06 | 2023-09-19 | Juhaokan Technology Co., Ltd. | Server, display device, and video search sorting method therefor |
CN111918016A (zh) * | 2020-07-24 | 2020-11-10 | Wuhan Fiberhome Zhongzhi Digital Technology Co., Ltd. | Efficient real-time image annotation method for video calls |
CN116134797A (zh) * | 2020-09-16 | 2023-05-16 | Snap Inc. | Augmented reality automatic reactions |
US20220164774A1 (en) * | 2020-11-23 | 2022-05-26 | C2 Monster Co., Ltd. | Project management system with capture review transmission function and method thereof |
US11978018B2 (en) * | 2020-11-23 | 2024-05-07 | Memorywalk Co, Ltd | Project management system with capture review transmission function and method thereof |
CN112508717A (zh) * | 2020-12-01 | 2021-03-16 | China Life Insurance Co., Ltd. | Image information review method and apparatus, electronic device, and storage medium |
CN112507220A (zh) * | 2020-12-07 | 2021-03-16 | Ping An Life Insurance Company of China, Ltd. | Information pushing method, apparatus, and medium |
CN112569607B (zh) * | 2020-12-11 | 2022-07-26 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus, device, and medium for displaying pre-purchased props |
CN112569607A (zh) * | 2020-12-11 | 2021-03-30 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus, device, and medium for displaying pre-purchased props |
CN112717404A (zh) * | 2021-01-25 | 2021-04-30 | Tencent Technology (Shenzhen) Co., Ltd. | Movement processing method and apparatus for a virtual object, electronic device, and storage medium |
CN112717404B (zh) * | 2021-01-25 | 2022-11-29 | Tencent Technology (Shenzhen) Co., Ltd. | Movement processing method and apparatus for a virtual object, electronic device, and storage medium |
US11711493B1 (en) * | 2021-03-04 | 2023-07-25 | Meta Platforms, Inc. | Systems and methods for ephemeral streaming spaces |
CN113359986A (zh) * | 2021-06-03 | 2021-09-07 | Beijing SenseTime Technology Development Co., Ltd. | Augmented reality data presentation method and apparatus, electronic device, and storage medium |
US20220391915A1 (en) * | 2021-06-07 | 2022-12-08 | Toshiba Tec Kabushiki Kaisha | Information processing system, information processing device, and control method thereof |
US12062053B2 (en) * | 2021-06-07 | 2024-08-13 | Toshiba Tec Kabushiki Kaisha | Information processing system, purchase registration device, and control method thereof |
CN113312266B (zh) * | 2021-06-11 | 2023-09-15 | Chengdu Jinglingyun Technology Co., Ltd. | System and method for quickly generating a test topology diagram based on automated testing |
CN113312266A (zh) * | 2021-06-11 | 2021-08-27 | Chengdu Jinglingyun Technology Co., Ltd. | System and method for quickly generating a test topology diagram based on automated testing |
CN113535299A (zh) * | 2021-07-08 | 2021-10-22 | Juhaokan Technology Co., Ltd. | Server, display device, and health management method |
CN113535299B (zh) * | 2021-07-08 | 2024-02-27 | Juhaokan Technology Co., Ltd. | Server, display device, and health management method |
US11924261B2 (en) | 2021-08-20 | 2024-03-05 | T-Mobile Usa, Inc. | Network protocol for enabling enhanced features for media content |
US11516270B1 (en) | 2021-08-20 | 2022-11-29 | T-Mobile Usa, Inc. | Network protocol for enabling enhanced features for media content |
WO2023024880A1 (fr) * | 2021-08-25 | 2023-03-02 | Tencent Technology (Shenzhen) Co., Ltd. | Expression display method and apparatus in a virtual scene, device, and medium |
CN113900609B (zh) * | 2021-09-24 | 2023-09-29 | Dangqu Network Technology (Hangzhou) Co., Ltd. | Large-screen terminal interaction method, large-screen terminal, and computer-readable storage medium |
CN113900609A (zh) * | 2021-09-24 | 2022-01-07 | Dangqu Network Technology (Hangzhou) Co., Ltd. | Large-screen terminal interaction method, large-screen terminal, and computer-readable storage medium |
CN113986805A (zh) * | 2021-10-26 | 2022-01-28 | Beijing Xiaomi Mobile Software Co., Ltd. | Timing method and apparatus, and computer-readable storage medium |
CN116450919A (zh) * | 2022-01-06 | 2023-07-18 | Tencent Technology (Shenzhen) Co., Ltd. | Information acquisition method, graphic code generation method, apparatus, terminal, and medium |
CN114155477B (zh) * | 2022-02-08 | 2022-04-29 | Chengdu Koala Youran Technology Co., Ltd. | Semi-supervised video paragraph localization method based on a mean teacher model |
CN114155477A (zh) * | 2022-02-08 | 2022-03-08 | Chengdu Koala Youran Technology Co., Ltd. | Semi-supervised video paragraph localization method based on a mean teacher model |
CN114666625A (zh) * | 2022-04-08 | 2022-06-24 | Hainan Chezhi Yitong Information Technology Co., Ltd. | Method for generating a list of popular livestreamers, livestreaming system, and computing device |
CN114666625B (zh) * | 2022-04-08 | 2023-12-01 | Hainan Chezhi Yitong Information Technology Co., Ltd. | Method for generating a list of popular livestreamers, livestreaming system, and computing device |
US12050717B1 (en) * | 2022-09-13 | 2024-07-30 | CAPEIT.ai, inc. | Method and system for mapping knowledge objects for data compliance |
EP4369266A1 (fr) * | 2022-11-10 | 2024-05-15 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
CN118013090A (zh) * | 2024-02-04 | 2024-05-10 | State Grid Economic and Technological Research Institute Co., Ltd. | Fast retrieval method and system for power grid engineering survey data |
CN117830910B (zh) * | 2024-03-05 | 2024-05-31 | Shenyang Yuncui Communication Technology Co., Ltd. | Automatic video mashup method, system, and storage medium for video retrieval |
CN117830910A (zh) * | 2024-03-05 | 2024-04-05 | Shenyang Yuncui Communication Technology Co., Ltd. | Automatic video mashup method, system, and storage medium for video retrieval |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220179665A1 (en) | Displaying user related contextual keywords and controls for user selection and storing and associating selected keywords and user interaction with controls data with user | |
WO2018104834A1 (fr) | Real-time, ephemeral, single mode, group and auto taking visual media, stories, auto status, following feed types, mass actions, suggested activities, AR media and platform | |
Ghose | TAP: Unlocking the mobile economy | |
US11201981B1 (en) | System for notification of user accessibility of curated location-dependent content in an augmented estate | |
US20220374849A1 (en) | Graphical user interface for making payment to selected place on map or selected place in list | |
US10740804B2 (en) | Systems, methods and apparatuses of seamless integration of augmented, alternate, virtual, and/or mixed realities with physical realities for enhancement of web, mobile and/or other digital experiences | |
Frost et al. | E-marketing | |
CN109416692B (zh) | Method for making and distributing one or more customized media-centric products | |
Puthussery | Digital marketing: an overview | |
WO2019171128A1 (fr) | Photo-actionable and ephemeral multi-page photo filters with controls and on-media advertisement, automatically added external content, automatic feed scrolling, ad-post-based template, and reaction actions and controls on recognized objects in a photo or video | |
US20180032997A1 (en) | System, method, and computer program product for determining whether to prompt an action by a platform in connection with a mobile device | |
Ash et al. | Landing page optimization: The definitive guide to testing and tuning for conversions | |
US9191238B2 (en) | Virtual notes in a reality overlay | |
Kreutzer et al. | Digital Darwinism: Branding and business models in jeopardy | |
US20120209748A1 (en) | Devices, systems, and methods for providing gift selection and gift redemption services in an e-commerce environment over a communication network | |
CN113271480A (zh) | Computer processing method and system for providing customized entertainment content | |
Heinemann et al. | Social-Local-Mobile | |
Schadler et al. | The mobile mind shift: Engineer your business to win in the mobile moment | |
US11785161B1 (en) | System for user accessibility of tagged curated augmented reality content | |
US20120295542A1 (en) | System for creating web based applications linked to rfid tags | |
US20230252540A1 (en) | User applications store and connecting, registering, following with and synchronizing or accessing user data of user applications from/to parent application and other user applications | |
US11222361B2 (en) | Location-based book identification | |
US20230177259A1 (en) | System and Method of Annotating Transmitted and Posted Images | |
Swilley | Mobile commerce: How it contrasts, challenges, and enhances electronic commerce | |
US11876941B1 (en) | Clickable augmented reality content manager, system, and network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17879496 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 17879496 Country of ref document: EP Kind code of ref document: A1 |