WO2016179087A1 - Personalized image-based communication on mobile platforms - Google Patents


Info

Publication number
WO2016179087A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
user
linked
content item
text
Prior art date
Application number
PCT/US2016/030402
Other languages
French (fr)
Inventor
David Andrew RITCH
Antony Stewart RITCH
Original Assignee
Ink Corp.
Priority date
Filing date
Publication date
Application filed by Ink Corp.
Publication of WO2016179087A1
Priority to US15/799,897 (published as US20180054405A1)

Classifications

    • H04L51/08: Annexed information, e.g. attachments (user-to-user messaging in packet-switching networks, characterised by the inclusion of specific contents)
    • H04L51/10: Multimedia information (user-to-user messaging in packet-switching networks, characterised by the inclusion of specific contents)
    • G06F16/48: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F3/0236: Character input methods using selection techniques to select from displayed items
    • G06F3/0237: Character input methods using prediction or retrieval techniques
    • G06F3/0238: Programmable keyboards
    • G06F3/04886: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area into independently controllable areas, e.g. virtual keyboards or menus
    • G06F40/129: Handling non-Latin characters, e.g. kana-to-kanji conversion
    • G06F40/274: Converting codes to words; guess-ahead of partial word inputs
    • H04M1/7243: User interfaces specially adapted for cordless or mobile telephones, with interactive means for internal management of messages
    • H04M1/72439: User interfaces specially adapted for cordless or mobile telephones, for image or video messaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Methods and systems described in this disclosure are directed to creating linked images for inclusion in a text message sent to other user(s) via a text messaging application on a user's mobile device. A linked image is generated by creating links between one or more images and a content item such as a word, an expression, a collection of characters, or an emoji. Linked images are meant to be a visual substitute for, or representation of, a content item. The images used in creating linked images can be obtained from a variety of sources. Linked images can be saved in the form of a library or a folder on a user's mobile device. The present disclosure facilitates multiple linked images for a single content item, thus providing the choice of representing a word or an emoji by any of multiple linked images.

Description

PERSONALIZED IMAGE-BASED COMMUNICATION ON MOBILE PLATFORMS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No. 62/291,564, entitled "Personalized Image-Based Communication On Mobile Platforms," filed on February 5, 2016, and to U.S. Provisional Patent Application No. 62/155,537, entitled "Image Tagging For Mobile Messaging," filed on May 1, 2015, the contents of each of which are incorporated by reference in their entirety for all purposes.
BACKGROUND
[0002] Current image-based communication options, whilst popular, are very difficult to personalize whilst maintaining the speed at which users communicate. Commercially available specialized types of images such as emojis and digital stickers are limited in their styles and number and cannot be personalized by a user. Emojis are also represented differently depending on the software and hardware of the system on which they are displayed. In some situations, if a first user communicating with a second user sends an emoji via text messaging or multimedia messaging, the emoji sent by the first user can change appearance upon receipt by the second user.
[0003] Images are one of the primary ways in which people express themselves. Collectively they can represent what people like, whom they know, what they wear, how they want the world to see them and how they see themselves. Users today have large quantities of images saved on various platforms (e.g., phone, online collaborative spaces, photo sharing platforms, photo backup platforms, etc.) which can be difficult to access at the speed at which users communicate. This difficulty can lead to standardized and non-personalized communications.
[0004] An opportunity exists to create a system through which users can quickly create, classify, acquire, access and distribute images for use in conducting personalized image-based communication via a variety of mobile platforms, applications and devices.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Fig. 1 illustrates a block diagram showing a sequence of steps demonstrating how ink sets are generated.
[0006] Fig. 2 illustrates an example starting point for a text messaging session.
[0007] Fig. 3 illustrates an example scenario where a user enters content items corresponding to a linked image.
[0008] Fig. 4 illustrates an example scenario of a text message sent by a user and including a linked image.
[0009] Fig. 5 illustrates an example scenario of a text message received by a user and including a linked image.
[0010] Fig. 6 illustrates an example scenario of the text linked to an image having been revealed.
[0011] Fig. 7 illustrates a display of a conversation, in which the recipient of a message inputs text to respond to a message.
[0012] Fig. 8 illustrates an example scenario of the text linked to an image having been revealed over multiple images.
[0013] Fig. 9 illustrates an example scenario of enlarging an image included within a message received.
[0014] Fig. 10 illustrates an example scenario of a user linking text to an image.
[0015] Fig. 11 illustrates the various options available for a user to link an image to text.
[0016] Fig. 12 illustrates an example scenario of a user linking text to an image using the "take photo" option.
[0017] Fig. 13 illustrates an example scenario of a user cropping an image.
[0018] Fig. 14 illustrates an example scenario of an image ready to be linked to text.
[0019] Fig. 15 illustrates an example scenario in which the text input has been converted to the linked image.
[0020] Fig. 16 illustrates an example scenario of a user linking text to an image using the "image search" option.
[0021] Fig. 17 illustrates an example of the web search results from the word "run."
[0022] Fig. 18 illustrates an image a user has chosen to be linked from the web search results.
[0023] Fig. 19 illustrates an example of an image ready to be linked to text.
[0024] Fig. 20 illustrates an example scenario in which the text has been converted to the linked image from the web image search.
[0025] Fig. 21 illustrates an example scenario of a user linking text to an image using the "photo library" option.
[0026] Fig. 22 illustrates an example page from a user's photo library.
[0027] Fig. 23 illustrates an example of an image that a user has chosen from their photo library.
[0028] Fig. 24 illustrates an example scenario in which the text has been converted to the linked image from a user's photo library.
[0029] Fig. 25 illustrates an example scenario in which a user links an image to text by first choosing/creating an image.
[0030] Fig. 26 illustrates an example scenario in which a user is offered options for adding an image.
[0031] Fig. 27 illustrates an example scenario in which an image has been added using the "take photo" option.
[0032] Fig. 28 illustrates an example scenario in which a user inputs text.
[0033] Fig. 29 illustrates an example scenario of the linked image inserted within the compose message field.
[0034] Fig. 30 illustrates an example scenario in which a user continues typing a message.
[0035] Fig. 31 illustrates an example scenario of a message having been sent.
[0036] Fig. 32 illustrates an example scenario in which the text linked to the image is revealed within the message.
[0037] Fig. 33 illustrates an example of a user starting a message.
[0038] Fig. 34 illustrates an example of a blank web image search screen.
[0039] Figs. 35A and 35B illustrate examples of web image search results.
[0040] Fig. 36 illustrates an example scenario of an image added using the "image search" option.
[0041] Fig. 37 illustrates an example scenario in which a user inputs text and links the image to text.
[0042] Fig. 38 illustrates an example scenario of a user inputting additional text after the linked image has been inserted within the compose message field.
[0043] Fig. 39 illustrates an example scenario of a user's ink set library.
[0044] Fig. 40 illustrates an example of the options available in creating a new ink set via the "create ink set" option.
[0045] Fig. 41 illustrates an example scenario of an image having been added using a "take photo" option.
[0046] Fig. 42 illustrates an example scenario of inputting text to link an image.
[0047] Fig. 43 illustrates an example scenario of an image having been linked to text directly from the ink set library.
[0048] Fig. 44 illustrates an example scenario of the newly created ink set within the ink set library.
[0049] Figs. 45-53 illustrate an example scenario of how to edit an existing ink set.
[0050] Fig. 45 illustrates an example of a part of a user's ink set library.
[0051] Fig. 46 illustrates an example of an individual ink set.
[0052] Fig. 47 illustrates an example scenario of a user inputting additional text into an existing ink set.
[0053] Fig. 48 illustrates an example scenario where additional text has been added to an existing ink set.
[0054] Fig. 49 illustrates an example scenario of a user inserting an additional image into an existing ink set.
[0055] Fig. 50 illustrates an example scenario of a user taking a photo to add to an ink set.
[0056] Fig. 51 illustrates an example scenario in which a new image has been added to an existing ink set.
[0057] Figs. 52-57 illustrate an example scenario of an image being linked to an emoji via the ink set library.
[0058] Fig. 52 illustrates an example scenario of an image having been input.
[0059] Fig. 53 illustrates an example scenario of a user opening the emoji keyboard.
[0060] Fig. 54 illustrates an example scenario of a user inputting an emoji to link with an image.
[0061] Fig. 55 illustrates an example scenario where a user is prompted to link the emoji.
[0062] Fig. 56 illustrates an example scenario in which the emoji is now linked to the image.
[0063] Fig. 57 illustrates an example scenario showing a newly created ink set included within a user's ink set library.
[0064] Figs. 58-60 illustrate an example scenario of multiple emojis being input in combination with alphanumeric keys, as the text against which to link an image.
[0065] Figs. 58-59 illustrate an example scenario of multiple emojis input as text.
[0066] Fig. 60 illustrates an example scenario of additional alphanumeric text input.
[0067] Fig. 61 illustrates an example scenario in which the text input is readied to link with an image.
[0068] Fig. 62 illustrates an example scenario in which the method by which to source an image is chosen.
[0069] Fig. 63 illustrates an example scenario in which an image is chosen using the "photo library" option shown in Fig. 62.
[0070] Fig. 64 illustrates an example scenario of the text included within the compose message field converted to the linked image.
[0071] Fig. 65 illustrates an example scenario showing the message received on the recipient's device.
[0072] Fig. 66 illustrates an example scenario of the linked text revealed.
[0073] Fig. 67 illustrates an example scenario in which multiple images linked to emojis are used to compose a message.
[0074] Fig. 68 illustrates an example scenario showing the receipt of a message with text revealed.
[0075] Fig. 69 illustrates an example scenario showing a user inputting text.
[0076] Fig. 70 illustrates an example scenario in which text is being converted automatically to the default image included within an ink set.
[0077] Fig. 71 illustrates an example scenario of both the linked text along with any other images (variations) linked with the text being revealed.
[0078] Fig. 72 illustrates an example scenario in which a user taps on one of the variations to change the image.
[0079] Fig. 73 illustrates an example scenario in which a user, having changed the image, continues typing a message.
[0080] Fig. 74 illustrates an example scenario of a message having been sent.
[0081] Fig. 75 illustrates an example scenario in which the default image for the ink set has now changed.
[0082] Fig. 76 illustrates an example scenario of a message including an ink set ready to be sent.
[0083] Fig. 77 illustrates an example scenario in which a user has tapped on the image included within the compose message field to convert it to the linked text.
[0084] Fig. 78 illustrates an example scenario in which the message has been sent as text only.
[0085] Fig. 79 illustrates an example scenario of an empty compose message field.
[0086] Fig. 80 illustrates an example scenario in which a user has turned off the automatic conversion of text to image functionality. The system is now in default text mode.
[0087] Fig. 81 illustrates an example scenario in which a message is composed with the system in default text mode. Text which has been linked to an image(s) appears within a border.
[0088] Fig. 82 illustrates an example scenario in which the image(s) linked to text included within a border are revealed.
[0089] Fig. 83 illustrates an example scenario of text being replaced by an image.
[0090] Fig. 84 illustrates an example scenario of a message sent.
[0091] Fig. 85 illustrates an example scenario in which an image has been manually converted back to text prior to sending.
[0092] Fig. 86 illustrates an example scenario of a message sent as text.
[0093] Fig. 87 illustrates an example scenario of a message sent.
[0094] Fig. 88 illustrates an example of a message received.
[0095] Fig. 89 illustrates an example scenario to copy or delete a message.
[0096] Fig. 90 illustrates an example scenario to "delete" a message.
[0097] Fig. 91 illustrates an example scenario of a message checked and ready to be deleted.
[0098] Fig. 92 illustrates an example scenario of a user being asked to confirm the delete request.
[0099] Fig. 93 illustrates a blank message thread.
[0100] Fig. 94 illustrates copying a message on an alternate platform.
[0101] Fig. 95 illustrates pasting text into an ink compose message field.
[0102] Fig. 96 illustrates text pasted into an ink compose message field.
[0103] Fig. 97 illustrates a starting point for an image search within a personal ink set library.
[0104] Fig. 98 illustrates search results for images within a personal ink set library.
[0105] Fig. 99 illustrates options available to a user for image search results.
[0106] Fig. 100 illustrates an example scenario of linked text including multiple words.
[0107] Fig. 101 illustrates an example scenario of the automatic conversion of text to image with multiple words included within a single linked image.
[0108] Fig. 102 illustrates an example scenario of the image being sent and received in 1:1 aspect ratio.
[0109] Fig. 103 illustrates an example scenario in which the image expands to reveal the linked text.
[0110] Fig. 104 illustrates an example scenario of an ink set library.
[0111] Fig. 105 illustrates an example scenario in which a user places a check mark against some ink sets.
[0112] Fig. 106 illustrates an example scenario of a user's Friends screen.
[0113] Fig. 107 illustrates an example scenario of choosing Friends with whom to share ink sets.
[0114] Fig. 108 illustrates an example scenario of an ink set library.
[0115] Fig. 109 illustrates an example scenario of an individual ink set.
[0116] Fig. 110 illustrates an example scenario of editing an ink set.
[0117] Fig. 111 illustrates an example scenario where an image(s) and text(s) are selected to delete.
[0118] Fig. 112 illustrates an example scenario in which a user is prompted for confirmation.
[0119] Fig. 113 illustrates an example scenario showing selected items having been deleted.
[0120] Fig. 114 illustrates an example scenario of a blank compose message field.
[0121] Fig. 115 illustrates an example scenario of the system ready to accept dictation.
[0122] Fig. 116 illustrates an example scenario in which a message is dictated into the device.
[0123] Fig. 117 illustrates an example scenario of the system automatically converting linked text(s) included within the message to image(s).
[0124] Fig. 118 illustrates an example scenario of a message created using voice recognition, having been sent.
[0125] Fig. 119 illustrates an example scenario showing the receipt of a message revealing linked text.
[0126] Figs. 120-126 illustrate an example scenario in which a message is created and shared on a platform outside ink.
[0127] Fig. 120 illustrates an example of a user's Friends list.
[0128] Fig. 121 illustrates an example scenario of a blank compose message field used to compose and share a message outside ink.
[0129] Fig. 122 illustrates an example scenario in which a message is created.
[0130] Figs. 123-124 illustrate examples of platforms on which a message can be sent/posted.
[0131] Fig. 125 illustrates an example of a rendered version of a message, ready to be sent via SMS.
[0132] Fig. 126 illustrates an example scenario of a rendered message having been received.
[0133] The process described in Figs. 120-126, can be replicated on various other platforms.
[0134] Fig. 127 illustrates an example scenario of a message received.
[0135] Fig. 128 illustrates an example scenario of an image enlarged.
[0136] Fig. 129 illustrates an example scenario of available options to share/save an image only or image plus linked text.
[0137] Fig. 130 illustrates an example of a user preparing to save an enlarged image received, along with the linked text, to their personal ink set library.
[0138] Fig. 131 illustrates an example of an ink set saved to a device.
[0139] Fig. 132 illustrates an example scenario in which a newly saved ink set is used in the creation of a message.
[0140] Fig. 133 illustrates an example scenario in which a message is received and text revealed.
[0141] Fig. 134 illustrates an example scenario in which a recipient enlarges a message.
[0142] Fig. 135 illustrates an example scenario in which a user prepares to save the image to their personal ink set library. Both the image and its linked text are shown.
[0143] Fig. 136 illustrates an example scenario in which a user chooses to link an image to alternate text.
[0144] Fig. 137 illustrates an example scenario of an image saved with alternate linked text.
[0145] Fig. 138 illustrates an example scenario in which the image saved previously is used in the creation of a message.
[0146] Fig. 139 illustrates an example scenario in which the same image is shown in the body of messages linked to different text by separate users.
[0147] Fig. 140 illustrates an example scenario of a conversation thread between users in different languages.
[0148] Fig. 141 illustrates a sample paradigm in which acronyms are used in the creation of ink sets.
[0149] Fig. 142 illustrates an example scenario of a text abbreviation linked to an image.
[0150] Fig. 143 illustrates an example scenario in which a phrase is input for conversion.
[0151] Fig. 144 illustrates an example scenario in which a branded image linked to a phrase has been enlarged.
[0152] Fig. 145 illustrates an example scenario of a GIF created by a user, sent in a message and enlarged.
[0153] Figs. 146-149 illustrate examples of the ink software development kit being utilized on other messaging applications and social media platforms to compose messages.
[0154] Fig. 150 illustrates a sample paradigm in which a branded ink set is created.
[0155] Fig. 151 illustrates a sample paradigm of a personal ink set.
[0156] Fig. 152 illustrates an example scenario of a message composed using brand ink.
[0157] Fig. 153 illustrates an example scenario of a brand image enlarged.
[0158] Fig. 154 illustrates an example of various brand inks within a brand library.
[0159] Fig. 155 illustrates an example scenario of search results from the ink brand library for specific text.
[0160] Fig. 156 illustrates an example of an individual branded ink set including a populated word set.
[0161] Fig. 157 illustrates an example of various brand inks.
[0162] Fig. 158 illustrates an example scenario in which a user chooses to download a brand ink.
[0163] Fig. 159 illustrates an example scenario of the composition of a message using branded ink sets.
[0164] Fig. 160 illustrates an example scenario of a message with text revealed.
[0165] Fig. 161 illustrates an example scenario of an enlarged image.
[0166] Fig. 162 illustrates an example scenario of a brand-designated web page, accessible via a hyperlink attached to a brand ink.
[0167] Figs. 163A-163B illustrate an example scenario of the use of a branded font.
[0168] Fig. 164 illustrates an example scenario of various brand inks downloaded to a device.
[0169] Fig. 165 illustrates an example scenario of various brand inks having been turned off.
[0170] Fig. 166 illustrates an example of a message thread composed with a brand ink.
[0171] Fig. 167 illustrates an example of a message thread composed with a brand ink with text revealed.
[0172] Fig. 168 illustrates an example of a brand homepage.
[0173] Fig. 169 illustrates an example scenario of a message sent from a brand.
[0174] Fig. 170 illustrates an example of an enlarged branded image.
[0175] Fig. 171 illustrates an example of the system being utilized on a smartwatch/smart device accessory.
[0176] Fig. 172 is a flowchart showing the steps of operation of a mobile application for generating linked images according to embodiments of the present disclosure.
[0177] Fig. 173 illustrates a diagrammatic representation of a computer system on which any system or device disclosed in the embodiments can be implemented.
DETAILED DESCRIPTION
[0178] This application discloses a system and related methods that allow a user to link images from a personal album or camera, licensed image(s)/collections, public-domain sources, publicly available images, as well as copyright-protected images (with the permission of the copyright holder), to content items (e.g., a word, an expression, a collection of characters, a number, an alphanumeric character, or an emoji), and use the linked images in mobile messaging. As disclosed herein, a mobile application program running on a user's mobile device creates links between an image and a content item, thereby creating a linked image which is stored in a memory of the mobile device of the user. When a user inputs or enters the same content item (e.g., a word) used in creating a linked image, the mobile application program facilitates the automatic conversion of that word to the linked image, ready to be sent either in a text message by a text messaging application or via alternate messaging or social media platforms. In some embodiments, the disclosed mobile application allows toggling between the linked image and the content item when a user clicks on a linked image. Thus, for example, when a user enters a word in a text message, the word is automatically converted into a linked image and included in the text message. When the user clicks on the linked image, the linked image is converted into the word within the text message. As used herein, the term "text message" refers broadly to any kind of message, including but not limited to messages composed using components of short messaging service (SMS) and/or multimedia messaging service (MMS).
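By way of illustration only, the following sketch models the linking and toggling behavior described above. The type and function names (InkSet, InkLibrary, toggle, and so on) are editorial assumptions and do not correspond to any implementation recited in this disclosure.
```kotlin
// Minimal sketch of a linked-image data model and content-item/image toggling.
// All names are illustrative assumptions, not the disclosed implementation.

// An ink set: one or more content items (words, emoji, abbreviations) linked
// to one or more images; the first image is treated as the default.
data class InkSet(
    val contentItems: MutableSet<String>,
    val images: MutableList<String>   // image URIs; images[0] = default image
)

// The user's personal ink set library, indexed by content item for fast lookup.
class InkLibrary {
    private val index = mutableMapOf<String, InkSet>()

    fun add(inkSet: InkSet) {
        inkSet.contentItems.forEach { index[it.lowercase()] = inkSet }
    }

    fun lookup(contentItem: String): InkSet? = index[contentItem.lowercase()]
}

// A message is composed of tokens that are either plain text or a linked image.
sealed interface Token
data class TextToken(val text: String) : Token
data class ImageToken(val imageUri: String, val linkedText: String) : Token

// Tapping a token in the compose field toggles it between the content item and
// its default linked image.
fun toggle(token: Token, library: InkLibrary): Token = when (token) {
    is TextToken -> library.lookup(token.text)
        ?.let { ImageToken(it.images.first(), token.text) } ?: token
    is ImageToken -> TextToken(token.linkedText)
}

fun main() {
    val library = InkLibrary()
    library.add(InkSet(mutableSetOf("dog"), mutableListOf("dog_photo.jpg")))
    val typed: Token = TextToken("dog")
    val asImage = toggle(typed, library)   // word "dog" becomes its linked image
    println(toggle(asImage, library))      // tapping again restores TextToken(text=dog)
}
```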
[0179] The disclosed mobile application enables images to be incorporated directly within the body of a message so that they appear in-line with other images and text. Images appear in sequence and are in context with the message body, rather than being sent separately. Images are sent along with linked text as part of the message. In addition to linked images, the message can also include additional images that are not necessarily linked to a content item.
[0180] The disclosed system also includes APIs (application programming interfaces) and an SDK (software development kit) to allow linked images to be utilized by other platforms and applications, including, but not limited to, keyboards, email, messaging applications, social media platforms, etc. The system also provides for a method of image indexing, storing and retrieval, allowing users to organize images by linking them with text, facilitating ease of access at speed.
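As a sketch of the indexing and retrieval idea, the following hypothetical search call is the kind of function an SDK could expose so that keyboards, email clients, or other applications retrieve a user's linked images by text; the names and signature are assumptions, not the disclosed API.
```kotlin
// Hypothetical text-based retrieval over a user's ink set library.
data class InkSet(val contentItems: Set<String>, val images: List<String>)

fun searchInkSets(library: List<InkSet>, query: String): List<InkSet> {
    val q = query.trim().lowercase()
    // Return every ink set whose linked text contains the query.
    return library.filter { set -> set.contentItems.any { q in it.lowercase() } }
}

fun main() {
    val library = listOf(
        InkSet(setOf("dog", "puppy"), listOf("dog.jpg")),
        InkSet(setOf("Bondi beach"), listOf("bondi.jpg"))
    )
    println(searchInkSets(library, "beach"))
    // [InkSet(contentItems=[Bondi beach], images=[bondi.jpg])]
}
```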
[0181] In some embodiments, the mobile application allows the user to input text via dictation, which can be converted to linked images. The techniques disclosed in this application can be implemented on a computing device, including a mobile device, such as a cellular phone, a tablet, a laptop computer, or a wearable device.
[0182] In some embodiments, the present disclosure also describes a method by which brands, organizations, and entities are able to establish their own image-based vocabulary, allowing customers or followers of a brand to communicate (even within personalized settings) in branded images. This enables brands to become a part of the user narrative, to fit in with what the user wants to discuss, at the very moment the user wants to have the conversation. Specifically, the mobile application automatically converts content items into corresponding linked images for use in text message communication. Users and brands can communicate either (1) one to one, (2) one to group, or (3) one to many.
[0183] The present disclosure also provides APIs (application programming interfaces) and an SDK (software development kit) to allow linked images to be utilized by other platforms and applications, including, but not limited to, keyboards, email, messaging applications, social media platforms, etc. In some embodiments, the linked images can be included for use in a licensed API/SDK within other applications, platforms and operating systems. The use of linked images within an operating system's native messaging application and other messaging platforms is illustrated in the example scenarios in Figs. 146-149.
[0184] The terms used in this specification generally have their ordinary meanings in the art, within the context of this disclosure, and in the specific context where each term is used. Certain terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. Alternative language and synonyms can be used for any one or more of the terms discussed herein, and no special significance is to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein , is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification.
[0185] An "image" refers to any image-based representation, including, but not limited to, figures, branded fonts, photos, ideograms, GIFs, videos, drawings, clip art, vectors, etc. A "personal image" refers to an image created by a user or taken from the public domain for use by the user. A "default image" refers to that image within an image set, which is automatically assigned for conversion. A "limited edition branded image" refers to an image with a set number of copies available. A "default text mode" refers to a functionality that temporarily disables the automatic text-to-image conversion. An "image set" refers to a set of associated images. An "image source" refers to any location from which an image is sourced. A "base word" is a word that is associated with one or more linked images (also referred to herein as an ink set of linked images). An "ink set" refers to a combination of a word set and its associated image set. A "personal ink" refers to a compilation of ink sets created using a user's content and/or various branded content (with the permission of the copyright holder). A "personal ink set library" is a collection of ink sets compiled by a user from various sources. "Linking," "link" or "linked" refers to an action whereby an image(s) is associated with text(s) (more generally, content items) in a way that facilitates automatic conversion between a linked image and an associated content item representative of that image. "Text," also referred to herein as a "content item" refers to any input inserted into a text field by use of but not limited to the following: a keyboard, voice recognition, copy and paste, etc., and that can take the form of words, text, phrases, terms, expressions, abbreviations, acronyms, ideograms, etc. In some embodiments, more than one base word can be associated with a linked image. A "brand" is a content provider for ink sets. A "content provider" refers to an individual or company that creates an image. A "data source" refers to a location from which text can be sourced. A "branded font" refers to a particular font used in conversion of a content item into a linked image and vice-versa. A "branded image" can be one or more copyright-protected images. A "brand ink" is a group of ink sets from the same content provider. A "branded ink set" refers to a combination of a one or more words and its associated branded image set. A "brand library" is an image repository that stores images (linked to text). In some embodiments, "communication" refers to an exchange of information through text and images typically in a mobile environment through text messaging, email, etc.
[0186] A "variation(s)" refers to those images included within an image set, other than the default image. A "user" refers to anyone utilizing the system. A "word frequency list" is a list of commonly used words in English and their synonyms). A "word set" refers to a set of words, text, phrases, terms, expressions, abbreviations, acronyms, ideograms, etc., compiled from various data sources. A word set can include as few as one or as many inputs as necessary.
Creation Of Ink Sets
[0187] Fig. 1 illustrates a block diagram showing a sequence of steps demonstrating how ink sets are generated. Content items (e.g., an abbreviation, an acronym, an ideogram, a word, a phrase, or an expression) received from a data source are used as a word set for generating ink sets. The word set, for example, can also include emojis, as shown in Fig. 1. In this example, the content items "Thanks," "Ta," "Thank you," "Grateful," "THX," "TY," "Cheers," and two emojis are associated with four images to create four linked images.
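The generation of the Fig. 1 ink set can be sketched as follows; the image file names and the specific emoji are placeholders, since the figure does not name them.
```kotlin
// Illustrative generation of the Fig. 1 ink set: a word set of equivalent
// content items linked to four images. File names and emoji are placeholders.
data class InkSet(val contentItems: Set<String>, val images: List<String>)

fun generateInkSet(wordSet: Set<String>, imageSet: List<String>) = InkSet(wordSet, imageSet)

fun main() {
    val thanks = generateInkSet(
        wordSet = setOf("Thanks", "Ta", "Thank you", "Grateful", "THX", "TY", "Cheers", "🙏", "👍"),
        imageSet = listOf("thanks_1.jpg", "thanks_2.jpg", "thanks_3.jpg", "thanks_4.jpg")
    )
    // Any of the nine content items can now be converted to any of the four images.
    println("${thanks.contentItems.size} content items -> ${thanks.images.size} linked images")
}
```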
[0188] Figs. 2-9 illustrate example text messaging (or, more specifically, typing) interfaces including linked images or words/content items associated with linked images.
[0189] Figs. 10-15 illustrate an example linked image generated by a mobile application running on a user's mobile device. Specifically, in this example, the disclosed mobile application generates a linked image corresponding to (or representative of) the word "dog" and the user captures an image of the dog using a camera on the user's mobile device. Fig. 10 illustrates a typing interface in which a user inputs text into the compose message field of a mobile application (e.g., a text messaging application). To link a word to an image, the user double taps on a word, for example, the word "dog." This action both highlights the text and displays additional menu options for the user to select, as indicated in Fig. 11. By tapping "add image," the user is then prompted to choose a method for adding an image to generate a linked image corresponding to the word "dog."
[0190] The example shown in Fig. 11 illustrates a user selecting the "take photo" option. When the user clicks on the "Take photo" option depicted in Fig. 11, the camera application on the user's mobile device launches automatically (Fig. 12). The user captures an image (in this example, a picture of a small dog) and is then able to crop the image (Fig. 13). Once cropped, the image is displayed (Fig. 14) within a pop-up window, along with the text previously highlighted (e.g., the word "dog"). As disclosed herein, the pop-up window is generated by the disclosed mobile application and is overlaid on a text messaging interface (e.g., shown in the background of the pop-up window in Fig. 14) of the text messaging application. In some embodiments, the text messaging application can be a default text messaging application associated with an operating system running on the user's mobile device. In some embodiments, the text messaging application can be a third party application running on the user's mobile device. Thus, the disclosed mobile application provides the functionality of integrating or communicating with one or more mobile applications running on the user's mobile device. In some embodiments, the disclosed mobile application is the text messaging application for composing text messages that include one or more linked images corresponding to content items. Tapping "done" in Fig. 14 saves the text and image together as an ink set (e.g., a linked image). The user is then returned to the compose message field on the typing interface, where the text has been converted automatically to the linked image of the dog, as illustrated in Fig. 15. Fig. 15 also shows that the disclosed mobile application inserts a condensed version of the linked image into a text message composed by the user. The mobile application automatically converts the word "dog" to the linked image when the word "dog" is entered by the user in subsequent text messages composed by the user. If the user desires to use the word "dog" instead of the linked image in a text message, the user can click or tap on the linked image to toggle between the linked image and the word corresponding to the linked image. This example demonstrates that linked images can be generated dynamically at the time of composing a text message, and are not necessarily pre-stored in a memory of the user's mobile device. Furthermore, the images used in linked images can also be captured in real time (e.g., when creating a linked image).
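One plausible (assumed, not claimed) realization of the automatic word-to-image conversion applied to a composed message is sketched below; the whitespace tokenization is a simplification.
```kotlin
// Sketch of automatic word-to-image conversion while composing a message: any
// word with an entry in the library is replaced by its default linked image.
// All names and the tokenization strategy are assumptions.
data class InkSet(val contentItems: Set<String>, val images: List<String>)

sealed interface Token
data class TextToken(val text: String) : Token
data class ImageToken(val imageUri: String, val linkedText: String) : Token

fun convert(message: String, library: Map<String, InkSet>): List<Token> =
    message.split(" ").map { word ->
        val set = library[word.lowercase()]
        if (set != null) ImageToken(set.images.first(), word) else TextToken(word)
    }

fun main() {
    val library = mapOf("dog" to InkSet(setOf("dog"), listOf("dog_photo.jpg")))
    println(convert("walking the dog", library))
    // [TextToken(text=walking), TextToken(text=the), ImageToken(imageUri=dog_photo.jpg, linkedText=dog)]
}
```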
[0191] In some embodiments, images that are used in linked images can also be received from online image libraries. Figs. 16-20 illustrate an example linked image generated by a mobile application running on a user's mobile device. Specifically, in this example, the image is received from an online image library. In such embodiments, the mobile application can access the online images when a user clicks on the "image search" option shown in Fig. 11. The user highlights the text (e.g., the word "run") to link (Fig. 16) on a typing interface of a text messaging application, taps "add image" and selects the "image search" option (Fig. 11). As illustrated in Fig. 17, the search box is automatically populated with the highlighted text and image search results (e.g., received from an online library) corresponding to the highlighted text. Figs. 18-19 illustrate the user choosing an image from the search results (in the example, a person running outdoors) to link with the highlighted text. Once linked, the user is automatically taken back to the compose message field on the typing interface of the text messaging application wherein the highlighted text is automatically converted to the linked image, as illustrated in Fig. 20. Fig. 20 also demonstrates that the disclosed mobile application inserts a condensed version of the linked image into a text message composed by the user using the typing interface. This example demonstrates that linked images can be generated dynamically at the time of composing a text message, and are not necessarily pre-stored in a memory of the user's mobile device. Furthermore, an image used in generating a linked image can be received in real time from an online repository or library of images.
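A sketch of the "image search" flow follows. The ImageSearchProvider interface and its fake implementation are hypothetical stand-ins for the online image library mentioned above; a real build would issue a network query instead.
```kotlin
// Sketch of the "image search" flow: the highlighted text pre-populates the
// query, the user picks one of the returned candidates, and a link is created.
interface ImageSearchProvider {
    fun search(query: String): List<String>   // returns candidate image URIs
}

// Hypothetical in-memory provider used only for illustration.
class FakeSearchProvider : ImageSearchProvider {
    override fun search(query: String): List<String> =
        listOf("https://example.com/${query}_1.jpg", "https://example.com/${query}_2.jpg")
}

data class InkSet(val contentItems: Set<String>, val images: List<String>)

fun linkFromSearch(highlighted: String, provider: ImageSearchProvider, choice: Int): InkSet {
    val results = provider.search(highlighted)       // search box pre-filled with the text
    return InkSet(setOf(highlighted), listOf(results[choice]))   // user-selected result
}

fun main() {
    val inkSet = linkFromSearch("run", FakeSearchProvider(), choice = 0)
    println(inkSet)   // InkSet(contentItems=[run], images=[https://example.com/run_1.jpg])
}
```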
[0192] In some embodiments, a linked image can be generated using an image (e.g., a personal image) stored in a photo library of the user's mobile device. Fig. 21 illustrates an example scenario of a user inputting a collection of characters (e.g., the abbreviation "thx") as text/content items. Figs. 22-24 illustrate an example scenario in which the user taps on the "photo library" option (displayed in Fig. 11), chooses an image (in this example, a man smiling and giving a thumbs up gesture), links the image and is taken back to the compose message field, with the linked image in place, as illustrated in Fig. 24. A condensed version of the linked image is included in a text message composed by the user on the typing interface.
[0193] The three examples (using the "take photo," "image search," and "photo library" options in Fig. 11) described above can also be referred to as "creating an ink on the fly." These examples demonstrate that in disclosed embodiments content items and images can be linked while a message is being composed.
[0194] In some embodiments, ink sets (e.g., a collection of linked images and content items corresponding to the linked images) can also be created "on the fly" by choosing an image prior to inputting text on a typing interface (in contrast to the examples in Figs. 10-24, where text is entered first). In such embodiments, a user first chooses an image by tapping on the camera icon (Fig. 25) to open options (shown in Fig. 26). In this example, the user chooses the "take photo" option depicted in Fig. 26. The user captures an image (in this example, a flower) using a camera on the user's device, then crops the image and is prompted to "add text" to link the image as shown in Fig. 27. In this example, the user inputs the word "flower" (Fig. 28) via a pop-up window generated by the disclosed mobile application. Once the linked image is saved, the user is taken back to a compose message field on a typing interface of a text messaging application, with a condensed version of the linked image inserted in the typing interface (Fig. 29). The user can then continue composing the message, as illustrated in Fig. 30, "flower in the garden." Fig. 31 indicates that the message is sent by the text messaging application. The text (e.g., content item) linked to the image can be revealed by swiping over the sent message as illustrated in Fig. 32. The procedure described in Figs. 26-32 is also applicable when a user chooses to input an image from their photo library, with the user choosing the "photo library" option and the image being sourced from the device's library.
[0195] Figs. 33-34 illustrate an example scenario in which the "image search" option of Fig. 26 is selected. The user is taken to the image search screen to insert text in the search field, as illustrated in Fig. 34. The user then inputs the required image search information (in this example, "Bondi beach") as illustrated in Fig. 35A, and chooses an image from the image search results shown in Fig. 35B. The image search results in Figs. 35A-35B can be received from an online library of images. The user then crops the image and is prompted to "add text" to link the image. In Fig. 37, the word "Bondi" is entered by the user and linked to the image. Once linked, the user is taken back to the compose message field on the typing interface of the messaging application, with a condensed version of the linked image inserted in the text message. The user continues composing the message, "at Bondi today," as illustrated in Fig. 38, and hits "send." Once sent, the linked image included in the text message composed by the user can be viewed by a recipient of the message.
[0196] Fig. 39 shows an alternate method of creating an ink set (a collection of linked images) via the user's personal ink set library. The user taps "create new ink set" and is offered options to add an image (Fig. 40). After the user selects an image (in this example, an image of various fruits), the user is prompted to add a content item to create a link between the image and the content item. For example, Fig. 41 illustrates an "add a word or emoji" typing box. After the user enters the word "Fruit" (as illustrated in Fig. 42) and taps "done," a link is created between the content item "fruit" and the image. The linked image and associated content item are shown linked, as illustrated in Fig. 43. The new linked image is now included in the user's personal ink set library (Fig. 44), and is available for use in the composition of messages. The procedure described in Figs. 39-44 is also applicable when a user chooses to create a new ink set directly from the ink set library using the "photo library" or "image search" options.
[0197] In some embodiments, ink sets can be edited within a personal ink set library. To edit an ink set, a user opens the personal ink set library (Fig. 45) and taps on an ink set/linked image. Fig. 45 shows an interface with four linked images/ink sets saved in the user's library. Clicking on a linked image (e.g., "Daddy") in Fig. 45 causes the disclosed mobile application to display an "Edit Ink Set" screen. In this screen, a user can add or delete images and/or content items in the ink set "Daddy." Figs. 47-48 illustrate an example scenario of a user inputting and linking additional text. Thus, an existing linked image "Daddy" can be additionally linked to the content item "Me" when the user enters the content item "Me," as illustrated in Fig. 47. The linked image, the content item "Daddy," and the content item "Me" are defined as a set. According to disclosed embodiments, editing an existing linked image by including additional content items does not necessarily involve adding extra images to the set.
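Editing an existing ink set to link an additional content item, as in Figs. 45-48, can be sketched as follows (illustrative names only).
```kotlin
// Sketch of editing an existing ink set: adding the content item "Me" to the
// set already linked to "Daddy", without adding any new image.
data class InkSet(val contentItems: MutableSet<String>, val images: MutableList<String>)

fun addContentItem(inkSet: InkSet, newItem: String) {
    inkSet.contentItems.add(newItem)
}

fun main() {
    val daddy = InkSet(mutableSetOf("Daddy"), mutableListOf("daddy_photo.jpg"))
    addContentItem(daddy, "Me")
    println(daddy.contentItems)   // [Daddy, Me] -> both now convert to the same image
}
```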
[0198] In some embodiments, a user can edit an existing ink set to include additional images. Figs. 49-51 illustrate an example scenario of a user selecting the "take photo" option to add an image (in this example, a photo of the same man as seen in Fig. 46, smiling) and adding it to the existing ink set. Thus, the ink set includes two images and two content items, "Me" and "Daddy."
[0199] Figs. 52-68 illustrate example scenarios in which emojis are used as content items in the creation of an ink set. Fig. 52 is an image of a guitar included within a user's personal ink set library. When the user clicks on the "Add a word or emoji" box, the user is given the option to select an emoji from the emojis displayed on the emoji keyboard as illustrated in Fig. 53. After the user selects an emoji representing a "guitar" (Fig. 54), a link is created between the emoji (content item) and the image of the guitar. After the user taps "done," shown on the interfaces illustrated in Figs. 55-56, the newly created ink set is now included within the user's personal ink set library and is available for use in the composition of messages, as illustrated in Fig. 57.
[0200] Figs. 58-64 illustrate an example scenario in which an ink set is created "on the fly" using a combination of emoji and alphanumeric keys as inputs for text. In this example, the user enters emojis for a "strawberry, grapes and a peach," along with the word "salad." Figs. 63-64 illustrate an example of the user choosing an image from their photo library (in this example, a box of fruit) to link with the highlighted text appearing automatically within the text field of the pop-up (Fig. 63). Once linked, the user is automatically taken back to the compose message field of the typing interface with the highlighted text converted to the linked image, as illustrated in Fig. 64. Swiping over the message overlays the image with the linked text as illustrated in Fig. 66.
[0201] Figs. 67-68 illustrate the use of multiple ink sets including images linked to emojis, in the body of a single message. In this example, the user inputs the text: "Playing the guitar (represented by an emoji of a guitar) and watching the sunset (represented by an emoji of the sun setting)." The procedure described in Figs. 52-68 is also applicable when a user chooses to create a new ink set using emojis, using the "take photo" option or "image search" option displayed in Fig. 62.
[0202] In some embodiments, an ink set can include multiple linked images associated or linked with a content item. Figs. 69-75 illustrate an example scenario in which the variations (e.g., multiple linked images corresponding to a content item) included within an ink set are revealed during the composition of a message via a messaging application. Fig. 70 illustrates an example of a default image for the text "please." To reveal variations, the user taps on the default image (Fig. 70). Tapping on the default image results in the variations included within the ink set, along with the default image, appearing above the compose message field (Fig. 71). In some embodiments, the content item is displayed on the typing interface as being included inside a region defined by a border (Fig. 71). A user is then able to view and change between images included within the ink set. To select a different image, the user taps on it. In some embodiments, tapping on the default image can also result in the default image being converted to the linked content item which was entered during the composition of the message (Fig. 72). As illustrated in Fig. 73, the image selected then forms part of the message included within the message field (in the example, the image is changed from a guinea pig depicted in Fig. 70 to a sea otter depicted in Fig. 73). The user can then continue composing the message, as illustrated in Fig. 73. The message is then ready to be sent with the alternate image (e.g., the sea otter) selected. Fig. 74 illustrates that the message is sent by the messaging application.
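Revealing and selecting among the variations of an ink set could be modeled as in the sketch below; the list-backed representation is an assumption.
```kotlin
// Sketch of variations: tapping the default image lists every image in the ink
// set, and tapping a variation makes it the image used in the outgoing message.
data class InkSet(val contentItems: Set<String>, val images: List<String>)

fun variations(inkSet: InkSet): List<String> = inkSet.images          // default listed first
fun selectVariation(inkSet: InkSet, index: Int): String = inkSet.images[index]

fun main() {
    val please = InkSet(setOf("please"), listOf("guinea_pig.jpg", "sea_otter.jpg"))
    println(variations(please))            // [guinea_pig.jpg, sea_otter.jpg]
    println(selectVariation(please, 1))    // sea_otter.jpg is used in the message
}
```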
[0203] In order to accommodate for the various personas or avatars a user might have when communicating with different people, the disclosed mobile application allows for different default images to appear depending on who the recipient of the text message is. Accordingly, the disclosed mobile application is automatically aware of the linked image that was last used for a content item in a message composed for a specific contact or group. When a user inputs the name of a contact to whom a message is to be sent, the system "recalls" the most recently used images for each content item used in communicating with that contact. The last linked image is offered as the default linked image for that content item when the user enters the same content item again in subsequent text messages to the same contact. Fig. 75 illustrates an example scenario in which the default linked image (e.g., the guinea pig) for the ink set shown in Fig. 72 has now changed to the linked image (e.g., the sea otter) selected and sent in Fig. 74.
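The per-recipient recall of the most recently used image could be kept in a simple keyed store, as in the following sketch; the RecipientDefaults name and map-based storage are assumptions about one possible realization.
```kotlin
// Sketch of per-recipient default recall: the application remembers the linked
// image last sent to each contact for each content item and offers it as the
// default the next time that content item is entered for the same contact.
class RecipientDefaults {
    // (recipient, content item) -> image URI last used with that recipient
    private val lastUsed = mutableMapOf<Pair<String, String>, String>()

    fun recordSend(recipient: String, contentItem: String, imageUri: String) {
        lastUsed[recipient to contentItem.lowercase()] = imageUri
    }

    fun defaultFor(recipient: String, contentItem: String, fallback: String): String =
        lastUsed[recipient to contentItem.lowercase()] ?: fallback
}

fun main() {
    val defaults = RecipientDefaults()
    // "please" initially defaults to the guinea pig image (cf. Fig. 70).
    println(defaults.defaultFor("Bianca", "please", fallback = "guinea_pig.jpg"))
    defaults.recordSend("Bianca", "please", "sea_otter.jpg")   // user picked the sea otter (Fig. 74)
    println(defaults.defaultFor("Bianca", "please", fallback = "guinea_pig.jpg"))   // sea_otter.jpg
}
```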
[0204] Figs. 76-86 illustrate example scenarios in which a user can send a text message without including a linked image, and wherein the content item is included in place of the linked image. Fig. 76 illustrates an example scenario in which the mobile application converts a linked image to a content item during the composition of a message. When a user taps on a linked image, the linked image is replaced by the content item (e.g., the content item is shown as included in a region within a border as illustrated in Fig. 77). The text message can then be sent with the content item and not include the linked image. As illustrated in Fig. 78, the recipient of the message now receives the message with the word "please," formatted in text, instead of an image.
[0205] An alternate method, known as the "default text" option, is illustrated in Figs. 79-86. This method enables a user to compose an entire message including content items only, regardless of whether any of the content items has been linked to linked images. As explained below, the "default text" option prevents the linked image from being included in a message. Fig. 79 illustrates a starting point to compose a message. In Fig. 79, the button shown to the right of the compose message field of the messaging application is modified by the disclosed mobile application to correspond to features of the disclosed mobile application. By tapping on the button, a user is able to disable the automatic conversion of content items to linked images. As illustrated in Fig. 80, tapping on the button repeatedly toggles this feature, turning the "default text" option off and on.
[0206] When a user composes a message with the conversion functionality disabled, content items corresponding to linked images are differentiated from unlinked content items using a border. In the examples shown, content items corresponding to linked images are displayed in a region included within a border, while unlinked content items are displayed in a regular manner. In the example shown in Fig. 81, the user inputs the message: "Want to go to Bondi beach for a swim (illustrated by an emoji depicting a person swimming) today?" The message is displayed in text format, with the words "Bondi beach" and the emoji for "swim" shown within bordered regions. When a user taps on a content item included within a border, the user can view the linked images corresponding to the content item, as illustrated in Fig. 82. In this example, when the user taps on the content item "Bondi beach," the linked image corresponding to that content item is displayed. When the user taps on the linked image, the content item "Bondi beach" is replaced by the linked image, ready to be sent as a text message, as illustrated in Fig. 83. Fig. 84 illustrates the message as sent. Figs. 85-86 illustrate an example in which the text message is composed and sent as regular text, without including linked images corresponding to content items in the text message.
[0207] Figs. 87-96 illustrate example scenarios in which messages are copied and deleted. Fig. 87 illustrates an example scenario of a message already sent. Holding down on a message displays a "copy/delete" tab (Fig. 89). Fig. 90 illustrates an example in which a user has chosen the delete option. The user can select a single message or multiple messages to delete (Fig. 91). Prior to deleting the message(s), the user is prompted to confirm the request, as illustrated in Fig. 92. The disclosed mobile application provides a delete functionality according to which, if a user chooses to delete a message, the message is deleted not only at the mobile device of the user but also from the device(s) of the recipient(s) of the message. Thus, according to embodiments as disclosed herein, the disclosed mobile application is in electronic communication over a network with a remote server. This server can receive a request for deleting a text message from the user via the disclosed mobile application. In response to the request for deleting the text message, the remote server can verify that the text message was received by the recipient at the recipient's mobile device. In some embodiments, the remote server can delete the text message sent by the user at the recipient's mobile device. In some embodiments, the disclosed mobile application can delete a text message (sent by the user) from the user's mobile device. Fig. 88 illustrates an example screen of a message as received on a recipient's device. Fig. 93 illustrates the same screen on the recipient's device after the message was deleted by the sender.
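As an illustration of the delete flow described above, the client could remove the message locally and then ask the remote server to verify delivery and delete the message on the recipient's device. The Swift sketch below is hypothetical; the endpoint, payload fields, and success criterion are assumptions rather than the actual protocol.

```swift
import Foundation

// Hedged sketch of the client side of the delete flow: after removing the
// message locally, the app asks a remote server to verify delivery and delete
// the message on the recipient's device.
struct DeleteRequest: Codable {
    let messageID: String
    let senderID: String
    let recipientIDs: [String]
}

func requestRemoteDelete(_ request: DeleteRequest,
                         endpoint: URL,
                         completion: @escaping (Bool) -> Void) {
    var urlRequest = URLRequest(url: endpoint)
    urlRequest.httpMethod = "POST"
    urlRequest.setValue("application/json", forHTTPHeaderField: "Content-Type")
    urlRequest.httpBody = try? JSONEncoder().encode(request)

    URLSession.shared.dataTask(with: urlRequest) { _, response, error in
        // The server is expected to confirm the recipient received the message
        // before deleting it from the recipient's device.
        let succeeded = error == nil && (response as? HTTPURLResponse)?.statusCode == 200
        completion(succeeded)
    }.resume()
}
```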
[0208] Figs. 94-96 illustrate an example scenario in which text composed or received on alternate apps, platforms, etc., can be copied and pasted into the system on a user's device. Fig. 94 illustrates an example in which a user copies the content of a message received (e.g., via SMS) on his or her device. In the example, the user copies the message (received from "Dave") and pastes it into a new message (composed for "Bianca"), as illustrated in Figs. 95-96. The message received from Dave included the content items "Haha" and "hi" in the form of text, along with a thumbs-up emoji. The disclosed mobile application detects that these content items are associated with linked images (e.g., stored in a designated folder accessible by the disclosed mobile application) on the user's mobile device. Upon detection of content items that correspond to linked images, the disclosed mobile application automatically converts the content items into the linked images when a user pastes an incoming text message into the composition field of an outgoing text message (e.g., for another user or a group of users).
[0209] In some embodiments, the disclosed system provides a method of indexing, storing, and retrieving text messages, enabling users to organize images for fast retrieval by linking them with content items. Linked images and the associated content items, saved in a user's personal ink set library, can be viewed in alphabetical order or searched with a search query. Fig. 97 illustrates an example scenario of a starting point for an image search within a user's personal ink set library. Fig. 98 illustrates an example scenario in which a user has entered the word "boat" into the search field. The disclosed mobile application searches the user's mobile device (e.g., a designated folder or ink set library) and displays linked images corresponding to the content item "boat." The user is then able to either save or share the linked image(s), either as a stand-alone image or as a linked image, illustrated in Fig. 99 as "Ink Image."
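A simple sketch of this library search might filter saved ink sets by content item and present the results alphabetically, as in the "boat" example. The function below reuses the hypothetical InkSet type sketched earlier; the matching and ordering rules are assumptions.

```swift
import Foundation

// Minimal search over a personal ink set library by content item.
// Assumes the illustrative InkSet type defined above.
func searchLibrary(_ library: [InkSet], query: String) -> [InkSet] {
    let needle = query.trimmingCharacters(in: .whitespaces).lowercased()
    let matches = needle.isEmpty
        ? library
        : library.filter { $0.contentItem.lowercased().contains(needle) }
    // Results are listed alphabetically by content item.
    return matches.sorted { $0.contentItem.lowercased() < $1.contentItem.lowercased() }
}
```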
[0210] In some embodiments, a repository of linked images can be a user's personal ink set library. In some embodiments, a repository of linked images can belong to a brand (e.g., representing icons, avatars, emblems, or logos identifying a sports team, a beverage maker, or a car manufacturer). Such a repository is a core component of the disclosed system and is available within a software development kit (SDK) for use on alternate applications, platforms, and services. Such an SDK can be downloadable from an entity or organization's web portal. In some embodiments, the SDK is downloadable from a mobile application marketplace (e.g., App Store™, Chrome Store™, or Google Play™).
[0211] In some embodiments, linked images are sent and received in a 1:1 aspect ratio. For example, linked images can be sent or received by the disclosed mobile application on a user's mobile device. Linked images can also be sent or received by a server that is in electronic communication with the mobile application and that, for example, manages/stores an online library of linked images. When a user inputs content items in the compose message field, linked images are shown cropped to allow for text to be displayed, as shown in the example scenario illustrated in Figs. 100-101. In this example, the user inputs "United States of America." The linked image corresponding to this content item (in this example, the flag of the United States of America) is displayed cropped to allow the full text to be displayed. Fig. 102 illustrates the message sent and received in a 1:1 aspect ratio. When the sender (e.g., the user) taps on or swipes right on a linked image included in a text message, the linked image is reformatted to reveal the corresponding content item. For example, tapping on the linked image (the flag of the United States) included in the text message depicted in Fig. 102 causes the disclosed mobile application to reformat the linked image, thereby revealing the content item (the words "United States of America" in Fig. 103).
[0212] Figs. 104-107 illustrate an example scenario in which a user selects various ink sets (linked images and their corresponding content items) from his or her personal ink set library to share with other users. After receiving an ink set, a recipient can save the ink set to his or her own personal ink set library.
[0213] Figs. 108-113 illustrate an example scenario in which a user can select linked images and content items included within an ink set to delete. In some embodiments, the disclosed mobile application allows a user to delete linked images and/or the corresponding content items.
[0214] In some embodiments, the disclosed mobile application allows a user to enter a text message via a "voice recognition" API, as illustrated in Figs. 114-119. To utilize this functionality, a user talks into the device, the voice recognition API converts voice to text, and the system then converts the text to linked images. Thus, the disclosed mobile application performs transcription (of voice to text), detection of content items in the text, and conversion of the detected content items into their corresponding linked images. Fig. 116 illustrates an example scenario in which a user dictates the message "feel like sushi tonight" into the device. Fig. 117 illustrates an example of the system automatically detecting the word "sushi" in the transcribed text and converting the word "sushi" into a linked image. Figs. 118-119 illustrate the messages as sent and received.
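The voice-input flow above amounts to a small pipeline: transcribed text is scanned for content items that have linked images, and each match is swapped for its image. The sketch below assumes the transcription step has already produced text and uses a plain dictionary as a stand-in for the ink set library; the types and matching rule are illustrative only.

```swift
import Foundation

// Illustrative post-transcription step: each word of the transcript is either
// kept as text or replaced by its linked image.
enum MessagePart {
    case text(String)
    case linkedImage(URL)
}

func convertTranscript(_ transcript: String,
                       linkedImages: [String: URL]) -> [MessagePart] {
    transcript.split(separator: " ").map { word -> MessagePart in
        let key = word.lowercased().trimmingCharacters(in: .punctuationCharacters)
        if let url = linkedImages[key] {
            return .linkedImage(url)   // e.g. "sushi" -> its linked image
        }
        return .text(String(word))
    }
}
```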
[0215] In some embodiments, text messages composed and/or received using the disclosed mobile application can be shared or posted on other social media platforms and communication platforms outside the system. After a user taps on a "share a message" button illustrated in Fig. 120, the user can compose the message on an interface displayed in Fig. 121. In this example, a user composes the message "getting a shave ice at the beach." Once ready to send, the user taps the "share" button as illustrated in Fig. 122. A pop-up then prompts the user to select a platform/service on which to share the message, as displayed in Figs. 123-124. The example shown illustrates a user choosing to share the message as an SMS message with a recipient. If the recipient's mobile device does not have the disclosed mobile application installed, the recipient receives a specialized message including a rendered image along with a link to download the disclosed mobile application, as illustrated in Figs. 125 and 126.
[0216] Figs. 127-139 illustrate example scenarios in which ink sets are created by saving images received in communications with other users. These newly created ink sets can be in addition to the user's personal ink sets. Fig. 127 illustrates an example scenario in which a user receives a message from "Julie" that states "wish we were in Hawaii." The user taps on the message from Julie to enlarge the linked image, as illustrated in Fig. 128. Tapping on the share icon located in the top right-hand corner of the pop-up in Fig. 128 reveals an option to "save Ink to my inks," as illustrated in Fig. 129. Tapping "done" in Fig. 130 saves the image along with the linked text to the user's personal ink set library, as illustrated in Fig. 131. For example, the user's personal ink set library now includes one newly added linked image (from Julie) corresponding to the content item "Hawaii," identified as a relevant content item by the user and by Julie. The user is then able to use the linked image in text messages (Fig. 132).
[0217] Figs. 133-139 illustrate an example scenario in which a user replaces the content item corresponding to a linked image (received in a text message from another user) with a different content item. Fig. 133 shows a text message "I will meet you there" received by a user from Julie. The text message includes a linked image corresponding to the word "I." The user expands the linked image in Fig. 134 and reviews the content item "I" corresponding to the linked image. Fig. 136 illustrates an example scenario in which the user changes the content item "I" sent by Julie to a different content item, "Jules." Tapping "done" results in a new linked image being saved to the recipient's personal ink set library, as illustrated in Fig. 137. The ink set is now available for use in the composition of a message. In the example illustrated in Fig. 138, the user inputs the message "Hi Jules" and the system automatically converts the text to the linked image. Fig. 139 shows the new ink set sent as part of a message thread, alongside the same linked image corresponding to a different content item.
[0218] Ink sets (linked images) can be used to break language barriers, unifying people across geographical regions. For example, linked images can be sent in text messages composed using different languages, enabling people to bridge language barriers. Fig. 140 illustrates a sample message thread in which a conversation takes place between users in English and Chinese. The conversation begins in English ("So hot!!"), is replied to in Chinese ("beach?"), with the response in English ("OK").
[0219] Fig. 141 illustrates an example scenario in which acronyms (content items) are linked to images. The example shown illustrates the acronyms: TMI for "too much information," OTP for "on the phone," and SITD for "still in the dark."
[0220] Fig. 142 illustrates an example scenario in which the SMS abbreviation (content item) "SOZ," for "sorry," has been entered as text and converted to a linked image of Puss in Boots®.

[0221] In addition to abbreviations and acronyms, ink sets can also be created for terms and phrases. Images used in creating linked images can be GIF images. Figs. 143-144 illustrate an example scenario in which a GIF image is linked to a phrase (e.g., a content item). In this example, a GIF image from the movie Taxi Driver has been linked to the generic phrase "You talking to me?"
[0222] In some embodiments, GIF images can be created by a user. Fig. 145 illustrates an example of a GIF, created by a user and then linked by the disclosed mobile application to an emoji for use in text messages.
[0223] Figs. 146-149 illustrate examples in which an SDK is utilized to enable users to access their ink sets in the composition of messages while on other messaging or social media platforms/services/apps, or within an operating system's native messaging platform. Users create, store, and manage their ink sets via the disclosed mobile application on their device and activate the "linked image functionality" on alternate platforms/services/apps to enable the automatic conversion of content items to linked images within those platforms/services/apps.
Branded Inks
[0224] In addition to the methods already described, users can view and download additional ink sets corresponding to brands or organizations from a brand library - an online image repository that stores images (linked to text) under license or from external sources. Brands create and attach relevant image sets to content items in order to create their own ink sets. Ink sets from the same content provider form a brand ink, also referred to herein as branded linked images. A brand in this context refers not only to a typical "brand" for a product or service; it can also correspond to a person or a team, such as a sporting team, an individual player, a Hollywood celebrity, an artist, etc., or any other content provider.
[0225] Fig. 150 illustrates a sample paradigm in which branded ink sets are created. Brands distribute linked images, i.e., images linked to a specific content item (base word). In Fig. 150, the base word is "thanks." The brand then creates a word set with additional words on top of the base word. The word set is linked to images. The text for base words can be provided to a brand by an entity (that provides the disclosed mobile application and the SDK/APIs). The text can include commonly used abbreviations, acronyms and words used on various social media platforms, SMS abbreviations and acronyms, ideograms such as emojis, as well as a word frequency list. Brands can also include additional text to link to an image. The example shown illustrates four separate images, provided by various brands, linked to text.

[0226] Users can download specific branded ink sets. Once downloaded, these ink sets are made available to users within their own personal inks, appearing as either variations or default images. Fig. 151 illustrates a sample paradigm of a personal ink for the text "touchdown." In this example, images are sourced from a user's personal images as well as from multiple branded ink sets. Branded images from various brands have been downloaded to a user's personal ink set library along with the user's personal images. When a user inputs the word "touchdown," one of these images appears as the default image and the others appear as variations within the image set.
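One way to picture the branded ink paradigm above is a base word extended into a word set and linked to one or more branded images, any word in the set resolving to those images. The Swift sketch below is illustrative only; the structure and names are assumptions, not the system's actual schema.

```swift
import Foundation

// Illustrative data model for a branded ink set.
struct BrandedInkSet {
    let brand: String          // e.g. a sports team or other content provider
    let baseWord: String       // e.g. "thanks"
    var wordSet: [String]      // additional linked text, acronyms, emojis
    var imageURLs: [URL]       // branded images linked to the word set

    // Any word in the set resolves to the branded images.
    func matches(_ contentItem: String) -> Bool {
        let needle = contentItem.lowercased()
        return needle == baseWord.lowercased()
            || wordSet.contains { $0.lowercased() == needle }
    }
}
```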
[0227] Depending on the age of a user (e.g., based on login details provided when initially downloading the Ink app), the system can restrict certain brand inks from being made available for viewing and downloading. Brand inks can be allocated a rating based on the current MPAA rating system. For example, a brand ink may have a rating of NC-17. In this case, the brand ink would not appear as an option available for download by a user under the age of 17.
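A hedged sketch of this age-based restriction could map each rating to a minimum age and filter the brand library accordingly. The rating-to-age mapping below is an assumption modeled loosely on the NC-17 example above, not a definitive rule set.

```swift
import Foundation

// Illustrative rating scale; the minimum ages are assumptions.
enum ContentRating: Int {
    case g = 0, pg = 8, pg13 = 13, r = 16, nc17 = 17
    var minimumAge: Int { rawValue }
}

// Returns only the brand inks whose rating the user's age satisfies.
func availableBrandInks<Ink>(_ inks: [(ink: Ink, rating: ContentRating)],
                             userAge: Int) -> [Ink] {
    inks.filter { userAge >= $0.rating.minimumAge }.map { $0.ink }
}
```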
[0228] Within the brand library, brands can also promote generic words that have contextual relevance to the brand(s). For example, Budweiser can promote "bud" along with "buddy," "friend," or "pal," and Coca Cola can promote "drink" or "thirsty." When a user inputs text that a brand has linked to an image in this regard, the image can appear as either the default image or have preferred placement among the variations in the relevant image set. This type of brand promotion is illustrated in the example scenario shown in Figs. 152-153, where a user has input the message, "hey buddy what's new."
[0229] Users are also encouraged to upload images of their own to the brand library, to be considered for inclusion within the system's brand ink. This compilation of images, sourced from users and uploaded to an online library, can be made available to other users for download. Fig. 154 illustrates an example of a brand library for the selection of various brand inks. Users scroll through each row to display additional brands. In this example, the upper and lower rows display advertising space for various brands, with the three middle rows divided between "favorite celebrities," "favorite sports," and "favorite brands." Users can download a brand ink by tapping on their chosen brand.
[0230] Fig. 155 illustrates an example scenario of various sports team ink sets. In the example, the user taps on the NFL® logo in Fig. 154. This opens the brand inks associated with the NFL. After tapping on the New England Patriots® logo, the user is prompted to download the team brand ink, as illustrated in Fig. 156. Once downloaded, all ink sets included within the brand ink are added to the user's personal ink sets and made available in the composition of messages. Figs. 157-158 illustrate an example in which the message "touchdown patriots" is input and sent using images supplied by the brand. Images in the brand library can be searched for and viewed, as illustrated in Fig. 159. In this example, the user has searched for branded images linked to the text "Hey." The image search results represent those images supplied by various brands and linked with "Hey." Tapping on individual images displays a pop-up, as illustrated in Fig. 160, including all text linked to the image. In this example, an image of a penguin from the movie Penguins of Madagascar® is shown along with the linked text "Hey," "Hi," "Hello," "Greetings," and an emoji of a hand waving. Users can save the ink set as is, or they can add or delete text to personalize it further. Once the linked image is saved, the ink set forms a part of the user's personal ink set library and is available for the composition of messages.
[0231] Brands can also attach hyperlinks to images, redirecting users to a brand's chosen web site. Fig. 162 illustrates an example brand-designated web page, accessed by holding down on the branded image shown in Fig. 161. Brands can upload their own branded font (a font particular to a specific brand) in which text is displayed when a brand ink is used. Figs. 163A-163B illustrate an example of branded font using a Coca-Cola® font. In the example, a user has chosen to incorporate a Coca-Cola® branded ink set linked to the text "the Beach."
[0232] Users can manage which brand inks are displayed in the composition of messages (either as default images or variations) by turning specific brand inks on or off, as illustrated in Figs. 164-165. Brand inks that are "on" are indicated by the purple Ink logo. In this way, a user can limit the type of images used in the composition of a message by turning off various brand inks. In the example in Fig. 165, a user has chosen to turn off all downloaded ink sets except the Star Wars® ink. Figs. 166-167 illustrate an example of a series of messages composed using only the Star Wars® ink.
[0233] Brands can enhance ink sets through direct customer engagement. Brands can request fans and followers to comment on branded images, submit additional images for possible inclusion within a brand ink, or indicate what other text they would like to have images for.
[0234] By virtue of these features, brands are now a part of a user's conversations, integrated into the native format rather than being in competition with it for space. As such, they neither disrupt the aesthetics nor interrupt a user's chain of thought, but enhance the experience through a chosen engagement. In addition, rather than a brand simply inserting native content that tells a story, a brand's ink set enables the brand's content to be inserted by a user to tell a story, one that the user wants to tell, at the very moment they want to tell it and pass it on.

[0235] Fig. 168 illustrates an example of a brand homepage, with a fan forum in which a brand's fans can chat using the brand's ink sets, and a brand feed that can relate to a fan discussion, help spark a conversation between brand loyalists, display trending brand-related topics, make an announcement, etc. The forum can also include additional brand-related materials, such as advertisements.
[0236] Brands can communicate with followers/fans via the system using one-to-many messaging, or via brand chats within their brand forum. A brand chat can be started either by a user/group of users or by a brand itself. Brand chats can be marketed and sourced by various titles, such as topic, geographic area, etc. Where a brand creates a chat, parameters (as broad or specific as required) can be established by the brand for the chat, such as limiting the chat to users of a certain age group and/or location. A time frame for the chat can also be set. An example brand chat may have the following requirements:
Topic: new Star Wars® movie; user location: California, United States; user age group: 13-18 years (or use an MPAA rating such as PG-13); chat start date: 12/15/15; time: 17:00-20:00 P.S.T.
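Such parameters could be captured in a small configuration object that a brand fills in as broadly or narrowly as needed, as in the following sketch. The field names, types, and admission check are assumptions for illustration.

```swift
import Foundation

// Illustrative brand-chat parameters; empty or nil fields mean "unrestricted".
struct BrandChatParameters {
    let topic: String
    var allowedLocations: [String] = []
    var ageRange: ClosedRange<Int>? = nil
    var start: Date? = nil
    var end: Date? = nil

    func admits(userAge: Int, userLocation: String, at date: Date) -> Bool {
        if let ageRange = ageRange, !ageRange.contains(userAge) { return false }
        if !allowedLocations.isEmpty && !allowedLocations.contains(userLocation) { return false }
        if let start = start, date < start { return false }
        if let end = end, date > end { return false }
        return true
    }
}

// Example roughly matching the chat parameters listed above
// (dates elided for brevity; values are illustrative).
let starWarsChat = BrandChatParameters(topic: "new Star Wars movie",
                                       allowedLocations: ["California, United States"],
                                       ageRange: 13...18)
```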
[0237] Sports teams and celebrities can create "event-related" chats. For example, a pre- or post-match chat can be held between a team's supporters and a player. As such, a brand can engage with specific users and enable a brand's fans/followers to interact with the brand as well as with one another.
[0238] Brands can also send image-based push notifications as illustrated in the example in Figs. 169-170, in which Katy Perry sends fans/followers a message to inform them that they can meet her in Times Square.
Smart Devices
[0239] Embodiments of the present disclosure can be utilized across a variety of devices. This includes various mobile devices, such as wearable devices (smart watches, glasses, headsets, etc., as illustrated in Fig. 171), smart phones, laptop computers, tablets, etc. When viewed on a smartwatch, a recipient can view multiple linked images included within a text message by tapping or swiping movements on linked images to open a next linked image. In some embodiments, the linked images can be displayed one after the other automatically in a loop once a text message has been opened by a recipient.
Acquisition And Trading Of Ink Sets
[0240] Ink sets can be acquired in various ways, including, but not limited to, being obtained for free, purchased, won, or traded. Brands can offer free ink sets so a user can start communicating in the brand ink. Brands can create and offer premium ink sets for sale. Premium ink sets can range from as few as one image to as many as the brand chooses.
[0241] The system also allows brands to create activities, games, and competitions for users to earn or win premium ink sets. The system includes mechanisms to authenticate original branded images through the association of both visual and digital authentication measures, such as a holographic stamp. Authentic original branded images can be distributed as authenticated limited editions. Limited editions can range from as few as one image to as many as the brand chooses and can be individually numbered. Brands that are artists can create authentic original ink (including, but not limited to, artwork) for use within messaging that can be sold and then become the user's property.
[0242] The system provides a platform for collecting images. Virtual albums can be created for branded ink sets. Using a virtual platform, a user can track and view which linked images he or she has acquired as well as those linked images that he or she still needs to complete the branded ink set. On completion of a branded ink set, the user may be rewarded by the brand with a "gift." Users can acquire multiple versions of the same image. The system also includes an ink trading floor where users can trade linked images they have acquired in a marketplace. Furthermore, the system provides a mechanism to verify the authenticity of the linked images being traded.
Backend Architecture
[0243] The disclosed mobile application is a mobile messaging application that, at its core, allows users to convert text into images and GIFs and send them to other users. Users associate images with text and save them in collections called ink sets for later use. When the user types the text, programming algorithms detect and match the text to the associated image(s) and convert the text into an image that can then be sent to another user in a message. In some embodiments, in addition to the disclosed (client) application, a back-end remote server component is present.
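The core detection step can be pictured as matching candidate text against saved content items, preferring an exact match and falling back to a partial (prefix) match, which is one illustrative reading of the matching conditions described later in connection with Fig. 172. The sketch below is hypothetical, not the application's actual algorithm; the types are assumptions.

```swift
import Foundation

// Illustrative matching step: candidate text typed by the user is matched
// against saved content items and resolved to a linked image.
struct LinkedImage {
    let contentItem: String
    let imageURL: URL
}

func match(candidate: String, in library: [LinkedImage]) -> LinkedImage? {
    let needle = candidate.lowercased()
    // Prefer an exact match.
    if let exact = library.first(where: { $0.contentItem.lowercased() == needle }) {
        return exact
    }
    // Partial match: one string is a prefix of the other.
    return library.first { linked in
        let item = linked.contentItem.lowercased()
        return item.hasPrefix(needle) || needle.hasPrefix(item)
    }
}
```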
[0244] Back-end architecture and infrastructure can include a Ruby on Rails (RoR) web application that handles the web admin panel and logic for storing and retrieving client app user data, admin user authentication, ink uploads via admin panel, and other functionality. This RoR application and its accompanying database can be hosted on cloud servers and can automatically scale to support a heavy load. In some embodiments, linked images sent in text messages are sent as metadata including the URL of the image and not the image itself in order to optimize performance.
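As an illustration of sending a linked image as metadata rather than as image bytes, the message body could carry a small JSON payload containing the image URL. The field names, example URL, and JSON layout below are assumptions, not the actual wire format.

```swift
import Foundation

// Sketch of a linked image sent as metadata (a URL) instead of image data.
struct LinkedImageMetadata: Codable {
    let contentItem: String   // e.g. "Bondi beach"
    let imageURL: URL         // where the recipient's app fetches the image
    let aspectRatio: String   // e.g. "1:1"
}

let payload = LinkedImageMetadata(
    contentItem: "Bondi beach",
    imageURL: URL(string: "https://example.com/inks/bondi-beach.png")!,
    aspectRatio: "1:1"
)
let messageBody = try? JSONEncoder().encode(payload)   // attached to the outgoing message
```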
[0245] On the iOS™ client app, chat messaging is provided by the Layer SDK (https://layer.com), which provides many basic messaging functionalities, such as transmitting text and images between devices, group messaging, and other functionalities that would otherwise need to be built from scratch. For performance and data usage optimization, downloaded images are cached on the device until the cache is full or until the images are replaced by newly downloaded images and GIFs. Furthermore, images and GIFs uploaded by users can be first cropped into a 1:1 aspect ratio and then resized to 512 x 512 pixels. User authentication is provided by Digits (https://get.digits.com) and ties a user's identity to their phone number. Both client-side and server-side applications use secure technologies for user authentication. All network messages are securely sent between iOS and the back end through TLS 1.2 encryption.
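The upload preprocessing mentioned above (center-crop to a 1:1 aspect ratio, then resize to 512 x 512 pixels) could be sketched with UIKit as follows; this is an illustration only, and the app's actual pipeline may differ.

```swift
import UIKit

// Center-crop an image to a square and render it at 512 x 512 pixels.
func prepareForUpload(_ image: UIImage) -> UIImage {
    let side = min(image.size.width, image.size.height)
    let cropOrigin = CGPoint(x: (image.size.width - side) / 2,
                             y: (image.size.height - side) / 2)

    let target = CGSize(width: 512, height: 512)
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1   // render at exactly 512 x 512 pixels
    let renderer = UIGraphicsImageRenderer(size: target, format: format)

    return renderer.image { _ in
        // Scale so the cropped square fills the canvas, then draw the full
        // image offset so only the centered square is visible.
        let scale = target.width / side
        image.draw(in: CGRect(x: -cropOrigin.x * scale,
                              y: -cropOrigin.y * scale,
                              width: image.size.width * scale,
                              height: image.size.height * scale))
    }
}
```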
[0246] Fig. 172 is a flowchart 300 showing the steps implemented by a mobile application for generating linked images according to embodiments of the present disclosure. The mobile application is the disclosed mobile application (e.g., a first mobile application) and runs on the user's mobile device (e.g., a first mobile device). At step 302, the first mobile application receives an image and a content item representative of the image from a user. The content item can be a word, an expression, a collection of characters, or an emoji. In response to receiving the image and the content item representative of the image, the first mobile application generates (at step 304) a linked image by creating a link between the image and the content item. At step 306, the linked image is stored in a memory of the first mobile device. The user composes a text message via a typing interface of a second (e.g., messaging) application on the first mobile device. The messaging application can be a text messaging application associated with an operating system on the first mobile device. Alternatively, the messaging application can be a third-party application. In some embodiments, the messaging application is the disclosed mobile application, and no other messaging application is necessarily involved. Upon detecting that a candidate content item entered by the user on the typing interface matches with the content item representative of the image, the disclosed mobile application retrieves (step 308) the linked image from the memory of the first mobile device, converts (step 310) the content item into the linked image, inserts (step 312) a condensed version of the linked image into the typing interface of the second mobile application, and sends (step 314), via the second mobile application, the linked image to a second mobile device, for example, to an intended recipient. In some embodiments, a candidate content item entered by the user on the typing interface matches with the content item representative of the image according to one of the following matching conditions: (i) the candidate content item is identical to the content item, or (ii) at least a portion of the candidate content item is identical to at least a portion of the content item representative of the image.

Systemization
[0247] Fig. 173 shows a diagrammatic representation of a computer system on which any system or device disclosed in the embodiments can be implemented. The computer system 7800 generally includes a processor 7805, main memory 7810, non-volatile memory 7815, and a network interface device 7820. Various common components (e.g., cache memory) are omitted for illustrative simplicity. The computer system 7800 is intended to illustrate a hardware device on which any of the components and methods described above can be implemented. The computer system 7800 can be of any applicable known or convenient type. The components of the computer system 7800 can be coupled together via a bus 7825 or through some other known or convenient device.
[0248] The processor 7805 may be, for example, a conventional microprocessor such as an Intel® Pentium® microprocessor or Motorola® PowerPC® microprocessor. One of skill in the art will recognize that the terms "computer system-readable (storage) medium" or "computer-readable (storage) medium" include any type of device that is accessible by the processor.
[0249] The memory 7810 is coupled to the processor 7805 by, for example, the bus 7825 such as a PCI bus, SCSI bus, or the like. The memory 7810 can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory 7810 can be local, remote, or distributed.
[0250] The bus 7825 also couples the processor 7805 to the non-volatile memory 7815 and a drive unit 7845. The non-volatile memory 7815 is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD- ROM, EPROM, or EEPROM, a magnetic or optical card, SD card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software in the computer system 7800. The nonvolatile memory 7815 can be local, remote, or distributed. The non-volatile memory 7515 can be optional because systems can be created with all applicable data available in memory. A typical computer system usually includes at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor.
[0251] Software is typically stored in the non-volatile memory 7815 and/or the drive unit 7845. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory 7810 in this disclosure. Even when software is moved to the memory for execution, the processor 7805 typically makes use of hardware registers and local cache to store values associated with the software. Ideally, this serves to speed up execution. As used herein, a software program is assumed to be stored at any known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as "implemented in a computer-readable medium." A processor is considered to be "configured to execute a program" when at least one value associated with the program is stored in a register readable by the processor.
[0252] The bus 7825 also couples the processor 7805 to the network interface device 7820. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system 7800. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g., "direct PC"), or other interfaces for coupling a computer system to other computer systems. The interface can include one or more input and/or output devices 7835. The I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, speaker, DVD/CD-ROM drives, disk drives, and other input and/or output devices, including a display device. The display device 7830 can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), LED display, a projected display (such as a heads-up display device), a touchscreen or some other applicable known or convenient display device. The display device 7830 can be used to display text and graphics.
[0253] In operation, the computer system 7800 can be controlled by operating system software that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® and its associated file management systems. Another example of operating system software with its associated file management system software is the Linux® operating system and its associated file management system. The file management system is typically stored in the non-volatile memory 7815 and/or drive unit 7845 and causes the processor 7805 to execute the various acts required by the operating system to input and output data and to store data in the memory 7810, including storing files on the non-volatile memory 7815 and/or drive unit 7845.
[0254] From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention.

Claims

1. A method implemented by a first mobile application running on a first mobile device of the user, the method comprising:
receiving, at the first mobile device, an image and a content item representative of the image;
in response to receiving the image and the content item representative of the image, generating a linked image by creating a link between the image and the content item;
storing the linked image in a memory of the mobile device; and
upon detecting that a candidate content item entered by the user on a typing
interface matches with the content item representative of the image, the typing interface included in a second mobile application running on the first mobile device of the user:
retrieving the linked image from the memory of the first mobile device;
converting the candidate content item into the linked image;
inserting a condensed version of the linked image into the typing interface of the second mobile application, and
sending, via the second mobile application, the linked image to a second mobile device.
2. The method of claim 1 , further comprising:
integrating with the second mobile application;
modifying automatically, by the first mobile application, the typing interface of the second mobile application to include one or more user interface features corresponding to the first mobile application; and
providing a designated folder for storing the linked image in the memory of the mobile device.
3. The method of claim 1 , wherein the content item is an emoji of a plurality of predefined emojis provided by the second mobile application.
4. The method of claim 1 , wherein the content item includes one or more ideograms, one or more alphabets, one or more numbers, or a combination of ideograms, alphabets and numbers.
5. The method of claim 1, further comprising:
receiving, at the first mobile device, a text message including a respective content item;
detecting, by the first mobile application, that the text message includes the
respective content item identified as a relevant content item by the first mobile application;
automatically converting, by the first mobile application, the respective content item into the respective linked image corresponding to the respective content item; replacing the respective content item in the text message with the respective linked image;
displaying the text message, via the second mobile application, the text message including the linked image.
6. The method of claim 1, the sending further comprising:
including the linked image in a text message composed using the second mobile application by replacing the content item in the text message with the linked image.
7. The method of claim 1 , wherein the image is received from at least one of the following: a library storing personal images of the user on the first mobile device, an image captured by a camera application running on the first mobile device, or an online library of images accessible by the first application.
8. The method of claim 1 , wherein the image is a first image, the linked image is a first linked image, further comprising:
receiving a second image and the content item representative of the second image; in response to receiving the second image and the content item representative of the second image, generating a second linked image by creating a link between the second image and the content item;
storing the linked image in a memory of the mobile device;
upon detecting a candidate content item entered by the user on a typing
interface matches with the content item representative of the image, the typing interface included in the second mobile messaging application running on the mobile device of the user:
displaying the first linked image and the second linked image for selection by the user and receiving the selection of the first linked image or the second image for
insertion in the typing interface.
9. The method of claim 1 , wherein the second mobile application is a messaging application.
10. The method of claim 1 , wherein the content item is a first content item, the link is a first link, further comprising:
receiving a second content item representative of the image;
in response to receiving the second content item, generating a second link between the second content item and the linked image without adding an additional image, wherein the first content item, the second content item, and the linked image are defined as a single set.
11. The method of claim 1, wherein the image is a first image, the linked image is a first linked image, and the link is a first link, further comprising:
receiving a second image corresponding to the content item;
in response to receiving the second image, generating a second linked image by creating a second link between the second image and the content item associated with the first linked image without adding an additional content item, wherein the first linked image, the second linked image, and the content item are defined as a single set.
12. The method of claim 1 , the detecting further comprising:
receiving, from the user, text entries in addition to the content item;
inserting a condensed version of the linked image and the text entries on the typing interface of the second mobile application, wherein the first mobile application is different from the second mobile application, the text entries positioned in close proximity to the linked image; and
sending, via the second mobile application, the linked image to a second mobile device.
13. The method of claim 1 , wherein the candidate content item is highlighted by the user for emphasis on the typing interface.
14. The method of claim 1, wherein the candidate content item entered by the user on the typing interface matches with the content item representative of the image according to one of the following matching conditions: (i) the candidate content item is identical to the content item, or (ii) at least a portion of the candidate content item is identical with at least a portion of the content item representative of the image.
15. A non-transitory computer-readable medium comprising a set of instructions associated with a first mobile application of a first user that, when executed by one or more processors, cause a mobile device to perform the operations of:
identifying in real time, via a text messaging application, that a message is composed for a second user by the first user, the second user included in a contact list stored on the mobile device of the first user;
upon detecting a candidate content item entered by the first user on a typing
interface of the text messaging application matches with the content item representative of the image:
retrieving a plurality of linked images from the memory of the mobile device, wherein the plurality of linked images are generated by creating links between the content item and each linked image of the plurality of linked images,
inserting in the message composed for the second user by the first user, a condensed version of a default linked image, wherein the default linked image corresponds to the most recently used linked image by the first user for communicating with the second user, and sending, via the text messaging application, the linked image in the message composed for the second user by the first user.
16. The non-transitory computer-readable medium of claim 15, wherein the set of instructions associated with the first mobile application of the first user, when executed by the one or more processors, further cause the machine to perform the operations of:
in response to detecting a tap on the default linked image on the typing interface, displaying condensed versions of the plurality of linked images on the typing interface.
17. The non-transitory computer-readable medium of claim 15, wherein the set of instructions associated with the first mobile application of the first user, when executed by the one or more processors, further cause the machine to perform the operations of:
in response to detecting a tap on the default linked image on the typing interface, displaying condensed versions of the plurality of linked images on the typing interface;
in response to selection of a condensed version of a linked image in the plurality of linked images displayed on the typing interface, inserting the condensed version of the linked image in the message composed for the second user by the first user;
sending to the second user, via the text messaging application, the linked image in the message composed for the second user by the first user.
18. The non-transitory computer-readable medium of claim 17, wherein the set of instructions associated with the first mobile application of the first user, when executed by the one or more processors, further cause the machine to perform the operations of:
toggle the displayed linked image to the content item upon detecting a tap on the linked image on the typing interface; and
sending, to the second user, via the text messaging application, the content item and not including any linked image in the plurality of linked images.
19. A system comprising:
a first mobile application running on a first mobile device of the user, wherein the first mobile application is configured for:
receiving an image and a content item representative of the image;
in response to receiving the image and the content item representative of the image, generating a linked image by creating a link between the image and the content item,
storing the linked image in a memory of the first mobile device; upon detecting that a candidate content item entered by the user in a text message matches with the content item representative of the image, the text message intended for a recipient at a second mobile device, the text message included in a second mobile application running on the first mobile device of the user: retrieving the linked image from the memory of the mobile device;
converting the content item into the linked image;
inserting the linked image into the text message, wherein the first mobile application is different from the second mobile application program; sending, via the second mobile application program, the text message
including the linked image to the recipient at the second mobile device;
receiving a request for deleting the text message from the user; and in response to the request for deleting the text message from the user: deleting the text message at the first mobile device of the user; and communicating the request to a remote server;
the remote server in electronic communication with the first mobile application over a communication network, wherein the server is configured for: upon detecting the request for deleting the text message from the user: verifying that the text message is received by the recipient at the second mobile device; and
deleting the text message at the second mobile device.
20. The system of claim 19, wherein the image is received from at least one of the following: a library storing personal images of the user on the first mobile device, an image captured by a camera application running on the first mobile device, or an online library of images accessible by the first application.
PCT/US2016/030402 2015-05-01 2016-05-02 Personalized image-based communication on mobile platforms WO2016179087A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/799,897 US20180054405A1 (en) 2015-05-01 2017-10-31 Personalized image-based communication on mobile platforms

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201562155537P 2015-05-01 2015-05-01
US62/155,537 2015-05-01
US201662291564P 2016-02-05 2016-02-05
US62/291,564 2016-02-05

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/799,897 Continuation-In-Part US20180054405A1 (en) 2015-05-01 2017-10-31 Personalized image-based communication on mobile platforms

Publications (1)

Publication Number Publication Date
WO2016179087A1 true WO2016179087A1 (en) 2016-11-10

Family

ID=57218233

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/030402 WO2016179087A1 (en) 2015-05-01 2016-05-02 Personalized image-based communication on mobile platforms

Country Status (2)

Country Link
US (1) US20180054405A1 (en)
WO (1) WO2016179087A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018147741A1 (en) * 2017-02-13 2018-08-16 Slegers Teun Friedrich Jozephus System and device for personal messaging
CN108989554A (en) * 2018-06-29 2018-12-11 维沃移动通信有限公司 A kind of information processing method and terminal
CN110313165A (en) * 2017-02-13 2019-10-08 特温·弗里德里希·约瑟菲斯·什莱格斯 System and equipment for personal messages transmitting
CN112466118A (en) * 2020-11-25 2021-03-09 武汉光庭信息技术股份有限公司 Vehicle driving behavior recognition method, system, electronic device and storage medium
US11155752B2 (en) 2017-04-18 2021-10-26 Jiangsu Hecheng Display Technology Co., Ltd. Liquid crystal composition and display device thereof

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11138255B2 (en) * 2017-09-27 2021-10-05 Facebook, Inc. Providing combinations of pre-generated and dynamic media effects to create customized media communications
US10970329B1 (en) * 2018-03-30 2021-04-06 Snap Inc. Associating a graphical element to media content item collections
US10855686B2 (en) 2018-04-09 2020-12-01 Bank Of America Corporation Preventing unauthorized access to secure information systems using multi-push authentication techniques
US11310176B2 (en) * 2018-04-13 2022-04-19 Snap Inc. Content suggestion system
US10902659B2 (en) 2018-09-19 2021-01-26 International Business Machines Corporation Intelligent photograph overlay in an internet of things (IoT) computing environment
JP2020086558A (en) * 2018-11-16 2020-06-04 大日本印刷株式会社 Display mode changing device, display mode changing program, and display mode changing method
US11252274B2 (en) * 2019-09-30 2022-02-15 Snap Inc. Messaging application sticker extensions
US11082375B2 (en) * 2019-10-02 2021-08-03 Sap Se Object replication inside collaboration systems
CN111562865B (en) * 2020-04-30 2022-04-29 维沃移动通信有限公司 Information sharing method and device, electronic equipment and storage medium
US11302048B2 (en) * 2020-08-31 2022-04-12 Yahoo Assets Llc Computerized system and method for automatically generating original memes for insertion into modified messages
US11463389B1 (en) * 2021-05-05 2022-10-04 Rovi Guides, Inc. Message modification based on device compatability
US11563701B2 (en) 2021-05-05 2023-01-24 Rovi Guides, Inc. Message modification based on message format
US11562124B2 (en) * 2021-05-05 2023-01-24 Rovi Guides, Inc. Message modification based on message context

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050156873A1 (en) * 2004-01-20 2005-07-21 Microsoft Corporation Custom emoticons
US20100153376A1 (en) * 2007-05-21 2010-06-17 Incredimail Ltd. Interactive message editing system and method
US20130080927A1 (en) * 2002-05-31 2013-03-28 Aol Inc. Multiple personalities in chat communications
US20140032304A1 (en) * 2012-07-27 2014-01-30 Google Inc. Determining a correlation between presentation of a content item and a transaction by a user at a point of sale terminal
US20150100537A1 (en) * 2013-10-03 2015-04-09 Microsoft Corporation Emoji for Text Predictions

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100584319B1 (en) * 2003-12-08 2006-05-26 삼성전자주식회사 Mobile phone for deleting short message stored in receiving part and method for transmitting and deleting short message using the same
US20090005032A1 (en) * 2007-06-28 2009-01-01 Apple Inc. Viewing Digital Content on a Mobile Device
US8584031B2 (en) * 2008-11-19 2013-11-12 Apple Inc. Portable touch screen device, method, and graphical user interface for using emoji characters
WO2015164823A1 (en) * 2014-04-25 2015-10-29 Fisher Timothy Isaac Messaging with drawn graphic input


Also Published As

Publication number Publication date
US20180054405A1 (en) 2018-02-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16789886

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16789886

Country of ref document: EP

Kind code of ref document: A1