US20170085504A1 - Method and Apparatus for the Automated Response Capture using Text Messaging - Google Patents


Info

Publication number
US20170085504A1
Authority
US
United States
Prior art keywords
message
text
video
text message
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/069,857
Inventor
James D Logan
Richard A BAKER, JR.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Twin Harbor Labs LLC
Original Assignee
Twin Harbor Labs LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Twin Harbor Labs LLC filed Critical Twin Harbor Labs LLC
Priority to US15/069,857 priority Critical patent/US20170085504A1/en
Publication of US20170085504A1 publication Critical patent/US20170085504A1/en
Abandoned legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/02 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/14 Arrangements for monitoring or testing data switching networks using software, i.e. software packages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L 51/10 Multimedia information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/58 Message adaptation for wireless communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H04M 1/72555
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00127 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N 1/00281 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a telecommunication apparatus, e.g. a switched network of teleprinters for the distribution of text-based information, a selective call terminal
    • H04N 1/00307 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a telecommunication apparatus, e.g. a switched network of teleprinters for the distribution of text-based information, a selective call terminal with a mobile telephone apparatus
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32106 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title separate from the image data, e.g. in a different computer file
    • H04N 1/32117 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title separate from the image data, e.g. in a different computer file in a separate transmission or protocol signal prior to or subsequent to the image data transmission, e.g. in digital identification signal [DIS], in non standard setup [NSS] or in non standard field [NSF]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/12 Messaging; Mailboxes; Announcements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/12 Messaging; Mailboxes; Announcements
    • H04W 4/14 Short messaging services, e.g. short message services [SMS] or unstructured supplementary service data [USSD]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L 51/08 Annexed information, e.g. attachments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/52 Details of telephonic subscriber devices including functional features of a camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/0077 Types of the still picture apparatus
    • H04N 2201/0084 Digital still camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 2201/3201 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 2201/3261 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of multimedia information, e.g. a sound signal
    • H04N 2201/3266 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of multimedia information, e.g. a sound signal of text or character information, e.g. text accompanying an image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 2201/3201 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 2201/3278 Transmission

Definitions

  • the present invention is directed to text messaging applications and is more specifically related to the exchange of photographs and video using such text messaging applications.
  • Text messages have been sent since 3 Dec. 1992, when Neil Papworth, a test engineer for Sema Group in the UK, used a personal computer to send the text message “Merry Christmas” via the Vodafone network to the phone of Richard Jarvis.
  • Since then, billions of SMS messages have been transferred between phones, messages that first contained only text, then photos, and later videos. In each case the communication has been driven by the sender (the “texter”) sending a message to the receiver, who must respond to the text when appropriate. SMS messages are limited to 160 characters, so messages are often abbreviated and the emotions of both the texter and the receiver are often lost in the brevity of the message.
  • EmoText, an MIT Computer Science 441 project from the spring of 2014, discusses the use of facial recognition to interpret the emotion of the receiver of a text message. The emotion is interpreted, and an avatar expressing that emotion is sent back to the original texter.
  • EmoText only describes the computer interpretation of the emotion of the receiver of the text message. The actual emotion is lost in the medium of the SMS message. The receiver's face is not seen by the texter.
  • the present invention, referred to herein by the short-hand expression “FaceBack”, eliminates the issues articulated above as well as other issues with the currently known products.
  • One aspect of the present invention takes the form of a method for receiving a text message whereby the message is scanned for a metacharacter, set of metacharacters, or other delineating information dictating that the camera on the receiving device should take a photograph. If the metacharacter is detected, a photograph of the face of the message's recipient (the response photo) is taken using the phone's camera, and the photograph is then sent back, using a text message, to the address of the device that sent the original text message.
  • the photograph could be replaced with a brief video, operated, in some embodiments, as just described for photographs.
  • A further feature could have an eye tracking system determine when the receiver's eyes read the area of the text associated with the metacharacter and take either the photograph or the video at approximately the time the text is read.
  • a further aspect of this method provides a time delay before taking the picture or stopping the video.
  • a further aspect of the method includes the step of requesting permission from the receiver before sending the photograph or video.
  • Another aspect of the present invention takes the form of an apparatus for processing text messages containing a phone, a camera, a network interface, and a screen, where the apparatus is configured to receive electronically a text message from the network and display the message on the screen, activate the camera to take a picture of the user when the text message is on the screen, and then send this response picture in reply to the text message.
  • the photograph could be replaced with a brief video.
  • A further feature could have an eye tracking system determine when the receiver's eyes read the area of the text of interest to the texter, as indicated by the placement of one or more metacharacters, and take either the photograph or the video when that text is read.
  • a further aspect of this apparatus involves a time delay before taking the picture or stopping the video.
  • a further aspect of the apparatus includes a permission apparatus that requests permission from the receiver before sending the photograph or video.
  • FIG. 1 is a diagram of the FaceBack system showing the text messages flowing between the phones.
  • FIG. 2 is a flow chart of a possible implementation of the FaceBack system on the receiving cell phone.
  • FIG. 3 is an example of an SMS packet containing the commands to start and stop a video, along with decoded details of the message.
  • the present invention addresses the limitations of text messages regarding the transmission of emotions while messaging with SMS messages or similar systems, and particularly addresses the issue of allowing the texter to see the receiver's emotions when the message, or pertinent part of a message, is read. Since the receiver is rarely in the same location as the texter, the texter cannot see how the text message is received by the receiver using existing SMS messaging techniques. If the text messaging applications are modified to allow metacharacters to be inserted in SMS messages, a texter could direct the camera on the receiver's cell phone to take a photograph or a video of the receiver and then transmit it back to the texter, allowing the texter to see the receiver's face when the message is read.
  • Additional permissions may be implemented by the receiver to always allow, to allow only for designated messages, or to always deny permission to take and send the photo or video.
  • Such permissions may be texter-specific, that is, applying to certain senders of messages but not others. Permissions to take the photograph may be implemented in the setup of the text application, and modified as needed thereafter.
  • video could include the recording of a moving image in 2D or 3D, recording of sound, or the recording of both image and sound, or simply could be a still photograph or a series of photographs.
  • audio generated by the recipient could be processed through a voice recognition program to convert the audio into text, with such text incorporated along with the video or photograph being returned to the texter.
  • any device that sends and receives an SMS message, or similar message could be used, such as a cell phone, a smart phone, a tablet, a laptop, a personal computer, smart watch, and any other similar device.
  • the text message could be replaced with an email, chat, Viber, iMessage, WhatsApp, Snapchat messages (video, photo, or text), instant, video or voice messaging systems, or voicemail.
  • the messages could include embedded metacharacters or other similar mechanism or protocols that cause a photo or video to be recorded at the time, or slightly thereafter, that a message or relevant part of a message as delineated by metacharacters is read and then returned to the sender.
  • a tone could be used in one embodiment instead of the metacharacter to indicate when to take the photo.
  • SMS is meant to include SMS, XMPP, MMS, or other protocols.
  • the texter's phone 101 runs a standard text message creation application that has been modified to allow metacharacters to be inserted into SMS messages 102 .
  • a command to start taking a video on the receiver's phone 103 could be encoded as hexadecimal character 0x13 and to turn off the video on the receiver's phone 103 as hexadecimal character 0x14.
  • the text message application on the texter's phone 101 may have an icon of a green video camera that inserts 0x13 into the SMS message 102 when the texter selects the icon.
  • a red video camera icon could indicate the insertion of 0x14 into the SMS message 102 .
  • a function key or ALT key combination with a second character could be used to insert the metacharacters.
  • an icon of a camera could indicate that metacharacter 0x14 is inserted in the SMS message 102 .
  • the specific icon or metacharacters could be implemented using other icons and character choices without departing from the spirit of this invention.
  • the texter could delineate the portion of the text of interest by highlighting such text and perhaps pressing a key representing the metacharacter insertion.
  • the camera would focus on filming the recipient while reading this segment including a time period after the reading was finished.
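The highlighted-segment variant described above might be sketched as follows. This is an illustrative Python sketch, not an implementation from the patent: the function name is an invention of this description, and 0x13/0x14 are the video-on/video-off metacharacters suggested earlier.

```python
# Hypothetical sketch: wrap a highlighted span of an outgoing SMS with the
# video-on (0x13) and video-off (0x14) metacharacters described in the text.
VIDEO_ON = "\x13"   # inserted by the green video-camera icon
VIDEO_OFF = "\x14"  # inserted by the red video-camera icon

def mark_segment_for_video(message: str, start: int, end: int) -> str:
    """Wrap the highlighted span [start:end) with video on/off metacharacters.

    The caller is responsible for checking that the marked message still
    fits within the 160-character SMS limit, since the two control
    characters count against it.
    """
    if not (0 <= start <= end <= len(message)):
        raise ValueError("highlighted span out of range")
    return message[:start] + VIDEO_ON + message[start:end] + VIDEO_OFF + message[end:]
```

A message marked this way, e.g. `mark_segment_for_video("hello world", 6, 11)`, yields `"hello \x13world\x14"`, directing the receiving application to record while the word "world" is being read.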
  • the SMS message 102 is sent to the receiver's phone 103 .
  • the SMS message 102 is stored until the receiver reads the message.
  • the video is turned on when the receiver's eyes reach the location in the text message where the command to turn on the video is located and then turned off when the receiver's eyes reach the location of the video off command.
  • the text application on the receiver's phone 103 assembles the video into an SMS message 104 for transmission back to the texter's phone 105 (and 101 ).
  • the receiver's text message application may require that the receiver grant permission beforehand to allow videos to be taken. This permission may be granted for all texters or only for certain texters; therefore, texters may ask specific friends and contacts to opt in to the feature. Permission may be granted by the receiver to specific texters and not to others, or the text message application may be set up to deny permission to take any videos regardless of the texter.
  • the UI for the texting application could make it convenient for a recipient to temporarily turn off the feature for a given texter whenever that was desired.
  • a visual indicator would be apparent and associated with a specific incoming text to let the recipient know that a response text has been requested.
  • the indicator could also show where in such an incoming text the response photo has been requested to be generated.
  • the recipient is being asked to generate a facial expression in response to a specific text or text part, rather than having a candid photo or video generated.
  • a visual indicator would be apparent, perhaps in each message and/or in the general contacts list, to remind recipients that they had given permission for a specific texter to be sent response texts.
  • the response video is displayed on the receiver's phone and the receiver is prompted to give permission to send the message back to the texter.
  • the prompts could be send, deny, or redo. Redo allows the receiver to record a new video or to replace the video with a saved video (the receiver could also incorporate an HTML link or a link to a video in the response).
  • the receiver could also be prompted to comment on the video, in text or audio, before returning the message. Or such annotations could be a normal part of the process and require no prompting.
  • the receiver may choose which one to return to the texter. Or the receiver may choose to edit the video before returning.
  • the SMS message 104 is then sent to the texter's phone 101 and 105 .
  • the texter can then watch the video to see the receiver as the text message is read using a standard text message application.
  • the original recipient could activate a request to see the original texter's response when that person saw the video that was sent back after seeing the original text.
  • a series of ping-pong response videos could be generated and sent.
  • the original text message could be returned with the video to the texter so that the texter knows which message the video relates to. This may be important where the texter has sent multiple messages to the receiver before the receiver has read the first one.
  • the problem of identifying to which text message the video applies could be solved by inserting the reaction video back into the stream of texts sent that appear on the texter's phone, thus leaving the original text and the reaction video in the chronological order of the texter's message stream.
  • the texter indicates that a video is to be taken when the receiver reads a text message by setting a flag in the header of the text message indicating that the receiver's video should be taken.
  • the texter enters a time after the receiver starts reading the message at which to take the video, or a percentage of the message to be read before taking the video. This percentage or time value is placed into the header of the text message.
  • the receiver's text message application will wait the specified amount of time and then take the video. Or the receiver's text message application will wait until the receiver has read the specified percentage of the message, as determined via eye-tracking or head orientation software, before taking the video. The reaction video is then returned to the texter.
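The header-flag variant above, in which the texter supplies either a delay or a read-percentage, could be sketched as a small decision function on the receiver's side. The header field names (`capture_video`, `delay_seconds`, `read_percent`) are assumptions for illustration, not part of any SMS standard.

```python
def should_capture(header: dict, elapsed: float,
                   chars_read: int, total_chars: int) -> bool:
    """Return True once the header's capture condition is satisfied.

    header      -- hypothetical parsed message-header fields
    elapsed     -- seconds since the receiver opened the message
    chars_read  -- characters the eye/head tracker believes have been read
    """
    if not header.get("capture_video"):
        return False                     # no flag set: never capture
    if "delay_seconds" in header:
        return elapsed >= header["delay_seconds"]
    if "read_percent" in header:
        return (total_chars > 0 and
                100.0 * chars_read / total_chars >= header["read_percent"])
    return True                          # flag with no condition: capture now
```

The receiving application would poll this predicate while the message is displayed and start recording on the first True result.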
  • the photos could be analyzed by software on the receiver's phone to select the best photo to use. Or a video could be analyzed for the best still image, or segment of video to use. Alternatively, the receiver could be prompted to choose the best photo in the series (or video) to send to the texter.
  • the software on the receiver's phone could analyze the video and edit the photo to crop out background and center the video on the receiver's face. If multiple faces are seen, the software would focus on the one oriented to be reading the text (in case the phone is sitting on a table and multiple faces appear in the video). If multiple faces appear in the video and all seem to be looking at the text, then the software may capture all of the faces in the video. Alternatively, the software could compare the video to a known picture of the receiver, and crop out all other faces except the receiver's.
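The multiple-faces case could be handled with a simple orientation heuristic, assuming the face detector reports a yaw angle for each detected face (0° meaning squarely facing the screen); the detector and data shape are assumptions of this sketch.

```python
def pick_reader_face(faces):
    """faces: list of (yaw_degrees, area_pixels) tuples, one per detected face.

    Return the index of the face most squarely oriented toward the screen
    (smallest absolute yaw), i.e. the face most likely reading the text,
    or None if no face was detected.
    """
    if not faces:
        return None
    return min(range(len(faces)), key=lambda i: abs(faces[i][0]))
```

In the table-top scenario described above, the cropping step would then center the video on the chosen face, or fall back to matching against a known picture of the receiver.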
  • FIG. 2 we see an algorithm for the implementation of the invention on the receiver's phone 103 .
  • a modification to the standard text message application receives the text message 201 .
  • the message 102 is then parsed by the text message application to see if there is a metacharacter indicating that the video is to be turned on 202 . If the video metacharacter is not present in the message 203 , then the message 102 is processed as a normal SMS message 102 .
  • the video metacharacter is present 205
  • the user-facing camera on the phone 103 is turned on and algorithms on the phone are activated to track the receiver's eyes 206 .
  • Eye tracking algorithms can be found on phones such as the Samsung Galaxy S4.
  • the text message application could turn on the video once the video metacharacter is displayed on the screen and could continue taking the video until the video off metacharacter reaches the top of the screen or until the text message is no longer displayed on the phone 103 screen.
  • the receiver's eyes are matched to the text 207 until the receiver's eyes reach the video metacharacter 208 . While waiting until the eyes reach that portion of the text 209 , the algorithm loops around matching the receiver's eyes to the text.
  • the user facing camera on the phone starts recording the video 211 .
  • the video continues recording, and the algorithm enters another loop seeking the point where the receiver's eyes see the video stop metacharacter.
  • the recording stops.
  • Some implementations may require a time delay between the reading of the video stop metacharacter and the actual stop of the recording, to account for the delay between the time the eye sees the text and the time the brain responds to what is read.
  • the receiver's phone 103 will package the video in an SMS message 104 and send the message 212 .
  • the algorithm then returns to the normal SMS message processing 213 .
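The FIG. 2 flow on the receiver's phone might be sketched as below, with the camera and eye tracker simulated by a sequence of character offsets that the receiver's eyes have reached; all names are illustrative, and a real implementation would drive the loop from the phone's front camera.

```python
# Sketch of the FIG. 2 algorithm: parse the incoming message for the video
# metacharacters, then record while the receiver's eyes traverse the marked
# span. Eye tracking is simulated as a list of character offsets.
VIDEO_ON, VIDEO_OFF = "\x13", "\x14"

def process_incoming(message: str, eye_positions):
    """Return (display_text, recorded_positions).

    recorded_positions is the subset of eye positions during which the
    simulated camera was recording; it is empty when no metacharacter is
    present (the normal-SMS path).
    """
    start = message.find(VIDEO_ON)
    if start == -1:
        return message, []               # 203: no metacharacter, normal SMS
    stop = message.find(VIDEO_OFF, start)
    display = message.replace(VIDEO_ON, "").replace(VIDEO_OFF, "")
    recorded, recording = [], False
    for pos in eye_positions:            # 207: match receiver's eyes to text
        if not recording and pos >= start:
            recording = True             # 208/211: eyes reached video-on mark
        if recording:
            recorded.append(pos)         # frames captured while recording
        if stop != -1 and pos >= stop:
            break                        # eyes reached the video-off mark
    return display, recorded             # 212: package and send follows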
  • the metacharacter could be displayed in the text message for the recipient to see or it could be hidden, depending upon the configuration of the text message processing application.
  • the text message is visible on the lock screen of the phone, in the form of a notification, for instance, as soon as it arrives.
  • the receiver's text software could monitor the phone's accelerometers to detect the movement from its resting place to a point where the user can view the screen, at which point the motion of the phone stabilizes while it is being held at a reading angle. Once stability is noticed by the accelerometers, the video can start, the assumption being that the text is being read at that point.
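The accelerometer heuristic above could be sketched as a stability test over recent motion samples; the magnitude representation and the window and tolerance values are assumptions chosen for illustration.

```python
def is_stable(samples, window=5, tolerance=0.05):
    """Return True when the last `window` accelerometer magnitudes vary by
    less than `tolerance`, taken as the phone being held still at a reading
    angle (the trigger to start recording)."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    return max(recent) - min(recent) <= tolerance
```

The texting application would feed this a rolling buffer of accelerometer magnitudes and start the front camera on the first stable window after the message arrives.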
  • the phone would not be picked up when the video-requested text comes in because it is lying on a table and the text is clearly visible without picking it up.
  • the phone's camera would then turn on in an attempt to capture the image of the recipient looking at the phone and reading the text without picking it up. If a metacharacter was embedded deeper in the text, such an image might not be taken as the sought after emotion could be captured later when the recipient picked up the phone and looked at the whole message.
  • the recipient's camera could turn on and try to use eye-tracking to discern if the message is being read by the recipient.
  • the front-facing camera could be used to take the reaction video even at such an oblique angle if necessary. If such eye-tracking software is not available, or if eye movements are too difficult to track at that angle, then head-tracking software might be employed that would look for the general head orientation to determine whether the user is looking at the screen, as an indication of when to start recording.
  • the receiver's phone has facial recognition software operating.
  • Facial recognition software is available on the market; FaceReader 6 from Noldus Information Technology, for instance, could provide detection of emotions and changes in emotions.
  • the facial recognition software observes the receiver's face as he reads the text message, and captures the video when the receiver's face reacts to the text message.
  • the camera in this embodiment would ideally watch the user's face during the entire period that the text is being read.
  • the facial recognition software would review the facial expressions, watching the eyes, mouth, and other facial features to determine changes in expression as the text was read.
  • When the expression changes markedly, the software would excise such segments or images and use them in constructing the response text to be sent back to the texter. During the reading of a single text message there may be several changes in expression; the software could collect each change or could choose the photo with the most significant change in expression.
  • the recipient would be allowed to review the images generated and pick one or more photos, or some or all of the video segment to send back to the texter.
  • the metacharacter could indicate what type of expression the texter was looking for (a laugh, for instance) and the software could pull out the best instance of such an expression.
  • Such specification of desired responses could be set for a specific text, or a specific recipient, or for all texts sent by a given texter. Response specifications could include types or intensities of emotions.
  • the software could look for any change in emotion when the recipient read the text content associated with the metacharacter.
  • the texter could have set a minimum threshold for a given expression. If the facial recognition software did not deem that a reaction met that threshold of reaction then no reaction text would be sent back to the texter.
  • Different expressions could have different thresholds for a given texter. And different recipients could have different thresholds for a given texter or different thresholds for a given emotion.
  • the texter may indicate, perhaps by tapping with a thumbs-up or down icon that that was the reaction the texter was looking for. With such a feedback loop the texter could help calibrate the system to better filter out expressions desired by the texter.
  • the response text would be generated and sent to the server, however not sent on to the texter.
  • the texter could “retrieve” the response images later via a “pull” process. That is, the texter could specify the text to which a response image was desired and the response image could then be sent to that texter. Such retrieval might be through viewing on a website, via an email, or it could be in the form of a text overlaid on the text conversation with the recipient.
  • the recipient would see some indication on their display that a certain section of a text had been designated for a response text.
  • an indication might be a flash of some sort associated with the metacharacter, such flash resembling or alluding to a camera's flash going off.
  • Another method might be to bold or flash the text.
  • Such indicators might come slightly before the material was read (indicating it's importance to the texter and that a response is expected) or the indication might come after reading giving this information to the recipient after the relevant material had already been read.
  • the system could provide summary data to the texter as to what types and numbers of reactions he or she was getting from any given recipient and how that might be changing over time.
  • AI software could be employed to analyze the content of texts sent and then associate such content with the reactions received.
  • no metacharacters would be used at all. All facial reactions would be recorded for all received texts and the facial recognition software would be used to cull out all those reaction expressions that met certain thresholds and use them as the basis for texts to be sent back to the texter.
  • the invention requires both texter and recipient to opt in to the application, presumably downloading a new text application to set up both sides of the exchange of texts and responses to said texts.
  • the invention could also be implemented, in part by a willing recipient.
  • Such recipient could employ a text application with a facial emotion recognition component. Every time such a recipient received a text—from anybody or only certain persons or classes of persons—the front facing camera would turn on and start to record looking for an outstanding emotion of some type. When found, such images, photos or videos or both, could be used to automatically construct a reply text.
  • the original texter would not need to have installed the app used by the recipient to receive and enjoy such a reply text.
  • the recipient could have the option to decline sending such a text or could add additional text to it and annotate the text in other ways before sending.
  • One way that reply text images could be annotated would be to use photo-editing techniques to change the photo. For instance, if the text caught the person with unkempt hair, perhaps a hat could be added to the picture.
  • the reply text could also include a note advertising the app that produced such a text, and in that way make the spread of the app that much more viral.
  • FIG. 3 shows a sample SMS message 301 encoded to take a video.
  • the “*” character is used to both turn on and turn off the video recorder.
  • a single metacharacter will be used to both turn on and turn off the video camera.
  • the metacharacter may or may not be visible to the recipient when the text comes in but would be readable by the texting app on that person's phone.
  • FIG. 3 displays a sample SMS message 301 in hexadecimal format. This SMS message 301 is then decoded in the table below. With the exception of the video metacharacter, this is a standard SMS message.
  • the SMS message 301 starts with the length of the SMSC information 310 .
  • the type of the SMSC number 311 and the SMSC number 312 itself represent the destination phone number. This is where the SMS message 301 is to be sent.
  • the SMSC length 310 is an 8 bit value specifying the length of the next two fields, in this case 7 bytes (a byte is 8 bits in length).
  • the SMSC type 311 is 0x91, specifying an international format phone number. Other types of numbers can be found in the GSM 03.40 standard.
  • the SMSC number 312 is 6 bytes long, and has the nibbles reversed, commonly called Little-endian format.
  • the sender address 316 specifies the address of the SMS message 301 sender.
  • the protocol identifier 317 either refers to the higher layer protocol being used, indicates interworking with a certain type of telematic device (like fax, telex, pager, teletex, e-mail), specifies the replace type of the message, or allows download of configuration parameters to the SIM card. Plain SMS messages have the protocol identifier set to 0x00. Further details can be found in the GSM 03.40 standard.
  • the data encoding scheme 318 specifies the data encoding of the message.
  • Typical Latin based language SMS messages use a 7 bit encoding and insert 0x00 in this field.
  • Other options are 8 bit encoding and 16 bit encoding, typically used for Chinese, Korean or Japanese languages.
  • the next part of the SMS message 301 is the time stamp 319 . This is followed by the length of the SMS data 320 . This is a count of the number of characters of user data 321 . Note that this is not a count of the bytes in the user data: the characters are packed using 7 bits per character, so a single byte may contain bits from two adjacent characters.
  • the user data 321 contains the user's text message packed into 7 bit format in this example, although the message could be coded in 8 bit or 16 bit format as described above. Standard algorithms for packing 7 bit SMS messages can be found on the internet.
  • the notable distinction here is that the user data 321 contains two “*” characters which, for this embodiment, we have specified as the metacharacter for remotely turning on and off the user facing video camera on the receiver's phone 103 .
  • the SMS header will be modified to indicate that a video is to be taken. This modification could be the addition of a bit that indicates that a video is to be taken.
  • In another embodiment, the SMS header could add a field indicating when to start the video. This value could indicate how many seconds of delay should occur before the video starts, or the percentage of the message that is to be read before starting the video.
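As a rough illustration of the field layout described above (not the patent's implementation), the nibble-swapped number format and the 7-bit packing of the user data 321 can be sketched in Python. The semi-octet decoding and septet unpacking follow GSM 03.40 conventions; the "*" metacharacter scan is this embodiment's convention, and all function names are our own.

```python
# Sketch of decoding the SMS PDU fields described above. Illustrative only.

def decode_semi_octets(data: bytes) -> str:
    """Decode a nibble-swapped (little-endian) phone number, e.g. the SMSC number 312."""
    digits = []
    for b in data:
        digits.append(format(b & 0x0F, "X"))  # low nibble holds the first digit
        digits.append(format(b >> 4, "X"))    # high nibble holds the second digit
    return "".join(digits).rstrip("F")        # 0xF pads an odd-length number

def unpack_septets(data: bytes, count: int) -> str:
    """Unpack `count` 7-bit characters from packed user data 321."""
    acc = bits = 0
    chars = []
    for b in data:
        acc |= b << bits
        bits += 8
        while bits >= 7 and len(chars) < count:
            chars.append(chr(acc & 0x7F))
            acc >>= 7
            bits -= 7
    return "".join(chars)

def metacharacter_positions(text: str, metachar: str = "*"):
    """Locate the on/off metacharacters in the decoded user data."""
    return [i for i, c in enumerate(text) if c == metachar]
```

For example, `decode_semi_octets(bytes.fromhex("7283010010F5"))` yields `"27381000015"`, and `unpack_septets(bytes.fromhex("E8329BFD06"), 5)` yields `"hello"`.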


Abstract

The FaceBack system incorporates an automated mechanism for remotely capturing the reaction of the receiver of a text message. This mechanism is started when the texter incorporates instructions into a text message to take a photo or record a video when the text is read by a receiver. The receiving device scans the incoming text message and takes a photo or records a video of the receiving user when the user reads the text message. Once the video is recorded or the picture is taken, the video or picture is returned to the texter through the SMS system.

Description

    RELATED APPLICATIONS
  • This patent application claims the benefit of U.S. Provisional Application Ser. No. 62/052,172, filed Sep. 18, 2014, entitled “FACEBACK: AUTOMATED RESPONSE CAPTURE USING TEXT MESSAGING”, which is herein incorporated by reference in its entirety.
  • BACKGROUND OF INVENTION Field of the Invention
  • The present invention is directed to text messaging applications and is more specifically related to the exchange of photographs and video using such text messaging applications.
  • Description of the Related Art
  • Text messages have been sent since 3 Dec. 1992, when Neil Papworth, a test engineer for Sema Group in the UK, used a personal computer to send the text message “Merry Christmas” via the Vodafone network to the phone of Richard Jarvis. In the ensuing decades, billions of SMS messages have been transferred between phones, messages that first contained text, then photos, and later videos. In each case the communication has been driven by the sender (the “texter”) sending a message to the receiver, who must respond to the text when appropriate. SMS messages are limited to 160 characters, so messages are often abbreviated, and the emotions of both the texter and the receiver are often lost in the brevity of the message.
  • This lack of emotion in text messages was first addressed with the addition of photographs and later videos to the SMS protocol and to the text messaging applications. But the video and photos are often separated from text, thus blunting the effect.
  • To resolve this, a number of new applications have arisen in recent years to add emoticons to text messages. A group in Sweden created eMoto to bring emoticons to text messages, and similar work has been published by groups from the Hungarian Academy of Science, AT&T, Docomo Communications Laboratories Europe GmbH, Hiroshima City University, and React Limited. While each of these applications allows the user to insert emotions into their text messages through avatars, none shows the true emotion on the face of the recipient as the message is read.
  • EmoText, an MIT Computer Science 441 project from the spring of 2014, discusses the use of facial recognition to interpret the emotion of the receiver of a text message. The emotion is interpreted and an avatar is sent back to the original texter with the emotions.
  • But EmoText only describes the computer interpretation of the emotion of the receiver of the text message. The actual emotion is lost in the medium of the SMS message. The receiver's face is not seen by the texter.
  • The present invention, referred to herein by the short-hand expression “FaceBack”, eliminates the issues articulated above as well as other issues with the currently known products.
  • SUMMARY OF THE INVENTION
  • One aspect of the present invention takes the form of a method for receiving a text message whereby the text message is scanned for a metacharacter, set of metacharacters, or other delineating information dictating that the camera on the receiving device should take a photograph; if the metacharacter is detected, taking a photograph, using the phone's camera, of the face of the recipient of the message (the response photo); and then sending the photograph back, using a text message, to the address of the device that sent the original text message.
  • In a further feature of the method, the photograph could be replaced with a brief video, operated, in some embodiments, as just described for photographs. Another further feature could have an eye tracking system determine when the receiver's eyes read the area of the text associated with the metacharacter and take either the photograph or the video at the approximate time when the text is read. A further aspect of this method provides a time delay before taking the picture or stopping the video. A further aspect of the method includes the step of requesting permission from the receiver before sending the photograph or video.
  • Another aspect of the present invention takes the form of an apparatus for processing text messages containing a phone, a camera, a network interface, and a screen, where the apparatus is configured to receive electronically a text message from the network and display the message on the screen, activate the camera to take a picture of the user when the text message is on the screen, and then send this response picture in reply to the text message.
  • In a further feature of the apparatus, the photograph could be replaced with a brief video. Another further feature could have an eye tracking system determine when the receiver's eyes read the area of the text of interest to the texter, as indicated by the placement of one or more metacharacters, and take either the photograph or the video when the text is read. A further aspect of this apparatus involves a time delay before taking the picture or stopping the video. A further aspect of the apparatus includes a permission apparatus that requests permission from the receiver before sending the photograph or video.
  • BRIEF DESCRIPTION OF FIGURES
  • FIG. 1 is a diagram of the FaceBack system showing the text messages flowing between the phones.
  • FIG. 2 is a flow chart of a possible implementation of the FaceBack system on the receiving cell phone.
  • FIG. 3 is an example of an SMS packet containing the commands to start a video and stop the video, and decoded details of the message.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention addresses the limitations of text messages regarding the transmission of emotions while messaging with SMS messages or similar systems, and particularly addresses the issue of allowing the texter to see the receiver's emotions when the message, or pertinent part of a message, is read. Since the receiver is rarely in the same location as the texter, the texter cannot see how the text message is received by the receiver using existing SMS messaging techniques. If the text messaging applications are modified to allow metacharacters to be inserted in SMS messages, a texter could direct the cell phone camera on the receiver's cell phone to take a photograph or a video of the receiver, and then transmit the photo back to the texter, allowing the texter to see the receiver's face when the message is read. Additional permissions may be implemented by the receiver to always allow, to allow only for designated messages, or to always deny permission to take and send the photo or video. Such permissions may be texter-specific, that is, applying to certain senders of messages but not others. Permissions to take the photograph may be implemented in the setup of the text application, and modified as needed thereafter.
  • For convenience and readability of the following text, we will use the term video to describe the taking of the image of the receiver, although the inventors envision that a photograph could be taken as well as a video, or even both. The term video could include the recording of a moving image in 2D or 3D, the recording of sound, or the recording of both image and sound, or could simply be a still photograph or a series of photographs. In one embodiment, audio generated by the recipient could be processed through a voice recognition program to convert the audio into text, with such text incorporated along with the video or photograph being returned to the texter. Furthermore, when we discuss a phone in the following text, any device that sends and receives an SMS message, or similar message, could be used, such as a cell phone, a smart phone, a tablet, a laptop, a personal computer, a smart watch, or any other similar device. While the description in this document describes a text message based system, the text message could be replaced with an email, chat, Viber, iMessage, WhatsApp, or Snapchat messages (video, photo, or text), instant, video or voice messaging systems, or voicemail. The messages could include embedded metacharacters or other similar mechanisms or protocols that cause a photo or video to be recorded at the time, or slightly after, that a message or relevant part of a message as delineated by metacharacters is read, and then returned to the sender. For video or audio messaging, a tone could be used in one embodiment instead of the metacharacter to indicate when to take the photo. Throughout this document the use of SMS is meant to include SMS, XMPP, MMS, or other protocols.
  • Turning to FIG. 1, we see the path of SMS messages 102 between the phones 101 and 103. The texter's phone 101 runs a standard text message creation application that has been modified to allow metacharacters to be inserted into SMS messages 102. For instance, a command to start taking a video on the receiver's phone 103 could be encoded as hexadecimal character 0x13 and to turn off the video on the receiver's phone 103 as hexadecimal character 0x14. The text message application on the texter's phone 101 may have an icon of a green video camera that inserts 0x13 into the SMS message 102 when the texter selects the icon. A red video camera icon could indicate the insertion of 0x14 into the SMS message 102. Alternatively, a function key or ALT key combination with a second character could be used to insert the metacharacters.
  • In a photograph implementation, an icon of a camera could indicate that a distinct photograph metacharacter is inserted in the SMS message 102. However, the specific icons or metacharacters could be implemented using other icons and character choices without departing from the spirit of this invention.
  • Alternatively, the texter could delineate the portion of the text of interest by highlighting such text and perhaps pressing a key representing the metacharacter insertion. The camera would focus on filming the recipient while reading this segment including a time period after the reading was finished.
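The texter-side insertion just described might be sketched as follows, assuming the 0x13/0x14 control characters from the example above; the helper name and span convention are hypothetical, not part of any SMS standard.

```python
# Hypothetical texter-side helper: wrap a highlighted span of the outgoing
# message in the video-on (0x13) and video-off (0x14) metacharacters.

VIDEO_ON = "\x13"   # inserted by the green video-camera icon in this sketch
VIDEO_OFF = "\x14"  # inserted by the red video-camera icon

def mark_span_for_video(text: str, start: int, end: int) -> str:
    """Delineate text[start:end] as the portion whose reading should be filmed."""
    if not (0 <= start <= end <= len(text)):
        raise ValueError("span out of range")
    return text[:start] + VIDEO_ON + text[start:end] + VIDEO_OFF + text[end:]
```

For example, highlighting the word "got" in "I got the job!" would give `mark_span_for_video("I got the job!", 2, 5)` → `"I \x13got\x14 the job!"`.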
  • Once the texter completes the SMS message 102 using the text message application on the texter's phone 101, the SMS message 102 is sent to the receiver's phone 103. When the message is received by the receiver's phone 103, the SMS message 102 is stored until the receiver reads the message. As the receiver reads the message using a text message application on the receiver's phone 103, the video is turned on when the receiver's eyes reach the location in the text message where the command to turn on the video is located and then turned off when the receiver's eyes reach the location of the video off command.
  • When the video is turned off, the text application on the receiver's phone 103 assembles the video into an SMS message 104 for transmission back to the texter's phone 105 (and 101).
  • There may be some concern about the taking of videos without the user's permission or at times that are not appropriate. The receiver's text message application may require that the receiver grant permission beforehand to allow videos to be taken. This permission may apply to all texters or only to certain texters; therefore, texters may ask specific friends and contacts to opt in to the feature. Or the text message application may be set up to deny permission to take any videos regardless of the texter. While giving permission to have one's video taken would make less sense on a text-by-text basis (as such an action would impact the candidness of a given shot), the UI for the texting application could make it convenient for a recipient to temporarily turn off the feature for a given texter whenever that was desired.
  • In another embodiment, a visual indicator would be apparent and associated with a specific incoming text to let the recipient know that a response text has been requested. The indicator could also show where in such an incoming text the response photo has been requested to be generated. In such an embodiment, the recipient is being asked to generate a facial expression in regards to a specific text or text part, rather than a candid photo or video being generated.
  • In another embodiment, a visual indicator would be apparent, perhaps in each message and or in the general contacts list, to remind recipients that they had given permission for a specific texter to be sent response texts.
  • In one embodiment, the response video is displayed on the receiver's phone and the receiver is prompted to give permission to send the message back to the texter. The prompts could be send, deny, or redo. Redo allows the receiver to record a new video or to replace the video with a saved video (the receiver could also incorporate an HTML link or a link to a video in the response). The receiver could also be prompted to comment on the video, in text or audio, before returning the message. Or such annotations could be a normal part of the process and require no prompting. In the case where multiple photographs are taken, the receiver may choose which one to return to the texter. Or the receiver may choose to edit the video before returning.
  • The SMS message 104 is then sent to the texter's phone 101 and 105. The texter can then watch the video to see the receiver as the text message is read using a standard text message application. In a further enhancement, the original recipient could activate a request to see the original texter's response when that person saw the video that was sent back after seeing the original text. With this enhancement, a series of ping-pong response videos could be generated and sent.
  • In one embodiment, the original text message could be returned with the video to the texter so that the texter knows which message the video relates to. This may be important in the case where the texter has sent multiple messages to the receiver before the receiver has read the first message. In another implementation, the problem of identifying which text message the video applies to could be solved by inserting the reaction video back into the stream of texts that appear on the texter's phone, thus leaving the original text and the reaction video in the chronological order of the texter's message stream.
  • In other embodiments, the texter indicates that a video is to be taken when the receiver reads a text message by setting a flag in the header of the text message. In another embodiment, the texter enters a time after the receiver starts reading the message at which to take the video, or a percentage of the message to be read before taking the video. This percentage or time value is placed into the header of the text message. When received, the receiver's text message application will wait the specified amount of time and then take the video. Or the receiver's text message application will wait until the receiver has read the specified percentage of the message, as determined via eye-tracking or head orientation software, before taking the video. The reaction video is then returned to the texter.
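The header-based variant above can be sketched as a simple receiver-side check; the field and function names are hypothetical, not taken from any SMS header specification.

```python
# Sketch of the header-based trigger: the texter sets either a delay in
# seconds or a percentage of the message to be read before recording.
# Field names are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoTrigger:
    delay_seconds: Optional[float] = None    # start video N seconds after opening
    read_percentage: Optional[float] = None  # start after this % of chars read

def should_start_video(trigger: VideoTrigger,
                       seconds_open: float,
                       chars_read: int,
                       total_chars: int) -> bool:
    """Polled by the receiver's text application as the message is read."""
    if trigger.delay_seconds is not None:
        return seconds_open >= trigger.delay_seconds
    if trigger.read_percentage is not None:
        return total_chars > 0 and \
            (100.0 * chars_read / total_chars) >= trigger.read_percentage
    return False
```

The percentage path would rely on the eye-tracking or head-orientation estimate of `chars_read` mentioned above.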
  • If the video consists of a series of still photographs, the photos could be analyzed by software on the receiver's phone to select the best photo to use. Or a video could be analyzed for the best still image, or segment of video to use. Alternatively, the receiver could be prompted to choose the best photo in the series (or video) to send to the texter.
  • In another embodiment, the software on the receiver's phone could analyze the video and edit the photo to crop out background and center the video on the receiver's face. If multiple faces are seen, the software would focus on the one oriented to be reading the text (in case the phone is sitting on a table and multiple faces appear in the video). If multiple faces appear in the video and all seem to be looking at the text, then the software may capture all of the faces in the video. Alternatively, the software could compare the video to a known picture of the receiver, and crop out all other faces except the receiver's.
  • In FIG. 2, we see an algorithm for the implementation of the invention on the receiver's phone 103. When the SMS message 102 is received by the text message application on the receiver's phone 103, a modification to the standard text message application receives the text message 201.
  • The message 102 is then parsed by the text message application to see if there is a metacharacter indicating that the video is to be turned on 202. If the video metacharacter is not present in the message 203, then the message 102 is processed as a normal SMS message 102.
  • However, if the video metacharacter is present 205, the user-facing camera on the phone 103 is turned on and algorithms on the phone are activated to track the receiver's eyes 206. Eye tracking algorithms can be found on phones such as the Samsung Galaxy S4.
  • If the receiver's phone 103 does not have an eye tracking feature, the text message application could turn on the video once the video metacharacter is displayed on the screen and could continue taking the video until the video off metacharacter reaches the top of the screen or until the text message is no longer displayed on the phone 103 screen.
  • If eye tracking is supported, the receiver's eyes are matched to the text 207 until the receiver's eyes reach the video metacharacter 208. While waiting until the eyes reach that portion of the text 209, the algorithm loops around matching the receiver's eyes to the text.
  • Once the eyes see the metacharacter 210, the user facing camera on the phone starts recording the video 211. The video continues recording, and the algorithm enters another loop seeking the point where the receiver's eyes see the video stop metacharacter. When the eye reads the video stop metacharacter, the recording stops. Some implementations may require a time delay between the reading of the video stop metacharacter and the actual stop of the recording, to account for the delay between the time the eye sees the text and the time the brain responds to what is read.
  • If the video stop metacharacter is not seen by the eyes before the text message is removed from the screen, then the video recording will stop at that point. This covers the case where the user switches the screen before finishing the reading of the message or when the video stop metacharacter is missing from the text message.
  • Once the video has stopped, the receiver's phone 103 will package the video in an SMS message 104 and send the message 212. The algorithm then returns to the normal SMS message processing 213.
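The control flow of FIG. 2 might be sketched as follows. The camera and eye-tracker are stand-ins (here the eye tracker is modeled as a stream of character positions); only the decision logic mirrors the algorithm described above, and the step numbers in comments refer to FIG. 2.

```python
# Sketch of the FIG. 2 receive-side flow. Illustrative only.

VIDEO_ON, VIDEO_OFF = "\x13", "\x14"

def reaction_window(text: str, eye_positions):
    """Given the character index the eyes reach at each tick, return the
    (start_tick, stop_tick) during which the camera should record, or None
    if the message carries no video metacharacter (steps 202-204)."""
    if VIDEO_ON not in text:
        return None                       # process as a normal SMS message
    on_idx = text.index(VIDEO_ON)         # step 205: metacharacter present
    off_idx = text.index(VIDEO_OFF) if VIDEO_OFF in text else len(text)
    start = stop = None
    for tick, pos in enumerate(eye_positions):  # steps 207-209: match eyes to text
        if start is None and pos >= on_idx:
            start = tick                  # step 211: begin recording
        if start is not None and pos >= off_idx:
            stop = tick                   # stop metacharacter reached
            break
    if start is not None and stop is None:
        stop = len(eye_positions) - 1     # message left screen before stop char
    return None if start is None else (start, stop)
```

A real implementation would then package the recorded video into the reply SMS message 104 (step 212) and return to normal processing (step 213).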
  • The metacharacter could be displayed in the text message for the recipient to see or it could be hidden, depending upon the configuration of the text message processing application.
  • On some phones, the text message is visible on the lock screen of the phone, in the form of a notification, for instance, as soon as it arrives. On these phones, the receiver text software could monitor the phone's accelerometers to detect the movement from its resting place to a point where the user can view the screen, at which point the motion of the phone stabilizes while it is being held at a reading angle. Once stability is noticed by the accelerometers, the video can start, with the assumption being that the text is being read at that point.
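One plausible way to detect the "lifted, then held steady at a reading angle" moment is a sliding-window test on accelerometer magnitudes; this is a sketch only, and the window size and threshold are invented values, not tuned for any real device.

```python
# Illustrative stability detector for the accelerometer heuristic above:
# report the sample index at which recent acceleration magnitudes settle
# within a small band, suggesting the phone is being held still to read.

import math
from collections import deque

def first_stable_sample(samples, window=5, threshold=0.2):
    """samples: iterable of (x, y, z) accelerometer readings.
    Returns the index at which the phone is judged stable, or None."""
    mags = deque(maxlen=window)
    for i, (x, y, z) in enumerate(samples):
        mags.append(math.sqrt(x * x + y * y + z * z))
        if len(mags) == window and max(mags) - min(mags) < threshold:
            return i        # motion settled: start the video here
    return None
```

A production version would also check the device's orientation against a typical reading angle before starting the camera.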
  • Alternatively, under certain circumstances the phone would not be picked up when the video-requested text comes in because it is lying on a table and the text is clearly visible without picking it up. In this embodiment, when the phone was lying flat on a surface and a text message appears on the lock screen of the phone in the form similar to a notification, the phone's camera would then turn on in an attempt to capture the image of the recipient looking at the phone and reading the text without picking it up. If a metacharacter was embedded deeper in the text, such an image might not be taken as the sought after emotion could be captured later when the recipient picked up the phone and looked at the whole message.
  • Alternatively, to further refine when an image should be captured, the recipient's camera could turn on and try to use eye-tracking to discern if the message is being read by the recipient. The front-facing camera could be used to take the reaction-video even at such an oblique angle if necessary. If such eye-tracking software is not available, or if eye movements are too difficult to track at that angle, then head-tracking software might be employed that would look for the general head orientation to determine if the user is looking at the screen as an indication of when to start recording.
  • In another embodiment, the receiver's phone has facial recognition software operating. Facial recognition software is available on the market; for instance, FaceReader 6 from Noldus Information Technology provides software that detects emotions and changes in emotions. The facial recognition software observes the receiver's face as he reads the text message, and captures the video when the receiver's face reacts to the text message. The camera in this embodiment would ideally watch the user's face during the entire period that the text is being read. Shortly after reading the text, or parts of the text surrounding the metacharacter, the facial recognition software would review the facial expressions, watching the eyes, mouth, and other facial features to determine changes in expression as the text was read. When the expression changes markedly, the software would excise such segments or images and use them in constructing the response text to be sent back to the texter. During the reading of a single text message, there may be several changes in expression; the software could collect each change or could choose the photo with the most significant change to the expression.
  • Alternatively, the recipient would be allowed to review the images generated and pick one or more photos, or some or all of the video segment to send back to the texter.
  • Alternatively, the metacharacter could indicate what type of expression the texter was looking for (a laugh, for instance) and the software could pull out the best instance of such an expression. Such specification of desired responses could be set for a specific text, or a specific recipient, or for all texts sent by a given texter. Response specifications could include types or intensities of emotions.
  • If no specific type of emotion was being looked for by the texter, the software could look for any change in emotion when the recipient read the text content associated with the metacharacter. There could exist specified thresholds of change that needed to be exceeded for a response to be worthy of being sent back. Such thresholds could vary by recipient and be set by the texter.
  • In another embodiment, the texter could have set a minimum threshold for a given expression. If the facial recognition software did not deem that a reaction met that threshold, then no reaction text would be sent back to the texter. Different expressions could have different thresholds for a given texter. And different recipients could have different thresholds for a given texter, or different thresholds for a given emotion.
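The per-recipient, per-emotion thresholds described above could be modeled as a simple lookup table. This is a sketch under assumed names: the emotion labels, scores, and table structure are invented, and the intensity values would in practice come from the facial-analysis software.

```python
# Hypothetical threshold table for deciding whether a detected reaction is
# strong enough to send back. All names and values are illustrative.

DEFAULT_THRESHOLD = 0.5

# thresholds[recipient][emotion] -> minimum intensity (0..1) set by the texter
thresholds = {
    "alice": {"laugh": 0.7, "surprise": 0.4},
    "bob":   {"laugh": 0.3},
}

def should_send_reaction(recipient: str, emotion: str, intensity: float) -> bool:
    """Send the reaction text only if the detected intensity meets the
    texter's threshold for this recipient and emotion."""
    limit = thresholds.get(recipient, {}).get(emotion, DEFAULT_THRESHOLD)
    return intensity >= limit
```

The thumbs-up/down feedback loop mentioned below could adjust these stored values over time.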
  • Furthermore, when the texter receives such response images, the texter may indicate, perhaps by tapping with a thumbs-up or down icon that that was the reaction the texter was looking for. With such a feedback loop the texter could help calibrate the system to better filter out expressions desired by the texter.
  • In one embodiment, the response text would be generated and sent to the server, however not sent on to the texter. In this embodiment, the texter could “retrieve” the response images later via a “pull” process. That is, the texter could specify the text to which a response image was desired and the response image could then be sent to that texter. Such retrieval might be through viewing on a website, via an email, or it could be in the form of a text overlaid on the text conversation with the recipient.
  • In another implementation, the recipient would see some indication on their display that a certain section of a text had been designated for a response text. Such an indication might be a flash of some sort associated with the metacharacter, resembling or alluding to a camera's flash going off. Another method might be to bold or flash the text. Such indicators might come slightly before the material was read (indicating its importance to the texter and that a response is expected), or the indication might come after reading, giving this information to the recipient once the relevant material had already been read.
  • Over time, the system could provide summary data to the texter as to what types and numbers of reactions he or she was getting from any given recipient and how that might be changing over time. AI software could be employed to analyze the content of texts sent and then associate such content with the reactions received.
  • In one embodiment of the invention, no metacharacters would be used at all. All facial reactions would be recorded for all received texts and the facial recognition software would be used to cull out all those reaction expressions that met certain thresholds and use them as the basis for texts to be sent back to the texter.
  • As described thus far, the invention requires both texter and recipient to opt in to the application, presumably downloading a new text application to set up both sides of the exchange of texts and responses to said texts. However, the invention could also be implemented, in part, by a willing recipient. Such a recipient could employ a text application with a facial emotion recognition component. Every time such a recipient received a text—from anybody, or only from certain persons or classes of persons—the front-facing camera would turn on and start recording, looking for an outstanding emotion of some type. When found, such images—photos, videos, or both—could be used to automatically construct a reply text. The original texter would not need to have installed the app used by the recipient in order to receive and enjoy such a reply text. The recipient could have the option to decline sending such a text, or could add additional text to it and annotate the text in other ways before sending.
  • One way that the reply text images could be annotated would be to use photo-editing techniques to change the photo. For instance, if the photo caught the person with unkempt hair, perhaps a hat could be added to the picture. The reply text could also include a note advertising the app that produced it, and in that way make the spread of the app that much more viral.
  • FIG. 3 shows a sample SMS message 301 encoded to take a video. In this example, the “*” character is used both to turn on and to turn off the video recorder; that is, a single metacharacter serves as both the start and stop trigger for the video camera. As discussed, the metacharacter may or may not be visible to the recipient when the text comes in, but it would be readable by the texting app on that person's phone.
  • FIG. 3 displays the sample SMS message 301 in hexadecimal format. This SMS message 301 is then decoded in the table below. With the exception of the video metacharacter, this is a standard SMS message.
  • The SMS message 301 starts with the length of the SMSC information 310. The SMSC type 311 and the SMSC number 312 identify the Short Message Service Center (SMSC) through which the SMS message 301 is routed. The SMSC length 310 is an 8-bit value specifying the length of the next two fields, in this case 7 bytes (a byte is 8 bits in length). The SMSC type 311 is 0x91, specifying an international-format phone number. Other types of numbers can be found in the GSM 03.40 standard. The SMSC number 312 is 6 bytes long and has the nibbles of each byte swapped, a semi-octet encoding commonly described as little-endian.
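The nibble-swapped SMSC number described above can be decoded in a few lines. The sketch below follows the GSM 03.40 layout (length octet, type-of-address octet, semi-octet BCD digits); the sample bytes in the usage note are a common textbook PDU example, not taken from FIG. 3.

```python
def decode_smsc_number(pdu):
    """Decode the SMSC number from the start of a GSM 03.40 PDU.
    Layout: [length][type-of-address][BCD digits, nibble-swapped]."""
    length = pdu[0]                 # octets that follow (type + digits)
    toa = pdu[1]                    # 0x91 = international format
    digits = pdu[2:1 + length]      # nibble-swapped BCD digits
    number = ""
    for octet in digits:
        number += str(octet & 0x0F)          # low nibble holds the earlier digit
        high = (octet >> 4) & 0x0F
        if high != 0x0F:                     # 0xF pads an odd-length number
            number += str(high)
    return ("+" if toa == 0x91 else "") + number
```

For example, `decode_smsc_number(bytes.fromhex("07917283010010F5"))` yields `"+27381000015"`: each byte's nibbles are swapped back, and the trailing `F` pad nibble is dropped.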
  • The sender address 316, type of sender address 315 and length of sender address 314 follow the same rules as for the SMSC address. The sender address 316 specifies the address of the SMS message 301 sender.
  • The protocol identifier 317 either refers to the higher-layer protocol being used, indicates interworking with a certain type of telematic device (such as fax, telex, pager, teletex, or e-mail), specifies the replace type of the message, or allows download of configuration parameters to the SIM card. Plain SMS messages have the protocol identifier set to 0x00. Further details can be found in the GSM 03.40 standard.
  • The data encoding scheme 318 specifies the data encoding of the message. Typical SMS messages in Latin-based languages use a 7-bit encoding and insert 0x00 in this field. Other options are 8-bit encoding and 16-bit encoding, the latter typically used for Chinese, Korean, or Japanese languages.
  • The next part of the SMS message 301 is the time stamp 319. This is followed by the length of the SMS data 320, a count of the number of characters of user data 321. Note that this is not a count of the bytes in the user data: because the characters are packed using 7 bits per character, a single byte can contain bits from two characters.
  • The user data 321 contains the user's text message packed into 7-bit format in this example, although the message could be coded in 8-bit or 16-bit format as described above. Standard algorithms for packing 7-bit SMS messages can be found on the internet. The notable distinction here is that the user data 321 contains two “*” characters which, for this embodiment, we have specified as the metacharacter for remotely turning on and off the user-facing video camera on the receiver's phone 103.
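The 7-bit unpacking and the metacharacter scan described above can be sketched as follows. This is a simplified illustration that maps septet values directly to ASCII (ignoring the full GSM 03.38 alphabet table and extension characters); the function names are illustrative, not from the description.

```python
def unpack_gsm7(data, septet_count):
    """Unpack GSM 7-bit septets from packed octets. Simplified:
    septet values are mapped straight to ASCII, which is accurate
    for basic Latin letters but not the full GSM 03.38 alphabet."""
    septets = []
    carry = 0
    carry_bits = 0
    for octet in data:
        # Low (7 - carry_bits) bits of this octet complete the next septet.
        septets.append(((octet << carry_bits) & 0x7F) | carry)
        carry = octet >> (7 - carry_bits)
        carry_bits += 1
        if carry_bits == 7:          # accumulated carry forms a whole septet
            septets.append(carry)
            carry = 0
            carry_bits = 0
    return "".join(chr(s) for s in septets[:septet_count])


def find_video_spans(text, meta="*"):
    """Pair up metacharacter positions: each (start, stop) tuple marks
    a stretch of text during which the camera should be recording."""
    positions = [i for i, c in enumerate(text) if c == meta]
    return list(zip(positions[::2], positions[1::2]))
```

For instance, the classic packed sequence `E8 32 9B FD 46 97 D9 EC 37` unpacks (with a septet count of 10) to `"hellohello"`, and scanning a decoded message for paired `"*"` characters yields the character ranges that should trigger the camera.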
  • In another embodiment, the SMS header will be modified to indicate that a video is to be taken. This modification could be the addition of a bit that indicates that a video is to be taken. In yet another embodiment, the SMS header could add a field indicating when to start the video. This value could indicate how many seconds of delay should occur before the video starts, or the percentage of the message that is to be read before the video starts.
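One possible shape for such an extended header—a take-video flag plus either a delay in seconds or a read-percentage trigger—is sketched below. This layout is entirely hypothetical: none of these field names or byte positions appear in the GSM 03.40 standard or in the description above.

```python
from dataclasses import dataclass


@dataclass
class VideoHeader:
    """Hypothetical 3-byte header extension: a take-video flag plus
    either a start delay in seconds or a percentage of the message
    to be read before recording starts. Illustrative only."""
    take_video: bool = False
    delay_seconds: int = 0      # seconds to wait before recording
    read_percentage: int = 0    # 0-100: % of message read before start

    def encode(self):
        flags = 0x01 if self.take_video else 0x00
        return bytes([flags, self.delay_seconds & 0xFF,
                      self.read_percentage & 0xFF])

    @classmethod
    def decode(cls, raw):
        return cls(bool(raw[0] & 0x01), raw[1], raw[2])
```

A receiving app could then check the flag byte and schedule the camera accordingly, falling back to metacharacter scanning when the flag is absent.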
  • The foregoing devices and operations, including their implementation, will be familiar to, and understood by, those having ordinary skill in the art.
  • The above description of the embodiments, alternative embodiments, and specific examples is given by way of illustration and should not be viewed as limiting. Further, many changes and modifications within the scope of the present embodiments may be made without departing from the spirit thereof, and the present invention includes such changes and modifications.

Claims (22)

1-18. (canceled)
19. A method for capturing a reaction to an incoming text message, the method comprising:
receiving the incoming text message on a receiving device from a sending device;
scanning a user data field of the incoming text message for an instruction;
the instruction instructing a camera on the receiving device to capture a user image using the camera,
if the instruction is included in the incoming text message,
capturing the user image using the camera on the receiving device, and
sending the user image, using an outgoing text message, to the sending device.
20. The method of claim 19 wherein the text message is an iMessage message.
21. The method of claim 19 wherein the text message is a Viber message.
22. The method of claim 19 wherein the text message is a WhatsApp message.
23. The method of claim 19 wherein the user image is a portion of a video.
24. The method of claim 19 wherein the user image is one of a series of photographs.
25. The method of claim 19 further comprising delaying the capture of the user image for a period of time.
26. The method of claim 19 further comprising delaying the capture of the user image until a user's eye reaches a specified point in the incoming text message.
27. The method of claim 19 further comprising capturing the user's image for a period of time and reviewing, under software control, the user's images to identify an emotional image that represents the most significant change in the user's facial expression.
28. The method of claim 19 further comprising prompting a user before sending the outgoing text message.
29. An apparatus for capturing a reaction to an incoming text message, the apparatus comprising:
a receiving device;
a processor on the receiving device;
a memory connected to the processor;
a camera on the receiving device electrically coupled to the processor, the camera responsive to an instruction from the processor to collect a user image and store the user image in the memory;
a network interface on the receiving device electrically connected to the processor, the network interface configured to receive incoming text messages and send outgoing text messages;
a screen on the receiving device electrically connected to the processor, configured to display incoming text messages;
software stored in the memory of the receiving device for scanning a user data section of the incoming text messages for the instruction to take the user image, the software further configured to instruct the camera to collect the user image and to send outgoing text messages containing the user image through the network interface.
30. The apparatus of claim 29 wherein the text message is an iMessage message.
31. The apparatus of claim 29 wherein the text message is a Viber message.
32. The apparatus of claim 29 wherein the text message is a WhatsApp message.
33. The apparatus of claim 29 wherein the user image is part of a video.
34. The apparatus of claim 29 wherein the user image is one of a series of photographs.
35. The apparatus of claim 29 wherein the instruction is located in a header of the incoming text message.
36. The apparatus of claim 29 wherein the software further includes an eye tracking algorithm.
37. The apparatus of claim 29 wherein the software further includes facial recognition software.
38. The apparatus of claim 29 wherein the software further edits the images.
39. The apparatus of claim 29 wherein the software further compares the image and determines which image to send in the outgoing text message.
US15/069,857 2014-09-18 2016-03-14 Method and Apparatus for the Automated Response Capture using Text Messaging Abandoned US20170085504A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/069,857 US20170085504A1 (en) 2014-09-18 2016-03-14 Method and Apparatus for the Automated Response Capture using Text Messaging

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201462052172P 2014-09-18 2014-09-18
US14/857,777 US9288303B1 (en) 2014-09-18 2015-09-17 FaceBack—automated response capture using text messaging
US15/069,857 US20170085504A1 (en) 2014-09-18 2016-03-14 Method and Apparatus for the Automated Response Capture using Text Messaging

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/857,777 Continuation US9288303B1 (en) 2014-09-18 2015-09-17 FaceBack—automated response capture using text messaging

Publications (1)

Publication Number Publication Date
US20170085504A1 true US20170085504A1 (en) 2017-03-23

Family

ID=55450301

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/857,777 Expired - Fee Related US9288303B1 (en) 2014-09-18 2015-09-17 FaceBack—automated response capture using text messaging
US15/069,857 Abandoned US20170085504A1 (en) 2014-09-18 2016-03-14 Method and Apparatus for the Automated Response Capture using Text Messaging

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/857,777 Expired - Fee Related US9288303B1 (en) 2014-09-18 2015-09-17 FaceBack—automated response capture using text messaging

Country Status (1)

Country Link
US (2) US9288303B1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107390986A (en) * 2017-07-05 2017-11-24 Tcl移动通信科技(宁波)有限公司 A kind of mobile terminal cuts out figure control method, storage device and mobile terminal
US20230195218A1 (en) * 2021-12-21 2023-06-22 Lenovo (Singapore) Pte. Ltd. Indication of key information apprisal
US20240187363A1 (en) * 2022-12-02 2024-06-06 At&T Intellectual Property I, L.P. Apparatuses and methods for monitoring and managing messages and messaging content

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9374480B2 (en) * 2014-08-08 2016-06-21 Kabushiki Kaisha Toshiba Image processing apparatus and system and method for transmitting an image
US9762517B2 (en) * 2015-03-24 2017-09-12 Unrapp LLC System and method for sharing multimedia recording in a gift receiving event
US9794202B1 (en) * 2016-08-25 2017-10-17 Amojee, Inc. Messaging including standard and custom characters
US10298522B2 (en) 2017-04-10 2019-05-21 Amojee, Inc. Messaging including custom characters with tags localized to language of user receiving message
US10652183B2 (en) * 2017-06-30 2020-05-12 Intel Corporation Incoming communication filtering system
US10706271B2 (en) * 2018-04-04 2020-07-07 Thomas Floyd BRYANT, III Photographic emoji communications systems and methods of use
US20200099634A1 (en) * 2018-09-20 2020-03-26 XRSpace CO., LTD. Interactive Responding Method and Computer System Using the Same
WO2020147945A1 (en) * 2019-01-16 2020-07-23 Unify Patente Gmbh & Co. Kg Method for marking text in electronic messages, communication system, and communication device
CN112584280B (en) * 2019-09-27 2022-11-29 百度在线网络技术(北京)有限公司 Control method, device, equipment and medium for intelligent equipment
CN112422406A (en) * 2020-10-27 2021-02-26 刘鹏飞 Automatic reply method and device for intelligent terminal, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140372540A1 (en) * 2013-06-13 2014-12-18 Evernote Corporation Initializing chat sessions by pointing to content
US9591117B1 (en) * 2014-11-21 2017-03-07 messageLOUD LLC Method and system for communication

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6990452B1 (en) 2000-11-03 2006-01-24 At&T Corp. Method for sending multi-media messages using emoticons
US20020072952A1 (en) * 2000-12-07 2002-06-13 International Business Machines Corporation Visual and audible consumer reaction collection
US6999989B2 (en) * 2001-03-29 2006-02-14 At&T Corp. Methods for providing video enhanced electronic mail return receipts
US20020194006A1 (en) 2001-03-29 2002-12-19 Koninklijke Philips Electronics N.V. Text to visual speech system and method incorporating facial emotions
DE10127558A1 (en) 2001-06-06 2002-12-12 Philips Corp Intellectual Pty Operation of interface systems, such as text synthesis systems, for provision of information to a user in synthesized speech or gesture format where a user profile can be used to match output to user preferences
AU2002950502A0 (en) 2002-07-31 2002-09-12 E-Clips Intelligent Agent Technologies Pty Ltd Animated messaging
US20080274798A1 (en) * 2003-09-22 2008-11-06 Walker Digital Management, Llc Methods and systems for replaying a player's experience in a casino environment
JP2005115896A (en) 2003-10-10 2005-04-28 Nec Corp Communication apparatus and method
US8210848B1 (en) 2005-03-07 2012-07-03 Avaya Inc. Method and apparatus for determining user feedback by facial expression
KR100678209B1 (en) 2005-07-08 2007-02-02 삼성전자주식회사 Method for controlling image in wireless terminal
CN101087332A (en) 2006-06-09 2007-12-12 北京安捷乐通信技术有限公司 A SMS sending system of mobile phone and its method
KR20080020714A (en) 2006-08-24 2008-03-06 삼성전자주식회사 Method for transmission of emotional in mobile terminal
US8098273B2 (en) 2006-12-20 2012-01-17 Cisco Technology, Inc. Video contact center facial expression analyzer module
US7756536B2 (en) 2007-01-31 2010-07-13 Sony Ericsson Mobile Communications Ab Device and method for providing and displaying animated SMS messages
CN102104658A (en) 2009-12-22 2011-06-22 康佳集团股份有限公司 Method, system and mobile terminal for sending expression by using short messaging service (SMS)
US8655965B2 (en) * 2010-03-05 2014-02-18 Qualcomm Incorporated Automated messaging response in wireless communication systems
US8588825B2 (en) 2010-05-25 2013-11-19 Sony Corporation Text enhancement
CN101883339A (en) 2010-06-22 2010-11-10 宇龙计算机通信科技(深圳)有限公司 SMS communication method, terminal and mobile terminal
US8989786B2 (en) 2011-04-21 2015-03-24 Walking Thumbs, Llc System and method for graphical expression during text messaging communications
US9060107B2 (en) * 2011-11-23 2015-06-16 Verizon Patent And Licensing Inc. Video responses to messages
US20130147933A1 (en) 2011-12-09 2013-06-13 Charles J. Kulas User image insertion into a text message
ES1078883Y (en) * 2012-11-20 2013-06-25 Crambo Sa COMMUNICATIONS DEVICE WITH AUTOMATIC RESPONSE TO AN INPUT MESSAGE
US20150031342A1 (en) * 2013-07-24 2015-01-29 Jose Elmer S. Lorenzo System and method for adaptive selection of context-based communication responses
US9516259B2 (en) * 2013-10-22 2016-12-06 Google Inc. Capturing media content in accordance with a viewer expression
CN103647870A (en) 2013-11-27 2014-03-19 宇龙计算机通信科技(深圳)有限公司 Terminal and terminal expression display method
CN104412258A (en) 2014-05-22 2015-03-11 华为技术有限公司 Method and device utilizing text information to communicate

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140372540A1 (en) * 2013-06-13 2014-12-18 Evernote Corporation Initializing chat sessions by pointing to content
US9591117B1 (en) * 2014-11-21 2017-03-07 messageLOUD LLC Method and system for communication

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107390986A (en) * 2017-07-05 2017-11-24 Tcl移动通信科技(宁波)有限公司 A kind of mobile terminal cuts out figure control method, storage device and mobile terminal
US20230195218A1 (en) * 2021-12-21 2023-06-22 Lenovo (Singapore) Pte. Ltd. Indication of key information apprisal
US11720172B2 (en) * 2021-12-21 2023-08-08 Lenovo (Singapore) Pte. Ltd. Indication of key information apprisal
US20240187363A1 (en) * 2022-12-02 2024-06-06 At&T Intellectual Property I, L.P. Apparatuses and methods for monitoring and managing messages and messaging content

Also Published As

Publication number Publication date
US9288303B1 (en) 2016-03-15
US20160088144A1 (en) 2016-03-24

Similar Documents

Publication Publication Date Title
US9288303B1 (en) FaceBack—automated response capture using text messaging
KR102143270B1 (en) Emojicon puppeting
CN105095873B (en) Photo be shared method, apparatus
KR102174086B1 (en) Systems and methods for ephemeral group chat
CN105847597B (en) Method for transmitting and reproducing handwriting message
US9060107B2 (en) Video responses to messages
KR20170091913A (en) Method and apparatus for providing video service
US20070257982A1 (en) Systems and methods for remotely controlling mobile stations
KR102094114B1 (en) Message transmitting method, message processing method and terminal
US9955134B2 (en) System and methods for simultaneously capturing audio and image data for digital playback
US20150207764A1 (en) Method and device for sharing data
US20160294750A1 (en) Electronic Message Slide Reveal System and Method
TW201540115A (en) Communication event history
CN106888150B (en) Instant message processing method and device
EP3272127B1 (en) Video-based social interaction system
CN110892411A (en) Detecting popular faces in real-time media
EP3125587B1 (en) Information transmitting method and device and information receiving method and device
US20100141749A1 (en) Method and apparatus for information processing
KR102058190B1 (en) Apparatus for providing character service in character service system
CN112217714B (en) Method, device, server, client, terminal and storage medium for deleting information in two directions in instant communication session
WO2016177033A1 (en) Loss prevention method and device for intelligent terminal
JP2006526330A (en) Multimedia communication device for capturing and inserting multimedia samples
WO2016044113A1 (en) Video picker
US11044216B2 (en) Method and device for processing a multimedia object
CN116719495A (en) Computer-implemented method of displaying content on a screen of an electronic processing device

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION