WO2018175989A1 - Control and manipulation of video signals in public and private communication networks - Google Patents

Control and manipulation of video signals in public and private communication networks

Info

Publication number
WO2018175989A1
WO2018175989A1 (PCT/US2018/024184)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
video
device user
user
video sequence
Prior art date
Application number
PCT/US2018/024184
Other languages
English (en)
Inventor
John P. NAUSEEF
Brian T. FAUST
Matthew J. Farrell
Christopher S. Wire
Thomas P. THISTLETON
Justin P. COUCHOT
Luke C. MODERWELL
Patrick Ryan RADASZEWSKI
Ruzanna B. ROZMAN
Original Assignee
Krush Technologies, Llc
Priority date
Filing date
Publication date
Application filed by Krush Technologies, Llc filed Critical Krush Technologies, Llc
Publication of WO2018175989A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • H04N7/152: Multipoint control units therefor
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066: Session management
    • H04L65/1069: Session establishment or de-establishment
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40: Support for services or applications
    • H04L65/403: Arrangements for multi-party communication, e.g. for conferences
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N2007/145: Handheld terminals

Definitions

  • This disclosure is directed to control and manipulation of video signals in public and private communication networks.
  • Video teleconferencing allows remote parties to participate in a conversation. Voice, video, and other data are transmitted between the parties over a communication network, such as the Internet. The parties are able to see, speak to, and hear each other, as well as share other data. There is a need for better control and manipulation of video signals for transmission and reception over public and private communication networks.
  • the apparatus comprises a computing device processor for establishing a public video communication network for a mobile device user to interact with a video sequence created by a second mobile device user not connected to the mobile device user on the public video communication network, the interacting with the video sequence comprising adding video messages to the video sequence and viewing video messages from the video sequence; and establishing a private video communication network for the mobile device user to interact with a third mobile device user connected to the mobile device user on the public video communication network, the interacting with the third mobile device user comprising transmitting video messages to the third mobile device user and receiving video messages from the third mobile device user.
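The two-tier arrangement above (a public network where any user may interact with a sequence, and a private network gated on a public-network connection) can be sketched in code. This is an illustrative model only; all class and function names are hypothetical and do not appear in the patent disclosure.

```python
class VideoSequence:
    """A public, ordered collection of video messages started by one user."""
    def __init__(self, creator):
        self.creator = creator
        self.messages = []  # ordered (user, video) pairs

    def add_message(self, user, video):
        # On the public network, no connection to the creator is required.
        self.messages.append((user, video))


class PublicNetwork:
    """Tracks which users are connected to each other."""
    def __init__(self):
        self.connections = set()  # unordered pairs of connected users

    def connect(self, a, b):
        self.connections.add(frozenset((a, b)))

    def is_connected(self, a, b):
        return frozenset((a, b)) in self.connections


class PrivateNetwork:
    """Direct video-message exchange, allowed only between connected users."""
    def __init__(self, public):
        self.public = public
        self.inbox = {}  # recipient -> list of (sender, video)

    def send(self, sender, recipient, video):
        if not self.public.is_connected(sender, recipient):
            raise PermissionError("not connected on the public network")
        self.inbox.setdefault(recipient, []).append((sender, video))
```

The key design point of the claim is visible here: appending to a public `VideoSequence` is unconditional, while `PrivateNetwork.send` checks the public-network connection first.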
  • the computing device processor is further for: receiving a captured video from the mobile device user; receiving, from the mobile device user, a request to create a second video sequence; receiving information associated with the second video sequence, the information comprising an activatable search parameter; creating the second video sequence using the captured video and the information; and adding the second video sequence to the public video communication network, the second video sequence being discoverable using the activatable search parameter.
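The sequence-creation step above can be sketched as follows, modelling the "activatable search parameter" as a hashtag-style tag. Names and the tag representation are hypothetical, not taken from the patent.

```python
sequences = []  # the public video communication network's sequences

def create_sequence(creator, captured_video, tag, location=None):
    """Create a sequence from a captured video plus its metadata,
    including an activatable search parameter (here, a tag) and an
    optional location associated with the captured video."""
    seq = {"creator": creator, "videos": [captured_video],
           "tag": tag, "location": location}
    sequences.append(seq)
    return seq

def search(tag):
    """Sequences are discoverable by activating the search parameter."""
    return [s for s in sequences if s["tag"] == tag]
```

A usage sketch: `create_sequence("alice", "clip.mp4", "#sunset", location="Dayton")` makes the sequence retrievable via `search("#sunset")`.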
  • the computing device processor is further for receiving information associated with the captured video, the information comprising a location associated with the captured video.
  • the computing device processor is further for receiving, from a fourth mobile device user, a request to add a second video message to the second video sequence; determining whether the fourth mobile device user is connected to at least one mobile device user that transmitted a video message to the video sequence; and in response to determining the fourth mobile device user is connected to at least one mobile device user that transmitted a video message to the video sequence, adding the second video message to the second video sequence.
  • the computing device processor is further for receiving, from a fourth mobile device user, a request to add a second video message to the second video sequence; determining whether the fourth mobile device user is connected to at least one mobile device user that transmitted a video message to the video sequence; and in response to determining the fourth mobile device user is not connected to at least one mobile device user that transmitted a video message to the video sequence, rejecting the request to add the second video message to the second video sequence.
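The two bullets above describe a single gating rule: a request to add a message is granted if the requester is connected to at least one prior contributor, and rejected otherwise. A minimal sketch, with hypothetical names and a set-of-pairs connection model:

```python
def contributors(sequence):
    """Users who have already added a video message to the sequence."""
    return {user for user, _video in sequence["messages"]}

def request_add(sequence, requester, video, connections):
    """connections: set of frozenset({a, b}) pairs on the public network.
    Returns True if the request is granted, False if rejected."""
    linked = any(frozenset((requester, c)) in connections
                 for c in contributors(sequence))
    if linked:
        sequence["messages"].append((requester, video))
        return True
    return False
```

Note that once a request is granted, the requester becomes a contributor, so later requesters connected only to them would also pass the check.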
  • the third mobile device user comprises a group of mobile device users.
  • the computing device processor is further for establishing, via the private communication network, a video conference between the mobile device user and the third mobile device user.
  • the computing device processor is further for receiving, from the mobile device user, selection of a live effect; and applying the live effect to the video conference.
  • the computing device processor is further for receiving, from the mobile device user, selection of a live effect; and applying the live effect to a video message associated with the mobile device user.
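A live effect, as described in the two bullets above, can be modelled as a per-frame transform applied to either a conference stream or a video message. This is an illustrative sketch only; the effect and frame representations are hypothetical.

```python
def apply_live_effect(frames, effect):
    """Apply the user-selected effect to each frame and return the result."""
    return [effect(frame) for frame in frames]

def grayscale(frame):
    """Example effect: average (r, g, b) channels of each pixel."""
    return [tuple([sum(px) // 3] * 3) for px in frame]
```

The same `apply_live_effect` call covers both claimed cases: the input may be the live conference's frames or a recorded video message's frames.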
  • the computing device processor is further for: receiving, from the mobile device user, a request to block a video message in the public video communication network; blocking the video message for the mobile device user such that the mobile device user cannot view the video message; receiving a request from a fourth mobile device user to view the video message; and enabling the fourth mobile device user to view the video message.
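The blocking behaviour above is per-user: a blocked video message becomes invisible to the user who blocked it, while every other user can still view it. A minimal sketch, with hypothetical names:

```python
blocked = {}  # user -> set of message ids that user has blocked

def block(user, message_id):
    """Record that this user blocked this message on the public network."""
    blocked.setdefault(user, set()).add(message_id)

def can_view(user, message_id):
    """A message is hidden only for users who have blocked it."""
    return message_id not in blocked.get(user, set())
```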
  • the computing device processor is further for receiving, from the mobile device user, a request to view the video sequence in bird's-eye view; and providing, to the mobile device user, a bird's-eye view of the video sequence, the bird's-eye view showing multiple video messages of the video sequence on a mobile device screen.
  • the computing device processor is further for: receiving, from the mobile device user, a request to view a private video sequence between the mobile device user and the third mobile device user in bird's-eye view; and providing, to the mobile device user, a bird's-eye view of the private video sequence, the bird's-eye view showing multiple video messages of the private video sequence on a mobile device screen.
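One way to realize the bird's-eye view described above is to chunk a sequence's messages into screenfuls of thumbnails instead of presenting one message at a time. The grid size is an assumption for illustration, not specified by the patent.

```python
def birdseye(messages, per_screen=9):
    """Split a sequence's messages into screens of up to `per_screen`
    thumbnails, so multiple messages appear on one mobile device screen."""
    return [messages[i:i + per_screen]
            for i in range(0, len(messages), per_screen)]
```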
  • the computing device processor is further for: receiving, from a fifth mobile device user, a connection request for the mobile device user; transmitting the connection request to the mobile device user; and in response to determining acceptance of the connection request by the mobile device user, connecting, on the public video communication network, the fifth mobile device user with the mobile device user.
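The connection-request flow above (request, forward, accept, connect) can be sketched with a pending-request table and the same pair-based connection set used for the public network. All names are hypothetical.

```python
def request_connection(requests, from_user, to_user):
    """Record a connection request to be transmitted to the recipient."""
    requests.setdefault(to_user, []).append(from_user)

def accept(requests, connections, to_user, from_user):
    """On acceptance, connect the two users on the public network.
    Returns False if no such request is pending."""
    if from_user in requests.get(to_user, []):
        requests[to_user].remove(from_user)
        connections.add(frozenset((from_user, to_user)))
        return True
    return False
```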
  • the computing device processor is further for: establishing the private video communication network for the mobile device user to interact with a fourth mobile device user connected to the mobile device user on the public video communication network, the interacting with the fourth mobile device user comprising transmitting video messages to the fourth mobile device user and receiving video messages from the fourth mobile device user; and establishing the private video communication network for the third mobile device user to interact with the fourth mobile device user, the interacting between the third mobile device user and the fourth mobile device user comprising exchanging, on the private video communication network, video messages between the third mobile device user and the fourth mobile device user, the third mobile device user not being connected to the fourth mobile device user on the public video communication network.
  • In some embodiments, a method is provided for control and manipulation of video signals in public and private video communication networks.
  • the method comprises: establishing a public video communication network for a mobile device user to interact with a video sequence created by a second mobile device user not connected to the mobile device user on the public video communication network, the interacting with the video sequence comprising adding video messages to the video sequence and viewing video messages from the video sequence; and establishing a private video communication network for the mobile device user to interact with a third mobile device user connected to the mobile device user on the public video communication network, the interacting with the third mobile device user comprising transmitting video messages to the third mobile device user and receiving video messages from the third mobile device user.
  • In some embodiments, a computer-readable medium is provided for control and manipulation of video signals in public and private video communication networks.
  • the computer-readable medium comprises code configured for: establishing a public video communication network for a mobile device user to interact with a video sequence created by a second mobile device user not connected to the mobile device user on the public video communication network, the interacting with the video sequence comprising adding video messages to the video sequence and viewing video messages from the video sequence; and establishing a private video communication network for the mobile device user to interact with a third mobile device user connected to the mobile device user on the public video communication network, the interacting with the third mobile device user comprising transmitting video messages to the third mobile device user and receiving video messages from the third mobile device user.
  • FIG. 1 shows a pictorial illustration of a system for video teleconferencing between client devices, in accordance with some embodiments of the invention.
  • FIG. 2A shows a flow chart that schematically represents a method for initiating video conferencing on the mobile client device by background and foreground components, in accordance with some embodiments of the invention.
  • FIG. 2B illustrates a schematic diagram of an example of a mobile client device, in accordance with some embodiments of the invention.
  • FIG. 3A shows a schematic illustration of a system that bundles three available network data connection channels available to be used in a video conferencing session, in accordance with some embodiments of the invention.
  • FIG. 3B shows a functional block diagram of a video communication server for control and manipulation of video signals in public and private communication networks, in accordance with some embodiments of the invention.
  • FIG. 4 shows a flow chart that represents a method for application level hand over of a mobile video call, in accordance with some embodiments of the invention.
  • FIGS. 5-142 each show another user interface of a video control application in a complex computing network, in accordance with some embodiments of the invention.
  • FIG. 143 is a method for control and manipulation of video signals in public and private communication networks, in accordance with some embodiments of the invention.
  • FIG. 144 is another user interface of a video control application in a complex computing network, in accordance with some embodiments of the invention.
  • FIG. 145 is another user interface of a video control application in a complex computing network, in accordance with some embodiments of the invention.
  • FIG. 146 is another user interface of a video control application in a complex computing network, in accordance with some embodiments of the invention.
  • FIG. 147 is another user interface of a video control application in a complex computing network, in accordance with some embodiments of the invention.
  • the various technologies described herein generally relate to video teleconferencing and more specifically to methods and systems for video conferencing over fixed and packet networks with end points, such as personal computers and mobile devices. Specifically, this disclosure is directed to an apparatus for control and manipulation of video signals in public and private communication networks.
  • the apparatus comprises a computing device processor for establishing a public video communication network for a mobile device user to interact with a video sequence created by a second mobile device user not connected to the user on the public video communication network, the interacting with the video sequence comprising adding video messages to the video sequence and viewing video messages from the video sequence; and establishing a private video communication network for the mobile device user to interact with a third mobile device user connected to the user on the public video communication network, the interacting with the third mobile device user comprising transmitting video messages to the third mobile device user and receiving video messages from the third mobile device user.
  • a private communication network may also be referred to as a private channel if the network is between two users.
  • the video sequence might also be referred to as a video chain, a video collage, a video tree, a video concatenation, etc.
  • a video message may also be referred to as a video, a video signal, a video stream, an audiovisual stream, a link, a video link, a message, etc.
  • an application may also be referred to as a video control application, a video manipulation application, a video conferencing application, a video messaging application, etc. Any communication, either video or data communication, between mobile devices may be executed through any server described herein.
  • FIG. 1 illustrates a high-level schematic of a video teleconferencing system 10 for control and manipulation of video signals in public and private communication networks, including, for example, one or more mobile client devices 11a, 11b (generally referred to as 11) and one or more personal computer (PC) client devices 13.
  • the mobile client devices 11 may send data and communicate over the Internet 15 with other devices 11, 13 via a wireless communications network 17.
  • the wireless communications network 17 may be a 3G network, a 4G network, WiFi, any other network protocol, or a combination of networks. In the case of some wireless communications networks, a wireless network tower may be used.
  • each device may communicate with each other by sending data to and receiving data from a server 19.
  • the server 19 may be an audio server, a video server, or an audio/video server. It is also understood that the system may include one or more dedicated servers, such as a dedicated audio server and a dedicated video server.
  • Each of the mobile client device 11a, the mobile client device 11b, the PC client device 13, and the server 19 may comprise a processor, such as a digital signal processor or microprocessor, for performing the various methods described in this specification, a memory unit, an input/output (I/O) unit, and a communication unit.
  • the term "signal" may refer to "data” or "information.” Any reference to signals may also include references to the contents of the signals, e.g., signal attributes. Any signals as described herein may be electronic or electromagnetic signals.
  • any signals may be either transitory or non-transitory signals.
  • the terms system, apparatus, device, etc., may be used interchangeably.
  • a method is provided for performing the various steps performed by any computing device described herein.
  • a non-transitory computer-readable medium comprising code is provided for causing any mobile device, computing device, or server to perform the various methods described herein.
  • FIG. 2A illustrates a flow chart that schematically represents a method for initiating video conferencing on the mobile client device 11 by background and foreground components.
  • a mobile client background component is loaded into the mobile device's processor memory and starts execution 23.
  • This mobile client background component 25 connects to a communication server over the IP layer provided by any available data channel of the underlying wireless network, such as 3G, 4G and WiFi.
  • the mobile client device 11 needs to possess the necessary credentials on the target video conferencing system to connect to the communications server. Once connected, the mobile client is considered to have successfully logged onto the video communication service. The server will then notify other clients connected to the service that this mobile client is available for video communication 27.
  • the mobile client sends periodic keep-alive packets to the communication server to maintain the connection's valid open status 29.
  • the frequency of such keep-alive packets is configured with a goal of minimizing battery usage, for example, set at every 30 minutes. However, it is understood that other time periods may also be used.
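The keep-alive behavior described above may be sketched as follows. This is a hypothetical illustration only: the `KeepAliveSender` class, the UDP transport, the packet payload, and the server address are assumptions for illustration, not details from the disclosure; the 30-minute interval mirrors the example given above.

```python
import socket
import threading

# Interval mirroring the 30-minute example above (seconds).
KEEP_ALIVE_INTERVAL = 30 * 60

class KeepAliveSender:
    """Hypothetical background-component loop sending periodic keep-alive packets."""

    def __init__(self, server_addr, interval=KEEP_ALIVE_INTERVAL):
        self.server_addr = server_addr  # (host, port) of the communication server
        self.interval = interval
        self._stop = threading.Event()

    def run(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while not self._stop.is_set():
            # Minimal assumed payload to keep the connection's open status valid.
            sock.sendto(b"KEEPALIVE", self.server_addr)
            # Sleep for the configured interval, waking early if stopped.
            self._stop.wait(self.interval)
        sock.close()

    def stop(self):
        self._stop.set()
```

A long interval keeps battery usage low, at the cost of slower detection of a dropped connection.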
  • when a remote client initiates a video call request to the mobile client, the request is routed via the communication server and sent down to the mobile client via the above-described connection maintained between the server and the background component of the mobile client 31.
  • upon receiving an incoming call message, the background component launches the foreground component via mechanisms supported by the particular mobile operating system on which the mobile device operates 33.
  • once loaded into the processor memory, the foreground component presents a graphical user interface on the mobile device's display screen, through which the full functionality of the mobile video conferencing client can be accessed by the user 33. Upon completion of call handling, the foreground component closes down 35.
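The background/foreground hand-off on an incoming call might be sketched as below. The `BackgroundComponent` and `ForegroundComponent` class names and methods are illustrative assumptions; real mobile operating systems launch foreground activities through platform-specific mechanisms.

```python
class ForegroundComponent:
    """Assumed stand-in for the GUI component launched on an incoming call."""

    def __init__(self):
        self.visible = False
        self.call_info = None

    def launch(self, call_info):
        # Present the graphical user interface for the incoming call.
        self.visible = True
        self.call_info = call_info

    def close(self):
        # Tear down once call handling is complete.
        self.visible = False


class BackgroundComponent:
    """Assumed stand-in for the always-connected background component."""

    def __init__(self):
        self.foreground = None

    def on_incoming_call(self, call_info):
        # In practice this would go through the mobile OS launch mechanism.
        self.foreground = ForegroundComponent()
        self.foreground.launch(call_info)
        return self.foreground
```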
  • FIG. 2B illustrates a schematic diagram of an example of a mobile client device 11.
  • the mobile device may include different schematic layers, including a mobile device hardware layer 37, one or more device drivers 39, a mobile operating system 41, a video conferencing client background component 43, and a video conferencing client foreground component 45.
  • the video conferencing components operate as described above.
  • the mobile client device 11 may include a mobile client user interface 47, a multimedia processing layer 49, and a unified network layer 51.
  • the mobile client user interface layer 47 handles interaction with users and allows a user to control the operation of the client program. It also renders on the device the received audiovisual signals of remote video conferencing participants.
  • the multimedia processing layer 49 manages the capturing and encoding of audio and video signals from the device hardware, as well as the decoding and rendering of audio and video signals received from remote conference participants.
  • the unified network layer 51 handles the packaging and transmission of the encoded audio and video data, together with auxiliary information, down to an abstract interface representing the underlying data networking connection(s).
  • the unified network layer 51 also collects data packets received from the underlying data networking connections and presents the data up to the media processing layer as a single logical network interface.
  • any mobile client device may also be referred to as a mobile device.
  • the mobile client device may also include 3G, 4G, and WiFi interfaces 53, 55, 57 for interfacing with 3G, 4G, and WiFi networks.
  • the mobile client device 11 may sense the type of data connections available on the mobile client device 11.
  • the available data connections may include, for example, 3G, 4G, and WiFi connections. If only one connection is detected, the mobile client device 11 uses this single data channel to send and receive audio and video data. If multiple connections are detected, the mobile client device 11 establishes independent transmission channels to the video conferencing server on each of the available communication networks.
  • the characteristics of the available channels are analyzed to determine how much bandwidth of each channel is available for audio and video transmission while meeting a predetermined minimum quality of service level. Net aggregated bandwidth is then reported to the mobile client application layer as the current available transmission bandwidth which is then used by the mobile client to determine the actual transmission bit rate of the audio and video signals, as well as which data connection will be used for certain data. For example, video data may be sent via the WiFi connection, audio data via the 4G network and error protection data may be sent via the 3G network.
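The channel aggregation and stream assignment described above could be sketched roughly as follows. The channel names, bandwidth figures, minimum quality-of-service level, and the fastest-channel-first assignment policy are illustrative assumptions, not parameters from the disclosure.

```python
def aggregate_bandwidth(channels, min_qos_kbps=64):
    """Sum the bandwidth of channels meeting an assumed minimum QoS level.

    Returns the net aggregated bandwidth reported to the client
    application layer, plus the dict of usable channels.
    """
    usable = {name: bw for name, bw in channels.items() if bw >= min_qos_kbps}
    return sum(usable.values()), usable


def assign_streams(usable):
    """Assign stream types to channels, fastest channel first (assumed policy)."""
    ordered = sorted(usable, key=usable.get, reverse=True)
    streams = ["video", "audio", "error_protection"]
    # Wrap around if there are fewer channels than stream types.
    return {stream: ordered[i % len(ordered)] for i, stream in enumerate(streams)}


# Illustrative channel survey (kbps); figures are assumptions.
channels = {"wifi": 5000, "4g": 1500, "3g": 200}
total, usable = aggregate_bandwidth(channels)
assignment = assign_streams(usable)
```

With these assumed figures, the assignment reproduces the example in the text: video over WiFi, audio over 4G, and error protection data over 3G.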
  • FIG. 3B illustrates an exemplary functional block diagram of a video communication server 300 for control and manipulation of video signals in public and private communication networks.
  • Any units and/or subunits described herein with reference to the video communication server 300 of FIG. 3B may be included in one or more elements of FIGS. 1 to 4, such as the server 19, the mobile client device 11a, and the mobile client device 11b.
  • the video communication server of FIG. 3B and the server 19 of FIG. 1 are the same server.
  • the video communication server 300 and/or any of its units and/or subunits described herein may include general hardware, specifically-purposed hardware, and/or software.
  • the video communication server 300 may include, among other elements, a processor 302, a memory unit 304, an input/output (I/O) unit 306, and/or a communication unit 308.
  • processor 302, the memory unit 304, the I/O unit 306, and/or the communication unit 308 may include and/or refer to a plurality of respective units, subunits, and/or elements.
  • processor 302, the memory unit 304, the I/O unit 306, and/or the communication unit 308 may be operatively and/or otherwise communicatively coupled with each other so as to facilitate the video communication and analysis techniques described herein.
  • the processor 302 may control any of the one or more units 304, 306, 308, as well as any included subunits, elements, components, devices, and/or functions performed by the units 304, 306, 308 included in the video communication server 300.
  • the described sub-elements of the video communication server 300 may also be included in similar fashion in any of the other units and/or devices included in the system 10 of FIG. 1.
  • any actions described herein as being performed by a processor may be taken by the processor 302 alone and/or by the processor 302 in conjunction with one or more additional processors, units, subunits, elements, components, devices, and/or the like. Additionally, while only one processor 302 may be shown in FIG. 3B, multiple processors may be present and/or otherwise included in the video communication server 300 or elsewhere in the overall system (e.g., system 10 of FIG. 1).
  • while instructions may be described as being executed by the processor 302 (and/or various subunits of the processor 302), the instructions may be executed simultaneously, serially, and/or otherwise by one or multiple processors 302.
  • the processor 302 may be implemented as one or more central processing unit (CPU) chips and/or graphics processing unit (GPU) chips and may include a hardware device capable of executing computer instructions.
  • the processor 302 may execute instructions, codes, computer programs, and/or scripts.
  • the instructions, codes, computer programs, and/or scripts may be received from and/or stored in the memory unit 304, the I/O unit 306, the communication unit 308, subunits and/or elements of the aforementioned units, other devices and/or computing environments, and/or the like.
  • the processor 302 may include, among other elements, subunits such as a profile management unit 310, a content management unit 312, a location determination unit 314, a graphical processor (GPU) 316, a private network unit 318, a public network unit 320, and/or a resource allocation unit 324.
  • Each of the aforementioned subunits of the processor 302 may be communicatively and/or otherwise operably coupled with each other.
  • the profile management unit 310 may facilitate generation, modification, analysis, transmission, and/or presentation of a user profile associated with a user. For example, the profile management unit 310 may prompt a user via a user device to register by inputting authentication credentials, personal information (e.g., an age, a gender, and/or the like), contact information (e.g., a phone number, a zip code, a mailing address, an email address, a name, and/or the like), and/or the like. The profile management unit 310 may also control and/or utilize an element of the I/O unit 306 to enable a user of the user device to take a picture of herself/himself.
  • the profile management unit 310 may receive, process, analyze, organize, and/or otherwise transform any data received from the user and/or another computing element so as to generate a user profile of a user that includes personal information, contact information, user preferences, photos and/or videos, a history of user activity, user preferences, settings, and/or the like. Any reference to videos may also include associated audio.
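A user profile record of the kind the profile management unit 310 might assemble could look like the following sketch; all field names and example values are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical user profile assembled by the profile management unit."""
    name: str
    age: int                 # personal information
    gender: str
    phone: str               # contact information
    zip_code: str
    email: str
    preferences: dict = field(default_factory=dict)
    photos: list = field(default_factory=list)        # e.g., registration photo
    activity_history: list = field(default_factory=list)

# Illustrative registration, with assumed values.
profile = UserProfile(name="Alice", age=30, gender="F",
                      phone="555-0100", zip_code="45402",
                      email="alice@example.com")
profile.preferences["notifications"] = True
```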
  • the content management unit 312 may facilitate generation, modification, analysis, transmission, and/or presentation of media content.
  • the content management unit 312 may control the audio-visual environment and/or appearance of application data during execution of various processes.
  • Media content for which the content management unit 312 may be responsible may include advertisements, images, text, themes, audio files, video files, documents, and/or the like.
  • the content management unit 312 may also interface with a third-party content server and/or memory location.
  • the location determination unit 314 may facilitate detection, generation, modification, analysis, transmission, and/or presentation of location information.
  • Location information may include global positioning system (GPS) coordinates, an Internet protocol (IP) address, a media access control (MAC) address, geolocation information, an address, a port number, a zip code, a server number, a proxy name and/or number, device information (e.g., a serial number), and/or the like.
  • the location determination unit 314 may include various sensors, a radar, and/or other specifically-purposed hardware elements for enabling the location determination unit 314 to acquire, measure, and/or otherwise transform location information.
  • the GPU unit 316 may facilitate generation, modification, analysis, processing, transmission, and/or presentation of visual content (e.g., media content described above).
  • the GPU unit 316 may be utilized to render visual content for presentation on a user device, analyze a live streaming video feed for metadata associated with a user and/or a user device responsible for generating the live video feed, and/or the like.
  • the GPU unit 316 may also include multiple GPUs and therefore may be configured to perform and/or execute multiple processes in parallel.
  • the private network unit 318 may process or enable operation of any methods associated with the user's private network accessible by the "friends" option in FIG. 5.
  • the public network unit 320 may process or enable operation of any methods associated with the user's public network accessible by the "chains" option in FIG. 5.
  • These are custom-built units that enable separate processing of the user's public and private networks, which helps to protect the user's privacy.
  • the video communication server 300 may not include a generic computing system, but instead may include a customized computing system designed to perform the various methods described herein.
  • the resource allocation unit 324 may facilitate the determination, monitoring, analysis, and/or allocation of computing resources throughout the video communication server 300 and/or other computing environments.
  • the video communication server 300 may facilitate a high volume of (e.g., multiple) video communication connections between a large number of supported users and/or associated user devices.
  • computing resources of the video communication server 300 utilized by the processor 302, the memory unit 304, the I/O unit, and/or the communication unit 308 (and/or any subunit of the aforementioned units) such as processing power, data storage space, network bandwidth, and/or the like may be in high demand at various times during operation.
  • the resource allocation unit 324 may be configured to manage the allocation of various computing resources as they are required by particular units and/or subunits of the video communication server 300 and/or other computing environments.
  • the resource allocation unit 324 may include sensors and/or other specially-purposed hardware for monitoring performance of each unit and/or subunit of the video communication server 300, as well as hardware for responding to the computing resource needs of each unit and/or subunit.
  • the resource allocation unit 324 may utilize computing resources of a second computing environment separate and distinct from the video communication server 300 to facilitate a desired operation.
  • the resource allocation unit 324 may determine a number of simultaneous video communication connections and/or incoming requests for establishing video communication connections. The resource allocation unit 324 may then determine that the number of simultaneous video communication connections and/or incoming requests for establishing video communication connections meets and/or exceeds a predetermined threshold value.
  • the resource allocation unit 324 may determine an amount of additional computing resources (e.g., processing power, storage space of a particular non-transitory computer-readable memory medium, network bandwidth, and/or the like) required by the processor 302, the memory unit 304, the I/O unit 306, the communication unit 308, and/or any subunit of the aforementioned units for enabling safe and efficient operation of the video communication server 300 while supporting the number of simultaneous video communication connections and/or incoming requests for establishing video communication connections.
  • the resource allocation unit 324 may then retrieve, transmit, control, allocate, and/or otherwise distribute determined amount(s) of computing resources to each element (e.g., unit and/or subunit) of the video communication server 300 and/or another computing environment.
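The threshold-based scaling decision described above may be sketched as follows; the threshold value and the linear scaling rule are assumed placeholders rather than parameters from the disclosure.

```python
# Assumed placeholders, not values from the disclosure.
CONNECTION_THRESHOLD = 1000      # simultaneous connections before scaling up
RESOURCES_PER_BLOCK = 0.5        # extra compute units per 100-connection block

def additional_resources_needed(active_connections, pending_requests):
    """Return extra compute units to allocate once load meets the threshold.

    Mirrors the three-step process above: count connections and requests,
    compare against a predetermined threshold, then determine the amount
    of additional computing resources to distribute.
    """
    load = active_connections + pending_requests
    if load < CONNECTION_THRESHOLD:
        return 0
    overflow = load - CONNECTION_THRESHOLD
    # Scale linearly with overflow, in blocks of 100 connections
    # (ceiling division so a partial block still triggers allocation).
    return -(-overflow // 100) * RESOURCES_PER_BLOCK
```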
  • factors affecting the allocation of computing resources by the resource allocation unit 324 may include the number of ongoing video communication connections and/or other communication channel connections, a duration of time during which computing resources are required by one or more elements of the video communication server 300, and/or the like.
  • computing resources may be allocated to and/or distributed amongst a plurality of second computing environments included in the video communication server 300 based on one or more factors mentioned above.
  • the allocation of computing resources of the resource allocation unit 324 may include the resource allocation unit 324 flipping a switch, adjusting processing power, adjusting memory size, partitioning a memory element, transmitting data, controlling one or more input and/or output devices, modifying various communication protocols, and/or the like.
  • the resource allocation unit 324 may facilitate utilization of parallel processing techniques such as dedicating a plurality of GPUs included in the processor 302 for processing a high-quality video stream of a video communication connection between multiple units and/or subunits of the video communication server 300 and/or other computing environments.
  • the memory unit 304 may be utilized for storing, recalling, receiving, transmitting, and/or accessing various files and/or information during operation of the video communication server 300.
  • the memory unit 304 may be utilized for storing video streams, storing, recalling, and/or updating user profile information, and/or the like.
  • the memory unit 304 may include various types of data storage media such as solid state storage media, hard disk storage media, and/or the like.
  • the memory unit 304 may include dedicated hardware elements such as hard drives and/or servers, as well as software elements such as cloud-based storage drives.
  • the memory unit 304 may include various subunits such as an operating system unit 326, an application data unit 328, an application programming interface (API) unit 330, a profile storage unit 332, a content storage unit 334, a video storage unit 336, a secure enclave 338, and/or a cache storage unit 340.
  • the memory unit 304 and/or any of its subunits described herein may include random access memory (RAM), read only memory (ROM), and/or various forms of secondary storage.
  • RAM may be used to store volatile data and/or to store instructions that may be executed by the processor 302.
  • the data stored may be a command, a current operating state of the video communication server 300, an intended operating state of the video communication server 300, and/or the like.
  • data stored in the memory unit 304 may include instructions related to various methods and/or functionalities described herein.
  • ROM may be a non-volatile memory device that may have a smaller memory capacity than the memory capacity of a secondary storage. ROM may be used to store instructions and/or data that may be read during execution of computer instructions.
  • Secondary storage may comprise one or more disk drives and/or tape drives and may be used for non-volatile storage of data or as an overflow data storage device if RAM is not large enough to hold all working data. Secondary storage may be used to store programs that may be loaded into RAM when such programs are selected for execution.
  • the memory unit 304 may include one or more databases for storing any data described herein. Additionally or alternatively, one or more secondary databases located remotely from the video communication server 300 may be utilized and/or accessed by the memory unit 304.
  • the operating system unit 326 may facilitate deployment, storage, access, execution, and/or utilization of an operating system utilized by the video communication server 300 and/or any other computing environment described herein.
  • the operating system may include various hardware and/or software elements that serve as a structural framework for enabling the processor 302 to execute various operations described herein.
  • the operating system unit 326 may further store various pieces of information and/or data associated with operation of the operating system and/or the video communication server 300 as a whole, such as a status of computing resources (e.g., processing power, memory availability, resource utilization, and/or the like), runtime information, modules to direct execution of operations described herein, user permissions, security credentials, and/or the like.
  • the application data unit 328 may facilitate deployment, storage, access, execution, and/or utilization of an application utilized by the video communication server 300 and/or any other computing environment described herein (e.g., a user device). For example, users may be required to download, access, and/or otherwise utilize a software application on a user device such as a smartphone in order for various operations described herein to be performed. As such, the application data unit 328 may store any information and/or data associated with the application. Information included in the application data unit 328 may enable a user to execute various operations described herein.
  • the application data unit 328 may further store various pieces of information and/or data associated with operation of the application and/or the video communication server 300 as a whole, such as a status of computing resources (e.g., processing power, memory availability, resource utilization, and/or the like), runtime information, modules to direct execution of operations described herein, user permissions, security credentials, and/or the like.
  • the API unit 330 may facilitate deployment, storage, access, execution, and/or utilization of information associated with APIs of the video communication server 300 and/or any other computing environment described herein (e.g., a user device).
  • video communication server 300 may include one or more APIs for enabling various devices, applications, and/or computing environments to communicate with each other and/or utilize the same data.
  • the API unit 330 may include API databases containing information that may be accessed and/or utilized by applications and/or operating systems of other devices and/or computing environments.
  • each API database may be associated with a customized physical circuit included in the memory unit 304 and/or the API unit 330.
  • each API database may be public and/or private, and so authentication credentials may be required to access information in an API database.
  • the profile storage unit 332 may facilitate deployment, storage, access, and/or utilization of information associated with user profiles of users by the video communication server 300 and/or any other computing environment described herein (e.g., a user device).
  • the profile storage unit 332 may store one or more user's contact information, authentication credentials, user preferences, user history of behavior, personal information, and/or metadata.
  • the profile storage unit 332 may communicate with the profile management unit 310 to receive and/or transmit information associated with a user's profile.
  • the content storage unit 334 may facilitate deployment, storage, access, and/or utilization of information associated with requested content by the video communication server 300 and/or any other computing environment described herein (e.g., a user device such as a mobile device).
  • the content storage unit 334 may store one or more images, text, videos, audio content, advertisements, and/or metadata to be presented to a user during operations described herein.
  • the content storage unit 334 may communicate with the content management unit 312 to receive and/or transmit content files.
  • the video storage unit 336 may facilitate deployment, storage, access, analysis, and/or utilization of video content by the video communication server 300 and/or any other computing environment described herein (e.g., a user device).
  • the video storage unit 336 may store one or more live video feeds transmitted during a video communication connection. Live video feeds of each user transmitted during a video communication connection may be stored by the video storage unit 336 so that the live video feeds may be analyzed by various components of the video communication server 300 both in real time and at a time after receipt of the live video feeds.
  • the video storage unit 336 may communicate with the GPUs 316, the private network unit 318 and/or the public network unit 320 to facilitate any of the processes described herein.
  • video content may include audio, images, text, video feeds, and/or any other media content.
  • the secure enclave 338 may facilitate secure storage of data.
  • the secure enclave 338 may include a partitioned portion of storage media included in the memory unit 304 that is protected by various security measures.
  • the secure enclave 338 may be hardware secured.
  • the secure enclave 338 may include one or more firewalls, encryption mechanisms, and/or other security-based protocols. Authentication credentials of a user may be required prior to providing the user access to data stored within the secure enclave 338.
  • the cache storage unit 340 may facilitate short-term deployment, storage, access, analysis, and/or utilization of data.
  • the cache storage unit 340 may be utilized for storing numerical values associated with users' recognized facial gestures for computing a compatibility score immediately after termination of a video communication connection.
  • the cache storage unit 340 may serve as a short-term storage location for data so that the data stored in the cache storage unit 340 may be accessed quickly.
  • the cache storage unit 340 may include RAM and/or other storage media types that enable quick recall of stored data.
  • the cache storage unit 340 may include a partitioned portion of storage media included in the memory unit 304.
  • the I/O unit 306 may include hardware and/or software elements for enabling the video communication server 300 to receive, transmit, and/or present information.
  • elements of the I/O unit 306 may be used to receive user input from a user via a user device, present a live video feed to the user via the user device, and/or the like. In this manner, the I/O unit 306 may enable the video communication server 300 to interface with a human user.
  • the I/O unit 306 may include subunits such as an I/O device 342, an I/O calibration unit 344, and/or video driver 346.
  • the I/O device 342 may facilitate the receipt, transmission, processing, presentation, display, input, and/or output of information as a result of executed processes described herein.
  • the I/O device 342 may include a plurality of I/O devices.
  • the I/O device 342 may include one or more elements of a user device, a computing system, a server, and/or a similar device.
  • the I/O device 342 may include a variety of elements that enable a user to interface with the video communication server 300.
  • the I/O device 342 may include a keyboard, a touchscreen, a button, a sensor, a biometric scanner, a laser, a microphone, a camera, and/or another element for receiving and/or collecting input from a user.
  • the I/O device 342 may include a display, a screen, a sensor, a vibration mechanism, a light emitting diode (LED), a speaker, a radio frequency identification (RFID) scanner, and/or another element for presenting and/or otherwise outputting data to a user.
  • the I/O device 342 may communicate with one or more elements of the processor 302 and/or the memory unit 304 to execute operations described herein.
  • the I/O device 342 may include a display, which may utilize the GPU 316 to present video content stored in the video storage unit 336 to a user of a user device during a video communication connection.
  • the I/O calibration unit 344 may facilitate the calibration of the I/O device 342. For example, the I/O calibration unit 344 may detect and/or determine one or more settings of the I/O device 342, and then adjust and/or modify settings so that the I/O device 342 may operate more efficiently.
  • the I/O calibration unit 344 may utilize a video driver 346 (or multiple video drivers) to calibrate the I/O device 342.
  • the video driver 346 may be installed on a user device so that the user device may recognize and/or integrate with the I/O device 342, thereby enabling video content to be displayed, received, generated, and/or the like.
  • the I/O device 342 may be calibrated by the I/O calibration unit 344 based on information included in the video driver 346.
  • the communication unit 308 may facilitate establishment, maintenance, monitoring, and/or termination of communications (e.g., a video communication connection) between the video communication server 300 and other devices such as user devices, other computing environments, third party server systems, and/or the like.
  • the communication unit 308 may further enable communication between various elements (e.g., units and/or subunits) of the video communication server 300.
  • the communication unit 308 may include a network protocol unit 348, an API gateway 350, an encryption engine 352, and/or a communication device 354.
  • the communication unit 308 may include hardware and/or software elements.
  • the network protocol unit 348 may facilitate establishment, maintenance, and/or termination of a communication connection between the video communication server 300 and another device by way of a network.
  • the network protocol unit 348 may detect and/or define a communication protocol required by a particular network and/or network type.
  • Communication protocols utilized by the network protocol unit 348 may include Wi-Fi protocols, Li-Fi protocols, cellular data network protocols, Bluetooth® protocols, WiMAX protocols, Ethernet protocols, powerline communication (PLC) protocols, and/or the like.
  • facilitation of communication between the video communication server 300 and any other device, as well as any element internal to the video communication server 300 may include transforming and/or translating data from being compatible with a first communication protocol to being compatible with a second communication protocol.
  • the network protocol unit 348 may determine and/or monitor an amount of data traffic to consequently determine which particular network protocol is to be used for establishing a video communication connection, transmitting data, and/or performing other operations described herein.
  • the API gateway 350 may facilitate the enablement of other devices and/or computing environments to access the API unit 330 of the memory unit 304 of the video communication server 300.
  • a user device may access the API unit 330 via the API gateway 350.
  • the API gateway 350 may be required to validate user credentials associated with a user of a user device prior to providing access to the API unit 330 to the user.
  • the API gateway 350 may include instructions for enabling the video communication server 300 to communicate with another device.
  • the encryption engine 352 may facilitate translation, encryption, encoding, decryption, and/or decoding of information received, transmitted, and/or stored by the video communication server 300. Using the encryption engine, each transmission of data may be encrypted, encoded, and/or translated for security reasons, and any received data may be encrypted, encoded, and/or translated prior to its processing and/or storage. In some embodiments, the encryption engine 352 may generate an encryption key, an encoding key, a translation key, and/or the like, which may be transmitted along with any data content.
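As one illustration of the encrypt-then-transmit flow described above — not the actual encryption engine 352, whose algorithms are not specified — a toy symmetric round trip might look like the following. The XOR cipher is an assumption for illustration only:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.

    Illustration only; a real encryption engine would use a vetted
    algorithm such as AES, not an XOR stream.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The sender encrypts with a generated key, which (as described above)
# may be transmitted along with the data content to allow decryption.
key = secrets.token_bytes(16)
ciphertext = xor_cipher(b"video frame payload", key)
plaintext = xor_cipher(ciphertext, key)  # XOR is its own inverse
```

Because XOR is self-inverse, the same function serves for both encryption and decryption, which keeps the sketch short.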
  • the communication device 354 may include a variety of hardware and/or software specifically purposed to enable communication between the video communication server 300 and another device, as well as communication between elements of the video communication server 300.
  • the communication device 354 may include one or more radio transceivers, chips, analog front end (AFE) units, antennas, processors, memory, other logic, and/or other components to implement communication protocols (wired or wireless) and related functionality for facilitating communication between the video communication server 300 and any other device.
  • the communication device 354 may include a modem, a modem bank, an Ethernet device such as a router or switch, a universal serial bus (USB) interface device, a serial interface, a token ring device, a fiber distributed data interface (FDDI) device, a wireless local area network (WLAN) device and/or device component, a radio transceiver device such as code division multiple access (CDMA) device, a global system for mobile communications (GSM) radio transceiver device, a universal mobile telecommunications system (UMTS) radio transceiver device, a long term evolution (LTE) radio transceiver device, a worldwide interoperability for microwave access (WiMAX) device, and/or another device used for communication purposes.
  • FIG. 4 illustrates a flow chart that schematically represents a method for application level hand over of a mobile video call.
  • a mobile client device 11 is shown connected to multiple data connections and the mobile video conferencing client is transmitting audio and video signals of the ongoing conferencing over the multiple connections concurrently.
  • the mobile client device 11 senses such a change and accordingly will switch to sending data originally scheduled for the unavailable channel over other still-available channel(s) as well as any newly available channel(s).
  • the client will adjust the overall data transmission rate, if necessary, to make the data rate match the available bandwidth provided by the currently available channels.
  • the available networks may only include a 4G network.
  • the mobile client device 11 may be transmitting audio and video data on a 4G data connection 55.
  • a first network switch event may occur such that in a second instance 63, both a 3G and 4G network connection may be available.
  • the mobile client device 11 may transmit different types of data on each of the different networks. For example, audio data may be transmitted on the 3G network through the 3G data connection 53 and video data may be transmitted on the 4G network through the 4G data connection 55.
  • a second network switch event may occur resulting in a third instance 65 in which the 4G network is unavailable and only the 3G network is available. In the third instance, both audio and video data may be transmitted on the 3G network through the 3G data connection 53.
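The instance-by-instance behavior above can be sketched as a stream-to-channel assignment rule. The channel names and the speed ranking are assumptions for illustration; the specification does not prescribe a concrete algorithm:

```python
def assign_streams(available_channels):
    """Map media streams to channels, preferring the fastest for video.

    With two channels, audio goes on the slower one and video on the
    faster (the second instance above); with one channel, both streams
    share it (the first and third instances).
    """
    # Illustrative bandwidth ranking; higher means faster.
    speed = {"4G": 2, "3G": 1}
    channels = sorted(available_channels, key=lambda c: speed.get(c, 0),
                      reverse=True)
    if not channels:
        return {}
    if len(channels) == 1:
        return {"audio": channels[0], "video": channels[0]}
    return {"audio": channels[-1], "video": channels[0]}
```

For example, `assign_streams(["3G", "4G"])` reproduces the second instance (audio on 3G, video on 4G), and `assign_streams(["3G"])` reproduces the third.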
  • a first mobile client device 11a may initiate a video call request to a second mobile client device 11b.
  • the second mobile client device 11b may accept the call.
  • the first and second mobile client devices are connected and proceed to exchange encoded real time audiovisual data via an audio video server 19.
  • the first mobile client device 11a receives audiovisual data from the second mobile client device 11b and decodes and displays the video on the screen of the first mobile client device 11a.
  • the first mobile client device 11a also plays the audio from the second mobile client device 11b.
  • the second mobile client device 11b receives audiovisual data from the first mobile client device 11a and decodes and displays the video on the screen of the second mobile client device 11b.
  • the second mobile client device 11b plays the audio from the first mobile client device 11a.
  • the first mobile client device 11a may initiate a video call request to a third mobile client device (not shown) for joining the ongoing video conference between the first mobile client device 11a and the second mobile client device 11b.
  • the third mobile client device may accept the call from the first mobile client device 11a.
  • the third mobile client device proceeds to connect to the same audio video server 19 that the first and second mobile client devices 11a, 11b are connected to for exchanging media.
  • the third mobile client device may send its encoded audiovisual data to the audio video server 19.
  • the audio video server 19 receives audiovisual data streams from the first mobile client device 11a, the second mobile client device 11b, and the third mobile client device.
  • the audio video server 19 may decode all three audio streams and mix "n-1" individually mixed audio streams.
  • each "n-1" individually mixed audio stream is a mix of all audio streams except the one stream generated from the client to which the particular mixed stream is going to be sent.
  • the individually mixed audio stream sent to the first mobile client device 11a may include audio streams from the second mobile client device 11b and the third mobile client device.
  • the individually mixed audio stream sent to the third mobile client device may include audio streams from the first mobile client device 11a and the second mobile client device 11b.
  • the audio video server 19 may then send the individually mixed audio streams to the corresponding mobile clients. It is preferred that each mobile client device is able to hear all other mobile clients during a multipoint call.
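The "n-1" mixing rule above can be sketched as follows; summing decoded samples stands in for whatever mixing the audio video server 19 actually performs, which the specification does not detail:

```python
def mix_n_minus_1(streams):
    """Produce one mixed audio stream per client, excluding that
    client's own audio (the "n-1" rule described above).

    `streams` maps a client id to its decoded audio samples; mixing
    is modeled as a per-sample sum.
    """
    mixes = {}
    for client in streams:
        # All streams except the one generated by this client.
        others = [s for cid, s in streams.items() if cid != client]
        mixes[client] = [sum(samples) for samples in zip(*others)]
    return mixes
```

With three clients, each client's mix contains exactly the other two streams, so every participant hears everyone but themselves.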
  • the audio video server 19 may send a video data stream generated from the third mobile client device to the first mobile client device 11a and the second mobile client device 11b and send a video stream generated by the first mobile client device 11a to the third mobile client device.
  • the first mobile client device 11a and the second mobile client device 11b decode and display video from the third mobile client device, and the third mobile client device decodes and displays video from the first mobile client device 11a.
  • the mobile client device 11 may display a visual cue, such as an icon, to indicate the call now contains more than 2 participants.
  • the user of the mobile client device 11 may be able to act upon the visual cue, such as clicking or touching the conference icon, to see who is on the call.
  • the user of the mobile client device 11 may be presented with textual description (such as name), graphic icons (such as avatar), or image thumbnails (such as user's profile thumbnail) to identify the other participants.
  • a user can select a visual cue on the screen of the mobile client device 11 that relates to one of the participants on the call to see live video from the corresponding participant's mobile client device 11.
  • a user on the first mobile client device 11a selects a user associated with the second mobile client device 11b from the participant list.
  • a request is sent from the first mobile client device 11a to the audio video server 19, which then stops sending the video stream generated from the third mobile client device and starts to send the video stream generated from the second mobile client device 11b to the first mobile client device 11a.
  • the first mobile client device 11a is now displaying live video from the second mobile client device 11b.
  • the second mobile client device 11b continues to display video from the third mobile client device, and the third mobile client device continues to display video from the first mobile client device 11a.
  • the audio video server 19 may continuously check to determine if there is no receiving request for a particular video upstream. If so, the server 19 may instruct the corresponding client to pause sending its video stream. If the stream is requested again by a mobile client device 11, the server 19 may instruct the corresponding client to resume sending its video stream.
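The pause/resume check described above amounts to reference counting the receivers of each upstream. A minimal sketch, assuming an in-memory registry of receiver ids per sender (the data structure is an assumption, not part of the specification):

```python
class UpstreamManager:
    """Pause a client's video upstream when nobody is receiving it,
    and resume it when it is requested again."""

    def __init__(self):
        self.receivers = {}   # sender id -> set of receiver ids
        self.paused = set()   # senders currently instructed to pause

    def request(self, receiver, sender):
        """A client asks to receive `sender`'s video stream."""
        self.receivers.setdefault(sender, set()).add(receiver)
        # Instruct the sender to resume if it had been paused.
        self.paused.discard(sender)

    def release(self, receiver, sender):
        """A client stops receiving `sender`'s video stream."""
        self.receivers.get(sender, set()).discard(receiver)
        if not self.receivers.get(sender):
            # No remaining receivers: instruct the sender to pause.
            self.paused.add(sender)
```

Pausing unwatched upstreams conserves the sending client's bandwidth, which matters on the metered mobile connections discussed earlier.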
  • the decision as to which participant's video to display may be based not on manual selection by the user but may instead be determined algorithmically by the audio video server 19 based on a conferencing policy implemented on the audio video server 19.
  • one policy could be to display video associated with the current active speaker on all participants' client devices.
  • another policy could be to display video based on selection made by an external conferencing management module (e.g., a moderator selects whose video to display).
  • the number of live video windows on the mobile display can be more than one.
  • the system 10 can be configured to display more than one live video from remote conference participants, while still representing the rest of the participants with thumbnails.
  • a multipoint video call can contain clients with heterogeneous display capability and processing power. Some end points (e.g., a PC client) may be able to display all live video streams in a call while others (e.g., a mobile client) may be limited to displaying one or a subset of video windows.
  • the functions of the device may be embodied in software, for example programmable instructions in memory, or hardware, or a combination of hardware and software.
  • the mobile device may have a microprocessor in a portable arrangement, such as, for example, a mobile phone, a laptop, a personal digital assistant, an audio/video playing device, a wearable device such as a watch, a headset, a tablet computing device, a computing device installed into a motor vehicle, an e-reader, eyewear, headgear, etc.
  • the device may be a personal computer (PC) based implementation of a central control processing system.
  • the mobile client devices may run a variety of applications programs and store data, enabling one or more interactions via the user interface provided, and/or over one or more networks to implement the desired processing.
  • Software, code or a program may take the form of code or executable instructions for causing a device, or other programmable equipment to perform the relevant data processing steps, where the code or instructions are carried by or otherwise embodied in a medium readable by a mobile device or other device. Instructions or code for implementing such operations may be in the form of computer instruction in any form (e.g., source code, object code, interpreted code, etc.) stored in or carried by any readable medium.
  • FIG. 5 is a screenshot of a home screen of a mobile device application.
  • the background of the home screen may be a photo selected by a user of the mobile device application.
  • the home screen 501 may be a profile photo of the user or may be a live video capture of the user.
  • the user is presented with an option 502 to change the camera (e.g., front or back camera) for capturing video.
  • the application may have previously prompted for and received permission from the user to access one or more cameras of the mobile device and to capture photos, audio, or live video.
  • the live video capture of the front camera may be flipped (e.g., using a special processing technique) such that the user is looking at a flipped mirrored video capture.
  • FIG. 5 has options to access the user's private area 502 or network (friends area) of the application, and the user's public area 503 or network (tree area or chains area) of the application.
  • although the home screen is presented as a screen of a mobile device, the screen may be a screen of any non-mobile computing device as well.
  • any videos that are shared with friends, groups, or chains may also be posted to one or more social networks.
  • a video is captured as presented in FIG. 6.
  • the video may be captured using one or more cameras, e.g., the camera on the front side, i.e., the side with the screen, of the mobile device, the camera on the back side of the mobile device, a camera on the side edge of the mobile device, etc.
  • the user may select an option 601 to stop capturing the video.
  • the video capture may automatically stop after a predetermined period (e.g., 15 seconds).
  • the user is presented with the screen in FIG. 7.
  • the user is provided with options to cancel sending or delete the video 701, and to associate the video with text 702.
  • the user is presented with information 703 associated with a location of the video capture, e.g., a city where the mobile device was located before, during, or after the video capture.
  • the location information may be more specific, e.g., a particular establishment in the city such as a particular coffee shop, departmental store, public park, etc.
  • the location may be determined using network information and/or global positioning system (GPS) information and/or cellular tower information using a positioning system in the mobile device.
  • the network information and the cellular tower information may be associated with the network and the cellular tower or base station that enables data access or network access to the mobile device.
  • the user is presented with an option to create a new chain 801 or to send the video to a friend 802.
  • the user may also select an option to search for a friend 803. For example, when the user searches for a friend (e.g., "Ann"), the user is presented names of friends that have the letters "Ann" in them.
  • when the user selects the option to create a new chain 801, the user is presented with the screen in FIG. 9.
  • the user may add a title for the chain in the search box 901.
  • the user may input a chain title 1001 as presented in FIG. 10.
  • the user may input "I'm heading to #chicago for the #weekend.”
  • the application may limit the chain title to a certain number of characters or words.
  • the terms preceded by hashtags are searchable parameters. For example, when another user searches for the term "Chicago" in chains, the other user may be presented with this new chain in the search results. In some embodiments, the other user may be presented with this new chain in the search results even if the other user is not a friend of the user. Therefore, the chain is presented in the search results and available for access by any registered user of the application. In some alternate embodiments, the other user may be presented with this new chain in the search results if the other user is a friend or a friend of a friend of the user that created the new chain.
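A minimal sketch of how the hashtag terms could be pulled out of a chain title for indexing; the `#` plus word-characters pattern and the lowercasing are assumptions, since the specification does not define the exact tokenization:

```python
import re

def extract_hashtags(title: str) -> list[str]:
    """Return the searchable terms in a chain title, i.e., the words
    preceded by a hashtag, lowercased for case-insensitive search."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", title)]
```

Applied to the example title, `extract_hashtags("I'm heading to #chicago for the #weekend.")` yields the two searchable terms `chicago` and `weekend`.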
  • in FIG. 10, there are currently 0 videos 1002 on the new chain.
  • when the user hits "enter" in FIG. 10 or chooses an option to indicate that the user has finished inputting the chain title, e.g., by selecting "Done" 1003, the user is presented with the screen in FIG. 11.
  • the appearance of the terms preceded by hashtags is different from the other terms.
  • the terms preceded by hashtags are activated such that they are searchable on this application and/or in other social networks.
  • the user may select an option to continue creating the new chain 1101.
  • when the user selects the option to continue creating the new chain, the user is presented with the screen in FIG. 12.
  • the screen in FIG. 12 may indicate that a request for creating a new chain has been transmitted to a remote server.
  • when the remote server has created the new chain, the user is presented with the screen in FIG. 13.
  • the image on the screen in FIG. 13 indicates a new chain has been created.
  • Chain-related activity of the user is public and may be visible to other registered users of the application, including users who are not friends or friends of friends of the user.
  • the home screen of the public network area of the application presents the most recent chain that the user has not viewed from the user's network 1405.
  • the user's network includes the user's friends and friends of friends.
  • Information 1401 associated with the most recent chain includes the title of the chain and the name of the user that created the chain. This information 1401 may disappear after a certain period of time.
  • Information presented on the screen also includes a link 1402 (e.g., the most recent link) associated with the chain.
  • the link 1402 is a link to a video that is stored on a server such as the server in FIG. 1 or some other cloud server.
  • Information (also referred to as metadata) associated with the link 1402 may also include the name of the user who created the link, the username of that user, the location (e.g., current location of that user, home location of the user, location where the link or video associated with the link was created, etc.), how long ago the link was posted to the chain, how many views the link has had since being posted, what number link in the chain (e.g., 10 out of 15 links), and how many total links there are in the chain. This information may also disappear after a certain period of time.
  • the background of the public network home screen may be the profile photo of the user who posted the link, a screen capture from the link posted by the user, a profile photo of the user who created the chain, a video associated with the link which can be viewed by just selecting the video, etc.
  • the user may be able to view the video associated with the link by selecting (e.g., touching, clicking, etc.) the link (or any portion of the link) or by selecting the profile photo presented on the screen.
  • the user may be able to add a video to the chain or reply to a video on the chain by selecting add link or reply option 1403.
  • the new link is placed at the end of the chain. In other embodiments, the new link may be placed right after the link which is being replied to.
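The two placement policies above (append to the end of the chain, or insert right after the replied-to link) can be sketched as one function; the list representation of a chain is an assumption for illustration:

```python
def add_link(chain, new_link, reply_to=None):
    """Add a link to a chain.

    By default the new link goes at the end of the chain; when
    `reply_to` names an existing link, the new link is placed
    immediately after it (the alternate embodiment described above).
    """
    if reply_to is None or reply_to not in chain:
        chain.append(new_link)
    else:
        chain.insert(chain.index(reply_to) + 1, new_link)
    return chain
```

For example, replying to the second of three links with `add_link(["a", "b", "c"], "d", reply_to="b")` places the new link third rather than last.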
  • the screen of FIG. 14 also presents a notifications option 1404.
  • the notifications option may change its appearance (e.g., color, attributes, etc.) and may be accompanied by a sound whenever a new, and/or unviewed, event (e.g., a new like, a new link, etc.) associated with a chain (e.g., a network chain, a chain for which the user previously liked the chain or a link of the chain, a chain that a user previously viewed and/or searched for, a trending chain, a suggested chain, etc.) occurs.
  • the screen of FIG. 14 also presents options for trending chains 1406 and suggested chains 1407. These are explained in further detail in the specification.
  • the screen of FIG. 14 also presents an option to search for chains 1408.
  • Touching or selecting the right side of the screen moves the screen to a next link in the chain (e.g., the 11th created link in the chain).
  • when the user touches or selects the right side of the screen, or swipes left to right, in FIG. 14, the user is presented with the screen in FIG. 16.
  • the screen in FIG. 16 presents the 11th created link in the chain.
  • the user may watch the video associated with the link by selecting the middle of the screen or selecting the link at the bottom of the screen.
  • the user is presented with the screen in FIG. 19.
  • the user is presented with options to find friends 1901 and to create a new chain 1902.
  • the user is presented with the screen in FIG. 20.
  • the video of the user is captured when the user selects the capture option on the screen of FIG. 20 and continues until the user selects the option to stop capturing the video.
  • the screen of FIG. 7 is presented.
  • the screen of FIG. 21 is presented.
  • the user is presented with an option to create a new chain 2101, search for existing friends 2102, or send the video to a group 2103 (Kaja, Dijanna, Isabel) or a single friend 2104 (Anni Cartwright with username of ANCART).
  • the friends or groups are presented in digital card- form. Each friend is identifiable by their profile photo. Each friend in a group is identifiable by a collage profile photo associated with the group. The collage profile photo includes profile photos of at least some members of the group.
  • the various cards may be ordered based on logic dictated by the server and/or logic provided by the user. For example, the cards may be arranged in order of most recent communication with the user.
  • the top leftmost video card is the video card associated with a friend from whom the most recent video (or photo) message was received or to whom the most recent video (or photo) message was sent, or associated with a friend with whom the most recent video call was made.
  • Any references to video in this specification may alternatively or additionally refer to photos and/or audio.
  • the cards may be ordered based on most recent video posting (e.g., to a chain or a group) or video communication (e.g., to one or more users) activity on the application.
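The recency ordering described above reduces to a sort on a per-card activity timestamp. The card fields here are illustrative assumptions, not the application's actual data model:

```python
def order_cards(cards):
    """Order friend/group cards so that the card with the most recent
    video posting or communication activity appears first (top left).

    Each card is assumed to carry a `last_activity` timestamp (e.g.,
    seconds since the epoch); the field name is illustrative.
    """
    return sorted(cards, key=lambda card: card["last_activity"],
                  reverse=True)
```

Sorting a copy (as `sorted` does) rather than in place means the server-supplied order remains available if the user's own ordering logic is applied separately.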
  • the user may input text into the search box 2102.
  • the user may input the text "Ann" 2201 into the search box.
  • the user is presented with a list of friends with the text "Ann" in their name and/or username in some embodiments.
  • the user may select 2301 either the photo, name, or username of "Anni Cartwright” in order to send the video to Anni Cartwright.
  • the user may select the continue option 2302 to send the video to Anni Cartwright.
  • FIG. 24 indicates that the user's request to send the video is being processed.
  • a link may be an activatable or selectable reference or bookmark to a video.
  • When the user selects the notifications option 1404 in FIG. 14, the user is taken to the screen of FIG. 30. As indicated in the screen of FIG. 30, the user is shown that there are "155 new likes" (notification) for the chain titled "Not all those who wander are lost #lovemylife #lotr #NZ" that was created by Mimi Jackson (one of the user's friends). The user is also shown there are "4 new videos" (notification) for the same chain.
  • the notifications may be arranged in terms of latest events. The most recent event may be a "like" for the "Not all those who wander are lost #lovemylife #lotr #NZ" chain and therefore this chain is presented first in the list.
  • Previously viewed notifications, or old unviewed notifications that have passed a certain duration of time since the occurrence of the notification event are also shown.
  • a previously viewed or old notification is a like for the chain titled "Sed ut perspiciatis unde omnis iste natus error sit" created by Kornelia Quinlan.
  • When the user selects the option to search 1408 for a chain in FIG. 14, the user is presented with the search screen in FIG. 31.
  • the user may search for the text "Cats" in the search box 3201 as shown on the screen of FIG. 32.
  • search results include a list of users 3202 (and a total number of user search results) with the text "Cats" in their names and/or usernames in some embodiments.
  • Each displayed user may be a friend (one degree of separation), a friend of a friend (two degrees of separation), or may be unassociated with the user, i.e., more than two degrees of separation.
  • a friends icon 3203 may appear next to the user (e.g., Courtney Cats is a friend).
  • an option 3204 is presented to send a friend request to the user.
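The friend / friend-of-a-friend distinction used in the search results can be sketched as a bounded breadth-first search over the friend graph. The adjacency-set representation is an assumption; the application's actual data model is not described:

```python
def degrees_of_separation(friends, user, other, max_depth=2):
    """Return 1 for a friend, 2 for a friend of a friend, or None when
    `other` is more than `max_depth` degrees from `user`.

    `friends` maps each user id to the set of that user's friends.
    """
    frontier, seen = {user}, {user}
    for depth in range(1, max_depth + 1):
        # Expand one hop outward, excluding users already visited.
        frontier = {f for u in frontier for f in friends.get(u, set())} - seen
        if other in frontier:
            return depth
        seen |= frontier
    return None
```

A return of 1 would correspond to showing the friends icon 3203, while None (more than two degrees) would correspond to offering the friend-request option 3204.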
  • the search results may also include a list of chains 3205 (and a total number of chains search results) in which the text "cats" is preceded by a hashtag or some other activating symbol. Each chain is presented with information associated with the chain as shown on the screen of FIG. 33.
  • When the user selects a chain, the user is transported to a screen with the first video of the selected chain. Alternatively, the user is transported to a screen with one of the other videos, i.e., not the first video, of the selected chain. For example, the user is transported to a screen with the most watched video of the chain.
  • when the user selects the "X" option 3301 of FIG. 33, the user is returned to the public network home screen of FIG. 15.
  • Selecting the private network or "Friends" option on the screen of FIG. 5 leads to the Friends screen of FIG. 26.
  • the user may select an option 2602 to send a video message to the user's friend "Kaja Soto" in FIG. 26.
  • the number "5" in the option indicates that the user has 5 unviewed messages from Kaja Soto.
  • the screen in FIG. 34 shows the last video that Kaja Soto sent to the user.
  • the last video may be the last or most recent unseen video that Kaja Soto sent to the user.
  • Merely selecting or touching the screen may be enough for initializing and viewing the video. Notably, there is no visible "play" option overlaid on or near the video.
  • Information about the video is presented in the bottom part of the screen.
  • Information 3401 about the video includes the name of the user that sent the video, the username of that user, the location where the video was captured (or the current location of that user or the home location of that user or the user's custom defined location), and the amount of time that has elapsed since receiving the video from that user or the amount of time that has elapsed since that user sent the video.
  • the user may select an option to reply to the video message by selecting the reply option 3402.
  • when the user selects the reply option 3402, the user's mobile device presents the screen of FIG. 37.
  • the user may choose the record option 3701 to start capturing video associated with the user.
  • the recording starts immediately when the user selects the reply option, without the need to choose a subsequent record option.
  • the capturing of video may be based on previously stored video capture settings or default video capture settings set by the application provider and/or by the user of the mobile device.
  • the video capture settings may define the length of video capture, the cameras (e.g., front and/or back camera) that are activated for the video capture, the focus settings, the lighting settings, pre and post processing settings, etc.
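The capture settings listed above could be represented as a settings object in which the user's previously stored values override the application provider's defaults. The field names and default values here are illustrative assumptions, not part of the specification:

```python
from dataclasses import dataclass, field

@dataclass
class VideoCaptureSettings:
    """Illustrative capture settings; field names and defaults are assumptions."""
    max_length_seconds: int = 15          # length of video capture
    cameras: tuple = ("front",)           # cameras activated: "front" and/or "back"
    focus_mode: str = "auto"              # focus settings
    lighting: str = "auto"                # lighting settings
    processing: list = field(default_factory=list)  # pre/post processing steps

def effective_settings(provider_defaults: dict, user_settings: dict) -> VideoCaptureSettings:
    """Previously stored user settings take precedence over provider defaults."""
    return VideoCaptureSettings(**{**provider_defaults, **user_settings})
```

Merging the two dictionaries with the user's settings last implements the "previously stored video capture settings or default video capture settings" precedence in one expression.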
  • the user may select an option 3403 to view a "Birdseye view" of the video as visible on the screen of FIG. 35. If the user selects to view a birdseye view of the video 3501, the user is taken to the screen of FIG. 144. On this screen, the user has a birdseye view of the user's private chain or channel or network with Kaja Soto. The name of the private chain or channel is Kaja Soto and the chain has 56 messages and includes 2 members. The user can see selectable thumbnails of the 24th link 14401, the 25th link 14402, and the 26th link 14403 of the chain.
  • Each link in the birdseye view shows the link itself 14401, the profile photo associated with the link 14404, the sequence number 14405 in the private chain between the user and Kaja Soto, the total number of links 14406 in the private chain, and the display name 14407 of the user who posted the link.
  • An arrow 14408 takes the user to a private chain that has more recent activity than the private chain between the user and Kaja Soto.
  • An arrow 14409 takes the user to a private chain that has less recent activity than the private chain between the user and Kaja Soto.
  • the 25th link 14402 is highlighted because the birdseye view was reached by selecting birdseye view 14601 on a screen that was displaying the 25th link, such as FIG. 146. Also, as shown in FIG. 146, the user may select an option 14602 to delete the link. If the user deletes the link, the 25th link in the birdseye view may be replaced with a black box with the label "REMOVED." An example of such a box 14501 is shown in FIG. 145.
  • the private chain may be associated with a group. Such a chain may have more than two members.
  • the birdseye view of a chain shows the links in digital card form as a series, and includes links sent to and received by a group or a user.
  • a card may be selected from the series to view the video associated with the card.
  • the videos described in this specification are stored on the server.
  • any video described in this specification may be stored on the user's mobile device and/or transmitted to or received from an external social network.
  • FIG. 147 shows a screen of a link in a public chain.
  • the link was created by the user, as indicated by the presence of an option to delete the link 14702.
  • When the user selects the birdseye view option 14701 to view the chain, the user is presented with the screen of FIG. 145.
  • the highlighted link in FIG. 145 is the link presented on the screen of FIG. 147.
  • the birdseye view of the public chain shows the creator of the chain 14501, the name of the chain 14502, the number of links in the chain 14503, the number of views of the chain 14504, and the cumulative number of likes of the chain 14505, which is the sum of the likes for each individual link in the chain.
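The cumulative like count 14505 described above is simply the sum over the chain's links. A minimal sketch, assuming each link record carries a `likes` count (a hypothetical field name):

```python
def cumulative_likes(links: list) -> int:
    """Cumulative likes of a chain: the sum of the likes of each individual link.

    Links with no recorded likes contribute zero.
    """
    return sum(link.get("likes", 0) for link in links)
```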
  • the birdseye view shows a series of links in the chain.
  • the 14th link 14511 is a link added by Kornelia Quinlan.
  • the 15th link 14512 is a link added by Mimi Jackson.
  • the 16th link 14513 has been removed.
  • the 16th link may have been removed by the creator of that link, or the 16th link may have been reported by the user (e.g., Mimi Jackson) of the application, in which case the link appears as "removed" to that user only. If the 16th link was deleted by its creator, then other users cannot view the link either. If the 16th link was reported by the user, the user cannot view the link, but other users who have not reported the link will still be able to view it. However, if the 16th link receives a number of reports equal to or greater than a threshold within a certain period, the link will appear as "removed" for other users as well and they will not be able to view it.
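The removal and reporting rules above amount to a per-viewer visibility check: a reporter stops seeing the link immediately, and once reports reach a threshold the link is hidden from everyone. The threshold value and field names are assumptions (the specification leaves the exact number open), and the "certain period" time window is omitted for brevity:

```python
REPORT_THRESHOLD = 10  # hypothetical value; the specification leaves the threshold open

def link_visible_to(link: dict, viewer_id: str) -> bool:
    """Visibility of a chain link, per the removal/reporting rules described above.

    The reporting time window is omitted for brevity.
    """
    reporters = link.get("reported_by", set())
    if link.get("deleted_by_creator", False):
        return False                      # deleted links are hidden from everyone
    if viewer_id in reporters:
        return False                      # a reporter stops seeing the link immediately
    if len(reporters) >= REPORT_THRESHOLD:
        return False                      # enough reports hide the link from all users
    return True
```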
  • Each thumbnail in birdseye view shows the link 14511, the profile photo 14521 of the user who added the link, the sequence number 14522 of the link in the chain, and the total number of links 14523 in the chain.
  • the removed link only has information showing the sequence number and the total number of links in the chain.
  • An arrow 14531 takes the user to a public chain that has more recent activity than the public chain for which the birdseye view is presented on the screen of FIG. 145.
  • An arrow 14532 takes the user to a public chain that has less recent activity than the public chain for which the birdseye view is presented on the screen of FIG. 145.
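The two navigation arrows can be modeled by ordering all chains by their most recent activity and stepping to the neighbor on either side of the current chain. The field names (`id`, `last_activity`) are illustrative assumptions:

```python
def arrow_targets(chains: list, current_id: str):
    """Return (more_recent, less_recent) neighbors of the current chain when all
    chains are ordered by their most recent activity, newest first."""
    ordered = sorted(chains, key=lambda c: c["last_activity"], reverse=True)
    index = next(i for i, c in enumerate(ordered) if c["id"] == current_id)
    more_recent = ordered[index - 1] if index > 0 else None   # arrow toward newer activity
    less_recent = ordered[index + 1] if index < len(ordered) - 1 else None  # arrow toward older
    return more_recent, less_recent
```

At either end of the ordering the corresponding arrow target is `None`, which a UI could render as a disabled or hidden arrow.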
  • When the user watches the video message of FIG. 34 until its completion, or when the user selects or touches the screen (e.g., the face or the right side of the screen), the user is presented with the screen in FIG. 36.
  • the user may select the reply option 3601.
  • When the user selects the reply option 3601, the user's mobile device presents the screen of FIG. 37.
  • the mobile device captures a video until the user selects the stop option 3801 of FIG. 38.
  • the user is then presented with the continue option 3901 of FIG. 39.
  • a location 3902 of the video capture is also presented in FIG. 39.
  • the user is provided with an option to edit the recorded video.
  • the video is sent to the target user, i.e., Kaja Soto.
  • the user may select an option to change the location 3902.
  • a list of locations is presented to the user and the user may select a desired location.
  • the user may also select a "no location" option in which case the video is sent without any location information.
  • the user may select an option to add text 3903 to the video.
  • an input text box is presented for the user to enter text.
  • the user is presented with the screen of FIG. 40.
  • FIG. 40 shows a screen 4001 with the user's profile photo or live video being captured from a camera of the mobile device.
  • the screen also shows a recent video or the most recent video 4002 that was captured by the user or transmitted by the user.
  • the information associated with the video includes the name of the user who created the video, the username of that user, the location where the video was captured (or other location selected by the user), and the time elapsed since capturing the video or transmitting a link for the video.
  • Selecting the video information 4003 allows the user to view the video.
  • the user may also send a link to the video to friends, groups, chains, or may create a new chain by selecting the appropriate option on the screen.
  • FIG. 5 also presents an option 511 to access the user's photos and/or videos.
  • These are photos and/or videos captured by the camera of the user's mobile device, downloaded to the user's mobile device from a cloud server or any server described herein, etc.
  • the photos and/or videos accessible by this option include media captured by the instant application.
  • the photos and/or videos accessible through this option may have been captured using other applications of the mobile device or a different mobile device.
  • the user may select one or more photos and/or videos to send to the user's friends.
  • When the user selects the option from FIG. 5, the user is presented with the screen of FIG. 41.
  • the user may select a photo 4201 as shown on the screen of FIG. 42.
  • the application returns the user to the screen of FIG. 7.
  • the user may select a video as shown on the screen of FIG. 43.
  • When the user selects the video 4301 as shown in FIG. 43 and selects "Done," the user is presented with the screen of FIG. 44.
  • the screen of FIG. 44 informs the user that a video sent using the application must be equal to or less than a certain duration (e.g., 15 seconds).
  • the permitted duration is modifiable.
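The duration check above reduces to a single comparison against a modifiable limit. The 15-second default follows the example in the text; the function and constant names are assumptions:

```python
DEFAULT_MAX_SECONDS = 15  # example duration from the text; the permitted limit is modifiable

def can_send_video(duration_seconds: float, max_seconds: int = DEFAULT_MAX_SECONDS) -> bool:
    """A video may be sent only if its length is equal to or less than the permitted duration."""
    return duration_seconds <= max_seconds
```

Passing a different `max_seconds` models the "permitted duration is modifiable" behavior without changing the check itself.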
  • the application returns the user to the screen of FIG. 5.
  • When the user selects the private network or Friends option 502 of FIG. 5, the user is transported to the screen of FIG. 26.
  • When the user touches or selects the bottom portion of the screen or swipes downward, the user is presented with the screen of FIG. 46.
  • the digital cards on the friends screen may be arranged by most recent activity associated with the user (e.g., sending and/or receiving video messages or calls directly with the user and/or in a group in which the user and the friend are members) or most frequent activity during a certain period (e.g., within the last week).
  • the most recent friend that was added (e.g., Kornelia Quinlan 4601) occupies the last card on the friends screen.
  • the screen also shows an advertisement 4602 (or any other non-friend or non-group) embedded in the order of cards.
  • the advertisement may be an advertisement for an external application or a link to an external application.
  • the advertisement 4602 may open up a separate application such as an Internet browser or may open up an application store as a link to download a new application.
  • While the user is on the friends screen (i.e., the screen of FIG. 46 and/or FIG. 26), the order of cards does not change.
  • the order of cards may change when the user is not on the friends screen based on most recent and/or most frequent activity associated with each card. As indicated on the screen of FIG. 46, the "School Friends" group 4603 is listed above the "Mah Gurlz" group 4604 because the activity in the "School Friends" group is more recent than the activity in the "Mah Gurlz" group.
  • the activity on which the groups and/or friends are ordered may be based on activity involving the user of the mobile device, e.g., a call such as a video call made to or received from the user of the mobile device, a video message sent to or received from the user of the mobile device, etc. In other embodiments, the activity on which the groups and/or friends are ordered may be based on activity in the group on the application or activity of the friend on the application regardless of whether it involves the user of the mobile device.
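The ordering described above amounts to sorting cards by the timestamp of the most recent activity involving the user (or, in the alternative embodiments, activity on the application generally). A sketch of the first variant, with an assumed `last_activity_with_user` field:

```python
def order_cards(cards: list) -> list:
    """Order friend and group cards by the most recent activity involving the user
    (e.g., calls or video messages sent or received), newest first."""
    return sorted(cards, key=lambda c: c["last_activity_with_user"], reverse=True)
```

Swapping the sort key for a general activity timestamp, or for a frequency count over a recent window, yields the other orderings the text mentions.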
  • When the user selects the add friends option 2603 shown on the screen of FIG. 26, the screen of FIG. 47 is presented.
  • the user may search for friends in the search box using search parameters such as name, username, location, etc.
  • the user may also select the "Connect to Facebook" 4701, "Connect Google" 4702, and "Connect Contacts" 4703 options to establish a connection to social networks or a contact list stored on the user's mobile device and/or on a cloud server.
  • the user may have to provide authentication information to connect to these social networks unless they were previously authenticated to and connected with the application.
  • the add friends screen shows the user people who may be known to the user 4704.
  • This list may be presented based on the user's friends, friends of friends, groups, and activity on the application, including searching for chains, viewing chains, adding links to chains, liking chains, sending or receipt of friend requests, searching activity or other activity on other applications on the mobile device, searching activity or other activity on social networks authenticated to by the application, etc.
  • the application also shows the number of common friends on the application between the user and the suggested person to whom the application recommends sending a friend request.
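The common-friend count shown next to each suggestion can be computed as a set intersection between the user's friend list and the candidate's friend list. A minimal sketch:

```python
def common_friend_count(user_friends: set, candidate_friends: set) -> int:
    """Number of friends on the application shared by the user and a suggested person."""
    return len(user_friends & candidate_friends)
```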
  • Alternatively, when the user selects the add friends option 2603 shown on the screen of FIG. 26, the screen of FIG. 48 is presented. As seen in FIG. 48, the user has previously authenticated 4801 to Facebook and Google, e.g., via authentication credentials associated with those social networks, and the user has provided the application with access to the contact list on the mobile device or cloud server.
  • the number next to the social network name or contacts shows the number of friends of the user on the social network or the number of contacts of the user in the contact list, respectively.
  • the number may be an indication of the number of people from those social networks or from the contact list that are already registered users of the application (and/or that are friends with the user on the application, or that are not friends with the user on the application), or that are not registered users of the application.
  • the user may select the option to invite more Facebook friends 4802.
  • the target recipient, who may not be a user of the application, receives a notification on Facebook to register for the application, including a link for downloading the application to the recipient's mobile device.
  • the presented list of users may be Facebook users who are also registered users of the application or the presented list of users may be Facebook users who are not currently registered users of the application.
  • Facebook may refer to any social network other than Facebook as well.
  • the user sends a friend request to "Dijanna McGuiness" 4901 by selecting her name, her photo, or the "add friends" icon next to her name.
  • the icon 5001 changes its appearance as shown in FIG. 50.
  • the icon 5001 in FIG. 50 shows that a friend request has already been sent to Dijanna McGuiness.
  • the profile photos next to the suggested names are profile photos associated with the application and selected by those users.
  • When the user selects the "Facebook" option 5002 of FIG. 50, the user is provided with a list of people 5101 as shown in the screen of FIG. 51.
  • the list of people may be the user's Facebook friends and/or friends from the application.
  • the list of people may be the user's Facebook friends who are also registered users of the application.
  • the number of friends in common may be connected friends on the application and/or connected friends on Facebook.
  • the user may also search for names by entering text in the search box 5102.
  • the server processing the search query may search for names or registered users in the application and/or on Facebook.
  • the user may type in "Anni Cartwright” 5201.
  • a list of search results with the text "Anni Cartwright" in the name and/or username field are presented.
  • the "application” refers to the instant video conferencing, video management, video social networking, video communication networking, and video manipulation application for which screenshots are shown in the figures.
  • FIG. 53 shows that the user can access the user's photos and/or videos (e.g., stored on the mobile device or on a cloud server) by selecting the appropriate option 3702 in FIG. 37.
  • the user may have previously provided permission to the application to access the user's photos and/or videos on the mobile device and/or on a cloud server.
  • FIG. 54 shows an instance where a user selects a photo 5401.
  • FIG. 55 shows an instance where a user selects a video longer than the permitted duration 5501. When the user does this, the user is presented with the screen of FIG. 56.
  • FIG. 57 shows an instance where a user selects a video equal to or less than the permitted duration 5701. When the user makes this selection, the user does not receive a message that the video cannot be uploaded.
  • On the screen of FIG. 26, the "New Friend Requests" card 2604 is highlighted. When the user selects this card, the screen of FIG. 58 is presented to reveal the friend requests received by the user. Even if there are multiple friend requests, the user may need to view these friend requests one by one (i.e., no collage of profile photos associated with the requests). In other embodiments, one or more friend requests are presented as a collage. The friend requests may be ordered from oldest received to newest received, or vice versa. The number of friend requests is visible in the card.
  • On the screen of FIG. 58, the user has the option of accepting 5802 or denying 5803 the friend request from "Kornelia Quinlan" 5801.
  • the user may also ignore the request and simply select the photo or swipe across the photo to view the next friend request. If the user declines the friend request, the person who sent the friend request to the user does not get a message indicating that the user denied that person's request. However, that person may be able to send another friend request to the user.
  • an icon indicating that a friend request has been sent to this person is displayed near the person's name whenever the user encounters the person (e.g., views the person's profile or views the person's name in a list of names from the application) on the application. If the person denies the friend request, an icon indicating that a new friend request can be sent to this person is displayed near the person's name whenever the user encounters the person on the application.
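The icon behavior described above suggests a small state lookup per relationship: a pending request shows a "request sent" icon, while a denial quietly resets to a state from which a new request can be sent. The state names and dictionary fields here are illustrative assumptions:

```python
def friend_icon_state(relationship: dict) -> str:
    """Icon to display next to a person's name, per the request lifecycle above."""
    if relationship.get("friends", False):
        return "friends"
    if relationship.get("request_denied", False):
        return "can_send_request"   # a denial quietly allows a new request to be sent
    if relationship.get("request_sent", False):
        return "request_sent"
    return "can_send_request"
```

Checking the denial flag before the sent flag captures the rule that a denied request reverts the icon even though a request was previously sent.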
  • the screen of FIG. 59 is shown which indicates 5901 that the friend request has been accepted. Selecting, touching, or swiping across the highlighted card of FIG. 59 leads to the screen of FIG. 60. As indicated on the screen of FIG. 60, the next friend request is from "Viktor Power" 6001. Separately, the top of the screen shows that the user is now friends with Kornelia Quinlan 6002. When the user selects the message at the top of the screen, the user is transported to the screen of FIG. 61. The screen of FIG. 61 shows that Kornelia Quinlan 6101 is now a friend of the user. This card is presented as the last card on the screen.
  • any reference to selection of an option or item on a screen described in this specification may additionally or alternatively refer to clicking, swiping, touching, or staring (persistent eye contact for a certain period of time) at, or hovering over, for a certain period of time, the option or item.
  • When the user selects the option to add new groups 2605 on the screen of FIG. 26, the user is presented with the screen of FIG. 68.
  • the user can choose which friends to add to the group.
  • the friends presented in FIG. 68 are presented in order of most recent activity on the application (e.g., creating a new group, creating a new chain, adding a link to an existing chain, etc.) or most recent communication (e.g., video call and/or video message) with the user or most recent activity of the friend associated with the user (e.g., liking the user's chain, adding a link to the user's chain, adding the user to a group, etc.), or they may be presented according to some other order such as date added as a friend, alphabetical order, physical distance from user, etc.
  • the screen of FIG. 69 shows that the user selected "Dijanna McGuiness" 6901 to add to the group. Although not shown in FIG. 69, the user selected some other friends as well. In some embodiments, there may be a limit on how many friends can be added to a new group.
  • the user selects the continue option 6902, and then the user is transported to the screen of FIG. 70.
  • the user may enter a name 7001 for the group.
  • FIG. 70 shows the collage 7002 of user photos that will be used as the group photo.
  • the user may be able to send videos to the group such that others in the group can view the videos.
  • a group may also be known as a private network or a private channel.
  • Prior to sending a video message to a friend, group, or chain, the user is presented with an option to add text to the video message by selecting the appropriate option in FIG. 7.
  • When the user selects the appropriate option 702 in FIG. 7, the user is transported to the screen of FIG. 62, FIG. 65, FIG. 71, or FIG. 74.
  • the user enters text into the box 7101 as shown in the screen of FIG. 63 or FIG. 66 or FIG. 72 or FIG. 75 and selects "Done.”
  • the text box may have a certain maximum length, and the user's message will have to conform to that maximum length of characters or words.
  • When the user selects "Done" 7201, the user is led to the screen of FIG. 64, FIG. 67, FIG. 73, or FIG. 76.
  • the user may be able to change characteristics associated with the text, including size, font, location of text as overlaid on the video, etc.
  • When the video is sent to the target recipient by selecting the continue option 7302 in FIG. 64, FIG. 67, FIG. 73, or FIG. 76, the text remains in the foreground while the video plays in the background upon activation of the video by the recipient.
  • the target recipient may have the option to hide the text or otherwise change characteristics associated with the text, including size, degree of transparency of the text in the foreground, font, location of text as overlaid on the video, etc.
  • the text may even be overlaid on a still photo or a video stored on the mobile device that is sent to the target recipient.
  • the still photo is displayed to the target recipient, along with the text overlaid on the photo if added by the user, for a certain period of time (e.g., 15 seconds).
  • the user may select their own photo 2611, name 2612 (also known as display name), or username 2613.
  • the user is presented with the screen of FIG. 77.
  • the user is presented with recent activity 7701.
  • the activity is limited to public network activity such as activity with regard to chains (e.g., adding links to a chain, liking a chain, etc.).
  • the user can see that the user's most recent activity was adding links to a couple of chains.
  • the user added a link to her own chain, while at 1:35 PM, the user added a link to another user's chain.
  • the act of registering on the application may be a public activity.
  • the user can choose to edit the user's name by selecting the name 7702 on FIG. 77. When the user does so, the user is transported to the screen of FIG. 78 where the user can edit the user's name. In some embodiments, the user may not be able to edit the user's username 7704. However, in alternate embodiments, the user may even be able to edit the user's username.
  • When the user selects the user's photo 7703 in FIG. 77, the user is transported to the screen of FIG. 79 where the user can change the photo 7901. As shown in FIG. 79, the user has options to delete the present photo 7902, capture a new photo 7903, or upload an existing photo 7904 that is stored on the mobile device or on a cloud server.
  • When the user selects the capture option 7903, the user is taken to the screen of FIG. 80.
  • the user can choose which camera 8001 (front or back) to capture the photo and then select the option 8002 to capture the photo. Then the user is transported to the next screen in FIG. 81.
  • the user can select an option 8103 to confirm the photo.
  • the captured photo 8201 may also be stored in the photos on the mobile device or on the cloud server as seen in FIG. 82.
  • the user may decide to edit the listed location, which may be the current location of the mobile device.
  • the user can select the location for association with a captured video.
  • the user is transported to the screen presented in FIG. 83 or FIG. 86.
  • the user can search for a location 8301, select a location 8302, or select "no location," 8303 in which case the location information is hidden or absent and not associated with a video message.
  • the user can search 8401 for a particular location as shown on the screen in FIG. 84 or FIG. 87.
  • the user can then select one of the displayed search results 8501 as shown in FIG. 85 or FIG. 88.
  • the user may also select the live effects option 504 on the screen of FIG. 5.
  • the live effects option is being selected for the user's profile photo 501 that is visible in FIG. 5.
  • the user may select a particular photo or video, or capture a new photo or video, and choose to apply a live effect to the selection.
  • the live effects may include masks, avatars, filters, moving effects, etc.
  • the live effects may be applied to user faces and/or backgrounds identified in both video messages and in video calls. Live effects may be applied to both photos and videos.
  • the user may apply a live effect to a video once it is captured and prior to transmitting it to a friend, group (e.g., a private group visible only to members in the group), or chain (e.g., a public chain).
  • the user may select the Skeletor mask 8901 of FIG. 89 or FIG. 94.
  • the mask is applied to the user's face in FIG. 89 or FIG. 94 as a preview (not shown in the figure).
  • Prior to applying the mask to the user's face, the application identifies the user's face from the photo using one or more facial identification techniques. When the user confirms the selection in FIG. 90 or FIG. 95, the mask is applied to the user's photo. Since this mask is free or has been previously purchased, the mask is applied to the photo without requesting payment.
  • the user may choose to send the modified photo to a friend, group, or chain.
  • the user may select the "Mono" filter (e.g., a color filter) 8902 of FIG. 89 or FIG. 94.
  • the filter may be applied to the photo of FIG. 91 or FIG. 96 as a preview prior to purchase. Alternatively, the filter may have to be purchased even to apply the filter as a preview.
  • the user is presented with a pop-up window 9201, as shown in FIG. 92 or FIG. 97, that prompts the user to confirm that the user wants to pay for the filter.
  • the user may be asked for authentication credentials.
  • Authentication credentials may include a username and password, a fingerprint, etc.
  • the transaction may be processed through any server described herein or an external transaction server in communication with the server described herein.
  • the appearance of the "Mono" filter option is changed 9301 as shown in FIG. 93 or FIG. 98. Selecting the mono filter 9301 or the check mark 9302 causes the filter to be applied to the selected video or audio.
  • the user may select an option 2606 to call the user's friend "Kaja Soto."
  • the user is presented with the screen in FIG. 99.
  • the screen indicates that the user is calling Kaja Soto.
  • the user may end the call or the initialization of the call by selecting the cancel option 9901.
  • the user may choose to activate or deactivate video transmission from the user's mobile device (and/or video reception at the user's mobile device) by selecting the video option 9902.
  • the user may choose to activate or deactivate voice transmission from the user's mobile device (and/or voice reception at the user's mobile device) by selecting the audio option 9903.
  • the user may also choose to change the camera (e.g., camera facing user or front camera, camera facing away from the user or back camera, etc.) that is capturing the video by selecting the camera option 9904.
  • the video conference begins. This is shown on the screen in FIG. 100.
  • the user can view the video 10001 of his or her friend. If the user's friend is not transmitting video, the user views the friend's profile photo or a blank screen or a home screen of the application or an advertisement.
  • the user's video 10002 or the user's image is presented on the side of the screen and is smaller in size compared to the friend's video or image. Also shown is an advertisement 10003 at the top of the screen.
  • the advertisement is not overlaid on the video or image of the user's friend. In other embodiments, the advertisement is overlaid on the video or image of the user's friend. In other embodiments, this advertisement may be located in other parts of the screen. The advertisement may disappear after a certain period. In some embodiments, any advertisement described in this specification may be for an application, a product, a service, etc. The advertisement may be interactive. When the user selects the advertisement, the advertisement or the advertised item is presented in an Internet browser or other application such as an application store application while the video conference is still going on in the background.
  • the screen presents options to activate or deactivate video 10004, activate or deactivate voice 10005, change the camera capturing the video 10006, add a live effect to or remove a live effect from the video 10007, end the call 10009, and add a friend to the call 10008. These options are overlaid on the friend's video or image. When the user selects or taps on the friend's video or image, the options at the bottom of the screen disappear, except for the end call option. As indicated in various screenshots, an image of the application name may be overlaid on the video conference. [000113] When the user selects the option 10008 to add a friend to the call, the user is transported to the screen in FIG. 101 while the video conference is still ongoing. On this screen, the user may add one or more friends.
  • The manner in which friends or friend cards are displayed on this screen may be explained by other portions of this specification.
  • the friends are ordered based on recent activity on the application, recent communication with the user, most frequent activity on the application and/or communication with the user, etc.
  • the user may also search for a particular friend.
  • the user enters the text "Zuz" 10201. While the user is typing the text or upon completion of typing the text, the user is presented with search results from among the user's friends.
  • the user selects the friend "Zuzen Graeme" 10301. Once the user makes this selection, the user is taken to the screen of FIG. 104.
  • This screen shows that while the video conference is going on, a call is being placed to Zuzen Graeme.
  • When Zuzen receives the call, he may be notified that this is an invitation to join a three-person call as opposed to a one-on-one call. In other embodiments, Zuzen may not receive notification that the invitation from the user is for a three-person call.
  • the user is taken to the screen of FIG. 105.
  • This screen shows that there are three users on the call.
  • Each user's live video feed is displayed on the user's mobile device screen.
  • the user may be able to switch the visual positions of each person's live video feed on the user's screen.
  • the visual positions of each person's live video feed may switch based on who is talking. In other embodiments, the visual positions of each person's live video feed remain constant for the duration of the video conference or call.
  • the user may be able to add up to a certain number of users to the call.
  • When the user selects or touches any part of the screen in FIG. 105, the user is taken to the screen of FIG. 110, which shows that the user has the same options as those which are presented in the screen of FIG. 100.
  • When the user selects the end call option 11001, the user is taken to the screen of FIG. 109 where an advertisement is played or shown to the user.
  • the advertisement may play for a particular period of time and there may be an option to skip the advertisement.
  • the advertisement may be an interactive advertisement such that when the user selects the advertisement, the user is transported to a different application or an Internet browser or an application store. Any features described with respect to any other advertisement in the specification are applicable to this advertisement, and vice versa.
  • the call may still continue with the other members of the call even though none of them initiated the call, and the user is returned to the screen of FIG. 26.
  • In other embodiments, the call is ended for all members of the call. If Zuzen drops off the call, the user is transported to the screen of FIG. 106, where the current duration 10601 of the call is still visible.
  • In some embodiments, the duration of the call is calculated from the time when the call was initiated, regardless of whether others were added in and/or dropped off. In other embodiments, the duration of the call is calculated from the time of the most recent drop-off. Selecting or touching the video of the user's friend transports the user to the screen of FIG. 100. As seen in that figure, the options at the bottom of the screen are brought up or made visible.
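The two timing conventions for the call-duration display can be sketched as follows. This is an illustrative sketch only; the function name, parameters, and the use of plain numeric timestamps are assumptions, not details from the application.

```python
# Sketch of the two call-duration conventions (illustrative names; timestamps
# are plain numbers in seconds for simplicity).

def call_duration(now, call_started_at, drop_off_times, from_last_drop_off=False):
    """Return the duration shown on the call timer.

    from_last_drop_off=False: measured from when the call was initiated,
    regardless of participants being added or dropping off.
    from_last_drop_off=True: measured from the most recent drop-off, if any.
    """
    anchor = call_started_at
    if from_last_drop_off and drop_off_times:
        anchor = max(drop_off_times)
    return now - anchor
```

Either convention can be selected with the `from_last_drop_off` flag, mirroring the alternative embodiments described above.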
  • the user may select the live effects option 10007.
  • When the user selects the live effects option 10007, the user is transported to the screen of FIG. 107, where the user may select one or more masks, filters, avatars, moving effects 10701, etc.
  • In FIG. 108, the user selects the "None" option 10801 and selects the check mark 10802 to continue. The user may then be taken to the screen of FIG. 100.
  • the user may select the option for more commands ("...") 1601.
  • the user is transported to the screen of FIG. 111.
  • the user is then given options to view the link of FIG. 16 in birds-eye view 11101, send a friend request to the user who created the link 11102, or report the video 11103.
  • the user selects the option to send a friend request 11102
  • the user is transported to the screen of FIG. 112 where the only option presented on the screen is to send a friend request 11201 and the other options have disappeared.
  • the friend request is sent 11301 to the user who created the link as shown on the screen of FIG. 113.
  • When the user selects or touches the "Friend Request Sent" message 11301 on the screen of FIG. 113, the user is transported back to the screen of FIG. 16. The user may once again return to the screen of FIG. 111 from FIG. 16 as described herein. This time the user may select the option to report the video 11103. When the user makes this selection, the video disappears or blacks out in the background and the user is transported to the screen of FIG. 114. Here the user has options to report the video for inappropriateness 11401, abusiveness 11402, or spam 11403. As shown on the screen of FIG. 115, the user may select the option to report the video for spam. The user is prompted whether the user is sure that this is spam.
  • the user is returned to the screen of FIG. 16.
  • the user confirms 11502
  • the user is transported to the screen of FIG. 116 which indicates that the user's report is being transmitted to the server or to a reporting computing device in communication with the server.
  • the subsequent screen of FIG. 117 indicates that the content has been reported.
  • the next link or video in the chain is presented on the screen of the mobile device either automatically or upon the user selecting or touching the screen.
  • the screen of FIG. 118 is presented to the user.
  • the video link in the chain will appear as blacked out or as a "reported video" to the user who reported the video, both in normal view and in birds-eye view.
  • the video may still be visible in the chain to other users until the video has received a number of reports equal to or greater than a threshold.
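The reporting behavior described above — the video is blacked out immediately for the user who reported it, and hidden from everyone once reports reach a threshold — can be sketched as a simple visibility check. The names and the concrete threshold value are assumptions for illustration; the application only refers to "a threshold number of reports".

```python
# Sketch of the report-visibility rule (names and threshold are assumed).

REPORT_THRESHOLD = 3  # assumed value; the application does not fix one

def is_video_visible(viewer_id, reporter_ids, threshold=REPORT_THRESHOLD):
    """A reported video is blacked out immediately for any user who reported it,
    and hidden from all users once the report count reaches the threshold."""
    if viewer_id in reporter_ids:
        return False
    return len(reporter_ids) < threshold
```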
  • the server may pre-process any videos being posted to a chain, shared to a group, and/or sent to a friend.
  • the pre-processing may include determining whether the video is appropriate for transmission and that it is not spam, abusive, or inappropriate.
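The server-side pre-processing step might be organized as a set of pluggable checks, one per category (spam, abusive, inappropriate). This structure is an assumption for illustration; the application does not specify how the checks are implemented.

```python
# Sketch of server-side pre-processing before a video is posted to a chain,
# shared to a group, or sent to a friend (structure is assumed).

def preprocess_video(video_meta, classifiers):
    """Run each category check (e.g. spam, abusive, inappropriate) and approve
    the video for transmission only if no check flags it."""
    flags = [name for name, check in classifiers.items() if check(video_meta)]
    return {"approved": not flags, "flags": flags}
```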
  • the user may navigate between chains by selecting or touching near the bottom or top of the screen, or by swiping top to bottom, or bottom to top.
  • the screen of FIG. 119 presents a link in the chain started by Anni Cartwright.
  • When the user selects an area near the bottom of the screen 11901 above the link, the user is taken to the screen of FIG. 14, which has a link in a chain started by Mimi Jackson, i.e., the user herself.
  • When the user selects an area 1414 near the top of the screen of FIG. 14, the user is taken to the screen of FIG. 132, which shows an advertisement. Selecting an area near the top of the screen of FIG. 132 transports the user back to the screen of FIG. 119.
  • the advertisement of FIG. 132 may be an interactive advertisement, and any features described with respect to other advertisements in the specification are also applicable to this advertisement.
  • When the user selects the more commands option 1415 of the link in the chain of FIG. 14, the user is presented with the screen of FIG. 120.
  • the user is given options to see a birds-eye view of the link 12001 or to delete the link 12002.
  • the delete option is present only for links created by the user herself.
  • the user cannot delete links created by others.
  • the user may be able to delete links created by others.
  • When the user chooses the delete option in FIG. 120, the user is transported to the screen in FIG. 121.
  • The user may select an option 12101 to not delete the link.
  • Once the user confirms that the user wishes to delete the link by selecting the confirm option 12102, the user is transported to the screen of FIG. 122, which indicates that the server is processing the user's delete request. If the delete operation is successful, an appropriate message is presented as shown in the screen of FIG. 123. If the delete operation is not successful, an appropriate message is presented as shown in the screen of FIG. 124.
  • the user may select the "School Friends" group 4603 or private network of FIG. 46. When the user makes this selection, the user is transported to the screen of FIG. 125. The user may be able to change the name of the group by selecting the appropriate option 12501. When the user changes the name of the group, the previous name of the group may still be visible for the other members of the group. In alternate embodiments, the group name changes for all members of the group if the name is changed by either the initiator of the group or one of the members of the group. The screen shows how many members 12502 are in the group. The screen shows options for starting a video call 12503 with members of the group or sending a video message 12504 to members of the group. The screen also lists the members 12505 of the group.
  • the user may be able to select a particular member of the group and doing so will show their profile screen.
  • the profile screen shows whether the user is already friends with that member or not.
  • the profile screen also shows how many friends that member has on the application.
  • the user may start a one-on-one video call with the member or send a video message to that member, if the user is already friends with that member. If the user is not friends with that member, the user can send a friend request to that member. Whether or not the user is friends with that member, the user may be able to see how many friends that member has and their recent public activity, e.g., activity on public chains.
  • registered users may be able to control what portions (e.g., profile photo, list of friends, video messages, public chain activity, etc.) of their profile are visible (and whether the users will be discovered in searches) to themselves, their friends, their friends of friends, their groups, non-friends in groups, and non-friends outside of their groups, non-friends on public chains on which both the registered user and the non-friend have posted links, etc.
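A minimal sketch of these per-portion visibility controls follows, assuming a simple mapping from profile portion to the audience tiers allowed to see it. The portion names and tiers are taken from the list above; the data layout and function names are assumptions.

```python
# Sketch of per-portion profile visibility controls (data layout is assumed).

DEFAULT_VISIBILITY = {
    "profile_photo": {"self", "friends", "friends_of_friends", "groups", "public"},
    "friend_list": {"self", "friends"},
    "video_messages": {"self", "friends"},
    "public_chain_activity": {"self", "friends", "public"},
}

def can_view(portion, relationship, visibility=DEFAULT_VISIBILITY):
    """relationship is the viewer's tier relative to the profile owner:
    'self', 'friends', 'friends_of_friends', 'groups', or 'public'."""
    return relationship in visibility.get(portion, set())
```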
  • the user may send a video, either a captured video or a downloaded or uploaded video, to a single friend or a group. If the user sends the video message to a group, all users of the group will be able to view the video and respond to the video. In some embodiments, the terms video, message, and video message may be used interchangeably.
  • When the user selects the appropriate option 12506 in FIG. 125, the user is transported to the screen of FIG. 126.
  • the user may select the option 12601 to leave the group. Once the user leaves the group, the user is returned to the screen of FIG. 26.
  • the user may not be able to add a new friend to the group or remove one of the group members (except the user himself or herself) from the group.
  • the user may be able to add a new friend to the group and remove one of the group members.
  • the user may be able to remove or delete video messages that the user sent to the group, chain, or to a particular friend such that others can no longer view them.
  • the user cannot remove or delete video messages that others posted on the group, chain, or received from others such as the user's friends.
  • the user can remove or delete video messages that others (e.g., friends or non-friends) posted on the group, chain, or received from others such as the user's friends.
  • an advertisement 12701 may be presented on the home screen of FIG. 4.
  • the advertisement is shown in FIG. 127.
  • this advertisement pops up when the application is re-launched.
  • the user may close out of the application and reinitialize the application.
  • the user may access the application after a certain period of time even though the application has not been closed.
  • the user may have the option of closing or skipping the advertisement.
  • the user may select the advertisement.
  • This advertisement is similar to the other advertisements described in the specification.
  • the pending notifications are presented to users.
  • the user may select the friends icon 502 on the home screen of FIG. 5 to look at the latest notifications from the user's private network which are overlaid on the home screen.
  • the notification may be a friend request 12801 from "Cipriano Dockartaigh" if only one friend request is pending.
  • the user can either accept or deny the friend request, or choose to address it later by selecting the "X" option 12802.
  • the notification 12901 is displayed as that in FIG. 129.
  • the user may select the view option 12902 to look at each friend request.
  • the user may swipe left and right to look at more notifications. For example, as shown on the screen in FIG. 130, there may be a notification 13001 of a new video message from "Kaja Soto." The user may select the view option 13002 to view the video message.
  • In FIG. 131, there may be a notification of new likes for a chain created by a user 13101, a chain in the user's network, a suggested chain, or a trending chain.
  • the user may select the view option 13101 to view the likes and information of the users associated with the new likes.
  • the user may select the chains icon on the home screen to look at the latest notifications from the user's public network which are overlaid on the home screen.
  • a user may choose to address any of the pop-up messages at a later point in time by just selecting the "X" option on the pop-up window.
  • FIG. 133 shows a screen of trending chains 13301.
  • Trending chains may be chains that are being viewed by a certain number of other users within a certain time frame, at a certain location, on a certain topic, etc.
  • FIG. 134 shows a screen of suggested chains 13401.
  • Suggested chains may be based on a user's previous activity. For example, the user may have previously had multiple likes or multiple views of links posted by a certain user, or the user may have viewed a certain user's profile. If so, that certain user's links on a particular chain, which the user has not previously viewed, or that certain user's new chains, which the user has not yet viewed, may populate the list of suggested chains. Alternatively, the user may have viewed or searched for chains with certain words. If so, chains which include that word as a hashtag may populate the list of suggested chains.
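The suggestion logic described above — unseen chains from creators the user has previously liked or viewed, plus chains whose hashtags match the user's previous searches — can be sketched as a simple filter. The data shapes and names are assumptions for illustration.

```python
# Sketch of the suggested-chains filter (data shapes are assumed).

def suggest_chains(chains, engaged_creators, searched_terms, viewed_chain_ids):
    """Suggest chains the user has not yet viewed whose creator the user has
    previously engaged with, or whose hashtags match previous searches."""
    suggestions = []
    for chain in chains:
        if chain["id"] in viewed_chain_ids:
            continue
        if chain["creator"] in engaged_creators or any(
            term in chain["hashtags"] for term in searched_terms
        ):
            suggestions.append(chain["id"])
    return suggestions
```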
  • When the user selects the option to add to the chain of FIG. 133, the user is presented with the screen of FIG. 135.
  • the screen of FIG. 135 informs 13501 the user that only friends of contributors to a chain can add to a chain.
  • the user is presented with the screen of FIG. 136 which shows the list of contributors 13601 to a chain.
  • the contributors are ordered in terms of the number of common friends between each contributor and the user.
  • the user's friend request will have to be accepted by one of the contributors to the chain for the user to be able to contribute to or add a link to the chain. In some embodiments, the user will still be able to "like" a link on the chain and view the link even though the user is not friends with any contributor on the chain.
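The contributor ordering described above can be sketched by sorting on the number of mutual friends between each contributor and the user. The function name and input shapes are illustrative assumptions.

```python
# Sketch of ordering chain contributors by mutual friends (names are assumed).

def order_contributors(contributors, user_friends, friends_of):
    """Sort contributors so those sharing the most friends with the user
    come first."""
    def mutual_count(contributor):
        return len(user_friends & friends_of.get(contributor, set()))
    return sorted(contributors, key=mutual_count, reverse=True)
```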
  • a user may view a profile of any other user on the application.
  • the type of information that is viewable by a friend of the user versus a non-friend of the user may be changed by a user.
  • the screen of FIG. 137 shows a screen that is presented when a user views a non-friend.
  • the user is able to see that Dijanna has 17 friends 13701 on the application and is able to see that she posted links to a couple of chains today and that she also created a new chain today.
  • the user may send a friend request 13702 to Dijanna by selecting the appropriate option.
  • the screen of FIG. 138 shows a screen that is presented when a user views a friend.
  • the screen indicates that the user is friends 13801 with Kaja and Kaja's total number of friends 13802.
  • the screen also indicates that the user can make a video call 13803 and send a video message 13804 to Kaja.
  • the screen also indicates Kaja's most recent activity which included adding links to a couple of chains.
  • the user may also select the option to view more commands 13805. When the user selects this option, the user is presented with an option to unfriend 13901 as shown in the screen of FIG. 139. Once the option to unfriend is selected, the user is transported to the screen of FIG. 140.
  • the user may also be able to select the number of friends 13701, 13802 as presented on the screens of FIG. 137 and FIG. 138, which leads to the screens of FIG.
  • FIG. 143 is a block diagram of a method for control and manipulation of video signals in public and private communication networks, wherein but for the control and manipulation of video signals, which is necessarily rooted in computing technology, the user would not be able to separate the user's public and private video-related activity.
  • Any activity that is visible outside the user's friends (and/or groups, in some embodiments, and/or friends of friends, in some embodiments) may be referred to as public activity.
  • Any activity that is not visible outside the user's friends (and/or groups, in some embodiments, and/or friends of friends, in some embodiments) may be referred to as private activity.
  • the various blocks of FIG. 143 may be executed in a different order from that shown in FIG. 143. Some blocks may be optional.
  • the method comprises establishing a public video communication network for a mobile device user to interact with a video sequence created by a second mobile device user not connected to the user on the public video communication network, the interacting with the video sequence comprising adding video messages to the video sequence and viewing video messages from the video sequence.
  • the method comprises establishing a private video communication network for the mobile device user to interact with a third mobile device user connected to the user on the public video communication network, the interacting with the third mobile device user comprising transmitting video messages to the third mobile device user and receiving video messages from the third mobile device user.
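The two establishing steps above (the method of FIG. 143) can be sketched as a minimal data model in which public chain interaction requires no connection between users, while private video messaging flows only between connected users. All class and method names here are assumptions for illustration, not the application's own API.

```python
# Minimal sketch of the public/private two-network model of FIG. 143
# (all names are assumed for illustration).

class VideoNetworks:
    def __init__(self):
        self.friends = {}  # user -> set of connected users (private network)
        self.chains = {}   # chain_id -> list of (author, video) links

    def add_to_chain(self, chain_id, author, video):
        """Public network: a user may add to and view a chain even when not
        connected to the chain's creator."""
        self.chains.setdefault(chain_id, []).append((author, video))

    def send_message(self, sender, recipient, video, inboxes):
        """Private network: video messages flow only between connected users."""
        if recipient not in self.friends.get(sender, set()):
            raise PermissionError("users are not connected")
        inboxes.setdefault(recipient, []).append((sender, video))
```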
  • the term "signal” may refer to a single signal or multiple signals.
  • the term “signals” may refer to a single signal or multiple signals. Any reference to a signal may be a reference to an attribute of the signal.
  • Any transmission, reception, connection, or communication may occur using any short-range (e.g., Bluetooth, Bluetooth Low Energy, near field communication, Wi-Fi Direct, etc.) or long-range communication mechanism (e.g., Wi-Fi, cellular, etc.). Additionally or alternatively, any transmission, reception, connection, or communication may occur using wired technologies. Any transmission, reception, or communication may occur directly between systems or indirectly via one or more systems such as servers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present invention relates to an apparatus for control and manipulation of video signals in public and private communication networks. The apparatus comprises a computing device processor for establishing a public video communication network for a mobile device user to interact with a video sequence created by a second mobile device user not connected to the user on the public video communication network, the interacting with the video sequence comprising adding video messages to the video sequence and viewing video messages from the video sequence; and establishing a private video communication network for the mobile device user to interact with a third mobile device user connected to the user on the public video communication network, the interacting with the third mobile device user comprising transmitting video messages to the third mobile device user and receiving video messages from the third mobile device user.
PCT/US2018/024184 2017-03-23 2018-03-23 Commande et manipulation de signaux vidéo dans des réseaux de communication publics et privés WO2018175989A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762475818P 2017-03-23 2017-03-23
US62/475,818 2017-03-23

Publications (1)

Publication Number Publication Date
WO2018175989A1 true WO2018175989A1 (fr) 2018-09-27

Family

ID=63586211

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/024184 WO2018175989A1 (fr) 2017-03-23 2018-03-23 Commande et manipulation de signaux vidéo dans des réseaux de communication publics et privés

Country Status (1)

Country Link
WO (1) WO2018175989A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020081389A (ko) * 2000-03-03 2002-10-26 Qualcomm Incorporated Method and apparatus for participating in group communication services in an existing communication system
WO2003053034A1 (fr) * 2001-12-15 2003-06-26 Thomson Licensing S.A. Procede et systeme d'utilisation d'un canal de conversation privee dans un systeme de videoconference
US9178961B2 (en) * 2008-12-22 2015-11-03 At&T Intellectual Property I, L.P. Method and apparatus for providing a mobile video blog service
KR20150133630A (ko) * 2012-05-24 2015-11-30 Renaissance Learning, Inc. Interactive organization of comments on an online social platform
US20160011758A1 (en) * 2014-07-09 2016-01-14 Selfie Inc. System, apparatuses and methods for a video communications network


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220103785A1 (en) * 2020-04-24 2022-03-31 Meta Platforms, Inc. Dynamically modifying live video streams for participant devices in digital video rooms
US20220103786A1 (en) * 2020-04-24 2022-03-31 Meta Platforms, Inc. Dynamically modifying live video streams for participant devices in digital video rooms
US11647155B2 (en) * 2020-04-24 2023-05-09 Meta Platforms, Inc. Dynamically modifying live video streams for participant devices in digital video rooms
US11647156B2 (en) 2020-04-24 2023-05-09 Meta Platforms, Inc. Dynamically modifying live video streams for participant devices in digital video rooms
US11909921B1 (en) * 2020-12-21 2024-02-20 Meta Platforms, Inc. Persistent digital video streaming

Similar Documents

Publication Publication Date Title
US11882231B1 (en) Methods and systems for processing an ephemeral content message
US11277459B2 (en) Controlling a display to provide a user interface
US8848026B2 (en) Video conference call conversation topic sharing system
US9246917B2 (en) Live representation of users within online systems
RU2642513C2 (ru) Система связи
US9794264B2 (en) Privacy controlled network media sharing
US20160212230A1 (en) Contextual connection invitations
US20130179491A1 (en) Access controls for communication sessions
US11290292B2 (en) Complex computing network for improving streaming of audio conversations and displaying of visual representations on a mobile application
US20220200947A1 (en) Systems and methods for providing a flexible and integrated communications, scheduling, and commerce platform
US20220150295A1 (en) Methods and systems for initiating a coordinated effect
US20160306950A1 (en) Media distribution network, associated program products, and methods of using the same
EP3272127B1 (fr) Système d'interaction sociale à base de vidéo
WO2018175989A1 (fr) Commande et manipulation de signaux vidéo dans des réseaux de communication publics et privés
US20180219812A1 (en) Mobile app messaging platform system
US20230209103A1 (en) Interactive livestreaming experience

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18771900

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.01.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18771900

Country of ref document: EP

Kind code of ref document: A1