WO2016049875A1 - Device and method for capturing, sharing and watching video messages - Google Patents

Device and method for capturing, sharing and watching video messages

Info

Publication number: WO2016049875A1
Authority: WIPO (PCT)
Prior art keywords: video, screen, electronic device, icon, portable electronic
Application number: PCT/CN2014/087985
Other languages: French (fr)
Inventors: Remyyiyang Ho, Zhuofei Chen, Shujia KANG, Xucheng TANG, Zongzhuo WU
Original Assignee: Tencent Technology (Shenzhen) Company Limited
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to PCT/CN2014/087985
Publication of WO2016049875A1

Classifications

    • All classifications fall under H (ELECTRICITY), H04 (ELECTRIC COMMUNICATION TECHNIQUE), H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION), H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]) and H04N21/40 (Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof):
    • H04N21/4424: Monitoring of the internal components or processes of the client device, e.g. CPU or memory load, processing speed, timer, counter or percentage of the hard disk space used
    • H04N21/41407: Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance, embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44209: Monitoring of downstream path of the transmission network originating from a server, e.g. bandwidth variations of a wireless network
    • H04N21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/462: Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4788: Supplemental services, e.g. displaying phone caller identification or shopping application, communicating with other users, e.g. chatting

Definitions

  • This application relates generally to electronic devices used for exchanging instant messages, including but not limited to electronic devices with touch-sensitive surfaces for capturing, sharing and watching video-based instant messages on the touch-sensitive surfaces based on user instructions (e.g., through finger gestures or movement of the electronic devices).
  • IM applications are widely deployed today on different types of electronic devices such as desktops, laptops, tablets, and smartphones. People primarily use IM applications to exchange text-based, audio-based, or static-image-based messages. These messages are relatively easy to generate and share between different IM users. In contrast, sharing video-based messages poses unique challenges because of the difficulty of capturing videos and the large network bandwidth required for transmitting them. For example, it often takes multiple steps before an IM user can start the video camera built into a mobile device. This cumbersome process makes the mobile device less suitable for capturing and sharing time-sensitive moments. On the other hand, video messages can carry much more information than messages in other media formats. With the wide use of mobile devices equipped with video cameras, it is therefore important to develop an IM application that lets IM users exchange video messages with each other on their mobile devices.
  • Such methods and interfaces may complement or replace conventional methods for capturing, sharing, and watching video messages.
  • Such methods and interfaces reduce the burden on a user when trying to capture, share or watch a video message and produce a more efficient human-machine interface.
  • For battery-operated electronic devices (e.g., smartphones), such methods and interfaces conserve power and increase the usage time between battery charges.
  • The disclosed device may be a portable device (e.g., a laptop, a tablet, or a handheld device) that has a touch-sensitive surface (e.g., a touchpad or touch screen).
  • The device has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions.
  • the user interacts with the GUI primarily through finger contacts and gestures on the touch-sensitive surface.
  • the functions may include video capturing, editing, sharing or playing within an IM application. Executable instructions for performing these functions may be included in a computer readable storage medium or other computer program product configured for execution by one or more processors.
  • a method for capturing, editing and sharing videos is performed at an electronic device having a touch screen, one or more processors, and memory storing programs executed by the one or more processors.
  • the method includes: while running an online messaging application: starting a video recording window on the touch screen in response to a user instruction; starting a video recording session in response to detecting a first finger gesture on the touch screen; detecting a second finger gesture on the touch screen during the video recording session; and continuing the video recording session when detecting a third finger gesture on the touch screen and a time gap between the second finger gesture and the third finger gesture is less than a predefined time window; stopping the video recording session when no finger gesture is detected within the predefined time window; and transmitting the recorded video to a remote server, wherein the remote server is configured to share the recorded video with one or more target users of the online messaging application.
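  • For illustration, the gesture-driven recording loop just described can be sketched as a small state machine. The following Kotlin sketch is not from the application: the class and function names, the tick-based timeout check, and the 1000 ms time window are assumptions for the example.

```kotlin
// Sketch of the claimed control flow: the first finger gesture starts the
// recording session, each later gesture arriving within the predefined time
// window keeps it alive, and the session stops (and the recorded video is
// transmitted to the server for sharing) once no gesture arrives in time.
class VideoRecordingController(private val timeWindowMs: Long = 1000) {
    private var recording = false
    private var lastGestureAtMs = 0L

    fun onFingerGesture(nowMs: Long) {
        if (!recording) recording = true   // first gesture: start the session
        lastGestureAtMs = nowMs            // second, third, ... gestures keep it alive
    }

    fun onTick(nowMs: Long) {
        // stop once no finger gesture has been detected within the time window
        if (recording && nowMs - lastGestureAtMs >= timeWindowMs) {
            recording = false
            transmitToServer()
        }
    }

    private fun transmitToServer() =
        println("recording stopped; uploading video for sharing with target users")
}

fun main() {
    val controller = VideoRecordingController()
    controller.onFingerGesture(0)     // first gesture: recording starts
    controller.onFingerGesture(600)   // second gesture during the session
    controller.onFingerGesture(1100)  // third gesture, 500 ms gap: continue
    controller.onTick(2500)           // no gesture for >= 1000 ms: stop and transmit
}
```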
  • a portable electronic device includes a touch screen, one or more processors, memory, and one or more programs; the programs are stored in the memory and configured to be executed by the processors and the programs include instructions for performing the operations of the method described above.
  • a non-transitory computer readable storage medium has stored therein instructions which, when executed by an electronic device having a touch screen, cause the device to perform the operations of the method described above.
  • a method for playing and watching videos is performed at a portable electronic device having a touch screen, one or more processors, and memory storing programs executed by the one or more processors.
  • the method includes: receiving information related to a video from a remote server, the video-related information identifying a source of the video and an online messaging application for playing the video; determining a current status of the online messaging application and a network setting of the portable electronic device; and managing the download and play of the video on the screen in accordance with the current status of the online messaging application and the network setting of the portable electronic device.
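  • As a sketch of this management step: the enum values and the decision rules below are assumptions; the method only requires that the download/play decision depend on the messaging application's current status and the device's network setting.

```kotlin
// Illustrative policy for managing download and playback of a received video
// message, keyed on app status and network setting. The specific rules are
// assumed for illustration, not taken from the application.
enum class AppStatus { FOREGROUND, BACKGROUND, NOT_RUNNING }
enum class Network { WIFI, CELLULAR, OFFLINE }
enum class Action { DOWNLOAD_AND_AUTOPLAY, DOWNLOAD_ONLY, NOTIFY_AND_WAIT }

fun manageVideo(status: AppStatus, network: Network): Action = when {
    network == Network.OFFLINE -> Action.NOTIFY_AND_WAIT        // nothing to fetch yet
    status == AppStatus.FOREGROUND && network == Network.WIFI ->
        Action.DOWNLOAD_AND_AUTOPLAY                            // user present, cheap bandwidth
    status == AppStatus.FOREGROUND ->
        Action.NOTIFY_AND_WAIT                                  // cellular: ask before downloading
    network == Network.WIFI -> Action.DOWNLOAD_ONLY             // prefetch for later viewing
    else -> Action.NOTIFY_AND_WAIT
}

fun main() {
    println(manageVideo(AppStatus.FOREGROUND, Network.WIFI))     // DOWNLOAD_AND_AUTOPLAY
    println(manageVideo(AppStatus.BACKGROUND, Network.CELLULAR)) // NOTIFY_AND_WAIT
}
```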
  • a portable electronic device includes a touch screen, one or more processors, memory, and one or more programs; the programs are stored in the memory and configured to be executed by the processors and the programs include instructions for performing the operations of the method described above.
  • a non-transitory computer readable storage medium has stored therein instructions which, when executed by an electronic device having a touch screen, cause the device to perform the operations of the method described above.
  • Portable electronic devices with touch screens are thus provided with faster, more efficient and intuitive methods and interfaces for capturing, editing, sharing and watching video messages, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices.
  • Such methods and interfaces may complement or replace conventional methods for manipulating user interface objects.
  • FIG. 1A is a block diagram illustrating a portable electronic device having a touch screen in accordance with some embodiments.
  • FIG. 1B is a block diagram illustrating exemplary components for event handling within the portable electronic device in accordance with some embodiments.
  • FIG. 2A is a block diagram of a portable electronic device with a display and a touch-sensitive surface in accordance with some embodiments.
  • FIG. 2B illustrates an exemplary user interface for the portable electronic device with a touch-sensitive surface that is separate from the display in accordance with some embodiments.
  • FIG. 3 is an exemplary user interface illustrating multiple applications installed in a portable electronic device having a touch screen in accordance with some embodiments.
  • FIGS. 3A-3E are exemplary user interfaces illustrating processes of capturing, editing, and sharing videos using an IM application installed in a portable electronic device having a touch screen in accordance with some embodiments.
  • FIGS. 4A-4C are flow diagrams illustrating a method of capturing, editing, and sharing videos in accordance with some embodiments.
  • FIGS. 5A-5J are exemplary user interfaces illustrating processes of playing and watching videos using an IM application installed in a portable electronic device having a touch screen in accordance with some embodiments.
  • FIGS. 6A-6E are flow diagrams illustrating a method of playing and watching videos in accordance with some embodiments.
  • the electronic device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions.
  • Other portable devices such as laptops or tablet computers with touch-sensitive surfaces (e.g. , touch screens and/or touch pads) , may also be used.
  • the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g. , a touch screen and/or a touch pad) .
  • an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device may include one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.
  • the device supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • the various applications executed on the device may use at least one common physical user-interface device, such as the touch-sensitive surface.
  • One or more functions of the touch-sensitive surface as well as corresponding information displayed on the touch-sensitive surface may be adjusted and/or varied from one application to the next and/or within a respective application.
  • a common physical architecture (such as the touch-sensitive surface) of the device may support the variety of applications with user interfaces that are intuitive and transparent to the user.
  • FIG. 1A is a block diagram illustrating a portable electronic device 100 having a touch screen system 112 in accordance with some embodiments.
  • Device 100 may include memory 102 (which may include one or more computer readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124.
  • Device 100 may include one or more optical sensors 164. These components may communicate over one or more communication buses or signal lines 103.
  • Device 100 is only one example of a portable electronic device; device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components.
  • the various components shown in FIG. 1A may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • Memory 102 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU 120 and the peripherals interface 118, may be controlled by memory controller 122.
  • Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102.
  • the one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data.
  • peripherals interface 118, CPU 120, and memory controller 122 may be implemented on a single chip, such as chip 104. In some other embodiments, they may be implemented on separate chips.
  • RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals.
  • RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals.
  • RF circuitry 108 may include well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth.
  • RF circuitry 108 may communicate with networks, such as the Internet, also referred to as the World Wide Web (WWW) , an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN) , and other devices by wireless communication.
  • The wireless communication may use any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
  • Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100.
  • Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111.
  • Speaker 111 converts the electrical signal to human-audible sound waves.
  • Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves.
  • Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data may be retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118.
  • audio circuitry 110 also includes a headset jack.
  • the headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g. , a headphone for one or both ears) and input (e.g. , a microphone) .
  • I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118.
  • I/O subsystem 106 may include display controller 156 and one or more input controllers 160 for other input or control devices.
  • the one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116.
  • the other input control devices 116 may include physical buttons (e.g. , push buttons, rocker buttons, etc. ) , dials, slider switches, joysticks, click wheels, and so forth.
  • input controller (s) 160 may be coupled to any (or none) of the following: a keyboard, infrared port, USB port, and a pointer device such as a mouse.
  • Touch screen 112 provides an input interface and an output interface between the device and a user.
  • Display controller 156 receives and/or sends electrical signals from/to touch screen 112.
  • Touch screen 112 displays visual output to the user.
  • the visual output may include graphics, text, icons, video, and any combination thereof (collectively termed "graphics" ) .
  • some or all of the visual output may correspond to user-interface objects.
  • Touch screen 112 has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact.
  • Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on touch screen 112.
  • a point of contact between touch screen 112 and the user corresponds to a finger of the user.
  • Touch screen 112 may use LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments.
  • Touch screen 112 and display controller 156 may detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112.
  • the user may make contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth.
  • the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen.
  • the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
  • device 100 may include a touchpad (not shown) for activating or deactivating particular functions.
  • the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output.
  • the touchpad may be a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
  • Power system 162 for powering the various components.
  • Power system 162 may include a power management system, one or more power sources (e.g. , battery, alternating current (AC) ) , a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g. , a light-emitting diode (LED) ) and any other components associated with the generation, management and distribution of power in portable devices.
  • Device 100 may also include one or more optical sensors 164.
  • FIG. 1A shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106.
  • Optical sensor 164 may include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors.
  • Optical sensor 164 receives light from the environment, projected through one or more lenses, and converts the light to data representing an image.
  • In conjunction with imaging module 143 (also called a camera module), optical sensor 164 may capture still images or videos.
  • an optical sensor is located on the back of device 100, opposite touch screen 112 on the front of the device, so that the touch screen may be used as a viewfinder for still and/or video image acquisition.
  • an optical sensor is located on the front of the device so that the user's image may be obtained for videoconferencing while the user views the other video conference participants on the touch screen.
  • the position of optical sensor 164 can be changed by the user (e.g. , by rotating the lens and the sensor in the device housing) so that a single optical sensor 164 may be used along with the touch screen for both video conferencing and still and/or video image acquisition.
  • Device 100 may also include one or more proximity sensors 166.
  • the proximity sensor turns off and disables touch screen 112 when the electronic device is placed near the user's ear (e.g. , when the user is making a phone call) .
  • Device 100 may also include one or more accelerometers 168.
  • FIG. 1A shows accelerometer 168 coupled to peripherals interface 118.
  • accelerometer 168 may be coupled to an input controller 160 in I/O subsystem 106.
  • information is displayed on the touch screen in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers.
  • Device 100 optionally includes, in addition to accelerometer (s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g. , portrait or landscape) of device 100.
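  • For illustration, a minimal sketch of the portrait/landscape decision from accelerometer data: compare the gravity components along the device's x and y axes. A real implementation would add filtering and hysteresis; the names here are assumptions.

```kotlin
import kotlin.math.abs

// Decide display orientation from the accelerometer's gravity components:
// gravity mostly along the y axis suggests portrait, along x suggests landscape.
enum class Orientation { PORTRAIT, LANDSCAPE }

fun orientationFrom(ax: Double, ay: Double): Orientation =
    if (abs(ay) >= abs(ax)) Orientation.PORTRAIT else Orientation.LANDSCAPE

fun main() {
    println(orientationFrom(ax = 0.5, ay = 9.6))  // PORTRAIT: gravity along y
    println(orientationFrom(ax = 9.7, ay = 0.3))  // LANDSCAPE: gravity along x
}
```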
  • the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136.
  • memory 102 stores device/global internal state 157, as shown in FIGS. 1A and 2A.
  • Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen 112; sensor state, including information obtained from the device’s various sensors and input control devices 116; and location information concerning the device’s location and/or attitude.
  • Operating system 126 includes various software components and/or drivers for controlling and managing general system tasks (e.g. , memory management, storage device control, power management, etc. ) and facilitates communication between various hardware and software components.
  • Communication module 128 facilitates communication with other devices over one or more external ports 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) and also includes various software components for handling data received by RF circuitry 108 and/or external port 124.
  • Contact/motion module 130 may detect contact with touch screen 112 (in conjunction with display controller 156) and other touch sensitive devices (e.g. , a touchpad or physical click wheel) .
  • Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g. , detecting a finger-down event) , determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g. , detecting one or more finger-dragging events) , and determining if the contact has ceased (e.g. , detecting a finger-up event or a break in contact) .
  • Contact/motion module 130 receives contact data from the touch-sensitive surface.
  • Determining movement of the point of contact may include determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations may be applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts).
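  • A minimal sketch of these computations from successive contact samples; the sample type and the pixel/millisecond units are assumptions:

```kotlin
import kotlin.math.hypot

// Compute velocity (magnitude and direction) and speed (magnitude only)
// of the point of contact from two successive contact samples.
data class ContactSample(val x: Double, val y: Double, val tMs: Long)

fun velocity(a: ContactSample, b: ContactSample): Pair<Double, Double> {
    val dt = (b.tMs - a.tMs) / 1000.0                 // seconds between samples
    return Pair((b.x - a.x) / dt, (b.y - a.y) / dt)   // pixels per second, x and y
}

fun speed(a: ContactSample, b: ContactSample): Double {
    val (vx, vy) = velocity(a, b)
    return hypot(vx, vy)   // magnitude; acceleration would difference velocities
}

fun main() {
    val s0 = ContactSample(0.0, 0.0, 0)
    val s1 = ContactSample(30.0, 40.0, 100)
    println(speed(s0, s1))   // 500.0 px/s for a 50 px move in 100 ms
}
```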
  • contact/motion module 130 and display controller 156 detect contact on a touchpad.
  • contact/motion module 130 and controller 160 detect contact on a click wheel.
  • Contact/motion module 130 may detect a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns. Thus, a gesture may be detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift off) event at the same position (or substantially the same position) as the finger-down event (e.g. , at the position of an icon) . As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift off) event.
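  • For illustration, contact-pattern matching of the kind just described can be sketched as follows; the event types and the 10-pixel position tolerance are assumptions:

```kotlin
import kotlin.math.hypot

// A tap is a finger-down followed by a finger-up at (substantially) the same
// position; a swipe is a finger-down, one or more drags, then a finger-up.
sealed class TouchEvent(val x: Float, val y: Float)
class FingerDown(x: Float, y: Float) : TouchEvent(x, y)
class FingerDrag(x: Float, y: Float) : TouchEvent(x, y)
class FingerUp(x: Float, y: Float) : TouchEvent(x, y)

fun classify(events: List<TouchEvent>): String {
    val down = events.firstOrNull() as? FingerDown ?: return "none"
    val up = events.lastOrNull() as? FingerUp ?: return "none"
    val moved = hypot((up.x - down.x).toDouble(), (up.y - down.y).toDouble())
    return when {
        events.any { it is FingerDrag } -> "swipe"  // down, drag(s), then up
        moved < 10.0 -> "tap"                       // up at ~same position as down
        else -> "none"
    }
}

fun main() {
    println(classify(listOf(FingerDown(5f, 5f), FingerUp(6f, 5f))))  // tap
    println(classify(listOf(FingerDown(5f, 5f), FingerDrag(60f, 5f),
                            FingerUp(120f, 5f))))                    // swipe
}
```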
  • Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the intensity of graphics that are displayed.
  • graphics includes any object that can be displayed to a user, including without limitation text, web pages, icons (such as user-interface objects including soft keys) , digital images, videos, animations and the like.
  • graphics module 132 stores data representing graphics to be used. Each graphic may be assigned a corresponding code.
  • Graphics module 132 receives, from applications etc. , one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
  • Text input module 134, which may be a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, instant messenger module 147, and any other application that needs text input).
  • GPS module 135 determines the location of the device and provides this information for use in various applications (e.g. , to telephone 138 for use in location-based dialing and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets) .
  • Applications 136 may include the following modules (or sets of instructions) , or a subset or superset thereof:
  • contacts module 137 (sometimes called an address book or contact list);
  • Examples of other applications 136 that may be stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
  • contacts module 137 may be used to manage an address book or contact list (e.g. , stored in application internal state 192 of contacts module 137 in memory 102 or memory 270) , including: adding name (s) to the address book; deleting name (s) from the address book; associating telephone number (s) , e-mail address (es) , physical address (es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141; and so forth.
  • telephone module 138 may be used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation and disconnect or hang up when the conversation is completed.
  • the wireless communication may use any of a plurality of communications standards, protocols and technologies.
  • image management module 144 includes executable instructions to arrange, modify (e.g. , edit) , or otherwise manipulate, label, delete, present (e.g. , in a digital slide show or album) , and store still and/or video images.
  • video player module 145 includes executable instructions to display, present or otherwise play back videos (e.g. , on touch screen 112 or on an external, connected display via external port 124) .
  • camera module 146 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
  • the instant messenger module 147 includes executable instructions to enter a sequence of characters corresponding to an instant text message, record an audio stream corresponding to an instant audio message, capture an image/video stream corresponding to an instant image/video message, to add/modify/delete previously entered characters to edit the text/audio/image/video messages, to transmit a respective instant text/audio/image/video message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages) , to receive instant text/audio/image/video messages and to view received instant text/audio/image/video messages.
  • Transmitted and/or received instant messages may include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS).
  • instant messaging refers to both telephony-based messages (e.g. , messages sent using SMS or MMS) and Internet-based messages (e.g. , messages sent using XMPP, SIMPLE, or IMPS) .
  • Each of the above identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein).
  • These modules (i.e., sets of instructions) need not be implemented as separate software programs; for example, video player module 145 and camera module 146 can be incorporated into the instant messenger module 147.
  • memory 102 may store a subset of the modules and data structures identified above. Furthermore, memory 102 may store additional modules and data structures not described above.
  • device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad.
  • By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 may be reduced.
  • the predefined set of functions that may be performed exclusively through a touch screen and/or a touchpad include navigation between user interfaces.
  • the touchpad when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that may be displayed on device 100.
  • the touchpad may be referred to as a “menu button. ”
  • the menu button may be a physical push button or other physical input control device instead of a touchpad.
  • FIG. 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments.
  • memory 102 in FIG. 1A or 270 (FIG. 2A) includes event sorter 170 (e.g. , in operating system 126) and a respective application 136-1 (e.g. , any of the aforementioned applications 137-147) .
  • Event sorter 170 receives event information and determines the application 136-1 and the application view 191 of application 136-1 to which to deliver the event information.
  • Event sorter 170 includes event monitor 171 and event dispatcher module 174.
  • application 136-1 includes application internal state 192, which indicates the current application view (s) displayed on touch screen 112 when the application is active or executing.
  • device/global internal state 157 is used by the event sorter 170 to determine which application (s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
  • application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
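  • As an illustrative data shape only, field names and value types below are assumptions mirroring the components just listed:

```kotlin
// Illustrative shape of application internal state 192; not a structure
// taken from the application.
data class ApplicationInternalState(
    val resumeInfo: Map<String, Any> = emptyMap(),       // used when the app resumes execution
    val uiStateInfo: Map<String, Any> = emptyMap(),      // displayed or ready for display
    val stateQueue: ArrayDeque<String> = ArrayDeque(),   // prior states/views to go back to
    val redoUndoQueue: ArrayDeque<String> = ArrayDeque() // previous actions taken by the user
)
```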
  • Event monitor 171 receives event information from peripherals interface 118.
  • Event information includes information about a sub-event (e.g. , a user touch on touch screen 112, as part of a multi-touch gesture) .
  • Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer (s) 168, and/or microphone 113 (through audio circuitry 110) .
  • Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch screen 112 or a touch-sensitive surface.
  • In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a predefined event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
  • event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
  • Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views, when touch screen 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
  • the application views (of a respective application) in which a touch is detected may correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected may be called the hit view, and the set of events that are recognized as proper inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
  • Hit view determination module 172 receives information related to sub-events of a touch-based gesture.
  • hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e. , the first sub-event in the sequence of sub-events that form an event or potential event) .
  • the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
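  • For illustration, hit-view determination can be sketched as a recursive walk that returns the lowest view whose bounds contain the touch point; the View and Rect shapes below are assumptions:

```kotlin
// Return the deepest view in the hierarchy whose bounds contain the touch
// point: the "hit view" for the initiating sub-event.
data class Rect(val x: Int, val y: Int, val w: Int, val h: Int) {
    fun contains(px: Int, py: Int) = px in x until x + w && py in y until y + h
}

class View(val name: String, val frame: Rect, val children: List<View> = emptyList())

fun hitView(root: View, px: Int, py: Int): View? {
    if (!root.frame.contains(px, py)) return null
    for (child in root.children) {
        hitView(child, px, py)?.let { return it }   // prefer the deepest match
    }
    return root   // no child contains the point: this view is the hit view
}

fun main() {
    val button = View("sendButton", Rect(10, 10, 80, 30))
    val chat = View("chatView", Rect(0, 0, 320, 480), listOf(button))
    println(hitView(chat, 20, 20)?.name)    // sendButton: lowest containing view
    println(hitView(chat, 200, 200)?.name)  // chatView
}
```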
  • Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
  • Event dispatcher module 174 dispatches the event information to an event recognizer (e.g. , event recognizer 180) .
  • event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173.
  • event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver module 182.
  • operating system 126 includes event sorter 170.
  • application 136-1 includes event sorter 170.
  • event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
  • application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application’s user interface.
  • Each application view 191 of the application 136-1 includes one or more event recognizers 180.
  • a respective application view 191 includes a plurality of event recognizers 180.
  • one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 136-1 inherits methods and other properties.
  • a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170.
  • Event handler 190 may utilize or call data updater 176, object updater 177 or GUI updater 178 to update the application internal state 192.
  • one or more of the application views 191 includes one or more respective event handlers 190.
  • one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
  • a respective event recognizer 180 receives event information (e.g. , event data 179) from event sorter 170, and identifies an event from the event information.
  • Event recognizer 180 includes event receiver 182 and event comparator 184.
  • event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which may include sub-event delivery instructions) .
  • Event receiver 182 receives event information from event sorter 170.
  • the event information includes information about a sub-event, for example, a touch or a touch movement.
  • the event information also includes additional information, such as location of the sub-event.
  • the event information may also include speed and direction of the sub-event.
  • events include rotation of the device from one orientation to another (e.g. , from a portrait orientation to a landscape orientation, or vice versa) , and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
  • Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event.
  • event comparator 184 includes event definitions 186.
  • Event definitions 186 contain definitions of events (e.g. , predefined sequences of sub-events) , for example, event 1 (187-1) , event 2 (187-2) , and others.
  • sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching.
  • the definition for event 1 (187-1) is a double tap on a displayed object.
  • the double tap for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase.
  • the definition for event 2 (187-2) is a dragging on a displayed object.
  • the dragging for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch screen 112, and lift-off of the touch (touch end) .
  • the event also includes information for one or more associated event handlers 190.
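  • For illustration, event definitions of this kind can be sketched as predefined sub-event sequences matched by a comparator; the sub-event names mirror those listed above, while the timing phases and displayed-object association are omitted:

```kotlin
// Sketch of event definitions as predefined sub-event sequences.
enum class SubEvent { TOUCH_BEGIN, TOUCH_END, TOUCH_MOVE }

val eventDefinitions = mapOf(
    // event 1: a double tap = begin, end, begin, end on the displayed object
    "doubleTap" to listOf(SubEvent.TOUCH_BEGIN, SubEvent.TOUCH_END,
                          SubEvent.TOUCH_BEGIN, SubEvent.TOUCH_END),
    // event 2: a drag = begin, movement across the screen, then lift-off
    "drag" to listOf(SubEvent.TOUCH_BEGIN, SubEvent.TOUCH_MOVE, SubEvent.TOUCH_END)
)

// Return the name of the first event whose definition matches the sequence.
fun recognize(subEvents: List<SubEvent>): String? =
    eventDefinitions.entries.firstOrNull { it.value == subEvents }?.key

fun main() {
    val seq = listOf(SubEvent.TOUCH_BEGIN, SubEvent.TOUCH_END,
                     SubEvent.TOUCH_BEGIN, SubEvent.TOUCH_END)
    println(recognize(seq))   // doubleTap
}
```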
  • event definition 187 includes a definition of an event for a respective user-interface object.
  • event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch screen 112, when a touch is detected on touch screen 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event) . If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
  • the definition for a respective event 187 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer’s event type.
  • When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
  • a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers.
  • metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers may interact with one another.
  • metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
  • a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized.
  • a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view.
  • event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
  • event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
  • data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module 145.
  • object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object.
  • GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch screen.
  • event handler (s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178.
  • data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
  • The foregoing discussion of event handling of user touches on touch screens also applies to other forms of user inputs for manipulating electronic devices 100 with input devices, not all of which are initiated on touch screens: e.g., coordinating mouse movement and mouse button presses with or without single or multiple keyboard presses or holds; user movements such as taps, drags, and scrolls on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof, all of which may be utilized as inputs corresponding to sub-events that define an event to be recognized.
  • FIG. 2A is a block diagram of a portable electronic device with a display and a touch-sensitive surface in accordance with some embodiments.
  • Device 200 need not be portable.
  • device 200 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child’s learning toy) , a gaming system, or a control device (e.g. , a home or industrial controller) .
  • Device 200 typically includes one or more processing units (CPUs) 210, one or more network or other communications interfaces 260, memory 270, and one or more communication buses 220 for interconnecting these components.
  • Communication buses 220 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
  • Device 200 includes input/output (I/O) interface 230 comprising display 240, which is typically a touch screen. I/O interface 230 also may include a keyboard and/or mouse (or other pointing device) 250 and touchpad 255.
  • Memory 270 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 270 may optionally include one or more storage devices remotely located from CPU (s) 210.
  • memory 270 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable electronic device 100 ( Figure 1) , or a subset thereof. Furthermore, memory 270 may store additional programs, modules, and data structures not present in memory 102 of portable electronic device 100. For example, memory 270 of device 200 may store drawing module 280 and presentation module 282 while memory 102 of portable electronic device 100 (FIG. 1A) may not store these modules.
  • Each of the above identified elements in FIG. 2A may be stored in one or more of the previously mentioned memory devices.
  • Each of the above identified modules corresponds to a set of instructions for performing a function described above.
  • the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments.
  • memory 270 may store a subset of the modules and data structures identified above.
  • memory 270 may store additional modules and data structures not described above.
  • FIG. 2B illustrates an exemplary user interface on a device (e.g. , device 200, FIG. 2A) with a touch-sensitive surface 451 (e.g. , a tablet or touchpad 255, FIG. 2A) that is separate from the display 450 (e.g. , touch screen 112) .
  • the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 2B.
  • the touch-sensitive surface (e.g., 451 in FIG. 2B) has a primary axis (e.g., 452 in FIG. 2B).
  • the device detects contacts (e.g. , 460 and 462 in FIG. 2B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g. , in FIG. 2B 460 corresponds to 468 and 462 corresponds to 470) .
  • user inputs e.g. , contacts 460 and 462 detected by the device on the touch-sensitive surface (e.g. , 451 in FIG. 2B) are used by the device to manipulate the user interface on the display (e.g. , 450 in FIG. 2B) of the electronic device when the touch-sensitive surface is separate from the display.
  • similar methods may be used for other user interfaces described herein.
  • while the examples below are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), one or more of the finger inputs may be replaced with input from another input device (e.g., a mouse-based input or stylus input).
  • a swipe gesture may be replaced with a mouse click (e.g. , instead of a contact) followed by movement of the cursor along the path of the swipe (e.g. , instead of movement of the contact) .
  • a tap gesture may be replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g. , instead of detection of the contact followed by ceasing to detect the contact) .
  • when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice may be used simultaneously, or a mouse and finger contacts may be used simultaneously.
  • user interfaces ("UI") and associated processes may be implemented on an electronic device with a display and a touch-sensitive surface, such as device 200 or portable electronic device 100.
  • FIG. 3 is an exemplary user interface 300 illustrating multiple applications installed in the portable electronic device 100 having a touch screen 112 in accordance with some embodiments. Similar user interfaces may be implemented on device 200.
  • the user interface 300 includes the following elements, or a subset or superset thereof:
  • Signal strength indicator (s) for wireless communication such as cellular signal indicator 302 and Wi-Fi signal indicator 308;
  • a user can select one of the applications by finger tapping a respective icon in the touch screen 112. For example, a finger tap 301 of the icon corresponding to the messenger 147 causes the device 100 to display the messenger application’s user interface on the touch screen 112. As will be described below, the user can exchange instant text/audio/image/video messages with others through the user interface.
  • FIGS. 3A-3E are exemplary user interfaces illustrating processes of capturing, editing, and sharing videos using an IM application installed in a portable electronic device having a touch screen in accordance with some embodiments.
  • the user interfaces in these figures are used to illustrate the processes described below in connection with FIGS. 4A-4C.
  • FIGS. 3A-3E some finger contact or movement sizes may be exaggerated for illustrative purposes. No depiction in the figures bearing on finger contact or movements should be taken as a requirement or limitation for the purpose of understanding sizes and scale associated with the methods and devices disclosed herein.
  • FIG. 3A depicts an exemplary user interface 300A for capturing videos when the messenger 147 (which is typically an online messaging application installed on the portable electronic device 100) is in a conversation list view.
  • the user interface 300A displays a list of conversation entries, each entry including related information such as an identifier 332 of another participant of the conversation, a brief description 333 of the conversation (e.g. , the most recently received message in a conversation) , and a timestamp 334 of the last update to the conversation, etc.
  • the user of the device 100 may enter a conversation with one or more target users by selecting one of the conversation entries, whereupon the user will be able to view instant messages (e.g., text, multimedia, etc.) included in the conversation.
  • the user can also generate instant messages and share them with others through one of the conversations.
  • the user may invoke the camera application 146 from the messenger 147 to capture a video and share the video with others.
  • when the user of the messenger 147 has not yet engaged in a conversation with a particular entity in the conversation list view, the user may decide with whom to share the captured video after completing the video capture and editing process.
  • FIG. 3B illustrates an exemplary user interface 300B of an individual conversation view.
  • the user of the messenger 147 is currently engaged in a conversation with one or more other users. Therefore, the captured video will be shared with the one or more other participants of this particular conversation.
  • the user may give an instruction to the device 100 to start a video recording window on the touch screen 112.
  • the user instruction is a downward finger swipe gesture 330 on the touch screen 112 from top to bottom.
  • the conversation list 315 is pulled down and replaced by the video recording window 310.
  • alternatively, the user instruction for starting the video recording window is to shake the electronic device.
  • the shaking is detected by, for example, the accelerometer 168.
  • the device 100 determines the meaning of the shaking movement based on the magnitude of the rate of change of acceleration. For example, when the rate of change of acceleration exceeds a predetermined threshold value, the device 100 determines that the user’s intent is to start the video recording window automatically.
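A minimal Kotlin sketch of this jerk-threshold heuristic, assuming Android's SensorManager API (the disclosure names no platform); the ShakeDetector class, the threshold value, and the callback are all illustrative. The detector would be registered against the default accelerometer via SensorManager.registerListener().

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import kotlin.math.sqrt

// Approximates the rate of change of acceleration (jerk) from successive
// accelerometer samples and fires the callback when it exceeds a threshold,
// which the device interprets as a request to open the video recording window.
class ShakeDetector(private val onShake: () -> Unit) : SensorEventListener {
    private var lastMagnitude = 0f
    private var lastTimestampNs = 0L
    private val jerkThreshold = 600f  // m/s^3; illustrative, tune empirically

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_ACCELEROMETER) return
        val (x, y, z) = event.values
        val magnitude = sqrt(x * x + y * y + z * z)
        if (lastTimestampNs != 0L) {
            val dtSec = (event.timestamp - lastTimestampNs) / 1e9f
            val jerk = (magnitude - lastMagnitude) / dtSec
            // A production detector would also debounce repeated triggers.
            if (jerk > jerkThreshold) onShake()
        }
        lastMagnitude = magnitude
        lastTimestampNs = event.timestamp
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```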
  • the video recording window 310 may include a video recording indicator 325 and a progress bar 326.
  • a picture may be initially displayed on the screen while activating the camera module 146 and the microphone 113 in the background.
  • the video recording indicator 325 may change its appearance to indicate the readiness of the camera module 145 and the microphone 113.
  • the device 100 can start the video capturing process. Subsequently, the device 100 stops the video capturing process upon detection of a finger lift-off gesture from the video recording window 310.
  • a progress indicator 327 on the progress bar 326 may demonstrate the progress of the video recording and indicate the length of the video relative to a predetermined length (after the video recording starts) .
  • the messenger 147 may limit the maximum length of a video message that can be shared with others to be 10 seconds in order to limit the usage of bandwidth.
  • such video may be referred to as “micro video” in the rest of this application.
  • the progress bar 326 may have a scale measuring the length of the dotted progress indicator 327, which indicates the length of the video recorded so far and the progress of the recording before it reaches 10 seconds.
  • the user may move his/her finger around on the touch screen 112.
  • one or more finger gestures on the touch screen 112, such as 340-344 may be detected.
  • the user may accidentally lift his/her finger off the touch screen 112 without intending to stop the video recording process and then press the finger on the touch screen 112 again.
  • the user may be walking while recording the video using the device 100. Therefore, it is important for the device 100 to interpret these finger movements correctly and act accordingly.
  • the temporal and spatial location of different finger gestures may be used for determining whether the video recording should be continued or stopped. The user may even be given an opportunity to edit the recorded video before it is transmitted to a remote server to be shared with one or more target users of the online messaging application, such as one or more users from the conversation view in FIG. 3B.
  • the video recording window may be started when the user of the messenger 147 is engaging an online conversation with one or more target users as shown in FIG. 3B.
  • the exemplary conversation window 321 displays the messages exchanged between the user of the device 100 and other users of the online conversation.
  • the conversation window 321 comprises a conversation pane for displaying the messages, a text message input field 326, and an additional function button 323.
  • the conversation pane may comprise messages 322a-b submitted to the online conversation by different participants of the conversation.
  • the user may submit text messages to the online conversation using the message input field 326.
  • the message input field 326 displays one or more characters or emoticons entered by the user in accordance with some embodiments.
  • the message input field 326 may further comprise an input cursor.
  • the user can generate and submit multimedia messages to the online conversation. For example, instead of shaking the device 100, a user may finger tap 324 the additional function button 323 to bring up a matrix of additional functions at the bottom portion of the exemplary user interface 300B.
  • the additional function matrix includes the following elements, or a subset or superset thereof:
  • the functions in the additional function matrix shown in FIG. 3B may or may not be the same as those applications shown in FIG. 3.
  • the contacts function 137' in FIG. 3B may refer to the contact list of the user of the messenger 147 while the contacts application 137 in FIG. 3 may refer to the contact list of the user of the phone application 138.
  • the photos function 144' in FIG. 3B may be the same as the photo application 144 in FIG. 3 because both manage the images stored in the memory of the device 100.
  • the video chat function 145' allows the user to have a video conference with another user, which is different from the micro video function 148 as described in the present application.
  • a user can select one element of the additional function matrix by finger tapping a respective icon in the touch screen 112.
  • a finger tap 328 of the icon corresponding to the micro video function 148 may cause the device 100 to start a micro video recording window on the touch screen 112, so that micro videos may be recorded and shared with one or more target users of the online messaging application.
  • one aspect of the present application is a more convenient method for capturing, editing and sharing videos using the electronic device 100 while the device is running an online messaging application, such that the user can quickly record one or more micro videos using finger gestures and optionally edit the recorded videos before sharing the recorded videos with one or more target users of the online messaging application.
  • FIG. 3C depicts an exemplary user interface 300C that replaces the user interface 300B on the touch screen 112 in response to the user’s finger tap 328 of the icon corresponding to the micro video function 148.
  • the video recording window 320 may include a video recording indicator 325 to indicate the readiness of the camera module 146 and the microphone 113 and a progress bar 326 comprising a progress indicator 327 to demonstrate the progress of the video recording and indicate the current length of the video relative to a predetermined length (after the video recording starts) .
  • while recording a micro video message, it is possible that the user may move his/her finger around on the touch screen 112.
  • one or more finger gestures on the touch screen 112, such as 340-344, may be detected.
  • the user may accidentally lift his/her finger off the touch screen 112 without intending to stop the video recording process and then press the finger on the touch screen 112 again.
  • the user may be walking while recording the video using the device 100. Therefore, it is important for the device 100 to interpret these finger movements correctly and act accordingly.
  • the temporal and spatial location of different finger gestures may be used for determining whether the video recording should be continued or stopped.
  • the video may be transmitted to a remote server, which may then share the recorded video with one or more target users of the online messaging application specified by the user of the device 100.
  • the designation of the one or more target users of the messaging application depends on the context of how the video recording window is started.
  • FIGS. 3B and 3C depict that the video recording window 320 is started when the user is in an online conversation 321.
  • in this case, the one or more target users of the micro video message are the other participants of the same conversation.
  • in contrast, the video recording window 310 is started when the online messaging application is in the conversation list view, but not in a particular conversation view.
  • in response to a user instruction (such as a downward finger swipe gesture on the touch screen 112, or shaking the portable electronic device 100), the video recording window 310 is started.
  • in this case, the one or more target users of the micro video message are designated after the completion of the video recording process.
  • FIG. 3D depicts an exemplary user interface 300D that is displayed after the completion of the video recording process to designate one or more target users to share the recorded video with.
  • the top portion of the exemplary user interface 300D includes a video playing window 360.
  • a user selection of the video playing window 360 triggers the device 100 to play the video captured by the device 100.
  • the first or last frame of the recorded video may be displayed in the video playing window 360.
  • a progress bar 366 may include a progress indicator 367 to visualize that the progress of the video recording is 100% of a predetermined length.
  • the bottom portion of the exemplary user interface 300D displays different options for the user to decide where to send the recorded video.
  • the options include the following choices, or a subset or superset thereof:
  • a conversation contact (e.g. A in FIG. 3D) 353;
  • a checkbox 356 is provided so that the user may designate one or more target users to send the recorded video.
  • Moments 351 allows the user to post the video on the user’s virtual message board of the messenger application 147 by selecting the icon of Moments 351. Any user of the messenger application 147 can visit the user’s message board to watch videos posted by the user.
  • the virtual message board may include multiple categories and the user can select the “>” sign 355 to bring up the multiple categories and designate a particular category for hosting the video.
  • FIG. 3E depicts an exemplary user interface 300E that is displayed in response to a user selection of the video playing window 360 in FIG. 3D.
  • a preview window 320-1 is displayed to allow the user to edit the recorded video before sharing.
  • a plurality of icons, such as confirm 370 and modify 371, may be provided to facilitate the editing. The user may select different clips of the video (not the entire video) to share, and delete unwanted portions of the recorded video using the modify icon 371, among others.
  • the camera may record a video clip C1 for the period of 328.
  • the period of 328 is terminated when a second finger gesture 342 (e.g. , finger-up) is detected.
  • the second finger gesture 342 may be an accident unintended by the user of the device 100.
  • the device 100 continues recording the video clip C2 for the period of 329 after detecting the second finger gesture 342, until a third finger gesture 344 (e.g., finger-down) is detected on the touch screen 112.
  • more video clips C3, C4, ..., CN may be recorded in response to subsequent finger gestures.
  • the length of a video clip like the one corresponding to the period of 329 is checked based on the time difference between the second finger gesture 342 and the third finger gesture 344. If the time difference is no greater than a predefined threshold (e.g., 0.5 second), it is assumed that the user did not intend to stop the video recording with the second finger gesture and the clip is combined with the previous video clip C1. On the other hand, if the time difference is greater than the predefined threshold, the clip is highlighted during the preview process so that the user can determine whether it should be kept as part of the recorded video to be shared with other users or should be deleted.
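The following Kotlin sketch illustrates this thresholding rule under simplifying assumptions: each clip is modeled only by the timestamps of its bounding finger gestures, and the Clip type and the 0.5-second default are illustrative rather than taken from the disclosure. In the variant described above, where the device keeps recording through the gap, the flagged segment would be the gap period itself rather than the clip that follows it.

```kotlin
// One recorded clip, modeled by the timestamps (ms) of the finger gestures
// that bound it.
data class Clip(val startMs: Long, val endMs: Long, var highlighted: Boolean = false)

// If the gap between the previous finger-up and the next finger-down is no
// greater than the threshold, the lift-off is treated as accidental and the
// two clips are merged; otherwise the later clip is flagged so the preview
// can highlight it for the user to keep or delete.
fun consolidateClips(raw: List<Clip>, gapThresholdMs: Long = 500): List<Clip> {
    val result = mutableListOf<Clip>()
    for (clip in raw) {
        val previous = result.lastOrNull()
        if (previous != null && clip.startMs - previous.endMs <= gapThresholdMs) {
            result[result.size - 1] = previous.copy(endMs = clip.endMs)
        } else {
            if (previous != null) clip.highlighted = true
            result.add(clip)
        }
    }
    return result
}
```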
  • a progress bar 376 is displayed below the video clips. The dotted progress indicator 377 is used to show the length of each video clip during the video recording process. In some embodiments, some portions of the progress bar 376 may not have the dotted progress indicator. For example, the portion of the progress bar 376 corresponding to the duration of 329 may be blank to show that no finger gesture was detected during the period 329. The user can use the modify icon 371 to edit the video clips recorded during the video recording process before confirming (370) the video and sharing it with one or more target users.
  • FIGS. 4A-4C are flow diagrams illustrating a method of capturing, editing, and sharing videos in accordance with some embodiments.
  • Method 400 is performed at an electronic device having a display and a touch-sensitive surface.
  • the display is a touch screen and the touch-sensitive surface is on the display.
  • the display is separate from the touch-sensitive surface.
  • the method 400 provides an efficient, intuitive way to record micro videos and share the recorded micro videos with users of an online messaging application.
  • the method reduces the number of steps needed to activate video recording on a portable device, thereby creating a more efficient human-machine interface.
  • enabling a user to record and share micro videos on the touch-sensitive display faster and more efficiently conserves power and increases the usage time between battery charges.
  • the method 400 has a fault-tolerant capability so that it can ignore the user’s accidental, unintended finger gestures on the touch screen.
  • while running an online messaging application, the device starts (410) a video recording window on the touch screen in response to a user instruction (402).
  • the device may start (410) the video recording window on the touch screen in response to detecting by a built-in accelerometer a shaking (404) of the electronic device while running an online messaging application (406) .
  • the device may start (410) the video recording window on the touch screen in response to detecting the downward finger swipe gesture (e.g. , 330, FIG. 3A) on the touch screen from top to bottom (408) .
  • FIG. 3A illustrates a user interface 300A of an online messaging application running on the portable electronic device.
  • the video recording window is started in response to a user instruction of a downward finger swipe gesture 330.
  • a user may start the video recording window by shaking the portable electronic device while running the online messaging application.
  • the shaking may be detected by the accelerometer (406) in the portable electronic device.
  • the portable electronic device determines the meaning of the shaking movement based on the magnitude of the rate of change of acceleration. For example, when the rate of change of acceleration exceeds a predetermined threshold value, the device determines that the user’s intent is to start the video recording window automatically.
  • FIG. 3B illustrates a user interface 300B of an online messaging application.
  • a user may finger tap (324) the additional function button 323 to activate the additional function matrix at the bottom region of the user interface 300B.
  • the additional function matrix includes the micro video function icon 148 among other function icons.
  • the video recording window is started in place of the additional function matrix as shown in FIG. 3C.
  • the portable electronic device 100 may start the video recording window in response to detecting a shaking of the portable electronic device during the online conversation session. Compared with detecting the two finger-tap gestures, detecting a shaking movement to start the video recording window is more efficient.
  • the video recording window starting step 410 includes initially displaying (412) a picture in the video recording window while activating the camera and the microphone and displaying (414) a progress bar corresponding to a predefined length of the video to be recorded.
  • a dynamic preview of the video being recorded may be rendered (416) in the video recording window.
  • a picture may be initially displayed (412) in the video recording window 310 and 320 respectively when activating the camera and the microphone.
  • the picture may be user-specified or a default picture configured in the portable electronic device.
  • the picture may be replaced by the video recording indicator to indicate the readiness of the camera and the microphone.
  • the video recording indicator is dynamically updated according to the current status of the camera.
  • a dark camera image or a closed eye image may be displayed in the window while the camera and the microphone are activated.
  • the video recording indicator is updated to an open shutter image or an open eye image to indicate the readiness of the camera and the microphone for video recording.
  • a dynamic preview of images through the camera may be rendered (416) in place of the still images.
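As a rough Kotlin sketch of the indicator sequence just described: the RecordingIndicator type, its callbacks, and the state names are all hypothetical, since the disclosure specifies only the visual progression from placeholder picture to readiness image to live preview, not any particular camera API.

```kotlin
// Hypothetical indicator states mirroring the sequence described above: a
// placeholder picture while the camera and microphone start up, a readiness
// image (open shutter / open eye) once both are available, then a live preview.
enum class IndicatorState { ACTIVATING, READY, PREVIEWING }

class RecordingIndicator(private val render: (IndicatorState) -> Unit) {
    var state = IndicatorState.ACTIVATING
        private set

    init {
        render(state)                  // show the placeholder picture first
    }

    fun onHardwareReady() {            // camera opened and microphone acquired
        state = IndicatorState.READY
        render(state)
    }

    fun onFirstPreviewFrame() {        // first frame delivered by the camera
        state = IndicatorState.PREVIEWING
        render(state)
    }
}
```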
  • the device starts a video recording process using the camera and the microphone in response to detecting a first finger gesture on the touch screen (418) .
  • the first finger gesture is a finger down gesture on the touch screen.
  • the device may start the video recording process upon detecting a finger contact on the touch screen within a predefined area of the video recording window and then instruct the camera and the microphone to start the video recording process.
  • the device monitors the user’s finger contact with the touch screen to determine when it should stop the video recording process.
  • the user is allowed to move the finger contact position on the touch screen, while maintaining constant finger contact with the touch screen, without interrupting the video recording process.
  • the device may detect (420) a second finger gesture on the touch screen (e.g. , a lift-off of the finger from the touch screen) .
  • the video recording process may be continued (430) or stopped (434) depending on the detection of a third finger gesture and a time gap between the second finger gesture and the third finger gesture.
  • the device transmits (436) the recorded video to a remote server for sharing with other users of the online messaging application.
  • the device prompts (446) the user to decide the target users for receiving the recorded video.
  • the device displays multiple options for sharing the video, including sharing it as one of the moments, a local copy in the portable electronic device, a conversation contact, or a group, among others.
  • the device prompts (448) the user to edit the portion of video recording corresponding to the time gap between the second finger gesture and the third finger gesture.
  • the portable electronic device may detect the upward finger swipe gesture (422) as the second finger gesture.
  • the device detects (426) a finger contact position on the touch screen.
  • the device detects (428) a termination of the finger contact with the touch screen.
  • the starting position of the upward finger swipe gesture may be at location 340 and the ending position (where the finger is lifted off the touch screen) may be at location 342.
  • the position 342 may be directly above the position 340 or above it at a certain angle, as shown in FIGS. 3A and 3C.
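A minimal Kotlin sketch of how such an upward swipe might be classified from the down and lift-off coordinates; the distance threshold and the roughly 45-degree tolerance implied by the dominance test are illustrative choices, not values from the disclosure.

```kotlin
import kotlin.math.abs

// Classifies a lift-off at (upX, upY) as an upward swipe when the finger
// travelled far enough and the vertical component dominates, covering both a
// lift-off directly above the start position and one at a moderate angle.
// Screen y grows downward, so dy is positive when the finger moved up.
fun isUpwardSwipe(
    downX: Float, downY: Float,
    upX: Float, upY: Float,
    minDistancePx: Float = 100f      // illustrative threshold
): Boolean {
    val dy = downY - upY
    val dx = abs(upX - downX)
    return dy >= minDistancePx && dy >= dx
}
```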
  • the video recording process may stop (434) when no finger gesture is detected within a predefined time window.
  • the video recording process continues (430) .
  • the finger may be briefly lifted off the touch screen surface due to the instability of the finger contact with the touch screen.
  • the device may measure a time gap between the second finger gesture and the third finger gesture.
  • if the time gap is less than the predefined time window, the second and third finger gestures are treated as false alarms and the video recording process continues (430).
  • the user may intend to skip a brief period during a video recording process. Therefore, lifting the finger off the touch screen for a brief duration (e.g., 1-3 seconds) may give the user the option to leave bookmarks on the corresponding portions of the recorded video for deletion (448) prior to transmitting (436) the video to the remote server.
  • the entire video recording process may include a plurality of video clips C1, C2, C3, C4, ..., CN.
  • the clip C2 corresponding to the period of 329 may be displayed as blank on the progress bar 376 in the preview window 320-1 of FIG. 3E, indicating that no finger gesture on the touch screen 112 was detected by the device 100 during the period of 329.
  • the user may decide to keep the clip C2 corresponding to the period of 329 when the absence of a finger gesture was due to the instability of the finger contact with the touch screen 112, or to delete the clip C2 from the recorded video when the user intentionally lifted his/her finger off the touch screen.
  • a finger contact position on the touch screen may be continuously monitored (432) .
  • the spatial character of finger gestures may be used to determine whether or not to stop the video recording process.
  • the device may detect a sequence of finger gestures from position 340 to position 342 and then to position 344.
  • the video recording process may continue (430) .
  • the position 344 is located above position 342 for illustrative purposes.
  • the position 344 may be located outside the video recording window. As long as the finger contact with the touch screen continues, the video recording process will not stop (430) .
  • the device may transmit (436) the recorded video to a remote server.
  • the remote server is configured to share (438) the recorded video with one or more target users of the online messaging application.
  • the device prompts (446) the user with multiple options for sharing the recorded video.
  • the device seamlessly records (440) a sequence of micro videos when the recording process exceeds a predetermined video length.
  • the predetermined video length is between 8 and 15 seconds (442).
  • the predetermined video length limits the size of the video files to be transmitted because smaller video files are easier to transmit (436) and share (438) with other users of the online messaging application.
  • the device may seamlessly write the first 10 seconds of video content into a first video file, the video content after the first 10 seconds into a second video file, and so on.
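One plausible way to implement this splitting on Android is to let MediaRecorder enforce the per-clip limit and roll over into a new output file when the limit is reached, as sketched below in Kotlin. The MicroVideoRecorder class and the configureFor() helper are hypothetical, the usual source/format/encoder setup is elided, and stopping and restarting the recorder can leave a small gap at each boundary, so this only approximates the seamless behavior described.

```kotlin
import android.media.MediaRecorder
import java.io.File

// Rolls the recording over into a new file each time the per-clip limit is
// reached, producing the series of micro video files described above.
class MicroVideoRecorder(
    private val outputDir: File,
    private val clipLengthMs: Int = 10_000
) {
    private var recorder: MediaRecorder? = null
    private var clipIndex = 0

    fun start() {
        val r = MediaRecorder()
        configureFor(r, File(outputDir, "clip_${clipIndex++}.mp4"))
        r.setMaxDuration(clipLengthMs)
        r.setOnInfoListener { _, what, _ ->
            if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
                // Recording stops automatically at the limit; roll over.
                r.release()
                recorder = null
                start()
            }
        }
        r.prepare()
        r.start()
        recorder = r
    }

    fun stop() {
        recorder?.apply { stop(); release() }
        recorder = null
    }

    private fun configureFor(r: MediaRecorder, output: File) {
        // Audio/video sources, output format, and encoders would be set here
        // (setAudioSource, setVideoSource, setOutputFormat, ...), then:
        r.setOutputFile(output.absolutePath)
    }
}
```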
  • the series of micro videos is displayed to the user so that the user may edit (448) them before transmitting (436) the videos to the remote server.
  • the remote server is configured to share (444) the series of videos with the one or more target users of the online messaging application.
  • the remote server may be configured to instruct the receiving ends of the series of micro videos to seamlessly combine the clips in the series into one longer video and play them one by one when sharing (444) the series of videos.
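On the receiving end, one way to approximate the seamless combination described above is to chain playback with a completion listener, as in the following Kotlin sketch using Android's MediaPlayer; binding the player to a display surface is omitted, and a production player would pre-buffer the next clip to hide the transition.

```kotlin
import android.media.MediaPlayer

// Plays a received series of micro video clips back to back by starting the
// next clip whenever the current one completes.
fun playClipSeries(paths: List<String>, index: Int = 0) {
    if (index >= paths.size) return
    val player = MediaPlayer()
    player.setDataSource(paths[index])
    player.setOnCompletionListener {
        it.release()
        playClipSeries(paths, index + 1)   // chain into the next clip
    }
    player.setOnPreparedListener { it.start() }
    player.prepareAsync()
}
```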
  • the operations described above with reference to FIGS. 4A-4C may be implemented by components depicted in FIGS. 1A and 1B.
  • detection of the finger gestures may be implemented by event sorter 170, event recognizer 180, and event handler 190.
  • Event monitor 171 in event sorter 170 detects a finger gesture on a touch screen 112, and event dispatcher module 174 delivers the event information to application 136-1.
  • application 136-1 includes methods and graphical user-interfaces for updating the information displayed on the touch screen 112.
  • a respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether particular gestures have been performed.
  • event recognizer 180 activates an event handler 190 associated with the detection of a respective gesture.
  • event handler 190 may utilize or call data updater 176 or object updater 177 to update data or a text display region and the application internal state 192.
  • similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in FIGS. 1A and 1B.
  • the user interface 300 can appear on the touch screen of a portable electronic device that is responsible for capturing, editing, and sharing videos as described in connection with FIGS. 3A-3E.
  • the user interfaces 300A or 300B may be activated and rendered on the touch screen to replace the user interface 300.
  • the aforementioned embodiments are directed to micro video capturing, editing and sharing using a portable electronic device, i.e. , the sending end of an online messaging application.
  • After the remote server receives a video and information of a target user to whom the video is directed, the remote server identifies a device (e.g., a mobile phone) from which the target user currently logs into his/her account of the online messaging application and then forwards the video and related information to the device.
  • FIGS. 5A-5J are exemplary user interfaces illustrating processes of playing and watching videos using an online messaging application installed in a portable electronic device having a touch screen in accordance with some embodiments.
  • FIGS. 6A-6E are flow diagrams illustrating a method of playing and watching videos on a portable electronic device in accordance with some embodiments.
  • the portable electronic device first receives (602) information related to a video from a remote server.
  • the video-related information identifies a source of the video, i.e. , the user identifier of the user who captures the video using his/her portable electronic device, and an online messaging application for playing the video.
  • the online messaging application used for capturing the video is the same as the online messaging application used for playing the video.
  • in other embodiments, the two applications are different from each other.
  • the online messaging application may be a distributed instant messaging application such as WeChat by Tencent, Inc. This messaging application includes a client-side component installed in terminal devices such as a mobile phone and a server-side component installed in the remote server.
  • a user of the messaging application can, e.g. , start the application by finger tapping 301 the corresponding icon displayed in the user interface 300 of FIG. 3.
  • a video file typically includes a large amount of data and consumes significant bandwidth when downloaded from the remote server, which might have an adverse impact on other applications running on the portable electronic device. Therefore, it is important for the device to take multiple factors into consideration based on the video-related information before downloading the video itself.
  • the video-related information also includes one or more snapshots of the video, which may correspond to the ith frame of the video.
  • the snapshot can be the first frame of the video or a randomly-chosen frame of the video.
  • the snapshot serves as a preview of the video before the video is downloaded on the portable device.
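A short Kotlin sketch of how the sending device might extract such a snapshot, assuming Android's MediaMetadataRetriever (the disclosure does not name a platform); passing 0 returns roughly the first frame, and any other offset yields an arbitrary frame.

```kotlin
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// Extracts a single frame from a recorded video to serve as the snapshot sent
// with the video-related information; timeUs = 0 yields roughly the first frame.
fun snapshotAt(videoPath: String, timeUs: Long = 0L): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST_SYNC)
    } finally {
        retriever.release()
    }
}
```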
  • In response to the arrival of the video-related information, the portable device first determines (604) a current status of the online messaging application and a network setting of the portable electronic device. Next, the portable electronic device manages (606) the download and play of the video on the touch screen in accordance with the current status of the online messaging application and the network setting of the electronic device.
  • the online messaging application has different running modes and the download and play of the video should be handled adaptively based on the specific running mode that the online messaging application is currently at.
  • some network settings around the portable electronic device may not be appropriate for playing the video message.
  • both types of information are useful when the portable electronic device determines a strategy of handling the download and play of the video. For example, when there is a large bandwidth available and there is an indicator that the user of the device is interested in watching the video, the device may start downloading video from the remote sever automatically and play the video on its display without any further user instruction. Conversely, when there is limited bandwidth available to the portable electronic device and the user has not indicated that he/she wants to watch the video, the portable device may hold off downloading the video until after receiving further user instruction. In some embodiments, the device may even interrupt the downloading of the video after the user has indicated a change of mind to save the bandwidth for other use.
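The decision logic described in this paragraph can be summarized as a small policy function. The Kotlin sketch below is illustrative only: the enum names and the bandwidth threshold are invented for the example, and a real implementation would also consult the user-configured network setting.

```kotlin
// Illustrative handling decisions combining the messaging application's
// current status with the available bandwidth.
enum class AppStatus { IN_CONVERSATION_WITH_SOURCE, CONVERSATION_LIST, MESSAGE_BOARD }
enum class Action { DOWNLOAD_AND_AUTOPLAY, DOWNLOAD_SHOW_ICON, SHOW_ICON_ONLY }

fun decide(status: AppStatus, bandwidthKbps: Int, thresholdKbps: Int = 1_000): Action =
    when {
        bandwidthKbps < thresholdKbps -> Action.SHOW_ICON_ONLY         // wait for a user tap
        status == AppStatus.IN_CONVERSATION_WITH_SOURCE -> Action.DOWNLOAD_AND_AUTOPLAY
        else -> Action.DOWNLOAD_SHOW_ICON                              // fetch now, play on demand
    }
```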
  • FIG. 5A depicts a user interface 500A after the user finger taps 301 the icon corresponding to the messenger application 147 of FIG. 3.
  • This user interface 500A corresponds to a conversation list session of the online messaging application.
  • a user selection of the chats icon 512-1 brings up a list of conversations (also known as "chats" ) on the touch screen 112 as shown in FIG. 5A.
  • a user selection of the contacts icon 512-2 causes the device to display the contact list of the user of the online messaging application.
  • a user selection of the favorites icon 512-3 generates a list of messages (text/audio/image/video) shared by the user's contact list as described below in connection with FIG. 5H.
  • a user selection of the settings icon 512-4 allows the user to change the setting of the online application. For example, the user can configure the network setting of the online application so that the online application determines when and how to play video messages accordingly.
  • an active conversation list session indicates that the user of the portable device is not currently engaging in a particular conversation with another user of the online application. Otherwise, the user selection of the icon corresponding to the messenger application 147 in FIG. 3 would bring up the particular conversation session on the touch screen 112.
  • the user interface 500A displays at least three conversation entries, 501A, 501B, and 501C.
  • a finger tap 510 of the conversation entry 501A activates the corresponding conversation session, which replaces the user interface 500A with the user interface 500B as shown in FIG. 5B.
  • the conversation entry 501A includes a video icon 505 indicating that there is a new video from the other user of this conversation associated with this conversation entry.
  • the existence of the video icon 505 does not necessarily indicate that the video itself has or has not been downloaded from the remote server because the video icon 505 may be generated according to the video-related information received by the device. As described below, the device may start downloading the video even without any further user instruction whenever there is sufficient bandwidth available to the device.
  • the device may automatically download (626) the video from the remote server when the device has a network bandwidth above a predefined threshold level (e.g., 1 Mbps) and display (628) an indicator 505 of the video on the screen adjacent a conversation session icon associated with the source of the video.
  • the device displays (632) a list of messages associated with the conversation session on the screen (e.g. , user interface 500B, FIG. 5B) and automatically plays (634) the video on the screen without further user instruction.
  • in response to a first user scrolling of the list of messages, the device dynamically shrinks (636) the play of the video into a video icon (e.g., 525, FIG. 5C) when the video icon moves outside a predefined region of the screen (e.g., 545, FIG. 5C).
  • the video icon includes the snapshot of the video received as part of the video-related information originally sent by the remote server. As such, different video icons corresponding to different videos look different because they have different snapshots.
  • In response to a second user scrolling of the list of messages (e.g., 535, FIG. 5D), the device then automatically plays (638) the video on the screen when the video icon moves back into the predefined region of the screen (e.g., 545, FIG. 5E). In other words, the device may start playing the video based on the user’s finger swiping gestures on the touch screen 112 without the user manually selecting the corresponding video icon.
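A Kotlin sketch of this scroll-driven expand/shrink behavior, assuming the message list is an AndroidX RecyclerView; the AutoPlayOnScroll class and the expand()/shrink() callbacks are hypothetical stand-ins for the application's own view logic, and playRegion corresponds to the predefined region 545.

```kotlin
import android.graphics.Rect
import android.view.View
import androidx.recyclerview.widget.RecyclerView

// Expands a video icon into inline playback when it scrolls into the
// predefined on-screen region and shrinks it back to an icon when it leaves.
class AutoPlayOnScroll(
    private val videoIcon: View,
    private val playRegion: Rect,          // screen coordinates of region 545
    private val expand: () -> Unit,        // hypothetical: icon -> playing video
    private val shrink: () -> Unit         // hypothetical: playing video -> icon
) : RecyclerView.OnScrollListener() {
    private var playing = false

    override fun onScrolled(rv: RecyclerView, dx: Int, dy: Int) {
        val visible = Rect()
        val inRegion = videoIcon.getGlobalVisibleRect(visible) &&
                Rect.intersects(visible, playRegion)
        if (inRegion && !playing) { playing = true; expand() }
        if (!inRegion && playing) { playing = false; shrink() }
    }
}
```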
  • whether or not the device needs more user instructions before downloading and playing the video on the touch screen 112 depends, at least in part, on the network setting of the portable electronic device 100. Downloading and playing video on a mobile device usually requires a high network bandwidth because the video is often data-intensive, and the mobile device may or may not have the appropriate network setting to accomplish such a task. Sometimes, although there might be a high-bandwidth network (e.g., 3G/4G/LTE cellular wireless network) available to the portable electronic device for downloading the video file from the remote server, such network bandwidth may be too expensive for the user of the portable electronic device. Therefore, the user of the portable electronic device 100 can use the setting option 512-4 to configure different video downloading policies based on the surrounding network environment.
  • when the online messaging application is in a conversation session with the source of the video (e.g., user interface 500B), the portable device automatically downloads (608) the video from the remote server and plays the video on the screen when the electronic device has a network bandwidth above a predefined threshold. For example, when there is at least one of a wired network connection, a Wi-Fi network connection, and a 3G or above wireless connection available to the electronic device, the video may be played (610) on the screen without further user instruction while it is being downloaded onto the portable device. In other words, the downloading process and the playing process may at least partially overlap each other to improve the user experience. As shown in FIGS. 5A and 5B, the online messaging application enters the conversation session and then plays the video 515 automatically without any further user instruction.
  • the portable electronic device may have already started downloading the video using the Wi-Fi connection 308, which not only has sufficient bandwidth but is also free.
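A Kotlin sketch of one way to test for such a connection before auto-downloading, using Android's ConnectivityManager; treating Wi-Fi or any unmetered network as "free, high bandwidth" is an assumption of this example, not a rule stated in the disclosure.

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

// Returns true when the active network is Wi-Fi or otherwise unmetered, which
// this example treats as sufficient for downloading the video automatically.
fun hasFreeHighBandwidth(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val caps = cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    return caps.hasTransport(NetworkCapabilities.TRANSPORT_WIFI) ||
            caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
}
```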
  • the number of times by which the video may be played is also user-configurable.
  • the video is played (612) repeatedly and then shrunk to a video icon when there is a user instruction to interrupt the play of the video.
  • the device 100 plays the video repeatedly until after receiving a user instruction (e.g. , a finger swipe gesture 520) to stop playing the video.
  • the video playing window 515 dynamically shrinks into the video icon 525 of the user interface 500C as shown in FIG. 5C.
  • the video is played (614) repeatedly on the screen and then shrunk to an icon in response to a new message added to the conversation session. For example, the video playing window 515 in FIG. 5B may dynamically shrink into the video icon 525 of the user interface 500C in FIG. 5C in response to the new message 530 submitted by the user of the portable electronic device 100.
  • the video is played (616) repeatedly on the screen for a predefined number of times and then shrunk to an icon after that even without any further user instruction.
  • the different embodiments are not mutually exclusive from each other.
  • the user configuration may be that the video will be played five times consecutively provided that there is no user instruction to terminate the process. Note that, as described below in connection with FIG. 5F, the user can always choose to replay the video by pressing the icon 525 in the user interface 500C.
  • the video may be replayed even without a user selection of the video icon 525.
  • FIG. 5D depicts a user interface 500D including a sequence of text messages between the two users. As more and more messages are added to the conversation session, the video icon 525 shown in the user interface 500C is pushed out of the user interface 500D.
  • if the user wants to replay the video, he or she can swipe (535) the touch screen 112 downward.
  • the sequence of messages moves down and the video message re-appears in the user interface 500E.
  • the device may start replaying the video by dynamically expanding the video icon into the video play window when the video icon enters into the predefined region 545 on the touch screen 112.
  • the same criteria used for determining the playing policy may be used herein again.
  • the user may configure that all the video icons in the region 545 will be played automatically without requiring further user instruction.
  • the region 545 shown in the user interface 500E is for illustrative purposes and its size may be varied by the user.
  • when the online messaging application is in a conversation session with the source of the video, the device automatically generates (618) a video icon corresponding to the video and displays the video icon on the screen when the electronic device does not have a network bandwidth above the predefined threshold.
  • the video icon includes (619) a snapshot of the video (e.g. , the snapshot included in the video-related information) .
  • the device downloads (620) the video from the remote server and dynamically replaces the video icon with the play of the video on the screen in response to a user selection of the video icon. As shown in FIG. 5F, the user interface 500F indicates that the device no longer has the Wi-Fi connection indicator 308.
  • the wireless connection indicator 302 may correspond to a 2G wireless network standard.
  • the video is not downloaded from the remote server and played until after a user selection of the video icon.
  • the device may start downloading the video from the remote server and play it on the touch screen 112 as shown in FIG. 5E.
  • the expansion of the video icon into the video playing window may push certain messages out of the user interface 500F.
  • the online messaging application monitors the network environment around the device 100 and updates its video playing and watching policy accordingly.
  • the device may automatically download (622) the video from the remote server and dynamically replace the video icon with the play of the video on the screen without further user instruction.
  • FIG. 5G depicts a user interface 500G of a conversation list session.
  • the user interface 500G is replaced with the user interface 500H of FIG. 5H, which corresponds to a message sharing session.
  • the device displays (642) a list of messages shared by different users of the online messaging application, the list of messages including a video icon corresponding to the video.
  • the device automatically downloads (644) the video from the remote server and plays the video on the screen without further user instruction when the video icon moves into a predefined region of the screen.
  • this situation is similar to the user interfaces 500D (FIG. 5D) and 500E (FIG. 5E).
  • the device automatically suspends (646) the download of the video from the remote server and dynamically shrinks the play of the video into the video icon without further user instruction when the video icon moves outside the predefined region of the screen.
  • the device can stop further downloading data from the remote server and save the bandwidth for other use.
  • a user scrolling gesture 565 brings the messages shared by different contacts upward.
  • FIG. 5I depicts a user interface 500I including the updated list of messages, one of which has a video message icon 570.
  • the video may be played automatically without any further user instruction.
  • the device may start playing the video by dynamically replacing the video icon with the video playing window 580 as shown in the user interface 500J of FIG. 5J (which indicates that the device 100 has a Wi-Fi connection) .
  • the video icon may remain as is if there is no high bandwidth network connection available to the device 100 and the device 100 has not downloaded the video from the remote server yet.
  • although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present application.
  • the first contact and the second contact are both contacts, but they are not the same contact.
  • the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context.
  • the phrase "if it is determined" or "if [a stated condition or event] is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
  • stages that are not order-dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, so the alternatives presented herein are not exhaustive. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.


Abstract

A method is performed at a portable electronic device having a screen, one or more processors and memory storing programs executed by the one or more processors. The method includes: receiving information related to a video from a remote server, the video-related information identifying a source of the video and an online messaging application for playing the video; determining a current status of the online messaging application and a network setting of the portable electronic device; and managing the download and play of the video on the screen in accordance with the current status of the online messaging application and the network setting of the portable electronic device.

Description

DEVICE AND METHOD FOR CAPTURING, SHARING AND WATCHING VIDEO MESSAGES TECHNICAL FIELD
This application relates generally to electronic devices used for exchanging instant messages, including but not limited to electronic devices with touch-sensitive surfaces for capturing, sharing and watching video-based instant messages on the touch-sensitive surfaces based on user instructions (e.g. , through finger gestures or movement of the electronic devices) .
BACKGROUND
Instant messaging (IM) applications are widely deployed today on different types of electronic devices such as desktops, laptops, tablets, and smartphones. People use IM applications primarily to exchange text-based, audio-based, or static image-based messages. These messages are relatively easy to generate and share between different IM users. In contrast, there are unique challenges in sharing video-based messages due to the difficulties of capturing videos and the large network bandwidth required for transmitting them. For example, it often takes multiple steps before an IM user can start the video camera built into a mobile device. This cumbersome process makes the mobile device poorly suited for capturing and sharing time-sensitive moments. On the other hand, video messages can provide much more information than messages in other media formats can offer. With the wide use of mobile devices equipped with video cameras, it is critical to develop an IM application that lets IM users exchange video messages with each other using their mobile devices.
SUMMARY
Accordingly, there is a need for electronic devices with faster, more efficient and intuitive methods and interfaces for capturing, sharing, and watching video messages. Such methods and interfaces may complement or replace conventional methods for capturing, sharing, and watching video messages. Such methods and interfaces reduce the burden on a user when trying to capture, share or watch a video message and produce a  more efficient human-machine interface. For battery-operated electronic devices (e.g. , smartphones) , such methods and interfaces conserve power and increase the usage time between battery charges.
The above deficiencies and other problems associated with the conventional approaches are reduced or eliminated by the disclosed devices, which may be a portable device (e.g. , a laptop, a tablet, or a handheld device) that has a touch-sensitive surface (e.g. , a touchpad or touch screen) . In some embodiments, the device has a graphical user interface (GUI) , one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI primarily through finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions may include video capturing, editing, sharing or playing within an IM application. Executable instructions for performing these functions may be included in a computer readable storage medium or other computer program product configured for execution by one or more processors.
In accordance with some embodiments, a method for capturing, editing and sharing videos is performed at an electronic device having a touch screen, one or more processors, and memory storing programs executed by the one or more processors. The method includes: while running an online messaging application: starting a video recording window on the touch screen in response to a user instruction; starting a video recording session in response to detecting a first finger gesture on the touch screen; detecting a second finger gesture on the touch screen during the video recording session; and continuing the video recording session when detecting a third finger gesture on the touch screen and a time gap between the second finger gesture and the third finger gesture is less than a predefined time window; stopping the video recording session when no finger gesture is detected within the predefined time window; and transmitting the recorded video to a remote server, wherein the remote server is configured to share the recorded video with one or more target users of the online messaging application.
In accordance with some embodiments, a portable electronic device includes a touch screen, one or more processors, memory, and one or more programs; the programs are stored in the memory and configured to be executed by the processors and the programs include instructions for performing the operations of the method described above. In  accordance with some embodiments, a non-transitory computer readable storage medium has stored therein instructions which, when executed by an electronic device having a touch screen, cause the device to perform the operations of the method described above.
In accordance with some embodiments, a method for playing and watching videos is performed at a portable electronic device having a touch screen, one or more processors, and memory storing programs executed by the one or more processors. The method includes: receiving information related to a video from a remote server, the video-related information identifying a source of the video and an online messaging application for playing the video; determining a current status of the online messaging application and a network setting of the portable electronic device; and managing the download and play of the video on the screen in accordance with the current status of the online messaging application and the network setting of the portable electronic device.
In accordance with some embodiments, a portable electronic device includes a touch screen, one or more processors, memory, and one or more programs; the programs are stored in the memory and configured to be executed by the processors and the programs include instructions for performing the operations of the method described above. In accordance with some embodiments, a non-transitory computer readable storage medium has stored therein instructions which, when executed by an electronic device having a touch screen, cause the device to perform the operations of the method described above.
Thus, portable electronic devices with touch screens are provided with faster, more efficient and intuitive methods and interfaces for capturing, editing, sharing and watching video messages thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace conventional methods for manipulating user interface objects.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the aforementioned embodiments of the invention as well as additional embodiments thereof, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
FIG. 1A is a block diagram illustrating a portable electronic device having a touch screen in accordance with some embodiments.
FIG. 1B is a block diagram illustrating exemplary components for event handling within the portable electronic device in accordance with some embodiments.
FIG. 2A is a block diagram of a portable electronic device with a display and a touch-sensitive surface in accordance with some embodiments.
FIG. 2B illustrates an exemplary user interface for the portable electronic device with a touch-sensitive surface that is separate from the display in accordance with some embodiments.
FIG. 3 is an exemplary user interface illustrating multiple applications installed in a portable electronic device having a touch screen in accordance with some embodiments.
FIGS. 3A-3E are exemplary user interfaces illustrating processes of capturing, editing, and sharing videos using an IM application installed in a portable electronic device having a touch screen in accordance with some embodiments.
FIGS. 4A-4C are flow diagrams illustrating a method of capturing, editing, and sharing videos in accordance with some embodiments.
FIGS. 5A-5J are exemplary user interfaces illustrating processes of playing and watching videos using an IM application installed in a portable electronic device having a touch screen in accordance with some embodiments.
FIGS. 6A-6E are flow diagrams illustrating a method of playing and watching videos in accordance with some embodiments.
Like reference numerals refer to corresponding parts throughout the several views of the drawings.
DESCRIPTION OF EMBODIMENTS
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, it will be apparent to one of ordinary skill in the art that the  present application may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the electronic device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Other portable devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g. , touch screens and/or touch pads) , may also be used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g. , a touch screen and/or a touch pad) .
In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device may include one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.
The device supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications executed on the device may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the touch-sensitive surface may be adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device may support the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of a portable electronic device having a touch screen. FIG. 1A is a block diagram illustrating a portable electronic  device 100 having a touch screen system 112 in accordance with some embodiments. Device 100 may include memory 102 (which may include one or more computer readable storage mediums) , memory controller 122, one or more processing units (CPU’s ) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 may include one or more optical sensors 164. These components may communicate over one or more communication buses or signal lines 103.
It should be appreciated that device 100 is only one example of a portable electronic device, and that device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components. The various components shown in FIG. 1A may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
Memory 102 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU 120 and the peripherals interface 118, may be controlled by memory controller 122.
Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data.
In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 may be implemented on a single chip, such as chip 104. In some other embodiments, they may be implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 may include well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital  signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 may communicate with networks, such as the Internet, also referred to as the World Wide Web (WWW) , an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN) , and other devices by wireless communication. The wireless communication may use any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM) , Enhanced Data GSM Environment (EDGE) , high-speed downlink packet access (HSDPA) , wideband code division multiple access (W-CDMA) , code division multiple access (CDMA) , time division multiple access (TDMA) , Bluetooth, Wireless Fidelity (Wi-Fi) (e.g. , IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n) , voice over Internet Protocol (VoIP) , Wi-MAX, a protocol for e-mail (e.g. , Internet message access protocol (IMAP) and/or post office protocol (POP) ) , instant messaging (e.g. , extensible messaging and presence protocol (XMPP) , Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE) , Instant Messaging and Presence Service (IMPS) ) , and/or Short Message Service (SMS) , or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data may be retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack. The headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g. , a headphone for one or both ears) and input (e.g. , a microphone) .
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118. I/O  subsystem 106 may include display controller 156 and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116. The other input control devices 116 may include physical buttons (e.g. , push buttons, rocker buttons, etc. ) , dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller (s) 160 may be coupled to any (or none) of the following: a keyboard, infrared port, USB port, and a pointer device such as a mouse.
Touch screen 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch screen 112. Touch screen 112 displays visual output to the user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively termed "graphics" ) . In some embodiments, some or all of the visual output may correspond to user-interface objects.
Touch screen 112 has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on touch screen 112. In an exemplary embodiment, a point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 may use LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments. Touch screen 112 and display controller 156 may detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. The user may make contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts  and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
In some embodiments, in addition to the touch screen, device 100 may include a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad may be a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 may include a power management system, one or more power sources (e.g. , battery, alternating current (AC) ) , a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g. , a light-emitting diode (LED) ) and any other components associated with the generation, management and distribution of power in portable devices.
Device 100 may also include one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106. Optical sensor 164 may include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor 164 receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 may capture still images or videos. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen 112 on the front of the device, so that the touch screen may be used as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user's image may be obtained for videoconferencing while the user views the other video conference participants on the touch screen. In some embodiments, the position of optical sensor 164 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor 164 may be used along with the touch screen for both video conferencing and still and/or video image acquisition.
Device 100 may also include one or more proximity sensors 166. In some embodiments, the proximity sensor turns off and disables touch screen 112 when the electronic device is placed near the user's ear (e.g. , when the user is making a phone call) .
Device 100 may also include one or more accelerometers 168. FIG. 1A shows accelerometer 168 coupled to peripherals interface 118. Alternatively, accelerometer 168 may be coupled to an input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch screen in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
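Choosing between portrait and landscape from accelerometer data typically reduces to comparing the gravity components along the device axes. A minimal sketch, assuming the y axis runs along the long edge of the screen (an assumption for illustration, not stated in the disclosure):

```python
def view_orientation(ax: float, ay: float) -> str:
    """Pick a display orientation from the gravity vector (m/s^2)."""
    # gravity mostly along the long (y) axis means the device is upright
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

assert view_orientation(0.3, -9.8) == "portrait"   # held upright
assert view_orientation(9.8, 0.2) == "landscape"   # rotated onto its side
```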
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments memory 102 stores device/global internal state 157, as shown in FIGS. 1A and 2. Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen 112; sensor state, including information obtained from the device’s various sensors and input control devices 116; and location information concerning the device’s location and/or attitude.
Operating system 126 (e.g. , Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, Android, iOS, Sailfish, Symbian, BlackBerry OS, Windows phone, Windows mobile  or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g. , memory management, storage device control, power management, etc. ) and facilitates communication between various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g. , Universal Serial Bus (USB) , FIREWIRE, etc. ) is adapted for coupling directly to other devices or indirectly over a network (e.g. , the Internet, wireless LAN, etc. ) .
Contact/motion module 130 may detect contact with touch screen 112 (in conjunction with display controller 156) and other touch sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, may include determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations may be applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad. In some embodiments, contact/motion module 130 and controller 160 detect contact on a click wheel.
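Determining speed and velocity from a series of contact data points reduces to finite differences over the sampled positions. A minimal sketch with a hypothetical sample format (the function names and tuple layout are illustrative assumptions):

```python
from math import hypot

def contact_velocity(p0, p1, dt):
    """Velocity (vx, vy) between two contact samples taken dt seconds apart."""
    (x0, y0), (x1, y1) = p0, p1
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def contact_speed(p0, p1, dt):
    """Speed (magnitude only) of the moving point of contact."""
    vx, vy = contact_velocity(p0, p1, dt)
    return hypot(vx, vy)

assert contact_velocity((100, 200), (130, 200), 0.5) == (60.0, 0.0)
assert contact_speed((0, 0), (30, 40), 1.0) == 50.0
```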
Contact/motion module 130 may detect a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns. Thus, a gesture may be detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift off) event at the same position (or substantially the same position) as the finger-down event (e.g. , at the position of an icon) . As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift off) event.
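The contact patterns described here can be matched against a recorded sequence of sub-events: a tap is a finger-down followed by a finger-up at substantially the same position, while a swipe interposes finger-dragging events. A sketch under an assumed position tolerance (the event encoding and the 10-pixel radius are hypothetical):

```python
from math import hypot

TAP_RADIUS_PX = 10  # hypothetical tolerance for "substantially the same position"

def classify_gesture(events):
    """events: list of (kind, x, y) tuples, kind in {"down", "drag", "up"}."""
    if len(events) < 2 or events[0][0] != "down" or events[-1][0] != "up":
        return "unknown"
    (_, x0, y0), (_, x1, y1) = events[0], events[-1]
    dragged = any(kind == "drag" for kind, _, _ in events[1:-1])
    if dragged:
        return "swipe"   # finger-down, finger-dragging event(s), then lift-off
    if hypot(x1 - x0, y1 - y0) <= TAP_RADIUS_PX:
        return "tap"     # finger-down then lift-off at (nearly) the same spot
    return "unknown"

assert classify_gesture([("down", 5, 5), ("up", 7, 6)]) == "tap"
assert classify_gesture([("down", 5, 5), ("drag", 40, 5), ("up", 90, 5)]) == "swipe"
```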
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the intensity of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including without limitation text, web pages, icons (such as user-interface objects including soft keys) , digital images, videos, animations and the like. In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic may be assigned a corresponding code. Graphics module 132 receives, from applications etc. , one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
Text input module 134, which may be a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g. , contacts 137, instant messenger module 147, and any other application that needs text input) .
GPS module 135 determines the location of the device and provides this information for use in various applications (e.g. , to telephone 138 for use in location-based dialing and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets) .
Applications 136 may include the following modules (or sets of instructions) , or a subset or superset thereof:
· contacts module 137 (sometimes called an address book or contact list) ;
· telephone module 138;
· image management module 144;
· video player module 145;
· camera module 146; and/or
· instant messenger module 147.
Examples of other applications 136 that may be stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 may be used to manage an address book or contact list (e.g. , stored in application internal state 192 of contacts module 137 in memory 102 or memory 270) , including: adding name (s) to the address book; deleting name (s) from the address book; associating telephone number (s) , e-mail address (es) , physical address (es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 may be used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation and disconnect or hang up when the conversation is completed. As noted above, the wireless communication may use any of a plurality of communications standards, protocols and technologies.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, image management module 144 includes executable instructions to arrange, modify (e.g. , edit) , or otherwise manipulate, label, delete, present (e.g. , in a digital slide show or album) , and store still and/or video images.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, and speaker 111, video player module 145 includes executable instructions to display, present or otherwise play back videos (e.g. , on touch screen 112 or on an external, connected display via external port 124) .
In conjunction with touch screen 112, display controller 156, optical sensor (s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 146 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, the instant messenger module 147 includes executable instructions to enter a sequence of characters corresponding to an instant text message, record an audio stream corresponding to an instant audio message, capture an image/video stream corresponding to an instant image/video message, to add/modify/delete previously entered characters to edit the text/audio/image/video messages, to transmit a respective instant text/audio/image/video message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages) , to receive instant text/audio/image/video messages and to view received instant text/audio/image/video messages. In some embodiments, transmitted and/or received instant messages may include graphics, photos, audio files, video files and/or other attachments as are supported in a MMS and/or an Enhanced Messaging Service (EMS) . As used herein, “instant messaging” refers to both telephony-based messages (e.g. , messages sent using SMS or MMS) and Internet-based messages (e.g. , messages sent using XMPP, SIMPLE, or IMPS) .
Each of the above identified modules and applications correspond to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g. , the computer-implemented methods and other information processing methods described herein) . These modules (i.e. , sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. For example, video player module 145 and camera module 146 can be incorporated into the instant messenger module 147. In some embodiments, memory 102 may store a subset of the modules and data structures identified above. Furthermore, memory 102 may store additional modules and data structures not described above.
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 may be reduced.
The predefined set of functions that may be performed exclusively through a touch screen and/or a touchpad include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that may be displayed on device 100. In such embodiments, the touchpad may be referred to as a “menu button. ” In some other embodiments, the menu button may be a physical push button or other physical input control device instead of a touchpad.
FIG. 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory 102 (in FIG. 1A) or 270 (FIG. 2A) includes event sorter 170 (e.g. , in operating system 126) and a respective application 136-1 (e.g. , any of the aforementioned applications 137-147) .
Event sorter 170 receives event information and determines the application 136-1 and the application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view (s) displayed on touch screen 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by the event sorter 170 to determine which application (s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g. , a user touch on touch screen 112, as part of a multi-touch gesture) . Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer (s) 168, and/or microphone 113 (through audio circuitry 110) . Information  that peripherals interface 118 receives from I/O subsystem 106 includes information from touch screen 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a predefined event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
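The difference between the two delivery modes is where filtering happens: polling pulls everything at a fixed interval, while the event-driven variant forwards only inputs that clear a significance test. A sketch of such a test, with assumed threshold values (both constants are illustrative, not from the disclosure):

```python
NOISE_THRESHOLD = 0.2   # hypothetical minimum signal magnitude
MIN_DURATION_S = 0.05   # hypothetical minimum input duration

def should_forward(magnitude: float, duration_s: float) -> bool:
    """Forward event information only for a predefined significant event."""
    return magnitude > NOISE_THRESHOLD and duration_s > MIN_DURATION_S

assert should_forward(0.5, 0.10) is True    # deliberate touch
assert should_forward(0.1, 0.30) is False   # below the noise threshold
```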
In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views, when touch screen 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected may correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected may be called the hit view, and the set of events that are recognized as proper inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e. , the first sub-event in the sequence of sub-events that form an event or potential event) . Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
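Finding the hit view is a depth-first search for the lowest view in the hierarchy whose bounds contain the initial touch point. A minimal sketch with a hypothetical View type (z-order and overlap resolution are omitted for brevity):

```python
from dataclasses import dataclass, field

@dataclass
class View:
    name: str
    x: int
    y: int
    w: int
    h: int
    children: list = field(default_factory=list)

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def hit_view(root: View, px: int, py: int):
    """Return the deepest view containing the touch point, or None."""
    if not root.contains(px, py):
        return None
    for child in root.children:       # prefer the deepest matching descendant
        found = hit_view(child, px, py)
        if found is not None:
            return found
    return root                       # no child matched: the root is the hit view

send_button = View("send_button", 10, 10, 80, 30)
conversation = View("conversation", 0, 0, 320, 480, children=[send_button])
assert hit_view(conversation, 20, 20) is send_button
assert hit_view(conversation, 200, 200) is conversation
```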
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g. , event recognizer 180) . In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver module 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application’s user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 may utilize or call data updater 176, object updater 177 or GUI updater 178 to update the  application internal state 192. Alternatively, one or more of the application views 191 includes one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170, and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which may include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event concerns motion of a touch, the event information may also include the speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g. , predefined sequences of sub-events) , for example, event 1 (187-1) , event 2 (187-2) , and others. In some embodiments, sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a  touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch screen 112, and lift-off of the touch (touch end) . In some embodiments, the event also includes information for one or more associated event handlers 190.
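An event definition such as the double tap above can be stored as an expected sub-event sequence plus a timing limit for each predetermined phase, against which the comparator matches incoming sub-events. A sketch with an assumed per-phase limit (the 0.3-second value and the event encoding are illustrative):

```python
DOUBLE_TAP = ("touch_begin", "touch_end", "touch_begin", "touch_end")
MAX_PHASE_S = 0.3  # hypothetical limit for each predetermined phase

def matches_double_tap(sub_events) -> bool:
    """sub_events: list of (kind, timestamp) on the same displayed object."""
    if tuple(kind for kind, _ in sub_events) != DOUBLE_TAP:
        return False
    times = [t for _, t in sub_events]
    # every phase between consecutive sub-events must fit the limit
    return all(later - earlier <= MAX_PHASE_S
               for earlier, later in zip(times, times[1:]))

assert matches_double_tap([("touch_begin", 0.00), ("touch_end", 0.10),
                           ("touch_begin", 0.25), ("touch_end", 0.33)])
assert not matches_double_tap([("touch_begin", 0.0), ("touch_end", 0.1),
                               ("touch_begin", 0.9), ("touch_end", 1.0)])
```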
In some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch screen 112, when a touch is detected on touch screen 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event) . If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
In some embodiments, the definition for a respective event 187 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer’s event type.
When a respective event recognizer 180 determines that the series of sub-events do not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers may interact with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module 145. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch screen.
In some embodiments, event handler (s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It shall be understood that the foregoing discussion regarding event handling of user touches on touch screens also applies to other forms of user input for manipulating the electronic device 100 with input devices, not all of which are initiated on touch screens: for example, coordinating mouse movement and mouse button presses with or without single or multiple keyboard presses or holds; user taps, drags, scrolls, and other movements on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof, all of which may be utilized as inputs corresponding to sub-events that define an event to be recognized.
FIG. 2A is a block diagram of a portable electronic device with a display and a touch-sensitive surface in accordance with some embodiments. Device 200 need not be portable. In some embodiments, device 200 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child’s learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device 200 typically includes one or more processing units (CPUs) 210, one or more network or other communications interfaces 260, memory 270, and one or more communication buses 220 for interconnecting these components. Communication buses 220 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 200 includes input/output (I/O) interface 230 comprising display 240, which is typically a touch screen. I/O interface 230 also may include a keyboard and/or mouse (or other pointing device) 250 and touchpad 255. Memory 270 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 270 may optionally include one or more storage devices remotely located from CPU(s) 210. In some embodiments, memory 270 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable electronic device 100 (FIG. 1A), or a subset thereof. Furthermore, memory 270 may store additional programs, modules, and data structures not present in memory 102 of portable electronic device 100. For example, memory 270 of device 200 may store drawing module 280 and presentation module 282 while memory 102 of portable electronic device 100 (FIG. 1A) may not store these modules.
Each of the above identified elements in FIG. 2A may be stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e. , sets of instructions) need not be implemented as  separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 270 may store a subset of the modules and data structures identified above. Furthermore, memory 270 may store additional modules and data structures not described above.
FIG. 2B illustrates an exemplary user interface on a device (e.g. , device 200, FIG. 2A) with a touch-sensitive surface 451 (e.g. , a tablet or touchpad 255, FIG. 2A) that is separate from the display 450 (e.g. , touch screen 112) . Although many of the examples which follow will be given with reference to inputs on a touch screen 112 (where the touch sensitive surface and the display are combined) , in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 2B. In some embodiments the touch sensitive surface (e.g. , 451 in FIG. 2B) has a primary axis (e.g. , 452 in FIG. 2B) that corresponds to a primary axis (e.g. , 453 in FIG. 2B) on the display (e.g. , 450) . In accordance with these embodiments, the device detects contacts (e.g. , 460 and 462 in FIG. 2B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g. , in FIG. 2B 460 corresponds to 468 and 462 corresponds to 470) . In this way, user inputs (e.g. , contacts 460 and 462) detected by the device on the touch-sensitive surface (e.g. , 451 in FIG. 2B) are used by the device to manipulate the user interface on the display (e.g. , 450 in FIG. 2B) of the electronic device when the touch-sensitive surface is separate from the display. It should be understood that similar methods may be used for other user interfaces described herein.
Additionally, while the following examples are given primarily with reference to finger inputs (e.g. , finger contacts, finger tap gestures, finger swipe gestures) , it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g. , a mouse based input or stylus input) . For example, a swipe gesture may be replaced with a mouse click (e.g. , instead of a contact) followed by movement of the cursor along the path of the swipe (e.g. , instead of movement of the contact) . As another example, a tap gesture may be replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g. , instead of detection of the contact followed by ceasing to detect the contact) . Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice may be used simultaneously, or a mouse and finger contacts may be used simultaneously.
Attention is now directed towards embodiments of user interfaces ( "UI" ) and associated processes that may be implemented on an electronic device with a display and a touch-sensitive surface, such as device 200 or portable electronic device 100.
FIG. 3 is an exemplary user interface 300 illustrating multiple applications installed in the portable electronic device 100 having a touch screen 112 in accordance with some embodiments. Similar user interfaces may be implemented on device 200. In some embodiments, the user interface 300 includes the following elements, or a subset or superset thereof:
· Signal strength indicator (s) for wireless communication (s) , such as cellular signal indicator 302 and Wi-Fi signal indicator 308;
· Current time 304;
· Battery status indicator 306; and
· Icons for multiple applications, such as:
ο Contacts 137;
ο Phone 138;
ο Photos 144;
ο Video player 145;
ο Camera 146; and
ο Messenger 147.
A user can select one of the applications by finger tapping a respective icon on the touch screen 112. For example, a finger tap 301 of the icon corresponding to the messenger 147 causes the device 100 to display the messenger application’s user interface on the touch screen 112. As will be described below, the user can exchange instant text/audio/image/video messages with others through the user interface.
FIGS. 3A-3E are exemplary user interfaces illustrating processes of capturing, editing, and sharing videos using an IM application installed in a portable electronic device having a touch screen in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below in connection  with FIGS. 4A-4C. In FIGS. 3A-3E, some finger contact or movement sizes may be exaggerated for illustrative purposes. No depiction in the figures bearing on finger contact or movements should be taken as a requirement or limitation for the purpose of understanding sizes and scale associated with the methods and devices disclosed herein.
For ease of explanation, FIG. 3A depicts an exemplary user interface 300A for capturing videos when the messenger 147 (which is typically an online messaging application installed on the portable electronic device 100) is in a conversation list view. In the conversation list view, because of space constraints on the display of the portable electronic device 100, the user interface 300A displays a list of conversation entries, each entry including related information such as an identifier 332 of another participant of the conversation, a brief description 333 of the conversation (e.g., the most recently received message in the conversation), and a timestamp 334 of the last update to the conversation. The user of the device 100 may select one of the conversation entries and thereby enter a conversation with one or more target users, whereupon the user will be able to view the instant messages (e.g., text, multimedia, etc.) included in the conversation. In addition, the user can also generate instant messages and share them with others through one of the conversations. For example, the user may invoke the camera application 146 from the messenger 147 to capture a video and share the video with others. Note that because the user of the messenger 147 has not yet engaged in a conversation with a particular entity in the conversation list view, the user may decide with whom to share the captured video after completing the video capture and editing process. In contrast, FIG. 3B illustrates an exemplary user interface 300B of an individual conversation view. In this case, the user of the messenger 147 is currently engaged in a conversation with one or more other users. Therefore, the captured video will be shared with the one or more other participants of this particular conversation.
While running the online messaging application, the user may give an instruction to the device 100 to start a video recording window on the touch screen 112. In some embodiments, the user instruction is a downward finger swipe gesture 330 on the touch screen 112 from top to bottom. In response to the downward finger swipe gesture 330, the conversation list 315 is pulled down and replaced by the video recording window 310. In some other embodiments, the user instruction for starting the video recording window is shaking the electronic device. The shaking is detected by, for example, the accelerometer 168. In some embodiments, the device 100 determines the meaning of the shaking movement based on the magnitude of the rate of change of acceleration. For example, when the rate of change of acceleration exceeds a predetermined threshold value, the device 100 determines that the user’s intent is to start the video recording window, and does so automatically.
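Thresholding "the magnitude of the rate of change of acceleration" amounts to computing the jerk between consecutive accelerometer samples. A minimal sketch, assuming a hypothetical threshold and sample rate (neither value comes from the disclosure):

```python
JERK_THRESHOLD = 30.0  # hypothetical threshold, in m/s^3

def is_shake(accel_magnitudes, dt=0.02):
    """accel_magnitudes: readings (m/s^2) sampled every dt seconds.

    Returns True when the rate of change of acceleration exceeds the
    predetermined threshold, which the device would interpret as an
    instruction to start the video recording window.
    """
    return any(abs(b - a) / dt > JERK_THRESHOLD
               for a, b in zip(accel_magnitudes, accel_magnitudes[1:]))

assert is_shake([9.8, 9.9, 9.8]) is False         # device held steadily
assert is_shake([9.8, 13.0, 6.0, 14.5]) is True   # vigorous shaking
```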
In some embodiments, the video recording window 310 may include a video recording indicator 325 and a progress bar 326. When starting the video recording window 310, a picture may be initially displayed on the screen while the camera module 146 and the microphone 113 are activated in the background. The video recording indicator 325 may change its appearance to indicate the readiness of the camera module 146 and the microphone 113. Upon detection of a finger press gesture on the video recording window 310, the device 100 can start the video capturing process. Subsequently, the device 100 stops the video capturing process upon detection of a finger lift-off gesture from the video recording window 310.
When capturing the video, a progress indicator 327 on the progress bar 326 may show the progress of the video recording and indicate the length of the video relative to a predetermined length (after the video recording starts). For example, the messenger 147 may limit the maximum length of a video message that can be shared with others to 10 seconds in order to limit bandwidth usage. For convenience, such a video is referred to as a “micro video” in the rest of this application. Accordingly, the progress bar 326 may have a scale measuring the length of the dotted progress indicator 327, which indicates the length of the video recorded so far and the progress of the recording before it reaches 10 seconds.
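Mapping elapsed recording time onto the progress indicator is then a clamp against the maximum length. In the sketch below, the 10-second limit comes from the example above; the function names and the text rendering of the dotted indicator are illustrative assumptions.

```python
MAX_MICRO_VIDEO_S = 10.0  # maximum micro video length from the example above

def progress_fraction(elapsed_s: float) -> float:
    """Fraction of the progress bar filled after elapsed_s of recording."""
    return min(elapsed_s / MAX_MICRO_VIDEO_S, 1.0)

def render_bar(elapsed_s: float, width: int = 20) -> str:
    """Draw a dotted indicator the way indicator 327 grows along bar 326."""
    filled = int(progress_fraction(elapsed_s) * width)
    return "[" + "." * filled + " " * (width - filled) + "]"

assert progress_fraction(12.0) == 1.0            # clamped at the 10 s limit
assert render_bar(5.0) == "[.........." + " " * 10 + "]"
```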
During the recording of a micro video message, the user may move his/her finger around on the touch screen 112. As a result, one or more finger gestures on the touch screen 112, such as 340-344, may be detected. In some cases, the user may accidentally lift his/her finger off the touch screen 112 without intending to stop the video recording process and then press the finger on the touch screen 112 again. For example, the user may be walking while recording the video using the device 100. Therefore, it is important for the device 100 to interpret these finger movements correctly and act accordingly. In some embodiments, the temporal and spatial locations of different finger gestures may be used to determine whether the video recording should be continued or stopped. The user may even be given an opportunity to edit the recorded video before it is transmitted to a remote server to be shared with one or more target users of the online messaging application, such as one or more users from the conversation view in FIG. 3B.
As noted above, besides starting the video recording window 310 in a conversation list view as illustrated in FIG. 3A, the video recording window may be started when the user of the messenger 147 is engaging an online conversation with one or more target users as shown in FIG. 3B. In this case, the exemplary conversation window 321 displays the messages exchanged between the user of the device 100 and other users of the online conversation. Through the exemplary conversation window 321, the user can compose and transmit new messages to other users of the online conversation. In this example, the conversation window 321 comprises a conversation pane for displaying the messages, a text message input field 326, and an additional function button 323. In some embodiments, the conversation pane may comprise messages 322a-b submitted to the online conversation by different participants of the conversation. The user may submit text messages to the online conversation using the message input field 326. The message input field 326 displays one or more characters or emoticons entered by the user in accordance with some embodiments. The message input field 326 may further comprise an input cursor.
In addition to generating text-based messages via the message input field 326, the user can generate and submit multimedia messages to the online conversation. For example, instead of shaking the device 100, a user may finger tap 324 the additional function button 323 to bring up a matrix of additional functions at the bottom portion of the exemplary user interface 300B. In some embodiments, the additional function matrix includes the following elements, or a subset or superset thereof:
· Contacts 137’;
· Voice 138’;
· Photos 144’;
· Video chat 145’;
· Camera 146’; and
· Micro video 148.
Note that the functions in the additional function matrix shown in FIG. 3B may or may not be the same as the applications shown in FIG. 3. For example, the contacts function 137’ in FIG. 3B may refer to the contact list of the user of the messenger 147 while the contacts application 137 in FIG. 3 may refer to the contact list of the user of the phone application 138. In contrast, the photos function 144’ in FIG. 3B may be the same as the photo application 144 in FIG. 3 because both manage the images stored in the memory of the device 100. Note that the video chat function 145’ allows the user to have a video conference with another user, which is different from the micro video function 148 described in the present application. A user can select one element of the additional function matrix by finger tapping a respective icon on the touch screen 112. For example, a finger tap 328 of the icon corresponding to the micro video function 148 may cause the device 100 to start a micro video recording window on the touch screen 112, so that micro videos may be recorded and shared with one or more target users of the online messaging application. As will be described below, one aspect of the present application is a more convenient method for capturing, editing and sharing videos using the electronic device 100 while the device is running an online messaging application, such that the user can quickly record one or more micro videos using finger gestures and optionally edit the recorded videos before sharing them with one or more target users of the online messaging application.
FIG. 3C depicts an exemplary user interface 300C that replaces the user interface 300B on the touch screen 112 in response to the user’s finger tap 328 of the icon corresponding to the micro video function 148. Similar to the video recording window 310 shown in FIG. 3A, the video recording window 320 may include a video recording indicator 325 to indicate the readiness of the camera module 146 and the microphone 113, and a progress bar 326 comprising a progress indicator 327 to show the progress of the video recording and indicate the current length of the video relative to a predetermined length (after the video recording starts). During the recording of a micro video message, it is possible that the user may move his/her finger around on the touch screen 112. As a result, one or more finger gestures on the touch screen 112, such as 340-344, may be detected. In some cases, the user may accidentally lift his/her finger off the touch screen 112 without intending to stop the video recording process and then press the finger back on the touch screen 112. For example, the user may be walking while recording the video using the device 100. Therefore, it is important for the device 100 to interpret these finger movements correctly and act accordingly. In some embodiments, the temporal and spatial locations of different finger gestures may be used to determine whether the video recording should be continued or stopped.
After a video recording process is stopped, the video may be transmitted to a remote server, which may then share the recorded video with one or more target users of the online messaging application specified by the user of the device 100. As noted above, the designation of the one or more target users of the messaging application depends on the context in which the video recording window is started. For example, FIGS. 3B and 3C depict that the video recording window 320 is started when the user is in an online conversation 321. In response to a user instruction, such as a finger tap 328 on the micro video icon 148 or shaking the portable electronic device 100, the video recording window 320 is started. In such a case, the one or more target users of the micro video message are the other participants of the same conversation. In another example, as illustrated in FIG. 3A, the video recording window 310 is started when the online messaging application is in the conversation list view, not in a particular conversation view. In response to a user instruction, such as a downward finger swipe gesture on the touch screen 112 or shaking the portable electronic device 100, the video recording window 310 is started. In such a case, the one or more target users of the micro video message are designated after the completion of the video recording process.
FIG. 3D depicts an exemplary user interface 300D that is displayed after the completion of the video recording process to designate one or more target users to share the recorded video with. The top portion of the exemplary user interface 300D includes a video playing window 360. A user selection of the video playing window 360 triggers the device 100 to play the video captured by the device 100. In some embodiments, the first or last frame of the recorded video may be displayed in the video playing window 360. A progress bar 366 may include a progress indicator 367 to show that the progress of the video recording is 100% of a predetermined length.
After the video recording process is stopped, the bottom portion of the exemplary user interface 300D displays different options for the user to decide where to send the recorded video. In some embodiments, the options include the following choices, or a subset or superset thereof:
· Moments 351;
· Myself 352;
· A conversation contact (e.g. A in FIG. 3D) 353; and
· A group chat 354.
In some embodiments, next to some choices (e.g., 352), a checkbox 356 is provided so that the user may designate one or more target users to whom to send the recorded video. Selecting the icon of Moments 351 allows the user to post the video on the user’s virtual message board of the messenger application 147. Any user of the messenger application 147 can visit the user’s message board to watch videos posted by the user. In some embodiments, the virtual message board may include multiple categories, and the user can select the “>” sign 355 to bring up the multiple categories and designate a particular category for hosting the video.
In some embodiments, the user is given an opportunity to edit the recorded video before sending the recorded video to the remote server. The editing allows the user to delete a portion of the recorded video before sharing it with other users. FIG. 3E depicts an exemplary user interface 300E that is displayed in response to a user selection of the video playing window 360 in FIG. 3D. As shown in FIG. 3E, a preview window 320-1 is displayed to allow the user to edit the recorded video before sharing. A plurality of icons, such as confirm 370 and modify 371, may be provided to facilitate the editing. The user may select different clips of the video (not the entire video) to share, and delete unwanted portions of the recorded video using the modify icon 371, among others.
For example, during the video recording, as illustrated in FIGS. 3A and 3C, after detecting a first finger gesture 340 (e.g., finger-down) on the touch screen 112 to start a video recording process, the camera may record a video clip C1 for the period of 328. The period of 328 is terminated when a second finger gesture 342 (e.g., finger-up) is detected. As noted above, the second finger gesture 342 may be accidental and unintended by the user of the device 100. To avoid missing any information, the device 100 continues recording the video clip C2 after detecting the second finger gesture 342 for the period of 329 until a third finger gesture 344 (e.g., finger-down) is detected on the touch screen 112. More video clips C3, C4, …, CN may be recorded in response to subsequent finger gestures. During the preview process, the length of a video clip like the one corresponding to the period of 329 is checked based on the time difference between the second finger gesture 342 and the third finger gesture 344. If the time difference is no greater than a predefined threshold (e.g., 0.5 seconds), it is assumed that the user did not intend to stop the video recording with the second finger gesture, and the video clip should be combined with the previous video clip C1. On the other hand, if the time difference is greater than the predefined threshold, the video clip will be highlighted during the preview process so that the user can determine whether it should be kept as part of the recorded video to be shared with other users or should be deleted.
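By way of a non-limiting illustration, the preview-time check above may be sketched as follows. The Clip representation, the highlighted flag, and the 0.5-second threshold constant are assumptions introduced for this sketch; the specification only requires comparing the gap between the second and third finger gestures to a predefined threshold.

```kotlin
// Hypothetical sketch of the preview-time clip check described above.
data class Clip(val startMs: Long, val endMs: Long, val fingerDown: Boolean) {
    val durationMs: Long get() = endMs - startMs
}

const val GAP_THRESHOLD_MS = 500L // e.g., 0.5 seconds (assumed value)

// Merge short gap clips (no finger contact) into the preceding clip;
// flag longer gap clips so the preview can highlight them for the user.
fun reviewClips(clips: List<Clip>): List<Pair<Clip, Boolean>> {
    val reviewed = mutableListOf<Pair<Clip, Boolean>>() // (clip, highlighted)
    for (clip in clips) {
        val last = reviewed.lastOrNull()
        if (!clip.fingerDown && clip.durationMs <= GAP_THRESHOLD_MS && last != null) {
            // Accidental lift-off: combine with the previous clip (e.g., C1).
            reviewed[reviewed.lastIndex] = Clip(last.first.startMs, clip.endMs, true) to last.second
        } else {
            // Deliberate gaps are highlighted for a keep-or-delete decision.
            reviewed.add(clip to !clip.fingerDown)
        }
    }
    return reviewed
}
```

Under this sketch, a 0.3-second gap clip between C1 and C3 would be folded into C1, while a 2-second gap clip would be flagged so the preview can highlight it for the user to keep or delete.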
In some embodiments, in the preview window 320-1, a progress bar 376 is displayed below the video clips. A dotted progress indicator 377 is used to show the length of each video clip recorded during the video recording process. In some embodiments, some portions of the progress bar 376 may not have the dotted progress indicator. For example, the portion of the progress bar 376 corresponding to the duration of 329 may be blank to show that no finger gesture was detected during the period 329. The user can modify 371 the video clips recorded during the video recording process before confirming 370 the video and sharing it with one or more target users.
FIGS. 4A-4C are flow diagrams illustrating a method of capturing, editing, and sharing videos in accordance with some embodiments. Method 400 is performed at an electronic device having a display and a touch-sensitive surface. In some embodiments (e.g., portable electronic device 100, FIG. 1A), the display is a touch screen and the touch-sensitive surface is on the display. In some embodiments (e.g., device 200, FIG. 2A), the display is separate from the touch-sensitive surface. Some operations in method 400 may be combined and/or the order of some operations may be changed.
As described below, the method 400 provides an efficient, intuitive way to record micro videos and share the recorded micro videos with users of an online messaging application. The method reduces the number of steps needed to activate video recording on a portable device, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to record and share micro videos on the touch-sensitive display faster and more efficiently conserves power and increases the usage time between battery charges. In addition, the method 400 has a fault-tolerant capability so that it can ignore the user’s accidental, unintended finger gestures on the touch screen.
The device, while running an online messaging application, starts (410) a video recording window on the touch screen in response to a user instruction (402). As noted above in connection with FIGS. 3A-3C, the device may start (410) the video recording window on the touch screen in response to a built-in accelerometer detecting a shaking (404) of the electronic device while running an online messaging application (406). In some other embodiments, the device may start (410) the video recording window on the touch screen in response to detecting a downward finger swipe gesture (e.g., 330, FIG. 3A) on the touch screen from top to bottom (408).
For example, FIG. 3A illustrates a user interface 300A of an online messaging application running on the portable electronic device. The video recording window is started in response to a user instruction of a downward finger swipe gesture 330. In some other embodiments, a user may start the video recording window by shaking the portable electronic device while running the online messaging application. The shaking may be detected by the accelerometer (406) in the portable electronic device. For example, the portable electronic device interprets the shaking movement based on the magnitude of the rate of change of acceleration: when the rate of change of acceleration exceeds a predetermined threshold value, the device determines that the user intends to start the video recording window and starts it automatically.
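A minimal sketch of such a shake test follows, assuming three-axis accelerometer samples; the threshold value and sampling interface are illustrative assumptions, since the specification only requires comparing the magnitude of the rate of change of acceleration (jerk) against a predetermined threshold.

```kotlin
// Hypothetical jerk-based shake test; threshold and units are assumed.
import kotlin.math.sqrt

const val JERK_THRESHOLD = 30.0 // assumed units: m/s^3

// Magnitude of the rate of change of acceleration between two
// accelerometer samples taken dtSeconds apart.
fun jerkMagnitude(prev: DoubleArray, curr: DoubleArray, dtSeconds: Double): Double {
    val jx = (curr[0] - prev[0]) / dtSeconds
    val jy = (curr[1] - prev[1]) / dtSeconds
    val jz = (curr[2] - prev[2]) / dtSeconds
    return sqrt(jx * jx + jy * jy + jz * jz)
}

fun isIntentionalShake(prev: DoubleArray, curr: DoubleArray, dtSeconds: Double): Boolean =
    jerkMagnitude(prev, curr, dtSeconds) > JERK_THRESHOLD
```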
In another example, FIG. 3B illustrates a user interface 300B of an online messaging application. During the online conversation session, a user may finger tap (324) the additional function button 323 to activate the additional function matrix at the bottom region of the user interface 300B. The additional function matrix includes the micro video function icon 148 among other function icons. In response to a finger tap 328 on the micro video function icon, the video recording window is started in place of the additional function matrix as shown in FIG. 3C.
In some embodiments, instead of the two finger tapping gestures shown in FIG. 3B, the portable electronic device 100 may start the video recording window in response to detecting a shaking of the portable electronic device during the online conversation session. Compared with the detection of two finger tapping gestures, starting the video recording window through shake detection is more efficient.
In some embodiments, the video recording window starting step 410 includes initially displaying (412) a picture in the video recording window while activating the camera and the microphone and displaying (414) a progress bar corresponding to a predefined length of the video to be recorded. Upon detecting the readiness of the camera and the microphone, a dynamic preview of the video being recorded may be rendered (416) in the video recording window. As shown in FIGS. 3A and 3C, a picture may be initially displayed (412) in the video recording windows 310 and 320, respectively, while the camera and the microphone are being activated. In some embodiments, the picture may be user-specified or a default picture configured in the portable electronic device. Once the camera and the microphone have been activated, the picture may be replaced by the video recording indicator to indicate the readiness of the camera and the microphone. In some embodiments, the video recording indicator is dynamically updated according to the current status of the camera.
For example, a dark camera image or a closed eye image may be displayed in the window while the camera and the microphone are being activated. Once the camera and the microphone are ready, the video recording indicator is updated to an open shutter image or an open eye image to indicate the readiness of the camera and the microphone for video recording. Once the camera and the microphone are ready to record videos, a dynamic preview of images captured through the camera may be rendered (416) in place of the still image.
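The indicator logic above can be summarized by a small state mapping; the state names and image identifiers below are assumptions for illustration only.

```kotlin
// Sketch of the recording-indicator states described above.
enum class CaptureState { ACTIVATING, READY, RECORDING }

fun recordingIndicator(state: CaptureState): String = when (state) {
    CaptureState.ACTIVATING -> "closed_eye_image"  // camera/microphone warming up
    CaptureState.READY      -> "open_eye_image"    // ready for video recording
    CaptureState.RECORDING  -> "dynamic_preview"   // live camera frames replace the still image
}
```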
Following the step 410 of starting a video recording window, the device starts a video recording process using the camera and the microphone in response to detecting a first finger gesture on the touch screen (418). In some embodiments, the first finger gesture is a finger-down gesture on the touch screen. For example, the device may start the video recording process upon detecting a finger contact on the touch screen within a predefined area of the video recording window and then instruct the camera and the microphone to start the video recording process.
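A hypothetical hit test for this step might look as follows; the Rect type and the callback are assumptions, standing in for whatever windowing primitives the device provides.

```kotlin
// Start recording only when the finger-down contact lands within the
// predefined area of the video recording window.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(x: Float, y: Float): Boolean = x in left..right && y in top..bottom
}

fun onFingerDown(x: Float, y: Float, recordingArea: Rect, startRecording: () -> Unit) {
    if (recordingArea.contains(x, y)) {
        startRecording() // instruct the camera and microphone to begin
    }
}
```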
During the recording process, the device monitors the user’s finger contact with the touch screen to determine when it should stop the video recording process. As described above in connection with FIGS. 3A and 3C, the user is allowed to move the finger contact position on the touch screen while maintaining constant finger contact with the touch screen, without interrupting the video recording process. At some point in time, the device may detect (420) a second finger gesture on the touch screen (e.g., a lift-off of the finger from the touch screen). After detecting the second finger gesture (420), the video recording process may be continued (430) or stopped (434) depending on the detection of a third finger gesture and the time gap between the second finger gesture and the third finger gesture. If the video recording process is terminated, the device then transmits (436) the recorded video to a remote server for sharing with other users of the online messaging application. Prior to transmitting (436) the recorded video, in some embodiments, the device prompts (446) the user to decide the target users for receiving the recorded video. As shown in FIG. 3D, prior to transmitting the recorded video, in some embodiments, the device displays multiple options for sharing the video, including sharing it as one of the moments, keeping a local copy in the portable electronic device, sending it to a conversation contact, or sending it to a group, among others. Also prior to transmitting (436) the recorded video, in some embodiments, the device prompts (448) the user to edit the portion of the video recording corresponding to the time gap between the second finger gesture and the third finger gesture.
For example, in FIGS. 3A and 3C, during the video recording process, the portable electronic device may detect an upward finger swipe gesture (422) as the second finger gesture. In response to the upward finger swipe gesture (424), the device detects (426) a finger contact position on the touch screen. In response to the finger contact position being located outside the video recording window, the device detects (428) a termination of the finger contact with the touch screen. The starting position of the upward finger swipe gesture may be at location 340, and the ending position of the upward finger swipe gesture (where the finger is lifted off the touch screen) may be at location 342. The position 342 may be directly above the position 340 or above it at an angle, as shown in FIGS. 3A and 3C. After the upward finger swipe gesture ends at location 342 in FIGS. 3A and 3C, the video recording process may stop (434) when no finger gesture is detected within a predefined time window.
When a third finger gesture is detected on the touch screen and the time gap between the second finger gesture and the third finger gesture is less than the predefined time window, the video recording process continues (430). For example, during the video recording process, the finger may be briefly lifted off the touch screen surface due to the instability of the finger contact with the touch screen. When the finger resumes contact with the touch screen, the device may measure the time gap between the second finger gesture and the third finger gesture. When the time gap is less than the predefined time window, the second and third finger gestures are treated as false alarms and the video recording process continues (430). Alternatively, the user may intend to skip a brief period during a video recording process. Therefore, lifting the finger off the touch screen for a brief duration (e.g., 1-3 seconds) may give the user the option to bookmark portions of the recorded video for deletion (448) prior to transmission (436) to the remote server.
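The continue-or-stop decision can be sketched as a pure function of the gesture timestamps; the 3-second resume window below is an assumed value consistent with the 1-3 second example above.

```kotlin
// Sketch of the continue-or-stop decision described above.
const val RESUME_WINDOW_MS = 3_000L // assumed predefined time window

// fingerUpAtMs: time of the second finger gesture (lift-off).
// fingerDownAtMs: time of the third finger gesture, or null if none yet.
// nowMs: current time, used while waiting for a third gesture.
fun shouldContinueRecording(fingerUpAtMs: Long, fingerDownAtMs: Long?, nowMs: Long): Boolean =
    if (fingerDownAtMs != null) {
        fingerDownAtMs - fingerUpAtMs < RESUME_WINDOW_MS // false alarm: keep recording
    } else {
        nowMs - fingerUpAtMs < RESUME_WINDOW_MS // still within the window: keep waiting
    }
```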
For example, as shown in FIG. 3E, the entire video recording process may include a plurality of video clips C1, C2, C3, C4, …, CN. During the period of 329, no finger gesture on the touch screen 112 is detected by the device 100. However, since the time gap between the finger-up gesture at position 342 and the finger-down gesture at position 344 is less than the predetermined time window, the video recording process continues. In some embodiments, the clip C2 corresponding to the period of 329 may be displayed as blank on the progress bar 376 in the preview window 320-1 of FIG. 3E, indicating that no finger gesture on the touch screen 112 was detected by the device 100 during the period of 329. During the preview, the user may decide to keep the clip C2 when the absence of a finger gesture was due to the instability of the finger contact with the touch screen 112, or to delete the clip C2 from the recorded video when the user intentionally lifted his/her finger off the touch screen.
In some embodiments, during the video recording process, the finger contact position on the touch screen may be continuously monitored (432). The spatial characteristics of finger gestures may be used to determine whether or not to stop the video recording process. For example, as shown in FIGS. 3A and 3C, the device may detect a sequence of finger gestures from position 340 to position 342 and then to position 344. When the device detects a continuous finger contact with the touch screen, the video recording process may continue (430). In FIGS. 3A and 3C, the position 344 is located above position 342 for illustrative purposes. In some embodiments, the position 344 may be located outside the video recording window. As long as the finger contact with the touch screen continues, the video recording process will not stop (430).
Once the video recording process is stopped (434), the device may transmit (436) the recorded video to a remote server. In some embodiments, the remote server is configured to share (438) the recorded video with one or more target users of the online messaging application. As shown in FIG. 3D, prior to transmitting the recorded video, in some embodiments, the device prompts (446) the user with multiple options for sharing the recorded video.
In some embodiments, after starting the video recording process using the camera and the microphone, the device seamlessly records (440) a sequence of micro videos when the recording process exceeds a predetermined video length. In some embodiments, the predetermined video length is between 8 and 15 seconds (442). The predetermined video length limits the size of the video files to be transmitted, because smaller video files are easier to transmit (436) and share (438) with other users of the online messaging application.
For example, assuming that the predetermined video length is 10 seconds (442), when a video recording process has lasted more than 10 seconds, the device may seamlessly write the first 10 seconds of video content into a first video file, the video content after the first 10 seconds into a second video file, and so on. In some embodiments, during the preview as shown in FIG. 3E, the series of micro videos is displayed to the user so that the user may edit (448) the videos before transmitting (436) them to the remote server. In some other embodiments, once the videos are transmitted, the remote server is configured to share (444) the series of videos with the one or more target users of the online messaging application. For example, the remote server may be configured to instruct the receiving ends of the series of micro videos to seamlessly combine the clips in the series into one longer video and play them one after another when sharing (444) the series of videos.
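A sketch of the segmentation arithmetic follows, using the 10-second example above; the function simply computes the boundaries of each micro-video file from the total recording length.

```kotlin
// Illustrative segmentation of a long recording into micro-video files
// of a predetermined length (10 seconds assumed per the example above).
const val SEGMENT_MS = 10_000L

// Returns the (startMs, endMs) boundaries of each micro-video file.
fun segmentBoundaries(totalMs: Long): List<Pair<Long, Long>> =
    (0L until totalMs step SEGMENT_MS).map { start ->
        start to minOf(start + SEGMENT_MS, totalMs)
    }
```

For a 25-second recording this yields clips covering 0-10 s, 10-20 s, and 20-25 s, which the receiving end can then play one after another.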
The operations depicted in FIGS. 4A-4C may be implemented by components depicted in FIGS. 1A and 1B. For example, detection of the finger gestures may be implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a finger gesture on the touch screen 112, and event dispatcher module 174 delivers the event information to application 136-1. In this case, application 136-1 includes methods and graphical user interfaces for updating the information displayed on the touch screen 112. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether particular gestures have been performed. When the predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the respective gesture. Event handler 190 may utilize or call data updater 176 or object updater 177 to update data or a text display region and the application internal state 192. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in FIGS. 1A and 1B.
Referring again to FIG. 3, the user interface 300 can appear on the touch screen of a portable electronic device that is responsible for capturing, editing, and sharing videos as described in connection with FIGS. 3A-3E. For example, in response to a finger tap 301 of the icon corresponding to the messenger 147 application, the user interface 300A or 300B may be activated and rendered on the touch screen to replace the user interface 300. The aforementioned embodiments are directed to micro video capturing, editing and sharing using a portable electronic device, i.e., the sending end of an online messaging application. After the remote server receives a video and information on a target user to whom the video is directed, the remote server identifies a device (e.g., a mobile phone) from which the target user currently logs into his/her account of the online messaging application and then forwards the video and related information to that device.
Attention is now directed to the embodiments of playing and watching micro videos on a portable electronic device, i.e., the receiving end of the online messaging application. Similarly, the user interface 300 in FIG. 3 may appear on the touch screen of the portable electronic device that is responsible for playing and watching videos. In particular, FIGS. 5A-5J are exemplary user interfaces illustrating processes of playing and watching videos using an online messaging application installed in a portable electronic device having a touch screen in accordance with some embodiments. FIGS. 6A-6E are flow diagrams illustrating a method of playing and watching videos on a portable electronic device in accordance with some embodiments.
As shown in FIG. 6A, the portable electronic device first receives (602) information related to a video from a remote server. The video-related information identifies a source of the video, i.e., the user identifier of the user who captured the video using his/her portable electronic device, and an online messaging application for playing the video. In some embodiments, the online messaging application used for capturing the video is the same as the online messaging application used for playing the video. In some other embodiments, the two applications are different from each other. For example, the online messaging application may be a distributed instant messaging application such as WeChat by Tencent, Inc. This messaging application includes a client-side component installed in terminal devices such as a mobile phone and a server-side component installed in the remote server. A user of the messaging application can, e.g., start the application by finger tapping 301 the corresponding icon displayed in the user interface 300 of FIG. 3. As noted above, a video file typically includes a large amount of data and would consume significant bandwidth when downloaded from the remote server, which might have an adverse impact on other applications running on the portable electronic device. Therefore, it is important for the device to take multiple factors into consideration based on the video-related information before downloading the video itself. In some embodiments, the video-related information also includes one or more snapshots of the video, which may correspond to the ith frame of the video. For example, the snapshot can be the first frame of the video or a randomly-chosen frame of the video. As described below, the snapshot serves as a preview of the video before the video is downloaded onto the portable device. In response to the arrival of the video-related information, the portable device first determines (604) a current status of the online messaging application and a network setting of the portable electronic device. Next, the portable electronic device manages (606) the download and play of the video on the touch screen in accordance with the current status of the online messaging application and the network setting of the electronic device.
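A hypothetical shape for such video-related information is sketched below; all field names are assumptions, as the specification only enumerates the kinds of data the message carries.

```kotlin
// Assumed structure for the video-related information received in step 602.
data class VideoInfo(
    val videoId: String,
    val sourceUserId: String,     // identifier of the user who captured the video
    val messagingApp: String,     // online messaging application for playing the video
    val snapshotFrameIndex: Int,  // the i-th frame used as the snapshot
    val snapshotBytes: ByteArray  // preview image received before the video itself
)
```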
The online messaging application has different running modes, and the download and play of the video should be handled adaptively based on the specific running mode that the online messaging application is currently in. In addition, some network settings of the portable electronic device may not be appropriate for playing the video message. As described below, both types of information are useful when the portable electronic device determines a strategy for handling the download and play of the video. For example, when there is a large bandwidth available and there is an indicator that the user of the device is interested in watching the video, the device may start downloading the video from the remote server automatically and play the video on its display without any further user instruction. Conversely, when there is limited bandwidth available to the portable electronic device and the user has not indicated that he/she wants to watch the video, the portable device may hold off downloading the video until after receiving further user instruction. In some embodiments, the device may even interrupt the downloading of the video after the user has indicated a change of mind, to save the bandwidth for other use.
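The two-factor strategy can be sketched as a small decision function; the mode and network enumerations and the rule table below are assumptions chosen to match the examples in this paragraph, not a definitive policy.

```kotlin
// Non-authoritative sketch of the download-and-play policy decision.
enum class AppMode { CONVERSATION, CONVERSATION_LIST, MESSAGE_SHARING }
enum class Network { WIFI_OR_WIRED, CELLULAR_3G_PLUS, LOW_BANDWIDTH }

data class VideoPolicy(val downloadNow: Boolean, val playWithoutTap: Boolean)

fun decidePolicy(mode: AppMode, network: Network, userShowedInterest: Boolean): VideoPolicy {
    val highBandwidth = network != Network.LOW_BANDWIDTH
    return when {
        // Limited bandwidth: hold off until the user taps the video icon.
        !highBandwidth -> VideoPolicy(downloadNow = false, playWithoutTap = false)
        // High bandwidth plus an interest signal: download and play immediately.
        userShowedInterest -> VideoPolicy(downloadNow = true, playWithoutTap = true)
        // High bandwidth alone: prefetch; auto-play only inside a conversation.
        else -> VideoPolicy(downloadNow = true, playWithoutTap = mode == AppMode.CONVERSATION)
    }
}
```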
FIG. 5A depicts a user interface 500A displayed after the user finger taps 301 the icon corresponding to the messenger application 147 of FIG. 3. This user interface 500A corresponds to a conversation list session of the online messaging application. At the bottom of the user interface 500A are features of the online messaging application. In particular, a user selection of the chats icon 512-1 brings up a list of conversations (also known as "chats") on the touch screen 112 as shown in FIG. 5A. A user selection of the contacts icon 512-2 causes the device to display the contact list of the user of the online messaging application. A user selection of the favorites icon 512-3 generates a list of messages (text/audio/image/video) shared by the user's contacts as described below in connection with FIG. 5H. A user selection of the settings icon 512-4 allows the user to change the settings of the online messaging application. For example, the user can configure the network settings of the application so that it determines when and how to play video messages accordingly.
As described above, an active conversation list session indicates that the user of the portable device is not currently engaged in a particular conversation with another user of the online application. Otherwise, the user selection of the icon corresponding to the messenger application 147 in FIG. 3 would bring up the particular conversation session on the touch screen 112. As shown in the figure, the user interface 500A displays at least three conversation entries, 501A, 501B, and 501C. A finger tap 510 of the conversation entry 501A activates the corresponding conversation session, which replaces the user interface 500A with the user interface 500B as shown in FIG. 5B. The conversation entry 501A includes a video icon 505 indicating that there is a new video from the other user of the conversation associated with this conversation entry. Note that the existence of the video icon 505 does not necessarily indicate whether the video itself has been downloaded from the remote server, because the video icon 505 may be generated according to the video-related information received by the device. As described below, the device may start downloading the video even without any further user instruction whenever there is sufficient bandwidth available to the device.
As shown in FIG. 6D, when the online messaging application is in a conversation list session (624), the device may automatically download (626) the video from the remote server when the device has a network bandwidth above a predefined threshold level (e.g., 1 Mbps) and display (628) an indicator 505 of the video on the screen adjacent to a conversation session icon associated with the source of the video. In response to a user selection of the conversation session icon (630), the device displays (632) a list of messages associated with the conversation session on the screen (e.g., user interface 500B, FIG. 5B) and automatically plays (634) the video on the screen without further user instruction. Then, in response to a first user scrolling of the list of messages (e.g., 521, FIG. 5C), the device dynamically shrinks (636) the play of the video into a video icon (e.g., 525, FIG. 5C) when the video icon moves outside a predefined region of the screen (e.g., 545, FIG. 5C). In some embodiments, the video icon includes the snapshot of the video that is part of the video-related information originally received from the remote server. As such, different video icons corresponding to different videos look different because they have different snapshots. In response to a second user scrolling of the list of messages (e.g., 535, FIG. 5D), the device then automatically plays (638) the video on the screen when the video icon moves back into the predefined region of the screen (e.g., 545, FIG. 5E). In other words, the device may start playing the video based on the user’s finger swiping gestures on the touch screen 112 without the user manually selecting the corresponding video icon.
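The scroll-driven expand/shrink behaviour may be sketched as follows; the region representation and the notion of an icon centre are assumptions standing in for the device's actual layout system.

```kotlin
// Sketch of scroll-driven auto-play: the video plays while its icon is
// inside the predefined region (e.g., 545) and shrinks back to an icon
// when it scrolls out. Coordinates and region bounds are assumed.
class AutoPlayRegion(private val topY: Float, private val bottomY: Float) {
    fun contains(centerY: Float): Boolean = centerY in topY..bottomY
}

class VideoMessageCell(private val region: AutoPlayRegion) {
    var playing: Boolean = false
        private set

    // Called whenever a scroll changes the vertical position of the icon.
    fun onScrolled(iconCenterY: Float) {
        val inside = region.contains(iconCenterY)
        if (inside && !playing) playing = true        // expand icon into play window
        else if (!inside && playing) playing = false  // shrink play window into icon
    }
}
```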
Whether or not the device needs more user instructions before downloading and playing the video on the touch screen 112 depends, at least in part, on the network setting of the portable electronic device 100. Downloading and playing video on a mobile device usually requires high network bandwidth because video is often data-intensive, and the mobile device may or may not have the appropriate network setting to accomplish such a task. Sometimes, although there might be a high-bandwidth network (e.g., a 3G/4G/LTE cellular wireless network) available to the portable electronic device for downloading the video file from the remote server, such network bandwidth may be too expensive for the user of the portable electronic device. Therefore, the user of the portable electronic device 100 can use the settings option 512-4 to define different video downloading policies based on the surrounding network environment.
In some embodiments, when the online messaging application is in a conversation session with the source of the video (e.g., user interface 500B), the portable device automatically downloads (608) the video from the remote server and plays the video on the screen when the electronic device has a network bandwidth above a predefined threshold. For example, when there is at least one of a wired network connection, a Wi-Fi network connection, and a 3G or above wireless connection available to the electronic device, the video may be played (610) on the screen without further user instruction while it is being downloaded onto the portable device. In other words, the downloading process and the playing process may at least partially overlap to improve the user experience. As shown in FIGS. 5A and 5B, after the finger tap 510, the online messaging application enters the conversation session and then plays the video 515 automatically without any further user instruction. In this case, there is no need for further user instruction because the portable electronic device may have already started downloading the video using the Wi-Fi connection 308, which not only has sufficient bandwidth but is also free.
The number of times the video is played is also user-configurable. In some embodiments, the video is played (612) repeatedly and then shrunk to a video icon when there is a user instruction to interrupt the play of the video. As shown in FIGS. 5B and 5C, the device 100 plays the video repeatedly until it receives a user instruction (e.g., a finger swipe gesture 520) to stop playing the video. In response, the video playing window 515 dynamically shrinks into the video icon 525 of the user interface 500C as shown in FIG. 5C. In some other embodiments, the video is played (614) repeatedly on the screen and then shrunk to an icon in response to a new message added to the conversation session. For example, the video playing window 515 in FIG. 5B may dynamically shrink into the video icon 525 of the user interface 500C in FIG. 5C in response to the new message 530 submitted by the user of the portable electronic device 100. In yet some other embodiments, the video is played (616) repeatedly on the screen for a predefined number of times and then shrunk to an icon without any further user instruction. Note that the different embodiments are not mutually exclusive. For example, the user configuration may be that the video will be played five times consecutively provided that there is no user instruction to terminate the process. Note that, as described below in connection with FIG. 5F, the user can always choose to replay the video by pressing the icon 525 in the user interface 500C.
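These repeat-play embodiments can be combined in a small policy object, as sketched below; the default count of five follows the example in this paragraph and is otherwise an assumption.

```kotlin
// Illustrative repeat-play policy: loop the video up to a configured
// count, stopping early on a user interruption or a new message.
class RepeatPlayPolicy(private val maxPlays: Int = 5) {
    private var completedPlays = 0

    // Returns true if the video should loop again after finishing a pass.
    fun shouldReplay(interrupted: Boolean, newMessageArrived: Boolean): Boolean {
        completedPlays++
        return !interrupted && !newMessageArrived && completedPlays < maxPlays
    }
}
```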
In some embodiments, the video may be replayed even without a user selection of the video icon 525. For example, FIG. 5D depicts a user interface 500D including a sequence of text messages between the two users. As more and more messages are added to the conversation session, the video icon 525 shown in the user interface 500C is pushed out of the user interface 500D. When the user wants to replay the video, he or she can swipe (535) the touch screen 112 downward. As shown in FIG. 5E, the sequence of messages moves down and the video message re-appears in the user interface 500E. But instead of waiting for the user to press the video icon, the device may start replaying the video by dynamically expanding the video icon into the video playing window when the video icon enters the predefined region 545 on the touch screen 112. The same criteria used for determining the playing policy may be applied here again. In other words, the user may configure that all the video icons in the region 545 will be played automatically without requiring further user instruction. Note that the region 545 shown in the user interface 500E is for illustrative purposes and its size may be adjusted by the user.
In some embodiments, when the online messaging application is in a conversation session with the source of the video, the device automatically generates (618) a video icon corresponding to the video and displays the video icon on the screen when the electronic device does not have a network bandwidth above the predefined threshold. In some embodiments, the video icon includes (619) a snapshot of the video (e.g., the snapshot included in the video-related information). When the electronic device does not have a network bandwidth above the predefined threshold, the device downloads (620) the video from the remote server and dynamically replaces the video icon with the play of the video on the screen in response to a user selection of the video icon. As shown in FIG. 5F, the user interface 500F indicates that the device no longer has the Wi-Fi connection indicator 308. The wireless connection indicator 302 may correspond to a 2G wireless network standard. In this case, the video is not downloaded from the remote server and played until after a user selection of the video icon. For example, in response to the finger tap 555, the device may start downloading the video from the remote server and play it on the touch screen 112 as shown in FIG. 5E. Note that the expansion of the video icon into the video playing window may push certain messages out of the user interface 500F. In some embodiments, the online messaging application monitors the network environment around the device 100 and updates its video playing and watching policy accordingly. In the same example, when subsequently there is at least one of a wired network connection, a Wi-Fi network connection, and a 3G or above wireless connection available to the portable electronic device, the device may automatically download (622) the video from the remote server and dynamically replace the video icon with the play of the video on the screen without further user instruction.
FIG. 5G depicts a user interface 500G of a conversation list session. In response to a user selection 560 of the favorites icon 512-3, the user interface 500G is replaced with the user interface 500H of FIG. 5H, which corresponds to a message sharing session. When the online messaging application is in a message sharing session (640), the device displays (642) a list of messages shared by different users of the online messaging application, the list of messages including a video icon corresponding to the video. In response to a first user scrolling of the list of messages, the device automatically downloads (644) the video from the remote server and plays the video on the screen without further user instruction when the video icon moves into a predefined region of the screen. In other words, this situation is similar to the user interfaces 500D (FIG. 5D) and 500E (FIG. 5E). Subsequently, in response to a second user scrolling of the list of messages, the device automatically suspends (646) the download of the video from the remote server and dynamically shrinks the play of the video into the video icon without further user instruction when the video icon moves outside the predefined region of the screen. In other words, when the user indicates that he or she is no longer interested in watching the video, the device can stop further downloading data from the remote server and save the bandwidth for other use. As shown in FIG. 5H, a user scrolling gesture 565 brings the messages shared by different contacts upward. FIG. 5I depicts a user interface 500I including the updated list of messages, one of which has a video message icon 570. Depending on the network setting of the device, the video may be played automatically without any further user instruction. For example, when the video message icon 570 enters the predefined region 575 from the bottom of the user interface, the device may start playing the video by dynamically replacing the video icon with the video playing window 580 as shown in the user interface 500J of FIG. 5J (which indicates that the device 100 has a Wi-Fi connection). On the other hand, the video icon may remain as is if there is no high-bandwidth network connection available to the device 100 and the device 100 has not yet downloaded the video from the remote server.
While particular embodiments are described above, it will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present application. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "includes," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Although some of the various drawings illustrate a number of logical stages in a particular order, stages that are not order-dependent may be reordered and other stages may be combined or broken out. While some reorderings or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, so the alternatives presented herein are not exhaustive. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.
The foregoing description, for purposes of explanation, has been given with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (23)

  1. A method for playing videos on a portable electronic device, the method comprising:
    at the portable electronic device having a screen, one or more processors and memory storing programs executed by the one or more processors:
    receiving information related to a video from a remote server, the video-related information identifying a source of the video and an online messaging application for playing the video;
    determining a current status of the online messaging application and a network setting of the portable electronic device; and
    managing the download and play of the video on the screen in accordance with the current status of the online messaging application and the network setting of the portable electronic device.
  2. The method of claim 1, further comprising:
    when the online messaging application is in a conversation session with the source of the video, automatically downloading the video from the remote server and playing the video on the screen when the portable electronic device has a network bandwidth above a predefined threshold.
  3. The method of claim 2, wherein, when there is at least one of a wired network connection and a Wi-Fi network connection and a 3G or above wireless connection available to the portable electronic device, the video is played on the screen without further user instruction while it is being downloaded onto the portable electronic device.
  4. The method of claim 2, wherein the video is played repeatedly on the screen and then shrunk to an icon in response to a user instruction to interrupt the play of the video.
  5. The method of claim 2, wherein the video is played repeatedly on the screen and then shrunk to an icon in response to a new message added to the conversation session.
  6. The method of claim 2, wherein the video is played repeatedly on the screen for a predefined number of times and then shrunk to an icon without further user instruction.
  7. The method of claim 1, wherein the video-related information includes a snapshot of the video.
  8. The method of claim 1, further comprising:
    when the online messaging application is in a conversation session with the source of the video, automatically generating a video icon corresponding to the video and displaying the video icon on the screen when the portable electronic device does not have a network bandwidth above a predefined threshold.
  9. The method of claim 8, further comprising:
    when there is at least one of a wired network connection and a Wi-Fi network connection and a 3G or above wireless connection available to the portable electronic device, automatically downloading the video from the remote server and dynamically replacing the video icon with the play of the video on the screen without further user instruction.
  10. The method of claim 8, further comprising:
    when the portable electronic device does not have a network bandwidth above the predefined threshold, downloading the video from the remote server and dynamically replacing the video icon with the play of the video on the screen in response to a user selection of the video icon.
  11. The method of claim 1, further comprising:
    when the online messaging application is in a conversation list session:
    automatically downloading the video from the remote server when the portable electronic device has a network bandwidth above a predefined threshold;
    displaying an indicator of the video on the screen adjacent a conversation session icon associated with the source of the video; and
    in response to a user selection of the conversation session icon:
    displaying a list of messages associated with the conversation session on the screen; and
    automatically playing the video on the screen without further user instruction.
  12. The method of claim 11, further comprising:
    in response to a first user scrolling of the list of messages:
    dynamically shrinking the play of the video into a video icon when the video icon moves outside a predefined region of the screen; and
    in response to a second user scrolling of the list of messages:
    automatically playing the video on the screen when the video icon moves back into the predefined region of the screen.
  13. The method of claim 1, further comprising:
    when the online messaging application is in a message sharing session:
    displaying a list of messages shared by different users of the online message application, the list of messages including a video icon corresponding to the video that has not been downloaded from the remote server;
    in response to a first user scrolling of the list of the messages:
    automatically downloading the video from the remote server and playing the video on the screen without further user instruction when the video icon moves into a predefined region of the screen; and
    in response to a second user scrolling of the list of the messages:
    automatically suspending the download of the video from the remote server and dynamically shrinking the play of the video into the video icon without further user instruction when the video icon moves outside the predefined region of the screen.
  14. A portable electronic device, comprising:
    a touch screen;
    one or more processors;
    memory; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
    receiving information related to a video from a remote server, the video-related information identifying a source of the video and an online messaging application for playing the video;
    determining a current status of the online messaging application and a network setting of the portable electronic device; and
    managing the download and play of the video on the screen in accordance with the current status of the online messaging application and the network setting of the portable electronic device.
  15. The device of claim 14, wherein the one or more programs further include instructions for:
    when the online messaging application is in a conversation session with the source of the video, automatically downloading the video from the remote server and playing the video on the screen when the portable electronic device has a network bandwidth above a predefined threshold.
  16. The device of claim 14, wherein the one or more programs further include instructions for:
    when the online messaging application is in a conversation session with the source of the video, automatically generating a video icon corresponding to the video and displaying the video icon on the screen when the portable electronic device does not have a network bandwidth above a predefined threshold.
  17. The device of claim 14, wherein the one or more programs further include instructions for:
    when the online messaging application is in a conversation list session:
    automatically downloading the video from the remote server when the portable electronic device has a network bandwidth above a predefined threshold;
    displaying an indicator of the video on the screen adjacent a conversation session icon associated with the source of the video; and
    in response to a user selection of the conversation session icon:
    displaying a list of messages associated with the conversation session on the screen; and
    automatically playing the video on the screen without further user instruction.
  18. The device of claim 14, wherein the one or more programs further include instructions for:
    when the online messaging application is in a message sharing session:
    displaying a list of messages shared by different users of the online message application, the list of messages including a video icon corresponding to the video that has not been downloaded from the remote server;
    in response to a first user scrolling of the list of the messages:
    automatically downloading the video from the remote server and playing the video on the screen without further user instruction when the video icon moves into a predefined region of the screen; and
    in response to a second user scrolling of the list of the messages:
    automatically suspending the download of the video from the remote server and dynamically shrinking the play of the video into the video icon without further user instruction when the video icon moves outside the predefined region of the screen.
  19. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device with a touch screen, cause the portable electronic device to:
    receive information related to a video from a remote server, the video-related information identifying a source of the video and an online messaging application for playing the video;
    determine a current status of the online messaging application and a network setting of the portable electronic device; and
    manage the download and play of the video on the screen in accordance with the current status of the online messaging application and the network setting of the portable electronic device.
  20. The non-transitory computer readable storage medium of claim 19, wherein the one or more programs further include instructions for:
    when the online messaging application is in a conversation session with the source of the video, automatically downloading the video from the remote server and playing the video on the screen when the portable electronic device has a network bandwidth above a predefined threshold.
  21. The non-transitory computer readable storage medium of claim 19, wherein the one or more programs further include instructions for:
    when the online messaging application is in a conversation session with the source of the video, automatically generating a video icon corresponding to the video and displaying the video icon on the screen when the portable electronic device does not have a network bandwidth above a predefined threshold.
  22. The non-transitory computer readable storage medium of claim 19, wherein the one or more programs further include instructions for:
    when the online messaging application is in a conversation list session:
    automatically downloading the video from the remote server when the portable electronic device has a network bandwidth above a predefined threshold;
    displaying an indicator of the video on the screen adjacent a conversation session icon associated with the source of the video; and
    in response to a user selection of the conversation session icon:
    displaying a list of messages associated with the conversation session on the screen; and
    automatically playing the video on the screen without further user instruction.
  23. The non-transitory computer readable storage medium of claim 19, wherein the one or more programs further include instructions for:
    when the online messaging application is in a message sharing session:
    displaying a list of messages shared by different users of the online message application, the list of messages including a video icon corresponding to the video that has not been downloaded from the remote server;
    in response to a first user scrolling of the list of the messages:
    automatically downloading the video from the remote server and playing the video on the screen without further user instruction when the video icon moves into a predefined region of the screen; and
    in response to a second user scrolling of the list of the messages:
    automatically suspending the download of the video from the remote server and dynamically shrinking the play of the video into the video icon without further user instruction when the video icon moves outside the predefined region of the screen.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/087985 WO2016049875A1 (en) 2014-09-30 2014-09-30 Device and method for capturing, sharing and watching video messages


Publications (1)

Publication Number Publication Date
WO2016049875A1 (en)

Family

ID=55629302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/087985 WO2016049875A1 (en) 2014-09-30 2014-09-30 Device and method for capturing, sharing and watching video messages

Country Status (1)

Country Link
WO (1) WO2016049875A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120060202A1 (en) * 2010-09-08 2012-03-08 Beijing Ruixin Online System Technology Co., Ltd Content service system, content server, content terminal and content service method
CN102708170A (en) * 2012-05-02 2012-10-03 张雯 Method and device for extracting and releasing online film and television information
CN103581702A (en) * 2012-08-01 2014-02-12 上海亿动信息技术有限公司 Video release control system based on mobile terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108040279A (en) * 2017-12-07 2018-05-15 北京麒麟合盛网络技术有限公司 Video acceleration method and device
CN114079612A (en) * 2020-08-03 2022-02-22 阿里巴巴集团控股有限公司 Disaster recovery system and control method, device, equipment and medium thereof
CN114079612B (en) * 2020-08-03 2024-06-04 阿里巴巴集团控股有限公司 Disaster recovery system and management and control method, device, equipment and medium thereof
WO2022127895A1 (en) * 2020-12-18 2022-06-23 华为技术有限公司 Packet processing method and related device
CN115119027A (en) * 2022-08-29 2022-09-27 北京陌陌信息技术有限公司 Video playing method for mobile terminal
CN115119027B (en) * 2022-08-29 2022-11-25 北京陌陌信息技术有限公司 Video playing method for mobile terminal

Similar Documents

Publication Publication Date Title
US11928317B2 (en) Device, method, and graphical user interface for sharing content from a respective application
US11783117B2 (en) Device, method, and graphical user interface for sharing a content object in a document
US11972043B2 (en) User detection by a computing device
JP6435365B2 (en) Device, method, and graphical user interface for sharing content from applications
US9712577B2 (en) Device, method, and graphical user interface for sharing content from a respective application
US10606469B2 (en) Device, method, and graphical user interface for managing multiple display windows
US8839122B2 (en) Device, method, and graphical user interface for navigation of multiple applications
US10394441B2 (en) Device, method, and graphical user interface for controlling display of application windows
US20130055119A1 (en) Device, Method, and Graphical User Interface for Variable Speed Navigation
US11120097B2 (en) Device, method, and graphical user interface for managing website presentation settings
KR101962774B1 (en) Method and apparatus for processing new messages associated with an application
US20230012613A1 (en) Device, Method, and Graphical User Interface for Managing Data Stored on a Device
WO2016049875A1 (en) Device and method for capturing, sharing and watching video messages
WO2016049882A1 (en) Device and method for capturing, sharing and watching video messages

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 14903268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN EP: public notification in the EP bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/08/2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14903268

Country of ref document: EP

Kind code of ref document: A1