CN108228035B - Method for moving window between multi-screen devices and dual-display communication device - Google Patents


Info

Publication number: CN108228035B
Authority: CN (China)
Prior art keywords: display, window, sensitive display, touch, transition indicator
Legal status: Active
Application number: CN201810310376.0A
Other languages: Chinese (zh)
Other versions: CN108228035A
Inventors: S. Sirpal, A. de Paz
Current assignee: Z124 Co
Original assignee: Z124 Co
Priority claimed from US 13/223,778 (published as US 2012/0081309 A1)
Application filed by Z124 Co
Publication of CN108228035A
Application granted
Publication of CN108228035B


Abstract

A dual-display communication device includes: a gesture capture region to receive a gesture; a first touch-sensitive display that receives gestures and displays a displayed image (e.g., a desktop or a window of an application); and a second touch-sensitive display that receives gestures and displays a displayed image. Middleware receives a gesture indicating that a displayed image is to be moved from the first touch-sensitive display to the second touch-sensitive display, e.g., to maximize a window so that it covers portions of both displays simultaneously; moves a transition indicator from the first touch-sensitive display toward the second touch-sensitive display to a selected position that the displayed image will occupy, in response to and prior to the movement of the displayed image to the second touch-sensitive display; and thereafter moves the displayed image from the first touch-sensitive display toward the second touch-sensitive display to the selected position.

Description

Method for moving window between multi-screen devices and dual-display communication device
This application is a divisional of application No. 201210458810.2, filed on September 3, 2012, and entitled "Displayed Image Transition Indicator".
Cross Reference to Related Applications
This application claims, under 35 U.S.C. § 119(e), the benefit of U.S. Provisional Application Serial No. 61/389,000, filed October 1, 2010, entitled "DUAL DISPLAY WINDOWING SYSTEM"; 61/389,117, filed October 1, 2010, entitled "MULTI-OPERATING SYSTEM DOCKING DEVICE"; and 61/389,087, filed October 1, 2010, entitled "TABLET COMPUTING USER INTERFACE". Each of the above documents is incorporated herein by reference in its entirety for all that it teaches and for all purposes.
Technical Field
The invention relates to a displayed image transition indicator.
Background
Most handheld computing devices, such as cellular telephones, tablet computers, and e-readers, use touch screen displays not only to deliver display information to the user, but also to receive input from user interface commands. While touch screen displays may increase the configurability of a handheld device and provide a wide variety of user interface options, this flexibility typically comes at a price. Because the touch screen serves the dual purpose of presenting content and receiving user commands, it is flexible for the user but can clutter the display and cause visual confusion, leading to user frustration and lost productivity.
The small form factor of handheld computing devices requires a careful balance between the graphics displayed and the area provided for receiving input. On the one hand, the small display constrains the display space, which may increase the difficulty of interpreting actions or results. On the other hand, a virtual keypad or other user interface mechanism is overlaid on, or placed adjacent to, an executing application, requiring the application to be squeezed into an even smaller portion of the display.
This balancing act is particularly difficult for single-display touch screen devices. Single-display touch screen devices are hampered by their limited screen space. When a user enters information into the device through the single display, the ability to interpret the information on the display may be severely hampered, particularly when complex interactions between the display and the interface are required.
Disclosure of Invention
There is a need for a dual-sided, multi-display handheld computing device that provides enhanced power and/or versatility compared to conventional single-display handheld computing devices. These and other needs are addressed by the various aspects, embodiments, and/or configurations of the present disclosure. Also, while the present disclosure has been described in terms of exemplary embodiments, it should be appreciated that individual aspects of the disclosure may be separately claimed.
In one embodiment, a method provides the steps of:
(a) receiving, by a gesture capture area and/or a touch sensitive display, a gesture indicating to move a displayed image from a first touch sensitive display to a second touch sensitive display; and
(b) moving, by a microprocessor, a transition indicator from the first touch-sensitive display toward the second touch-sensitive display to a selected position that the displayed image will occupy, in response to and prior to the movement of the displayed image to the second touch-sensitive display; and
(c) thereafter, moving, by the microprocessor, the displayed image from the first touch sensitive display to the selected location toward the second touch sensitive display.
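To make the ordering of steps (a) through (c) concrete, the following plain-Java sketch models the two-phase move: the transition indicator is relocated to the selected position first, and only afterwards is the displayed image itself moved. This is purely an illustrative sketch under assumed names (TwoPhaseWindowMover, onMoveGesture, Position); it is not the claimed implementation or any particular platform API.

    // Minimal sketch (not the patented implementation) of the two-phase move
    // of steps (a)-(c). All class, method, and field names are hypothetical.
    import java.util.ArrayList;
    import java.util.List;

    final class TwoPhaseWindowMover {
        // A display position is modeled simply as a display id plus x/y coordinates.
        record Position(int displayId, int x, int y) {}

        private final List<String> log = new ArrayList<>();

        // Step (a): a gesture indicating a move from display 1 to display 2 was received.
        void onMoveGesture(String windowId, Position target) {
            // Step (b): move the transition indicator to the selected position first,
            // previewing where the displayed image will land.
            log.add("transition-indicator -> display " + target.displayId()
                    + " at (" + target.x() + "," + target.y() + ")");
            // Step (c): only afterwards move the displayed image itself.
            log.add("window '" + windowId + "' -> display " + target.displayId()
                    + " at (" + target.x() + "," + target.y() + ")");
        }

        List<String> log() { return log; }

        public static void main(String[] args) {
            TwoPhaseWindowMover mover = new TwoPhaseWindowMover();
            mover.onMoveGesture("browser", new Position(2, 0, 0));
            mover.log().forEach(System.out::println);
        }
    }

The point of the sketch is only the ordering: the indicator always reaches the selected position before the displayed image begins to move.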
In one embodiment, a dual display communication device includes:
(i) a gesture capture area to receive a gesture; and
(ii) a first touch-sensitive display that receives gestures and displays a display image, wherein the display image is a desktop and/or a window of an application;
(iii) a second touch-sensitive display that receives the gesture and displays the display image;
(iv) middleware operable to perform one or more of the following operations:
(A) receiving a gesture indicating that a displayed image is to be expanded from the first touch-sensitive display to cover most, if not all, of a second touch-sensitive display; and
(B) expanding a transition indicator to cover a majority, if not all, of the second touch sensitive display responsive to and prior to expansion of the display image to the second touch sensitive display; and
(C) thereafter, the display image is expanded to the second touch sensitive display.
In one configuration, the displayed image is a desktop, and the gesture is received by the touch-sensitive display.
In one configuration, the displayed image is a window, the gesture is received by the gesture capture region, and the gesture capture region does not display any displayed image.
In one configuration, the window is minimized on the first touch-sensitive display, and the gesture maximizes the window to cover a majority of both the first and second touch-sensitive displays.
In one configuration, the transition indicator moves to a selected position along the travel path of the display image to preview the movement of the display image, and the transition indicator is substantially the same size and shape as the display image.
In one configuration, the transition indicator cannot receive or provide dynamic user input/output, respectively, and the transition indicator has a different appearance than the display image.
In one configuration, prior to initiating movement of the display image, the display image and the transition indicator are simultaneously in active display positions on the first and second touch-sensitive displays, respectively, and the transition indicator comprises a user-configured color, pattern, design, and/or photograph.
In one configuration, the transition indicator is a graphical affordance, the window is controlled by a multi-screen application, and the transition indicator is substantially unresponsive to user commands or requests during movement from the first touch-sensitive display to the second touch-sensitive display.
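The configurations above describe the transition indicator's size, appearance, and non-interactive nature. The small sketch below models those properties as a plain Java class; the class name, fields, and methods are hypothetical and are offered only as one way to picture the described behavior, not as the disclosed implementation.

    // Illustrative sketch only: one way to model the transition indicator
    // configurations described above. Names and fields are hypothetical.
    final class TransitionIndicator {
        private final int width;          // substantially the same size ...
        private final int height;         // ... and shape as the displayed image
        private final String fillStyle;   // user-configured color, pattern, design, or photograph

        TransitionIndicator(int windowWidth, int windowHeight, String fillStyle) {
            this.width = windowWidth;
            this.height = windowHeight;
            this.fillStyle = fillStyle;
        }

        // Unlike the window it previews, the indicator is not interactive:
        // it neither accepts user input nor produces dynamic output.
        boolean acceptsUserInput()      { return false; }
        boolean producesDynamicOutput() { return false; }

        @Override
        public String toString() {
            return "TransitionIndicator[" + width + "x" + height + ", fill=" + fillStyle + "]";
        }

        public static void main(String[] args) {
            System.out.println(new TransitionIndicator(480, 800, "user-selected photograph"));
        }
    }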
The present disclosure may provide a number of advantages depending on the particular aspect, embodiment, and/or configuration. The transition indicator may provide a more aesthetically pleasing and simpler user interface. It can also be used when the actual view cannot be rendered during the transition due to processing delays or other problems, and it can avoid an otherwise empty and aesthetically unappealing display.
These and other advantages will become apparent in light of this disclosure.
The words "at least one," "one or more," and/or "are open-ended words that, in operation, are conjunctions and disjunctions. For example, each of the phrases "at least one of A, B and C", "at least one of A, B or C", "one or more of A, B and C", "one or more of A, B or C", and "A, B and/or C" means a alone, B alone, C, A and B alone in combination, B and C in combination, or A, B and C in combination.
The term "a" or "an" entity refers to one or more of such entities. Also, the terms "a" (or "an"), "one or more" and "at least one" may be used interchangeably herein. It should also be noted that the terms "comprising," "including," and "having" may be used interchangeably.
The term "automatic" and variations thereof, as used herein, refers to any process or operation that is not performed by specific human input when the process or operation is performed. However, the process or operation may be automatic, even if the process or operation is performed by a specific or unspecified human input, if the input is received prior to performing the process or operation. Human input is considered to be specific if such input affects how the process or operation is to be performed. Human input consistent with performing a process or operation is not considered "specific".
The term "computer-readable medium" as used herein refers to any tangible memory and/or transmission medium that participate in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, and transmission media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, a solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Digital file attachments to emails or other self-contained information archives or archive sets are considered a distribution medium equivalent to an actual storage medium. When the computer-readable medium is configured as a database, it should be appreciated that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include an actual storage medium or distribution medium, and equivalents and successor media, in which the software implementations of the present disclosure are stored, as well as those known in the art.
The term "desktop" refers to a metaphor for depicting a system. The desktop is generally considered a "surface" that typically includes pictures, called icons, widgets, folders, etc. that can activate display applications, windows, control rooms, files, folders, documents, and other illustrated items. The icons are typically selectable to initiate tasks via user interface interaction, allowing a user to execute applications or process other operations.
The term "display" refers to a portion of a screen used to display computer output to a user.
The term "display image" refers to an image presented on a display. A typical display image is a window or desktop. The display image may occupy the entire display or a portion of the display.
The term "display orientation" refers to the orientation of a rectangular display as determined by a user for viewing. The two most common types of display orientation are portrait (portrait) and landscape (landscape). In landscape mode, the display is oriented so that the width of the display is greater than the height of the display (e.g., a 4:3 ratio, which is 4 units wide and 3 units high, or a 16:9 ratio, which is 16 units wide and 9 units high). In other words, in landscape mode, the longer dimension of the display is oriented substantially horizontally, while the shorter dimension of the display is oriented substantially vertically. In contrast, in the portrait mode, the display is oriented so that the width of the display is less than the height of the display. In other words, in the portrait mode, the orientation defining the shorter dimension of the display is substantially horizontal, while the orientation defining the longer dimension of the display is substantially vertical. The multi-screen display may have a composite display that includes all of the screens. The composite display may have different display characteristics based on different orientations of the device.
The term "gesture" refers to a user action that expresses a desired idea, operation, meaning, result, and/or effect. User actions may include operating a device (e.g., turning the device on or off, changing the device orientation, moving a trackball or scroll wheel, etc.), movement of a body member associated with the device, movement of an implementation or tool associated with the device, audio input, etc. Gestures may be obtained on or with the device (e.g., on a screen) to interact with the device.
The term "module" as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element.
The term "gesture capture" refers to the sensing or detection of an instance and/or type of user gesture. Gesture capture may exist in one or more regions of the screen, the gesture region may be on the display, where it may be considered a touch sensitive display, or the gesture region may not be on the display, where it may be considered a gesture capture region.
A "multi-screen application" refers to an application that is capable of presenting one or more windows that can occupy multiple screens simultaneously. Generally, a multi-screen application can operate in a single-screen manner, in which one or more windows of the application are displayed on only one screen, or in a multi-screen manner, in which one or more windows are simultaneously displayed on a plurality of screens.
A "single screen application" refers to an application that is capable of presenting one or more windows that can only occupy a single screen at a time.
The terms "screen", "touch screen" refer to a physical structure that enables a user to interact with a computer through touch areas on the screen and provide information to the user via a display. The touch screen may sense user contact in a number of different ways, such as by a change in an electrical parameter (e.g., resistance or capacitance), acoustic wave change, infrared radiation proximity detection, light change detection, and so forth. For example, in a resistive touch screen, normally separate conductive and resistive metal layers in the screen circulate current. When the user touches the screen, the two layers become in contact at the contact location, thus noting the change in electric field and calculating the coordinates of the contact location. In a capacitive touch screen, the capacitive layer stores an electrical charge that is discharged upon a user via contact with the touch screen, causing a reduction in the electrical charge of the capacitive layer. The decrease is measured and the touch location coordinates are determined. In a surface acoustic wave touch screen, an acoustic wave is transmitted through the screen, and a user contact interferes with the acoustic wave. The receiving transducer detects the user contact and determines the contact location coordinates.
The term "window" refers to a displayed image (typically rectangular) on at least a portion of the display that includes or provides content that is different from other portions of the screen. The window may obscure the desktop.
The terms "determine," "calculate," and "estimate," and variations thereof, as used herein, are used interchangeably and may include any type of method, step, mathematical operation, or technique.
It should be understood that the term "means" as used herein shall be given its broadest possible interpretation in accordance with 35 U.S.C., Section 112, Paragraph 6. Accordingly, a claim incorporating the term "means" shall cover all structures, materials, or acts set forth herein, and all of their equivalents. Further, the structures, materials, or acts and their equivalents shall include all those described in the summary of the invention, brief description of the drawings, detailed description, abstract, and the claims themselves.
The foregoing is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor an exhaustive overview of the disclosure and its various aspects, embodiments, and/or configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure, but rather to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other aspects, embodiments, and/or configurations of the present disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
Drawings
FIG. 1A includes a first view of an embodiment of a multi-screen user device;
FIG. 1B includes a second view of an embodiment of a multi-screen user device;
FIG. 1C includes a third view of an embodiment of a multi-screen user device;
FIG. 1D includes a fourth view of an embodiment of a multi-screen user device;
FIG. 1E includes a fifth view of an embodiment of a multi-screen user device;
FIG. 1F includes a sixth view of an embodiment of a multi-screen user device;
FIG. 1G includes a seventh view of an embodiment of a multi-screen user device;
FIG. 1H includes an eighth view of an embodiment of a multi-screen user device;
FIG. 1I includes a ninth view of an embodiment of a multi-screen user device;
FIG. 1J includes a tenth view of an embodiment of a multi-screen user device;
FIG. 2 is a block diagram of one embodiment of the hardware of the apparatus;
FIG. 3A is a block diagram of one embodiment of a status mode of the device based on the orientation and/or configuration of the device;
FIG. 3B is a table of one embodiment of a status mode of the device based on the orientation and/or configuration of the device;
FIG. 4A is a first representation of one embodiment of a user gesture received at a device;
FIG. 4B is a second representation of an embodiment of a user gesture received at a device;
FIG. 4C is a third representation of an embodiment of a user gesture received at a device;
FIG. 4D is a fourth representation of an embodiment of a user gesture received at a device;
FIG. 4E is a fifth representation of an embodiment of a user gesture received at a device;
FIG. 4F is a sixth representation of an embodiment of a user gesture received at a device;
FIG. 4G is a seventh representation of an embodiment of a user gesture received at a device;
FIG. 4H is an eighth representation of an embodiment of a user gesture received at a device;
FIG. 5A is a block diagram of one embodiment of device software and/or firmware;
FIG. 5B is a second block diagram of one embodiment of the device software and/or firmware;
FIG. 6A is a first representation of one embodiment of a device configuration generated in response to a device state;
FIG. 6B is a second representation of one embodiment of a device configuration generated in response to a device state;
FIG. 6C is a third representation of an embodiment of a device configuration generated in response to the device state;
FIG. 6D is a fourth representation of an embodiment of a device configuration generated in response to the device state;
FIG. 6E is a fifth representation of an embodiment of a device configuration generated in response to the device state;
FIG. 6F is a sixth representation of an embodiment of a device configuration generated in response to the device state;
FIG. 6G is a seventh representation of an embodiment of a device configuration generated in response to the device state;
FIG. 6H is an eighth representation of an embodiment of a device configuration generated in response to the device state;
FIG. 6I is a ninth representation of an embodiment of a device configuration generated in response to the device state;
FIG. 6J is a tenth representation of an embodiment of a device configuration generated in response to the device state;
FIGS. 7A-F are a series of portrait display orientation screen shots in accordance with one embodiment;
FIGS. 8A-E are a series of landscape display orientation screen shots in accordance with one embodiment;
FIG. 9 is a flow chart illustrating one embodiment;
FIG. 10A is a representation of a logical window stack;
FIG. 10B is another representation of one embodiment of a logical window stack;
FIG. 10C is another representation of one embodiment of a logical window stack;
FIG. 10D is another representation of one embodiment of a logical window stack;
FIG. 10E is another representation of one embodiment of a logical window stack;
FIG. 11 is a block diagram of one embodiment of a logical data structure for a window stack;
FIG. 12 is a flow diagram of one embodiment of a method for creating a window stack; and
fig. 13 depicts a window stacking configuration, in accordance with one embodiment.
In the drawings, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a letter that distinguishes among the similar components. If only the first reference label is used in the specification, the description applies to any one of the similar components having the same first reference label irrespective of the second reference label.
Detailed Description
Presented herein are embodiments of an apparatus. The device may be a communication device, such as a cellular telephone, or other smart device. The device may include two screens oriented to provide a variety of unique display configurations. Further, the device may receive user input in a unique manner. The overall design and functionality of the device provides an enhanced user experience, making the device more useful and more efficient.
Mechanical properties:
fig. 1A-1J illustrate an apparatus 100 according to embodiments of the present disclosure. As will be described in greater detail below, the device 100 may be positioned in a number of different ways, each of which provides different functionality to the user. The device 100 is a multi-screen device that includes a primary screen 104 and a secondary screen 108, each of which is touch sensitive. In an embodiment, the entire front surface of screens 104 and 108 may be touch sensitive and capable of receiving input by a user touching the front surface of screens 104 and 108. The main screen 104 includes a touch sensitive display 110 that, in addition to being touch sensitive, can display information to a user. The secondary screen 108 includes a touch sensitive display 114 that also displays information to the user. In other embodiments, screens 104 and 108 may include more than one display area.
The home screen 104 also includes a configurable area 112, and when the user touches a portion of the configurable area 112, the configurable area 112 is configured for a particular input. The secondary screen 108 also includes a configurable area 116, the configurable area 116 configured for specific input. Regions 112a and 116a have been configured to receive a "back" input indicating that the user wants to view previously displayed information. Regions 112b and 116b have been configured to receive a "menu" input indicating that the user wants to view options from the menu. Areas 112c and 116c have been configured to receive a "home" input indicating that the user wants to view information associated with the "home" view. In other embodiments, in addition to the configurations described above, the regions 112a-c and 116a-c may be configured for other types of specific inputs, including controlling features of the device 100, some non-limiting examples including adjusting overall system power, adjusting volume, adjusting brightness, adjusting vibration, selecting display items (on the screen 104 or 108), operating a camera, operating a microphone, and initiating/terminating a telephone call. Also, in some embodiments, regions 112a-C and 116a-C may be configured for specific inputs based on applications running on device 100 and/or information displayed on touch-sensitive displays 110 and/or 114.
In addition to touch sensing, the primary screen 104 and the secondary screen 108 may also include areas that receive input from a user without requiring the user to touch the display area of the screen. For example, the primary screen 104 includes a gesture capture area 120 and the secondary screen 108 includes a gesture capture area 124. These areas are capable of receiving input by recognizing gestures made by a user, without the need for the user to actually touch the surface of the display area. In contrast to the touch-sensitive displays 110 and 114, the gesture capture areas 120 and 124 are generally not capable of rendering a displayed image.
The two screens 104 and 108 are connected together by a hinge 128, as best shown in FIG. 1C (illustrating a rear view of the device 100). In the embodiment shown in fig. 1A-1J, hinge 128 is a central hinge that connects screens 104 and 108 so that when the hinge is closed, screens 104 and 108 can be juxtaposed (i.e., side-by-side), as shown in fig. 1B (which illustrates a front view of device 100). The hinge 128 may be opened to position the two screens 104 and 108 in different relative positions to each other. As will be described in more detail below, device 100 may have different functions depending on the relative positions of screens 104 and 108.
Fig. 1D illustrates the right side of the device 100. As shown in FIG. 1D, the secondary screen 108 also includes a card slot 132 and a port 136 on one of its sides. In an embodiment, the card slot 132 accommodates different types of cards, including a Subscriber Identity Module (SIM). In an embodiment, port 136 is an input/output port (I/O port) that allows the device 100 to be connected to other peripheral devices, such as a display, keyboard, or printing device. As may be appreciated, these are just a few examples, and in other embodiments, the device 100 may include other slots and ports, such as slots and ports for receiving additional storage devices and/or for connecting other peripheral devices. Also shown in FIG. 1D is an audio jack 140 that accommodates, for example, a tip, ring, sleeve (TRS) connector to allow a user to use headphones or a headset.
The device 100 also includes a plurality of buttons 158. For example, fig. 1E illustrates the left side of the device 100. As shown in FIG. 1E, the side of the primary screen 104 includes three buttons 144, 148, and 152, which may be configured for specific inputs. For example, buttons 144, 148, and 152 may be configured to control, alone or in combination, various aspects of the device 100. Some non-limiting examples include overall system power, volume, brightness, vibration, selection of displayed items (on screen 104 or 108), a camera, a microphone, and initiating/terminating telephone calls. In some embodiments, instead of separate buttons, two buttons may be combined into one rocker button. This arrangement is useful in situations where the buttons are configured to control a feature such as volume or brightness. In addition to buttons 144, 148, and 152, the device 100 also includes a button 156, shown in FIG. 1F, which illustrates the top of the device 100. In one embodiment, button 156 is configured as an on/off button for controlling overall system power of the device 100. In other embodiments, button 156 is configured to control other aspects of the device 100 in addition to, or instead of, controlling system power. In some embodiments, one or more of the buttons 144, 148, 152, and 156 can support different user commands. For example, a normal press generally has a duration of less than about 1 second and resembles a quick tap. A medium press generally has a duration of more than 1 second but less than about 12 seconds. A long press generally has a duration of more than about 12 seconds. The function of the buttons is typically specific to the application currently in focus on the respective displays 110 and 114. For example, in a telephone application, and depending on the particular button, a normal, medium, or long press may mean end call, increase call volume, decrease call volume, and toggle microphone mute. In a camera or video application, and depending on the particular button, a normal, medium, or long press may mean increase zoom, decrease zoom, and take a photograph or record video.
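The press-duration thresholds described above (roughly 1 second and roughly 12 seconds) can be pictured with the short classification sketch below. The thresholds mirror the approximate values in the text; the enum and method names are hypothetical and the sketch is not the device firmware.

    // Sketch of the press-duration classification described above.
    enum ButtonPress {
        NORMAL, MEDIUM, LONG;

        static ButtonPress classify(long pressDurationMillis) {
            if (pressDurationMillis < 1_000) return NORMAL;   // quick tap, under about 1 s
            if (pressDurationMillis < 12_000) return MEDIUM;  // between about 1 s and 12 s
            return LONG;                                      // over about 12 s
        }

        public static void main(String[] args) {
            System.out.println(classify(300));     // NORMAL
            System.out.println(classify(4_000));   // MEDIUM
            System.out.println(classify(15_000));  // LONG
        }
    }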
There are also a number of hardware components within the device 100. As illustrated in fig. 1C, the apparatus 100 includes a speaker 160 and a microphone 164. The apparatus 100 also includes a camera 168 (FIG. 1B). In addition, device 100 includes two position sensors 172A and 172B that are used to determine the relative positions of screens 104 and 108. In one embodiment, the position sensors 172A and 172B are Hall effect sensors. However, in other embodiments, other sensors may be used in addition to or in place of the hall effect sensors. An accelerometer 176 may also be included as part of device 100 to determine the orientation of device 100 and/or the orientation of screens 104 and 108. Additional hardware components that may be included in the apparatus 100 will be described below with reference to fig. 2.
The overall design of device 100 allows it to provide additional functionality not available in other communication devices. Some of the functions are based on the multiple positions and orientations that the device 100 may have. As shown in fig. 1B-1G, device 100 may operate in an "open" position, in which screens 104 and 108 are juxtaposed. Such a position provides a large display area to display information to the user. When the position sensors 172A and 172B determine that the device 100 is in the open position, they may generate signals that may be used to trigger different events, such as displaying information on both screens 104 and 108. Additional events may be triggered if the accelerometer 176 determines that the device 100 is in a portrait position (fig. 1B) as opposed to a landscape position (not shown).
In addition to the open position, the device 100 may also have a "closed" position, illustrated in FIG. 1H. Here, the position sensors 172A and 172B may generate signals indicating that the device 100 is in the "closed" position. This may trigger an event that causes a change in the displayed information on screens 104 and 108. For example, the device 100 may be programmed to stop displaying information on one of the screens, e.g., screen 108, because a user can only view one screen at a time when the device 100 is in the "closed" position. In other embodiments, the signal generated by the position sensors 172A and 172B indicating that the device 100 is in the "closed" position may trigger the device 100 to answer an incoming telephone call. The "closed" position may also be a preferred position for using the device 100 as a mobile phone.
The device 100 may also be used in a "stand" position, illustrated in FIG. 1I. In the "stand" position, the screens 104 and 108 are angled with respect to each other and face outward, with the edges of the screens 104 and 108 substantially horizontal. In this position, the device 100 may be configured to display information on both screens 104 and 108 to allow two users to interact with the device 100 simultaneously. When the device 100 is in the "stand" position, the sensors 172A and 172B generate signals indicating that the screens 104 and 108 are positioned at an angle to each other, and the accelerometer 176 may generate a signal indicating that the device 100 has been placed so that the edges of the screens 104 and 108 are substantially horizontal. These signals may then be used in combination to generate events that trigger changes in the display of information on screens 104 and 108.
Fig. 1J illustrates the device 100 in the "modified stand" position. In the "modified stand" position, one of the screens 104 or 108 is used as a stand and faces down on the surface of an object, such as a table. This position provides a convenient way to display information to a user in landscape orientation. Similar to the stand position, when the device 100 is in the "modified stand" position, the position sensors 172A and 172B generate signals indicating that the screens 104 and 108 are positioned at an angle to each other. The accelerometer 176 generates a signal indicating that the device 100 has been positioned so that one of the screens 104 and 108 is facing downward and is substantially horizontal. These signals may then be used to generate events that trigger changes in the display of information on screens 104 and 108. For example, information may no longer be displayed on the downward-facing screen, because the user cannot see that screen.
Transitional states may also exist. A closing transitional state is recognized when the position sensors 172A and 172B and/or the accelerometer indicate that the screens are being closed or folded (from open). Conversely, an opening transitional state is recognized when the position sensors 172A and 172B indicate that the screens are being opened or unfolded (from closed). The closing and opening transitional states are typically time-based, or have a maximum duration from a sensed starting point. Normally, no user input is possible while one of the closing or opening transitional states is in effect. In this way, accidental user contact with a screen during the closing or opening function is not mistaken for user input. In an embodiment, another transitional state is possible when the device 100 is closed. This additional transitional state allows the display to switch from one screen 104 to the second screen 108 when the device 100 is closed, based on some user input, such as a double tap on the screen 110, 114.
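One way to picture how the position sensors 172A/172B and the accelerometer 176 could jointly name a physical position (open, closed, stand, modified stand) is the following sketch. The hinge-angle thresholds and all identifiers are assumptions made for the illustration and are not values taken from the device or the disclosure.

    // Illustrative sketch (not the actual firmware): combining a relative-angle
    // signal from the position sensors with an accelerometer-derived flag.
    final class DeviceStateDetector {
        enum PhysicalState { CLOSED, OPEN, STAND, MODIFIED_STAND }

        static PhysicalState detect(double hingeAngleDegrees, boolean oneScreenFaceDownHorizontal) {
            if (hingeAngleDegrees < 10) {
                return PhysicalState.CLOSED;         // screens back-to-back
            } else if (hingeAngleDegrees > 170) {
                return PhysicalState.OPEN;           // screens in roughly the same plane
            } else if (oneScreenFaceDownHorizontal) {
                return PhysicalState.MODIFIED_STAND; // one screen flat on a surface
            } else {
                return PhysicalState.STAND;          // screens angled, edges horizontal
            }
        }

        public static void main(String[] args) {
            System.out.println(detect(180, false)); // OPEN
            System.out.println(detect(90, true));   // MODIFIED_STAND
        }
    }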
As can be appreciated, the description of the apparatus 100 is for illustrative purposes only, and the embodiments are not limited to the specific mechanical features of FIGS. 1A-1J and described above. In other embodiments, the device 100 may include additional features, including one or more additional buttons, slots, display areas, hinges, and/or locking mechanisms. Additionally, in embodiments, the above-described features may be located in different parts of the device 100 and still provide similar functionality. Accordingly, FIGS. 1A-1J and the description provided above are not limiting.
Hardware characteristics:
fig. 2 illustrates components of a device 100 in accordance with embodiments of the present disclosure. In general, the device 100 includes a primary screen 104 and a secondary screen 108. While the primary screen 104 and its components are normally enabled in both the opened and closed positions or states, the secondary screen 108 and its components are normally enabled in the opened state but disabled in the closed state. However, even in the closed state, a user- or application-triggered interrupt (e.g., in response to a phone application or camera application operation) may, by appropriate command, flip which screen is active, or disable the primary screen 104 and enable the secondary screen 108. Each screen 104, 108 may be touch sensitive and may include different operative areas. For example, a first operative area within each touch-sensitive screen 104 and 108 may comprise a touch-sensitive display 110, 114. In general, the touch-sensitive displays 110, 114 may comprise full-color touch-sensitive displays. A second area within each touch-sensitive screen 104 and 108 may comprise a gesture capture region 120, 124. The gesture capture region 120, 124 may comprise an area or region outside of the touch-sensitive display 110, 114 area that is capable of receiving input, for example in the form of gestures provided by a user. However, the gesture capture regions 120, 124 do not include pixels that can perform a display function or capability.
A third region of the touch-sensitive screens 104 and 108 may comprise a configurable area 112, 116. The configurable area 112, 116 is capable of receiving input and has display or limited display capabilities. In embodiments, the configurable area 112, 116 may present different input options to the user. For example, the configurable area 112, 116 may display buttons or other related items. Moreover, whether any buttons are displayed within the configurable area 112, 116 of a touch-sensitive screen 104 or 108, and the identity of any displayed buttons, may be determined by the context in which the device 100 is used and/or operated. In an exemplary embodiment, the touch-sensitive screens 104 and 108 comprise liquid crystal display devices extending across at least those regions of the touch-sensitive screens 104 and 108 that are capable of providing visual output to a user, and capacitive input matrices over those regions of the touch-sensitive screens 104 and 108 that are capable of receiving input from the user.
One or more display controllers 216a, 216b may be provided to control the operation of the touch sensitive screens 104 and 108, including input (touch sensing) and output (display) functions. In the exemplary embodiment illustrated in FIG. 2, a separate touch screen controller 216a or 216b is provided for each touch screen 104 and 108. According to an alternative embodiment, a common or shared touchscreen controller 216 may be used to control each of the included touch sensitive screens 104 and 108. According to yet another embodiment, the functionality of the touchscreen controller 216 may be incorporated into other components, such as the processor 204.
The processor 204 may include a general purpose programmable processor or controller for executing applications or instructions. In accordance with at least some embodiments, processor 204 may include multiple processor cores, and/or execute multiple virtual processors. According to yet another embodiment, the processor 204 may include multiple physical processors. As particular examples, the processor 204 may include a specially configured Application Specific Integrated Circuit (ASIC) or other integrated circuit, a digital signal processor controller, a hardwired electronic or logic circuit, a programmable logic device or gate array, a special purpose computer, or the like. In general, the processor 204 functions to execute program code or instructions that perform various functions of the apparatus 100.
The communication device 100 may also include memory 208 for use in connection with the execution of applications or instructions by the processor 204, and for the temporary or long-term storage of program instructions and/or data. By way of example, the memory 208 may comprise RAM, DRAM, SDRAM, or other solid-state memory. Alternatively or additionally, data storage 212 may also be provided. Like the memory 208, the data storage 212 may comprise one or more solid-state memory devices. Alternatively or additionally, the data storage 212 may comprise a hard disk drive or other random access memory.
To support communication functions or capabilities, the device 100 may include a cellular telephone module 228. By way of example, the cellular telephone module 228 may include a GSM, CDMA, FDMA and/or analog cellular telephone transceiver capable of supporting voice, multimedia and/or data transmission via a cellular network. Alternatively or additionally, device 100 may include additional or other wireless communication modules 232. Other wireless communication modules 232 may include, by way of example, Wi-Fi, BLUETOOTH (TM), WiMax, infrared, or other wireless communication links. The cellular phone module 228 and the other wireless communication module 232 may each be associated with a shared or dedicated antenna 224.
A port interface 252 may be included. The port interface 252 may include proprietary or universal ports to support the interconnection of the device 100 with other devices or components, such as a dock, which may or may not include capabilities additional to or different from those integral to the device 100. In addition to supporting an exchange of communication signals between the device 100 and another device or component, the docking port 136 and/or port interface 252 may support the supply of power to or from the device 100. The port interface 252 may also comprise an intelligent element that includes a docking module for controlling communications or other interactions between the device 100 and a connected device or component.
An input/output module 248 and associated ports may be included to support communications over wired networks or links, for example with other communication devices, server devices, and/or peripheral devices. Examples of the input/output module 248 include an Ethernet port, a Universal Serial Bus (USB) port, an Institute of Electrical and Electronics Engineers (IEEE) 1394 port, or other interface.
An audio input/output interface/device 244 may be included to provide analog audio to interconnected speakers or other devices, and to receive analog audio from a connected microphone or other device. As one example, audio input/output interface/device 244 may include an associated microphone and analog-to-digital converter. Alternatively or additionally, the device 100 may include an integrated audio input/output device 256 and/or audio jack for interconnection with an external speaker or microphone. For example, an integrated speaker and integrated microphone may be provided to support close talking or speakerphone operation.
Hardware buttons 158 may be included, for example, for use in connection with certain control operations. Examples include a master power switch, volume control, etc., as described in conjunction with FIGS. 1A through 1J. One or more image capture interfaces/devices 240, such as a camera, may be included for capturing still and/or video images. Alternatively or additionally, the image capture interface/device 240 may include a scanner or code reader. The image capture interface/device 240 may include or be associated with additional elements, such as a flash or other light source.
The device 100 may also include a Global Positioning System (GPS) receiver 236. In accordance with embodiments of the present invention, the GPS receiver 236 may further comprise a GPS module capable of providing absolute location information to other components of the device 100. An accelerometer 176 may also be included. For example, in connection with the display of information to a user and/or other functions, a signal from the accelerometer 176 may be used to determine an orientation and/or format in which to display that information to the user.
Embodiments of the present invention may also include one or more position sensors 172. The position sensor 172 may provide a signal indicative of the position of the touch sensitive screens 104 and 108 relative to each other. This information may be provided as input, for example, to a user interface application to determine the mode of operation, characteristics of the touch sensitive display 110, 114, and/or other device 100 operations. By way of example, the screen position sensor 172 can include a series of Hall effect sensors, a plurality of position switches, optical switches, Wheatstone bridges, potentiometers, or other arrangement capable of providing a signal indicative of a plurality of relative positions at which the touch screen is located.
Communication between the various components of device 100 may be carried by one or more buses 222. Additionally, power may be provided to the components of the apparatus 100 from a power supply and/or power control module 260. For example, the power control module 260 may include a battery, an AC-DC converter, power control logic, and/or a port for an external power source to interconnect the device 100 to a power source.
State of the apparatus
Fig. 3A and 3B show illustrative states of the device 100. Although a number of illustrative states are shown, together with transitions from a first state to a second state, it should be appreciated that the illustrative state diagram may not encompass all possible states and/or all possible transitions from a first state to a second state. As illustrated in FIG. 3A, the various arrows between the states (with each state illustrated by a circle) represent a physical change that occurs to the device 100, which is detected by one or more of hardware and software, the detection triggering one or more hardware and/or software interrupts that are used to control and/or manage one or more functions of the device 100.
As illustrated in fig. 3A, there are twelve exemplary "physical" states: closed 304, transition 308 (or opening transitional state), stand 312, modified stand 316, open 320, inbound/outbound call or communication 324, image/video capture 328, transition 332 (or closing transitional state), landscape 340, docked 336, docked 344, and landscape 348. Next to each illustrative state is a representation of the physical state of the device 100, with the exception of states 324 and 328, which are generally symbolized by the international icon for a telephone and the icon for a camera, respectively.
In state 304, the device is in the closed state, with the device 100 generally oriented in the portrait direction and the primary screen 104 and the secondary screen 108 back-to-back in different planes (see fig. 1H). From the closed state, the device 100 may enter, for example, a docked state 336, in which the device 100 is coupled with a docking station, docking cable, or otherwise docked to or associated with one or more other devices or peripherals, or a landscape state 340, in which the device 100 is generally oriented with the primary screen 104 facing the user and the primary screen 104 and the secondary screen 108 back-to-back.
In the closed state, the device may also move to a transitional state in which the device remains closed but the display is moved from one screen 104 to the other screen 108 based on a user input, e.g., a double tap on the screen 110, 114. Yet another embodiment includes a double-sided state. In the double-sided state, the device remains closed, but a single application displays at least one window on both the first display 110 and the second display 114. The windows shown on the first and second displays 110, 114 may be the same or different based on the application and the state of that application. For example, while capturing an image with a camera, the device may display a viewfinder on the first display 110 and a preview of the photo subject (full screen and mirrored left-to-right) on the second display 114.
State 308, the transition state from the closed state 304 to the semi-open or stand state 312, shows the device 100 opening, with the primary screen 104 and the secondary screen 108 being rotated about a point on an axis coincident with the hinge. Upon entering the stand state 312, the primary screen 104 and the secondary screen 108 are separated from one another so that, for example, the device 100 can sit in a stand-like configuration on a surface.
In state 316, known as the modified stand position, device 100 has primary screen 104 and secondary screen 108 in a similar relative relationship to each other as stand state 312, with the difference that either primary screen 104 or secondary screen 108 is on the surface, as shown.
State 320 is an open state in which the primary screen 104 and the secondary screen 108 are generally in the same plane. From the open state, the device 100 may transition to the docked state 344 or the open landscape state 348. In the open state 320, the primary screen 104 and the secondary screen 108 are generally in a similar portrait orientation, while in the landscape state 348, the primary screen 104 and the secondary screen 108 are generally in a similar landscape orientation.
State 324 is illustrative of a communication state, such as when device 100 is answering or placing an incoming or outgoing call, respectively. Although not illustrated for clarity, it should be appreciated that the device 100 may transition from any of the states illustrated in fig. 3 to the incoming/outgoing phone state 324. In a similar manner, the image/video capture state 328 may be entered from any other state of fig. 3, as the image/video capture state 328 allows the device 100 to take one or more images via a camera and/or take video with the video capture device 240.
Transition state 332 illustratively shows the primary screen 104 and the secondary screen 108 being closed upon one another as the device 100 enters, for example, the closed state 304.
FIG. 3B illustrates, with reference to the key, the inputs that are received to detect a transition from a first state to a second state. In FIG. 3B, various combinations of states are shown, with, in general, a portion of the columns directed toward a portrait state 352 and a landscape state 356, and a portion of the rows directed toward a portrait state 360 and a landscape state 364.
In fig. 3B, the key indicates that "H" represents input from one or more Hall effect sensors, "A" represents input from one or more accelerometers, "T" represents input from a timer, "P" represents a communication trigger input, and "I" represents an image and/or video capture request input. Thus, in the center portion 376 of the chart, an input or combination of inputs is shown that indicates how the device 100 detects a transition from a first physical state to a second physical state.
For purposes of discussion, in the center portion 376 of the chart, the inputs that are received enable detection of a transition from, for example, the portrait open state to the landscape stand state, shown in bold as "HAT". For this exemplary transition from the portrait open state to the landscape stand state, a Hall effect sensor input ("H"), an accelerometer input ("A"), and a timer input ("T") may be needed. The timer input may be derived, for example, from a clock associated with the processor.
In addition to the portrait and landscape states, a docked state 368 is also shown, which is triggered based on the receipt of a docking signal 372. As discussed above in relation to FIG. 3A, the docking signal may be triggered by the association of the device 100 with one or more other devices 100, accessories, peripherals, smart docks, or the like.
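The key of FIG. 3B can be read as a set of required inputs per transition: a transition is recognized only when every keyed input for that transition has been received. The sketch below illustrates that reading for the portrait-open to landscape-stand example ("HAT"); the class, enum, and method names are hypothetical, and the sketch is offered only as one possible interpretation of the chart.

    // Hedged sketch of combining the keyed inputs of FIG. 3B. Hypothetical names.
    import java.util.EnumSet;
    import java.util.Set;

    final class TransitionDetector {
        enum Input { H /* Hall effect */, A /* accelerometer */, T /* timer */,
                     P /* communication trigger */, I /* image/video capture request */ }

        // Example from the text: portrait-open -> landscape-stand requires "HAT".
        private static final Set<Input> PORTRAIT_OPEN_TO_LANDSCAPE_STAND =
                EnumSet.of(Input.H, Input.A, Input.T);

        static boolean transitionDetected(Set<Input> received) {
            return received.containsAll(PORTRAIT_OPEN_TO_LANDSCAPE_STAND);
        }

        public static void main(String[] args) {
            System.out.println(transitionDetected(EnumSet.of(Input.H, Input.A)));          // false
            System.out.println(transitionDetected(EnumSet.of(Input.H, Input.A, Input.T))); // true
        }
    }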
User interaction
Fig. 4A-4H are various illustrations of gesture inputs that may be recognized by the screens 104, 108. The gesture may be performed not only by a body part of the user, such as a finger, but also by other means, such as a stylus, which may be sensed by a contact sensing portion of the screen 104, 108. In general, gestures are interpreted differently based on where the gesture is performed (either directly on the display 110, 114 or in the gesture capture region 120, 124). For example, a gesture in the display 110, 114 may be directed to a desktop or application, while a gesture in the gesture capture area 120, 124 may be interpreted for the system.
Referring to fig. 4A-4H, a first gesture, a touch (tap) gesture 420, is a substantially stationary contact on the screen 104, 108 held for a selected length of time. Circle 428 represents a touch or other contact type received at a particular location of a contact-sensing portion of the screen. The circle 428 may include a border 432 whose thickness indicates the length of time that the contact is held substantially stationary at the contact location. For example, a tap 420 (or short press) has a thinner border 432a than the border 432b of a long press 424 (or normal press). The long press 424 may involve a contact that remains substantially stationary on the screen for a longer time than the tap 420. As will be appreciated, differently defined gestures may be registered depending on the length of time that the touch remains stationary before the contact ceases or moves on the screen.
Referring to FIG. 4C, a drag gesture 400 on the screen 104, 108 is an initial contact (represented by circle 428) with contact movement 436 in a selected direction. The initial contact 428 may remain stationary on the screen 104, 108 for a certain amount of time, represented by the border 432. A drag gesture typically requires the user to contact an icon, window, or other displayed image at a first location, and then move the contact in the drag direction to a new, second location desired for the selected displayed image. The contact movement need not be in a straight line, but may follow any path of movement, so long as the contact is substantially continuous from the first to the second location.
Referring to FIG. 4D, a flick gesture 404 on the screen 104, 108 is an initial contact (represented by circle 428) with truncated contact movement 436 (relative to the drag gesture) in a selected direction. In an embodiment, the flick has a higher exit velocity for the last movement of the gesture compared with the drag gesture. The flick gesture may, for instance, be a finger snap following the initial contact. In contrast to the drag gesture, a flick gesture generally does not require continuous contact with the screen 104, 108 from a first location of the displayed image to a predetermined second location. The flick gesture moves the contacted displayed image to a predetermined second location in the direction of the flick gesture. Although both gestures can generally move a displayed image from a first location to a second location, the duration and travel of the contact on the screen are typically shorter for a flick than for a drag gesture.
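The single-contact gestures described so far differ in contact duration, travel, and exit velocity (tap versus long press, drag versus flick). The following sketch classifies a contact along those lines; the numeric thresholds are invented for the illustration and are not taken from the patent.

    // Illustrative classification of the single-contact gestures described above.
    final class GestureClassifier {
        enum Gesture { TAP, LONG_PRESS, DRAG, FLICK }

        static Gesture classify(long contactMillis, double travelPx, double exitVelocityPxPerSec) {
            if (travelPx < 10) {                       // contact stayed essentially stationary
                return contactMillis < 500 ? Gesture.TAP : Gesture.LONG_PRESS;
            }
            // Moving contact: a flick has a short, truncated movement with a high
            // exit velocity; a drag is a longer, continuous movement to a target.
            return exitVelocityPxPerSec > 1_000 ? Gesture.FLICK : Gesture.DRAG;
        }

        public static void main(String[] args) {
            System.out.println(classify(200, 2, 0));      // TAP
            System.out.println(classify(1500, 3, 0));     // LONG_PRESS
            System.out.println(classify(800, 400, 300));  // DRAG
            System.out.println(classify(150, 120, 2500)); // FLICK
        }
    }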
Referring to fig. 4E, a pinch gesture 408 on the screen 104, 108 is depicted. The pinch gesture 408 may begin with a first contact 428a to the screen 104, 108 by, for example, a first finger, and a second contact 428b to the screen 104, 108 by, for example, a second finger. The first and second contacts 428a, b may be detected by a common contact-sensing portion of a common screen 104, 108, by different contact-sensing portions of a common screen 104 or 108, or by different contact-sensing portions of different screens. The first contact 428a is held for a first amount of time, represented by the border 432a, and the second contact 428b is held for a second amount of time, represented by the border 432b. The first and second amounts of time are generally substantially the same, and the first and second contacts 428a, b generally occur substantially simultaneously. The first and second contacts 428a, b also generally include corresponding first and second contact movements 436a, b, respectively. The first and second contact movements 436a, b are generally in opposing directions. Stated another way, the first contact movement 436a is toward the second contact 436b, and the second contact movement 436b is toward the first contact 436a. Stated more simply, the pinch gesture 408 may be accomplished by a user's fingers touching the screen 104, 108 in a pinching motion.
Referring to FIG. 4F, a spread gesture 410 on the screens 104, 108 is depicted. The spread gesture 410 may begin with a first contact 428a to the screen 104, 108 by, for example, a first finger and a second contact 428b to the screen 104, 108 by, for example, a second finger. The first and second contacts 428a, b may be detected by a common contact sensing portion of both screens 104, 108, by different contact sensing portions of a common screen 104, 108, or by different contact sensing portions of different screens. The first contact 428a is held for a first amount of time, represented by boundary 432a, and the second contact 428b is held for a second amount of time, represented by boundary 432b. The first and second amounts of time are generally substantially the same, and the first and second contacts 428a, b generally occur substantially simultaneously. The first and second contacts 428a, b also generally include corresponding first and second contact movements 436a, b, respectively. The first and second contact movements 436a, b are generally in opposing directions; in other words, the first and second contact movements 436a, b move away from the first and second contacts 428a, b. Stated more simply, the spread gesture 410 may be accomplished by the user's fingers touching the screen 104, 108 in a spreading motion.
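The preceding paragraphs distinguish gestures by how long a contact is held, how far and how fast it travels, and, for two-contact gestures, whether the contacts converge or diverge. The following Java sketch illustrates one way such a classifier could be structured; the class names, thresholds, and sample representation are assumptions made for illustration only and are not taken from the patent.

```java
import java.util.List;

// Illustrative gesture classifier; all thresholds and names are hypothetical.
final class GestureClassifier {

    enum Kind { TAP, LONG_PRESS, DRAG, FLICK, PINCH, SPREAD, UNKNOWN }

    // One contact sample: a location plus a timestamp.
    record Sample(float x, float y, long timeMs) {}

    private static final long  LONG_PRESS_MS   = 500;   // assumed hold threshold
    private static final float DRAG_MIN_PX     = 30f;   // assumed movement threshold
    private static final float FLICK_MIN_PX_MS = 1.0f;  // assumed exit-velocity threshold

    // Single-contact gestures: tap, long press, drag, flick.
    // A track with movement is assumed to contain at least two samples.
    Kind classifySingle(List<Sample> track) {
        Sample first = track.get(0), last = track.get(track.size() - 1);
        float dist = distance(first, last);
        long held = last.timeMs() - first.timeMs();
        if (dist < DRAG_MIN_PX) {
            return held >= LONG_PRESS_MS ? Kind.LONG_PRESS : Kind.TAP;
        }
        // Exit velocity over the final segment separates a flick from a drag.
        Sample prev = track.get(track.size() - 2);
        float exitVelocity = distance(prev, last) / Math.max(1, last.timeMs() - prev.timeMs());
        return exitVelocity >= FLICK_MIN_PX_MS ? Kind.FLICK : Kind.DRAG;
    }

    // Two-contact gestures: pinch (contacts converge) or spread (contacts diverge).
    Kind classifyDual(List<Sample> a, List<Sample> b) {
        float startGap = distance(a.get(0), b.get(0));
        float endGap   = distance(a.get(a.size() - 1), b.get(b.size() - 1));
        if (endGap < startGap) return Kind.PINCH;
        if (endGap > startGap) return Kind.SPREAD;
        return Kind.UNKNOWN;
    }

    private static float distance(Sample p, Sample q) {
        return (float) Math.hypot(p.x() - q.x(), p.y() - q.y());
    }
}
```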
The above gestures may be combined in any manner, such as those shown in FIGS. 4G and 4H, to produce a determined functional result. For example, in FIG. 4G, a tap gesture 420 is combined with a drag or flick gesture 412 in a direction away from the tap gesture 420. In FIG. 4H, a tap gesture 420 is combined with a drag or flick gesture 412 in a direction toward the tap gesture 420.
The functional result of receiving a gesture may vary depending on a number of factors, including the state of the device 100, the display 110, 114 or screen 104, 108, the context associated with the gesture, or the sensed location of the gesture. The state of the device commonly refers to one or more of the configuration of the device 100, the display orientation, and user and other inputs received by the device 100. The context commonly refers to one or more of the particular application(s) selected by the gesture and the portion(s) of the application currently executing, whether the application is a single-screen or multi-screen application, and whether the application is a multi-screen application displaying one or more windows in one or more screens or in one or more stacks. The sensed location of the gesture commonly refers to whether the sensed set of gesture location coordinates is on the touch-sensitive display 110, 114 or the gesture capture region 120, 124, whether the sensed set of gesture location coordinates is associated with a common or different display or screen 104, 108, and/or which portion of the gesture capture region contains the sensed set of gesture location coordinates.
When received by the touch-sensitive display 110, 114, a tap may be used, for example, to select an icon to begin or terminate execution of the corresponding application, to maximize or minimize a window, to reorder the windows in a stack, and to provide user input, such as via a keyboard display or other displayed image. When received by the touch-sensitive display 110, 114, a drag may be used, for example, to reposition an icon or window to a desired location within a display, to reorder a stack on a display, or to span both displays (such that the selected window occupies a portion of each display simultaneously). When received by the touch-sensitive display 110, 114 or the gesture capture region 120, 124, a flick may be used to reposition a window from a first display to a second display or to span both displays (such that the selected window occupies a portion of each display simultaneously). Unlike the drag gesture, however, the flick gesture is generally not used to move a displayed image to a specific user-selected location, but rather to a default location that is not configurable by the user.
When received by the touch-sensitive display 110, 114 or the gesture capture region 120, 124, a pinch gesture may be used to maximize or otherwise increase the displayed area or size of a window (typically when received entirely by a common display), to switch the window displayed at the top of each display's stack to the top of the other display's stack (typically when received by different displays or screens), or to display an application manager (a "pop-up window" that displays the windows in the stacks). When received by the touch-sensitive display 110, 114 or the gesture capture region 120, 124, a spread gesture may be used to minimize or otherwise reduce the displayed area or size of a window, to switch the window displayed at the top of each display's stack to the top of the other display's stack (typically when received by different displays or screens), or to display an application manager (typically when received by an off-screen gesture capture region on the same or a different screen).
When received by a common display capture region in a common display or screen 104, 108, the combined gestures of FIG. 4G may be used to hold a first window stack position constant for the display receiving the gesture while reordering a second window stack to include the window in the display receiving the gesture. When received by different display capture regions in a common display or screen 104, 108 or in different displays or screens, the combined gestures of FIG. 4H may be used to hold a first window stack position constant for the display receiving the tap portion of the gesture while reordering a second window stack to include the window in the display receiving the flick or drag gesture. Although specific gestures and gesture capture regions in the preceding examples have been associated with corresponding sets of functional results, it should be appreciated that these associations can be redefined in any manner to produce differing associations between gestures and/or gesture capture regions and/or functional results.
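Because the functional result depends on the gesture, where it was sensed, and the device context, the mapping above can be pictured as a small dispatch table. The sketch below is purely illustrative; the enum values and rules encode only the example associations described in the preceding paragraphs, and all names are assumptions.

```java
// Hypothetical dispatch from a gesture plus its sensed location to a functional result.
final class GestureDispatcher {

    enum Gesture  { TAP, DRAG, FLICK, PINCH, SPREAD }
    enum Location { TOUCH_DISPLAY, GESTURE_CAPTURE_REGION }
    enum Result   { SELECT_OR_TOGGLE, REPOSITION_WINDOW, MOVE_TO_OTHER_DISPLAY,
                    MAXIMIZE_WINDOW, MINIMIZE_WINDOW, SHOW_APP_MANAGER, NONE }

    Result dispatch(Gesture g, Location where, boolean receivedEntirelyByCommonDisplay) {
        return switch (g) {
            case TAP    -> where == Location.TOUCH_DISPLAY ? Result.SELECT_OR_TOGGLE : Result.NONE;
            case DRAG   -> Result.REPOSITION_WINDOW;       // user-selected target position
            case FLICK  -> Result.MOVE_TO_OTHER_DISPLAY;   // default, non-configurable position
            case PINCH  -> receivedEntirelyByCommonDisplay
                               ? Result.MAXIMIZE_WINDOW : Result.SHOW_APP_MANAGER;
            case SPREAD -> receivedEntirelyByCommonDisplay
                               ? Result.MINIMIZE_WINDOW : Result.SHOW_APP_MANAGER;
        };
    }
}
```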
Firmware and software
The memory 508 may store and the processor 504 may execute one or more software components. These components may include at least one operating system (OS) 516a and/or 516b, a framework 520, and/or one or more applications 564a and/or 564b from an application store 560. The processor 504 may receive inputs from drivers 512, as previously described in connection with FIG. 2. The OS 516 may be any software, comprised of programs and data, that manages computer hardware resources and provides common services for the execution of various applications 564. The OS 516 may be any operating system and, in at least some embodiments, is dedicated to mobile devices, including, but not limited to, Linux, ANDROID (TM), iPhone OS (iOS (TM)), WINDOWS PHONE 7 (TM), and the like. The OS 516 is operable to provide functionality to the phone by executing one or more operations, as described herein.
The applications 564 may be any higher-level software that performs particular functionality for the user. The applications 564 may include programs such as email clients, web browsers, texting applications, games, media players, office suites, and the like. The applications 564 may be stored in the application store 560, which may represent any memory or data storage, and the management software associated therewith, in which the applications 564 are stored. Once executed, the applications 564 may run in a different area of the memory 508.
The framework 520 may be any software or data that allows the multiple tasks running on the device to interact. In embodiments, at least portions of the framework 520 and the discrete components described hereinafter may be considered part of the OS 516 or of an application 564. However, these portions will be described as part of the framework 520, although those components are not so limited. The framework 520 can include, but is not limited to, a multiple display management (MDM) module 524, a surface cache module 528, a window management module 532, an input management module 536, a task management module 540, a display controller, one or more frame buffers 548, a task stack 552, one or more window stacks 550 (which are logical arrangements of windows and/or desktops in a display area), and/or an event buffer 556.
The MDM module 524 includes one or more modules operable to manage the display of applications or other data on the screens of the device. An embodiment of the MDM module 524 is described in conjunction with FIG. 5B. In embodiments, the MDM module 524 receives inputs from the OS 516, the drivers 512, and the applications 564. These inputs assist the MDM module 524 in determining how to configure and allocate the displays according to the application's preferences and requirements as well as the user's actions. Once a display configuration is determined, the MDM module 524 can bind the applications 564 to the display configuration. The configuration may then be provided to one or more other components to generate the display.
The surface cache module 528 includes any memory or storage, and the software associated therewith, to store or cache one or more images from the display screens. Each display screen may have associated with it a series of active and non-active windows (or other display objects, such as a desktop display). An active window (or other display object) is one that is currently being displayed. A non-active window (or other display object) was opened and/or displayed at some time but is now located "behind" the active window (or other display object). To enhance the user experience, before a window (or other display object) is covered by another active window (or other display object), a "screen shot" of its last generated image can be stored. The surface cache module 528 may be operable to store the last active image of a window (or other display object) not currently displayed. Thus, the surface cache module 528 stores the images of non-active windows (or other display objects) in a data store (not shown).
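As a rough illustration of the caching behavior described above, the following sketch keeps the last rendered image of each window that becomes non-active. The class, its least-recently-used policy, and the snapshot representation are assumptions for illustration, not the framework's actual implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative surface cache: retains the last rendered image ("screen shot") of each
// window that is covered, so it can be shown again quickly if the window is revealed.
final class SurfaceCache {

    // Opaque stand-in for a captured image of a window.
    record Snapshot(int windowId, byte[] pixels, long capturedAtMs) {}

    private final int capacity;
    private final Map<Integer, Snapshot> snapshots;

    SurfaceCache(int capacity) {
        this.capacity = capacity;
        // Access-ordered map gives a simple least-recently-used eviction policy.
        this.snapshots = new LinkedHashMap<Integer, Snapshot>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Snapshot> eldest) {
                return size() > SurfaceCache.this.capacity;
            }
        };
    }

    // Called just before a window is covered by another active window.
    void storeLastActiveImage(Snapshot s) {
        snapshots.put(s.windowId(), s);
    }

    // Returns the cached image for a non-active window, or null if none is cached.
    Snapshot lookup(int windowId) {
        return snapshots.get(windowId);
    }
}
```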
In embodiments, the window management module 532 is operable to manage the active or non-active windows (or other display objects) on each of the screens. The window management module 532 determines when a window (or other display object) is active or non-active, based on information from the MDM module 524, the OS 516, or other components. The window management module 532 then puts the non-visible window (or other display object) in a "non-active state" and, in conjunction with the task management module 540, suspends the application's operation. Further, the window management module 532 may assign a screen identifier to the window (or other display object) or manage one or more other items of data associated with the window (or other display object). The window management module 532 may also provide the stored information to the application 564, the task management module 540, or other components interacting with or associated with the window (or other display object).
The input management module 536 is operable to manage events that occur with the device. An event is any input into the window environment, for example, a user interface interaction with a user. The input management module 536 receives the events and logically stores the events in an event buffer 556. Events can include such user interface interactions as a "down event," which occurs when the screen 104, 108 receives a touch signal from a user, a "move event," which occurs when the screen 104, 108 determines that a user's finger is moving across the screen, an "up event," which occurs when the screen 104, 108 determines that the user has stopped touching the screen 104, 108, and the like. These events are received, stored, and forwarded to other modules by the input management module 536.
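A minimal sketch of such an event buffer is given below, assuming a simple queue of down/move/up events; the names and fields are illustrative only and are not the framework's actual interfaces.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative input management sketch: user-interface events are logged into an
// event buffer for other modules to consume.
final class InputManager {

    enum EventType { DOWN, MOVE, UP }   // touch begins, finger moves, touch ends

    record Event(EventType type, int screenId, float x, float y, long timeMs) {}

    private final Deque<Event> eventBuffer = new ArrayDeque<>();

    // Called by the screen driver when a touch signal is received.
    void onEvent(Event e) {
        eventBuffer.addLast(e);          // logically store the event
    }

    // Other modules (e.g., a gesture module) poll events in arrival order.
    Event nextEvent() {
        return eventBuffer.pollFirst();  // null when the buffer is empty
    }
}
```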
A task may be an application component that provides a screen with which users can interact in order to do something, such as dial the phone, take a photo, send an email, or view a map. Each task may be given a window in which to draw a user interface. The window typically fills the display 110, 114 but may be smaller than the display 110, 114 and float on top of other windows. An application usually consists of multiple tasks that are loosely bound to each other. Typically, one task in an application is designated as the "main" task, which is presented to the user when the application is launched for the first time. Each task may then start another task in order to perform a different operation.
The task management module 540 is operable to manage the operation of one or more applications 564 that may be executed by the device. Thus, the task management module 540 can receive signals to execute an application stored in the application store 560. The task management module 540 may then instantiate one or more tasks or components of the application 564 to begin operation of the application 564. Further, the task management module 540 may suspend the application 564 based on user interface changes. Suspending the application 564 may preserve application data in memory but may limit or stop the application 564's access to processor cycles. Once the application becomes active again, the task management module 540 can again provide access to the processor.
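The suspend/resume behavior described above could be sketched as follows; the state model, class names, and data-preservation mechanism are assumptions made for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative task management sketch: launching, suspending, and resuming
// application tasks while preserving their data.
final class TaskManager {

    enum State { RUNNING, SUSPENDED, STOPPED }

    static final class Task {
        final String applicationName;
        State state = State.STOPPED;
        Object savedData;                // application data preserved while suspended

        Task(String applicationName) { this.applicationName = applicationName; }
    }

    private final Map<String, Task> tasks = new HashMap<>();

    // Instantiate a task for an application and begin its operation.
    Task launch(String applicationName) {
        Task t = tasks.computeIfAbsent(applicationName, Task::new);
        t.state = State.RUNNING;
        return t;
    }

    // Suspend on a user-interface change: keep the data, withdraw processor access.
    void suspend(Task t, Object dataToSave) {
        t.savedData = dataToSave;
        t.state = State.SUSPENDED;
    }

    // Resume when the application becomes active again.
    void resume(Task t) {
        t.state = State.RUNNING;
    }
}
```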
The display controller 544 is operable to render and output the displays for the multi-screen device. In embodiments, the display controller 544 creates and/or manages one or more frame buffers 548. A frame buffer 548 can be a display output that drives a display from a portion of memory containing a complete frame of display data. In embodiments, the display controller 544 manages one or more frame buffers. One frame buffer may be a composite frame buffer that can represent the entire display area of both screens. This composite frame buffer can appear as a single frame to the OS 516. The display controller 544 can sub-divide this composite frame buffer as required for use by each of the displays 110, 114. Thus, by using the display controller 544, the device 100 can have multiple screen displays without changing the underlying software of the OS 516.
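The composite frame buffer idea can be illustrated with a small sketch that renders into one memory region and slices it per display. The side-by-side layout, pixel format, and names below are assumptions for illustration, not the actual display controller implementation.

```java
// Illustrative composite frame buffer: one memory region representing the whole
// display area of both screens, subdivided per physical display.
final class CompositeFrameBuffer {

    private final int width;      // combined width of both displays, in pixels
    private final int height;     // display height, in pixels
    private final int[] pixels;   // one complete frame of display data

    CompositeFrameBuffer(int singleDisplayWidth, int height) {
        this.width  = 2 * singleDisplayWidth;   // assumes side-by-side displays
        this.height = height;
        this.pixels = new int[this.width * height];
    }

    // The OS renders into the composite buffer as if it were a single frame.
    void setPixel(int x, int y, int argb) {
        pixels[y * width + x] = argb;
    }

    // The display controller subdivides the composite frame for each display:
    // displayIndex 0 drives the primary screen, 1 drives the secondary screen.
    int[] sliceForDisplay(int displayIndex) {
        int half = width / 2;
        int[] slice = new int[half * height];
        for (int y = 0; y < height; y++) {
            System.arraycopy(pixels, y * width + displayIndex * half, slice, y * half, half);
        }
        return slice;
    }
}
```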
The application manager 562 may be a service that provides a presentation layer for a window environment. Thus, the application manager 562 provides a graphical model for rendering by the window management module 556. Likewise, the desktop 566 provides a presentation layer for the application store 560. Thus, the desktop provides a graphical model of the surface with selectable application icons for the applications 564 in the application store 560, which can be provided to the window management module 556 for rendering.
Figure 5B illustrates an embodiment of the MDM module 524. The MDM module 524 is operable to determine the state of the environment for the device, including, but not limited to, the orientation of the device, which applications 564 are executing, how the applications 564 are to be displayed, what actions the user is conducting, the tasks being displayed, and the like. To configure the display, the MDM module 524 interprets these environmental factors and determines a display configuration, as described in conjunction with FIGS. 6A-6J. Then, the MDM module 524 can bind the applications 564 or other device components to the displays. The configuration may then be sent to the display controller 544 and/or the OS 516 to generate the display. The MDM module 524 can include one or more of, but is not limited to, a display configuration module 568, a preferences module 572, a device state module 574, a gesture module 576, a requirements module 580, an event module 584, and/or a binding module 588.
The display configuration module 568 determines the layout for the display. In embodiments, the display configuration module 568 can determine the environmental factors. The environmental factors may be received from one or more other modules of the MDM module 524 or from other sources. The display configuration module 568 can then determine, from the list of factors, the best configuration for the display. Some embodiments of the possible configurations and the factors associated therewith are described in conjunction with FIGS. 6A-6F.
The preferences Module 572 is operable to determine display preferences for the application 564 or other components. For example, an application may have a preference for single display or dual display. The preferences module 572 can determine or receive preferences for applications and store the preferences. As the device configuration changes, the preferences may be rechecked to determine if a better display configuration may be achieved for the application 564.
The device state module 574 is operable to determine or receive the state of the device. The state of the device may be as described in conjunction with FIGS. 3A and 3B. The display configuration module 568 can use the state of the device to determine the configuration for the display. As such, the device state module 574 may receive inputs and interpret the state of the device. The state information is then provided to the display configuration module 568.
The gesture Module 576 is operable to determine whether the user is performing any operations on the user interface. Thus, the gesture Module 576 can receive task information from either the task stack 552 or the input management module 536. These gestures may be as defined in connection with fig. 4A through 4H. For example, moving a window causes the display to present a series of display frames that illustrate the movement of the window. The gesture Module 576 can receive and interpret gestures associated with such user interface interactions. Information about the user gesture is then sent to the task management module 540 to modify the display binding of the task.
Similar to the preferences module 572, the requirements module 580 may be operable to determine display requirements for an application 564 or other component. An application can have a set display requirement that must be observed. Some applications require a particular display orientation. For example, the application "Angry Birds" can only be displayed in landscape orientation. This type of display requirement can be determined or received by the requirements module 580. As the orientation of the device changes, the requirements module 580 can reassert the display requirements for the application 564. The display configuration module 568 can generate a display configuration that is in accordance with the application display requirements, as provided by the requirements module 580.
Similar to the gesture module 576, the event module 584 is operable to determine one or more events occurring with an application or other component that can affect the user interface. Thus, the event module 584 can receive event information either from the event buffer 556 or from the task management module 540. These events can change how tasks are bound to the displays. For example, an email application receiving an email can cause the display to render the new message in a secondary screen. The event module 584 can receive and interpret the events associated with such application execution. Information about the events can then be sent to the display configuration module 568 to modify the configuration of the display.
The binding module 588 is operable to bind the applications 564 or other components to the configuration determined by the display configuration module 568. A binding associates, in memory, the display configuration for each application with the display and mode of the application. Thus, the binding module 588 can associate an application with a display configuration for the application (e.g., landscape, portrait, multi-screen, etc.). The binding module 588 may then assign a display identifier to the display. The display identifier associates the application with a particular screen of the device. This binding is then stored and provided to the display controller 544, the OS 516, or other components to properly render the display. The binding is dynamic and can change or be updated based on configuration changes associated with events, gestures, state changes, application preferences or requirements, and the like.
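A minimal sketch of such a binding record is given below; the mode values and field names are assumptions for illustration and do not reflect the binding module's actual data model.

```java
// Illustrative binding: associates an application with a display configuration
// (mode) and a display identifier, and allows dynamic rebinding.
final class DisplayBinding {

    enum Mode { PORTRAIT, LANDSCAPE, MULTI_SCREEN }

    private final String applicationName;
    private Mode mode;
    private int displayIdentifier;   // ties the application to a specific screen

    DisplayBinding(String applicationName, Mode mode, int displayIdentifier) {
        this.applicationName = applicationName;
        this.mode = mode;
        this.displayIdentifier = displayIdentifier;
    }

    // Bindings are dynamic: they may be updated on events, gestures, or state changes.
    void rebind(Mode newMode, int newDisplayIdentifier) {
        this.mode = newMode;
        this.displayIdentifier = newDisplayIdentifier;
    }

    @Override public String toString() {
        return applicationName + " -> display " + displayIdentifier + " (" + mode + ")";
    }
}
```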
User interface configuration
Referring now to fig. 6A-J, various types of output configurations that may be produced by the apparatus 100 will be described below.
Fig. 6A and 6B depict two different output configurations of the apparatus 100 in a first state. Specifically, FIG. 6A depicts the device 100 in a closed portrait state 304, wherein data is displayed on the primary screen 104. In this example, the apparatus 100 displays data via the touch sensitive display 110 in the first portrait configuration 604. As can be appreciated, the first portrait configuration 604 may be only a desktop or operating system home screen. Alternatively, there may be one or more windows in the portrait orientation while the device 100 is displaying data in the first portrait configuration 604.
Fig. 6B depicts the device 100 still in the closed portrait state 304, but instead data is displayed on the secondary screen 108. In this example, the apparatus 100 displays data via the touch sensitive display 114 in the second portrait configuration 608.
It may be possible to display similar or different data in either the first or second portrait configuration 604, 608. It may also be possible to transition between the first portrait configuration 604 and the second portrait configuration 608 by providing the device 100 with a user gesture (e.g., a double-tap gesture), a menu selection, or other means. Other suitable gestures may also be used to transition between configurations. Further, a transition of the device 100 from the first or second portrait configuration 604, 608 to any other configuration described herein may also be possible, depending on the state to which the device 100 is moved.
An alternative output configuration may be provided by the device 100 in the second state. In particular, FIG. 6C depicts a third portrait configuration in which data is displayed simultaneously on both the primary screen 104 and the secondary screen 108. The third portrait configuration may be referred to as a dual portrait (PD) output configuration. In the PD output configuration, the touch-sensitive display 110 of the primary screen 104 depicts data in the first portrait configuration 604, while the touch-sensitive display 114 of the secondary screen 108 depicts data in the second portrait configuration 608. Simultaneous presentation of the first portrait configuration 604 and the second portrait configuration 608 may occur when the device 100 is in the open portrait state 320. In this configuration, the device 100 may display one application window in one display 110 or 114, two application windows (one in each display 110 and 114), one application window and one desktop, or one desktop. Other configurations are possible. It should be appreciated that a transition of the device 100 from the simultaneous display of configurations 604, 608 to any other configuration described herein may also be possible, depending on the state to which the device 100 is moved. Furthermore, while in this state, the displayed application may preferably place the device in a dual-sided mode, in which both displays remain active to display different windows of the same application. For example, a camera application may display a viewfinder and controls on one side, while the other side displays a mirrored preview that can be seen by the photo subjects. Games involving simultaneous play by two players may likewise make use of the dual-sided mode.
Fig. 6D and 6E depict two further output configurations of the apparatus 100 in the third state. In particular, FIG. 6D depicts the device 100 in a closed landscape state 340, where data is displayed on the primary screen 104. In this example, the apparatus 100 displays data via the touch-sensitive display 110 in a first landscape configuration 612. Like other configurations described herein, the first landscape configuration 612 may display a desktop, a home screen, one or more windows displaying application data, and the like.
Fig. 6E depicts the device 100 still in the closed landscape state 340, but with data instead displayed on the secondary screen 108. In this example, the device 100 displays data via the touch-sensitive display 114 in a second landscape configuration 616. It may be possible to display similar or different data in either the first or second landscape configuration 612, 616. It may also be possible to transition between the first landscape configuration 612 and the second landscape configuration 616 by providing the device 100 with one or both of a bend-and-flick gesture or a flick-or-swipe gesture. Other suitable gestures may also be used to transition between configurations. Further, a transition of the device 100 from the first or second landscape configuration 612, 616 to any other configuration described herein may also be possible, depending on the state to which the device 100 is moved.
FIG. 6F depicts a third landscape configuration in which data is displayed simultaneously on both the primary screen 104 and the secondary screen 108. The third landscape configuration may be referred to as a dual landscape (LD) output configuration. In the LD output configuration, the touch-sensitive display 110 of the primary screen 104 depicts data in the first landscape configuration 612, while the touch-sensitive display 114 of the secondary screen 108 depicts data in the second landscape configuration 616. Simultaneous presentation of the first landscape configuration 612 and the second landscape configuration 616 may occur when the device 100 is in the open landscape state 348. It should be appreciated that a transition of the device 100 from the simultaneous display of configurations 612, 616 to any other configuration described herein may also be possible, depending on the state to which the device 100 is moved.
Fig. 6G and 6H depict two views of the device 100 in yet another state. Specifically, the device 100 is depicted in the cradle state 312. FIG. 6G shows that a first cradle output configuration 618 may be displayed on the touch-sensitive display 110. FIG. 6H shows that a second cradle output configuration 620 may be displayed on the touch-sensitive display 114. The device 100 may be configured to depict either the first cradle output configuration 618 or the second cradle output configuration 620 individually. Alternatively, both cradle output configurations 618, 620 may be presented simultaneously. In some embodiments, the cradle output configurations 618, 620 may be similar or identical to the landscape output configurations 612, 616. The device 100 may also be configured to display one or both of the cradle output configurations 618, 620 while in the modified cradle state 316. It should be appreciated that employing both cradle output configurations 618, 620 simultaneously may facilitate two-player games (e.g., battleship, chess, checkers, etc.), multi-user conferences (where two or more users share the same device 100), and other applications. As can be appreciated, a transition of the device 100 from the display of one or both configurations 618, 620 to any other configuration described herein may also be possible, depending on the state to which the device 100 is moved.
FIG. 6I depicts yet another output configuration that may be provided when the device 100 is in the open portrait state 320. In particular, the device 100 may be configured to present a single continuous image across both touch sensitive displays 110, 114 in a portrait configuration, referred to herein as a portrait maximum (PMax) configuration 624. In this configuration, data (e.g., a single image, application, window, icon, video, etc.) may be divided and displayed partially on one of the touch sensitive displays while other portions of the data are displayed on the other touch sensitive display. The PMax configuration 624 can facilitate a larger display and/or better resolution for displaying a particular image on the device 100. Similar to the other output configurations, a transition of the device 100 from the PMax configuration 624 to any of the other output configurations described herein may exist based on the state to which the device 100 is moved.
FIG. 6J depicts yet another output configuration that may be provided when the device 100 is in the open landscape state 348. In particular, the device 100 may be configured to present a single continuous image across both touch sensitive displays 110, 114 in a landscape configuration, referred to herein as a landscape maximum (LMax) configuration 628. In this configuration, data (e.g., a single image, application, window, icon, video, etc.) may be divided and displayed partially on one of the touch sensitive displays while other portions of the data are displayed on the other touch sensitive display. The LMax configuration 628 may facilitate a larger display and/or better resolution for displaying a particular image on the device 100. Similar to other output configurations, a transition of the device 100 from the LMax configuration 628 to any other output configuration described herein may exist based on the state to which the device 100 is moved.
The device 100 manages desktops and/or windows with at least one window stack 1700, 1728, as shown in FIGS. 10A and 10B. A window stack 1700, 1728 is a logical arrangement of active and/or inactive windows for a multi-screen device. For example, the window stack 1700, 1728 may be logically similar to a stack of cards, where one or more windows or desktops are arranged in order, as shown in FIGS. 10A and 10B. An active window is a window that is currently being displayed on at least one of the touch-sensitive displays 110, 114. For example, windows 1704 and 1708 are active windows and are displayed on touch-sensitive displays 110 and 114, respectively. An inactive window is a window that was opened and displayed but is now located "behind" an active window and is not being displayed. In embodiments, an inactive window may be for an application that is suspended, and thus the window no longer displays active content. For example, windows 1712, 1716, 1720, and 1724 are inactive windows.
The window stacks 1700, 1728 may have various arrangements or organizational structures. In the embodiment shown in FIG. 10A, the device 100 includes a first stack 1760 associated with the first touch-sensitive display 110 and a second stack 1764 associated with the second touch-sensitive display 114. Thus, each touch-sensitive display 110, 114 can have an associated window stack 1760, 1764. These two window stacks 1760, 1764 may have different numbers of windows arranged in the respective stacks 1760, 1764. Further, the two window stacks 1760, 1764 may also be identified differently and managed separately. Thus, the first window stack 1760 can be arranged in order from a first window 1704 to a next window 1720, to a last window 1724, and finally to a desktop 1722, which, in embodiments, is at the "bottom" of the window stack 1760. In embodiments, the desktop 1722 is not always at the "bottom," as application windows can be arranged in the window stack below the desktop 1722, and the desktop 1722 can be brought to the "top" of the stack, over other windows, when the desktop is displayed. Likewise, the second stack 1764 can be arranged from a first window 1708 to a next window 1712, to a last window 1716, and finally to a desktop 1718, which, in embodiments, is a single desktop area that, together with desktop 1722, lies under all the windows in both window stack 1760 and window stack 1764. A logical data structure for managing the two window stacks 1760, 1764 may be as described in conjunction with FIG. 11.
Another arrangement for a window stack 1728 is shown in FIG. 10B. In this embodiment, there is a single window stack 1728 for both touch-sensitive displays 110, 114. Thus, the window stack 1728 is arranged from a desktop 1758 to a first window 1744, to a last window 1756. A window can be arranged in a position among all the windows without being associated with a specific touch-sensitive display 110, 114. In this embodiment, every window occupies a position in a single ordering of windows. Further, at least one window is determined to be active. For example, a single window may be rendered in two portions 1732 and 1736 that are displayed on the first touch-sensitive display 110 and the second touch-sensitive display 114, respectively. The single window may occupy only a single position in the window stack 1728 although it is displayed on both displays 110, 114.
Fig. 10C through 10E illustrate yet another arrangement of a window stack 1760. The window stack 1760 is shown in three views. In FIG. 10C, the top of the window stack 1760 is shown. Two sides of the window stack 1760 are shown in FIGS. 10D and 10E. In this embodiment, the window stack 1760 resembles a stack of bricks. The windows are stacked on each other. Looking down from the top of the window stack 1760 in FIG. 10C, only the top-most windows of the window stack 1760 are seen in different portions of the composite display 1764. The composite display 1764 represents a logical model for the entire display area of the device 100, which can include touch-sensitive display 110 and touch-sensitive display 114. A desktop 1786 or a window can occupy part or all of the composite display 1764.
In the illustrated embodiment, the desktop 1786 is the lowest display or "brick" in the window stack 1760. Thereupon, window 1 1782, window 2 1782, window 3 1768, and window 4 1770 are layered. Window 1 1782, window 3 1768, window 2 1782, and window 4 1770 only occupy a portion of the composite display 1764. Thus, another part of the stack 1760 includes window 8 1774 and windows 5 through 7 shown in section 1790. In practice, only the top window in any portion of the composite display 1764 is rendered and displayed. Thus, as shown in the top view in FIG. 10C, window 4 1770, window 8 1774, and window 3 1768 are displayed as being at the top of the display in different portions of the window stack 1760. A window can be dimensioned to occupy only a portion of the composite display 1760, thereby "revealing" windows lower in the window stack 1760. For example, window 3 1768 is lower in the stack than both window 4 1770 and window 8 1774 but is still displayed. A logical data structure for managing the window stack may be as described in conjunction with FIG. 11.
When a new window is opened, the newly activated window is generally positioned at the top of the stack. However, where and how the window is positioned within the stack may be a function of the orientation of the device 100, the context of what programs, functions, software, etc. are being executed on the device 100, how the stack is positioned when the new window is opened, and the like. To insert the window in the stack, the position of the window in the stack is determined, and the touch-sensitive display 110, 114 with which the window is associated may also be determined. With this information, a logical data structure for the window can be created and stored. When the user interface or another event or task changes the arrangement of windows, the window stack can be changed to reflect the change in arrangement. It should be noted that these same concepts described above may be used to manage the one or more desktops of the device 100.
FIG. 11 illustrates a logical data structure 1800 for managing the arrangement of windows or desktops. The logical data structure 1800 can be any data structure used to store data, whether an object, record, file, etc. The logical data structure 1800 can be stored in any type of database or data storage system, regardless of protocol or standard. In embodiments, the logical data structure 1800 includes one or more portions, fields, attributes, etc. that store data in a logical arrangement allowing easy storage and retrieval of the information. Hereinafter, these one or more portions, fields, attributes, etc. shall be referred to simply as fields. The fields can store data for a window identifier 1804, a size 1808, a stack position identifier 1812, a display identifier 1816, and/or an active indicator 1820. Each window in a window stack can have an associated logical data structure 1800. While only a single logical data structure 1800 is shown in FIG. 11, there may be more or fewer logical data structures 1800 used with a window stack (based on the number of windows or desktops in the stack), as represented by ellipses 1824. Further, there may be more or fewer fields than those shown in FIG. 11, as represented by ellipses 1828.
The window identifier 1804 can include any identifier (ID) that uniquely identifies the associated window in relation to other windows in the window stack. The window identifier 1804 can be a globally unique identifier (GUID), a numeric ID, an alphanumeric ID, or another type of identifier. In embodiments, the window identifier 1804 can be one, two, or any number of digits based on the number of windows that can be opened. In alternative embodiments, the size of the window identifier 1804 may change based on the number of windows that are open. While the window is open, the window identifier 1804 may be static and remain unchanged.
The size 1808 may include the dimensions of the window in the composite display 1760. For example, the size 1808 may include coordinates for two or more corners of the window, or may include one coordinate together with the width and height of the window. The size 1808 can delineate what portion of the composite display 1760 the window may occupy, which may be the entire composite display 1760 or only a portion of the composite display 1760. For example, window 4 1770 may have a size 1808 indicating that the window 1770 will occupy only part of the display area of the composite display 1760, as shown in FIGS. 10C through 10E. The size 1808 may change as the window is moved or inserted into the window stack.
The stack position identifier 1812 may be any identifier that can identify the position of the window in the stack, or it may be inferred from the window's control record within a data structure, such as a list or a stack. The stack position identifier 1812 can be a GUID, a numeric ID, an alphanumeric ID, or another type of identifier. Each window or desktop can include a stack position identifier 1812. For example, as shown in FIG. 10A, window 1 1704 in stack 1 1760 can have a stack position identifier 1812 of 1, identifying that window 1704 is the first window in the stack 1760 and is the active window. Likewise, window 6 1724 can have a stack position identifier 1812 of 3, representing that window 1724 is the third window in the stack 1760. Window 2 1708 can also have a stack position identifier 1812 of 1, representing that window 1708 is the first window in the second stack 1764. As shown in FIG. 10B, window 1 1744 can have a stack position identifier 1812 of 1, window 3, rendered in portions 1732 and 1736, can have a stack position identifier 1812 of 3, and window 6 1756 can have a stack position identifier 1812 of 6. Thus, depending on the type of stack, the stack position identifier 1812 can represent the position of the window in the stack.
The display identifier 1816 may identify that the window or desktop is associated with a particular display, such as the first display 110 or the second display 114, or a composite display 1760 of the two displays. Although this display identifier 1816 is not required for a multi-stack system, as shown in FIG. 10A, the display identifier 1816 may indicate whether a window in the continuous stack of FIG. 10B is displayed on a particular display. Thus, in fig. 10B, window 3 may have two portions 1732 and 1736. The first portion 1732 may have a display identifier 1816 for the first display, while the second portion 1736 may have a display identifier 1816 for the second display 114. However, in alternative embodiments, a window may have two display identifiers 1816 indicating that the window may be displayed on both displays 110, 114, or one display identifier 1816 identifying the composite display. In another alternative embodiment, the window may have a single display identifier 1816 to indicate that the window is displayed on both displays 110, 114.
Similar to the display identifier 1816, the active indicator 1820 may not be necessary for the dual stack system of FIG. 10A because the window in stack position 1 is active and displayed. In the system of FIG. 10B, the active indicator 1820 may indicate which window/windows in the stack are being displayed. Thus, in fig. 10B, window 3 may have two portions 1732 and 1736. The first portion 1732 may have an active indicator 1820 while the second portion 1736 may also have an active indicator 1820. However, in alternative embodiments, window 3 may have a single active indicator 1820. The active indicator 1820 may be a simple flag or bit indicating that the window is active or displayed.
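Taken together, the fields described for the logical data structure 1800 can be pictured as a small record, as in the sketch below. The types, defaults, and layout are assumptions; only the field names and reference numerals are drawn from the description above.

```java
// Illustrative rendering of the logical data structure of FIG. 11.
final class WindowRecord {

    String windowId;        // window identifier 1804 (e.g., a GUID or numeric ID)
    int x, y;               // size 1808: one corner coordinate ...
    int width, height;      // ... plus width and height within the composite display
    int stackPosition;      // stack position identifier 1812 (1 = top of the stack)
    int displayId;          // display identifier 1816 (first, second, or composite display)
    boolean active;         // active indicator 1820: currently displayed or not

    WindowRecord(String windowId, int x, int y, int width, int height,
                 int stackPosition, int displayId, boolean active) {
        this.windowId = windowId;
        this.x = x; this.y = y;
        this.width = width; this.height = height;
        this.stackPosition = stackPosition;
        this.displayId = displayId;
        this.active = active;
    }
}
```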
FIG. 12 illustrates an embodiment of a method 1900 for creating a window stack. A general order for the steps of the method 1900 is shown in FIG. 12. Generally, the method 1900 starts with a start operation 1904 and ends with an end operation 1928. The method 1900 can include more or fewer steps, or the order of the steps can be arranged differently from that shown in FIG. 12. The method 1900 can be executed as a set of computer-executable instructions, executed by a computer system and encoded or stored on a computer-readable medium. Hereinafter, the method 1900 shall be explained with reference to the systems, components, modules, software, data structures, user interfaces, etc. described in conjunction with FIGS. 1-11.
The multi-screen device 100 can receive activation of a window, in step 1908. In embodiments, the multi-screen device 100 can receive activation of a window by receiving an input from the touch-sensitive display 110 or 114, the configurable area 112 or 116, the gesture capture region 120 or 124, or some other hardware sensor operable to receive user interface inputs. The task management module 540, executed by the processor, may receive the input. The task management module 540 can interpret the input as requesting execution of an application task that will open a window in the window stack.
In embodiments, the task management module 540 places the user interface interaction in the task stack 552 to be acted upon by the display configuration module 568 of the multi-display management module 524. Further, the task management module 540 waits for information from the multi-display management module 524 before sending instructions to the window management module 532 to create the window in the window stack.
Upon receiving the instruction from the task management module 540, the multi-display management module 524 determines, in step 1912, to which touch portion of the composite display 1760 the newly activated window should be associated. For example, window 4 1770 is associated with a portion of the composite display 1764. In embodiments, the device state module 574 of the multi-display management module 524 may determine how the device is oriented or in what state the device is, e.g., open, closed, portrait, etc. Further, the preferences module 572 and/or the requirements module 580 may determine how the window is to be displayed. The gesture module 576 may determine the user's intentions about how the window is to be opened based on the type of gesture and the location where the gesture is made.
The display configuration module 568 may use the input from these modules and evaluate the current window stack 1760 to determine the best place and the best size, based on a visibility algorithm, at which to open the window. Thus, the display configuration module 568 determines the best place to put the window at the top of the window stack 1760, in step 1916. In embodiments, the visibility algorithm determines, for all portions of the composite display, which windows are at the top of the stack. For example, the visibility algorithm determines that window 3 1768, window 4 1770, and window 8 1774 are at the top of the stack 1760, as viewed in FIGS. 10C through 10E. Upon determining where to open the window, the display configuration module 568 can assign a display identifier 816 and possibly a size 808 to the window. The display identifier 816 and the size 808 are then returned to the task management module 540. The task management module 540 may then assign the window a stack position identifier 812 indicating the window's position at the top of the window stack.
In embodiments, the task management module 540 sends the window stack information and instructions to render the window to the window management module 532. The window management module 532 and the task management module 540 can create the logical data structure 800, in step 1924. Both the task management module 540 and the window management module 532 may create and manage copies of the window stack. These copies of the window stack can be synchronized or kept similar through communications between the window management module 532 and the task management module 540. Thus, based on the information determined by the multi-display management module 524, the window management module 532 and the task management module 540 can assign a size 808, a stack position identifier 812 (e.g., window 1 1782, window 4 1770, etc.), a display identifier 816 (e.g., touch-sensitive display 1 110, touch-sensitive display 2 114, composite display identifier, etc.), and an active indicator 820, which is generally always set when the window is at the "top" of the stack. Both the window management module 532 and the task management module 540 may then store the logical data structure 800. Thereafter, the window management module 532 and the task management module 540 may manage the window stack and the logical data structure 800.
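The overall flow of the method, receiving a window activation, associating it with a display, and placing it at stack position 1 with a populated logical data structure, might be sketched as follows. The entry type repeats the fields of the earlier illustrative sketch, and the stack-renumbering policy and method names are assumptions made for this sketch only.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of inserting a newly activated window at the top of a window
// stack, loosely following the order of steps described for FIG. 12.
final class WindowStack {

    // Minimal stand-in for the logical data structure of one window.
    static final class Entry {
        final String windowId;
        int x, y, width, height;   // size within the composite display
        int stackPosition;         // 1 = top of the stack
        int displayId;             // display the window is associated with
        boolean active;            // currently displayed or not

        Entry(String windowId, int displayId, int x, int y, int width, int height) {
            this.windowId = windowId;
            this.displayId = displayId;
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    // Associate the window with a display, size it, and place it at the top of the stack.
    Entry openWindow(String windowId, int displayId, int x, int y, int width, int height) {
        // Existing windows move one position deeper and are marked non-active
        // (a single-display stack simplification for this sketch).
        for (Entry e : entries) {
            e.stackPosition += 1;
            e.active = false;
        }
        Entry top = new Entry(windowId, displayId, x, y, width, height);
        top.stackPosition = 1;
        top.active = true;
        entries.add(0, top);
        return top;
    }

    // The entry at stack position 1 is the active (displayed) window.
    Entry activeWindow() {
        return entries.isEmpty() ? null : entries.get(0);
    }
}
```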
FIG. 13 depicts a further window stack configuration. Windows 1, 2, 3, 4, 5, 6, 7, and 8 are depicted, whether they are from the same or different multi-screen or single-screen applications. The touch-sensitive display 110 currently has window 4 in the active display position, while the touch-sensitive display 114 currently has window 5 in the active display position. The stack for the touch-sensitive display 110 has, from top to bottom, window 4 in the active display position, with windows 3, 2, and 1 positioned behind it. The stack for the touch-sensitive display 114 has, from top to bottom, window 5 in the active display position, with windows 6, 7, and 8 positioned behind it.
Desktops D1, D2, D3, D4, D5, and D6 are positioned behind the window stacks. The desktops may be viewed as desktop stacks separate from the window stacks. In this manner, the touch-sensitive display 110 has a corresponding desktop stack including desktops D3, D2, and D1, with desktop D1 in the bottom 2300 stack position and desktop D3 in the top stack position and capable of being displayed along with window 4 (depending on window position and size, whether maximized or minimized). The touch-sensitive display 114 has a corresponding desktop stack including desktops D4, D5, and D6, with desktop D6 in the bottom 2304 stack position and desktop D4 in the top stack position and capable of being displayed along with window 5 (depending on window position and size, whether maximized or minimized). Conceptually, the desktops in this example can be viewed as a canvas divided into six sections, two of which can be displayed at any one time by the touch-sensitive displays 110, 114. This conceptual model is adhered to, in one configuration, when the device 100 is in the closed state. In that configuration, only one window and desktop stack (corresponding to the primary screen) is visible, while the other window and desktop stack is virtual; that is, it is maintained in memory but cannot be seen because the secondary screen is inactive.
The displayed image transition indicator, also referred to as a well, is displayed before a displayed image (e.g., a window or desktop) is moved from an originating to a target touch-sensitive display 110, 114. The displayed image transition indicator previews, for the user, the movement of the displayed image in response to the user gesture. For example, when a window movement gesture is received from the user, the transition indicator unfolds or slides out from behind the window (which is to be moved) and moves along the planned path to be traveled by the window, or toward the target touch-sensitive display, to the final window destination. The portion of the target touch-sensitive display to be occupied by the moved window is occupied by the transition indicator. After completion of the transition indicator movement, or at some other point of the transition indicator movement, the window is moved to occupy the portion of the target touch-sensitive display occupied by the transition indicator. In one configuration, the displayed image transition indicator moves at substantially the same speed as (or at a speed derived as a linear or other function of) the tracked user gesture that caused the displayed object to move. In one configuration, the displayed image transition indicator is used for multi-screen applications that are to be expanded across the two screens or touch-sensitive displays; in this configuration, no displayed image transition indicator is used to move a displayed image associated with a single-screen application. In that event, the actual application display image and output are moved without an accompanying transition indicator.
Typically, the transition indicator is a displayed image generally of similar size and shape to the displayed image to be moved, and it cannot receive or provide dynamic user input or output, respectively. Typically, though not necessarily, it is a substantially monochrome displayed image having an appearance different from the displayed image to be moved. The transition indicator may display a trademark or other branding image associated with a manufacturer, distributor, or retailer of the device 100. In other configurations, the transition indicator is user configurable. The user may select a color or set of colors, a pattern, a design, an icon, a photograph, or another image to be displayed as the transition indicator. The user may thus personalize the transition indicator to suit his or her preferences, whereby different users have different transition indicators on their respective devices 100. Further, the user may select one or more audible sounds to be played at one or more selected points along the transition indicator's travel to the target touch-sensitive display. For example, a user-selected sound may be played to announce the start of the transition indicator's movement, at an intermediate point along the transition indicator's travel path, and/or when the transition indicator is at its destination in the target touch-sensitive display. Additionally, the user may elect to disable the transition indicator, to resize the transition indicator so that it is smaller or larger than the displayed image being repositioned, and/or to re-time the transition indicator's movement so that it moves faster or slower than a default setting.
In one configuration, the well is available typically for multi-screen applications and not for single-screen applications. In one configuration, the transition indicator is invoked only when the displayed image is moved in response to a gesture received by the gesture capture region 120, 124. In one configuration, the transition indicator is invoked only when the displayed image is moved in response to a gesture received by the touch-sensitive display 110, 114. In other configurations, the transition indicator is associated only with certain displayed image movements or transitions.
Various examples will now be discussed with reference to fig. 7-8.
In FIG. 7A, the touch-sensitive displays 110, 114 are in the portrait display orientation and display window 1 and the second desktop D2, respectively. The gesture 700 is received by the gesture capture region 120, 124. Alternatively, the gesture 700 is received by the touch-sensitive display 110, 114. The gesture may be any suitable gesture, including, without limitation, those discussed above. By the gesture 700, the user indicates his or her command to move window 1 from the (originating) touch-sensitive display 110 to the (target) touch-sensitive display 114.
Referring to FIG. 7B, in response to receipt of the gesture, the transition indicator 704 begins a left-to-right or right-to-left movement (depending on the orientation of the device 100 and the touch-sensitive display, as described), typically from a point that appears to be behind window 1, and typically moving at the same rate of movement as the tracked user gesture. Typically, the transition indicator 704 is not in a displayed image stack previously associated with the (originating) touch-sensitive display 110. In other words, when the gesture is received, the transition indicator is not at an active or inactive display position of either touch-sensitive display 110 or 114. When and as the transition indicator 704 unfolds or moves, the seam 708 between the first and second touch-sensitive displays 110, 114 and their respective displayed images is darkened to show the transition indicator background. In other words, the transition indicator 704 moves outwardly from the seam 708 in a direction that ultimately covers, typically entirely, the touch-sensitive display 114 and the second desktop D2. As shown in FIGS. 7B and 7C, the transition indicator begins covering the second desktop D2.
In FIG. 7D, the transition indicator 704 has completely covered the second desktop D2 in the touch-sensitive display 114, while window 1 remains unchanged in the touch-sensitive display 110. In other words, window 1 is in the active display position of the touch-sensitive display 110, while the transition indicator 704 is in the active display position of the touch-sensitive display 114. The first and second desktops D1 and D2 are in inactive display positions of the touch-sensitive displays 110 and 114, respectively. In other configurations, the transition indicator covers only a portion of the target touch-sensitive display 114 before window movement is initiated.
When the transition indicator 704 has moved into and partially or completely occupies the target touch-sensitive display (in this example, touch-sensitive display 114), such that the previously displayed image (in this example, the second desktop D2) is partially or completely darkened or covered by the transition indicator 704, the first window 1 is moved to occupy the target touch-sensitive display 114, as shown progressively in FIGS. 7E and 7F. Until the selected placement of the transition indicator triggers the movement, the first window 1 continues to be the displayed image of the originating touch-sensitive display (in this example, touch-sensitive display 110). In other words, window 1 remains in the active display position of the touch-sensitive display 110 until the transition indicator completes its movement to the target touch-sensitive display 114. At that point, window 1 slides over to cover the transition indicator 704 and occupy the active display position of the touch-sensitive display 114, slowly uncovering the first desktop D1 in the touch-sensitive display 110. When window 1 completes its movement to the target touch-sensitive display 114, the first desktop D1 is in the active display position of the touch-sensitive display 110, and the second desktop D2 is in the inactive display position of the touch-sensitive display 114.
Fig. 8A-E illustrate the above-described steps for the device 100 in the landscape display orientation, in which the first window is being maximized to cover at least a portion of both the first and second touch-sensitive displays 110 and 114. In FIG. 8A, the touch-sensitive displays 110, 114 display window 1 and the second desktop D2, respectively. The gesture 700 is received by the gesture capture region 120, 124. Alternatively, the gesture 700 is received by the touch-sensitive display 110, 114. The gesture may be any suitable gesture, including, without limitation, those discussed above. By the gesture 700, the user indicates his or her command to move window 1 from the (originating) touch-sensitive display 110 to the (target) touch-sensitive display 114.
Referring to FIG. 8B, in response to receipt of the gesture, transition indicator 704 begins a top-to-bottom or bottom-to-top movement from a point that appears to be behind window 1 (depending on the orientation of device 100 and the touch-sensitive displays as described). When and as the transition indicator 704 expands or moves, the seam 708 between the first and second touch-sensitive displays 110, 114 and their respective displayed images is completely darkened to show the transition indicator background. As shown in FIG. 8B, the transition indicator begins to cover the viewable area of the second desktop D2.
In FIG. 8C, the transition indicator 704 has partially or completely covered the second desktop D2 on the touch-sensitive display 114 while window 1 remains unchanged on the touch-sensitive display 110. In other words, window 1 is in the active display position of touch-sensitive display 110 while transition indicator 704 is in the active display position of touch-sensitive display 114. The first and second desktops D1 and D2 are in inactive display positions of the touch-sensitive displays 110 and 114, respectively.
When the transition indicator 704 has moved and partially or fully occupies the target touch-sensitive display (in this example, touch-sensitive display 114), such that the previously displayed image (in this example, the second desktop D2) is completely darkened or covered by the transition indicator 704, the first window 1 expands to occupy both the target touch-sensitive display 114 and the originating touch-sensitive display 110, as shown progressively in FIGS. 8D and 8E. Until that time, the first window 1 continues to be the displayed image of the source touch-sensitive display (in this example, touch-sensitive display 110). In other words, window 1 remains in the active display position of the touch-sensitive display 110 until the transition indicator completes its movement to the target touch-sensitive display 114. At that time, window 1 slides to cover transition indicator 704 and occupies the active display positions of both the originating and target touch-sensitive displays 110 and 114.
In various examples, middleware 520, in particular one or more of the Multiple Display Management (MDM) class 524, the surface cache class 528, the window management class 532, the activity management class 536, and the application management class 540, individually or collectively, detects receipt of a user gesture (step 900 of FIG. 9) and determines that the received gesture commands movement of a displayed image, such as a window or desktop, to a target touch-sensitive display. In response, the middleware 520 causes movement of the transition indicator 704 from the originating touch-sensitive display to the target touch-sensitive display (step 1904). When the transition indicator 704 has covered a selected extent of the target touch-sensitive display, the middleware 520 moves the displayed image to cover the target touch-sensitive display (and the transition indicator). The logic ends at step 1912.
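A hedged sketch of the control flow just summarized appears below: detect that a gesture commands an image move, animate the transition indicator until it has covered the target display, then move the image. The class name, method names, gesture strings, and fixed animation step are assumptions made for illustration only; they are not the API of middleware 520 or its actual classes.

```java
public final class GestureFlowDemo {

    /** Fraction of the target display that the transition indicator has covered so far. */
    private double indicatorCoverage = 0.0;

    /** Step 1: decide whether the received gesture commands an image move. */
    boolean isMoveGesture(String gesture) {
        return "drag".equals(gesture) || "fling".equals(gesture);
    }

    /** Step 2: advance the transition indicator toward the target display. */
    void advanceIndicator(double step) {
        indicatorCoverage = Math.min(1.0, indicatorCoverage + step);
        System.out.printf("indicator covers %.0f%% of the target display%n",
                indicatorCoverage * 100);
    }

    /** Step 3: once the indicator has covered the selected extent, move the image. */
    void handle(String gesture, String image, String from, String to) {
        if (!isMoveGesture(gesture)) {
            return;                                   // gesture does not command a move
        }
        while (indicatorCoverage < 1.0) {
            advanceIndicator(0.25);                   // one animation tick (assumed step size)
        }
        System.out.printf("%s moves from display %s to display %s, covering the indicator%n",
                image, from, to);
    }

    public static void main(String[] args) {
        new GestureFlowDemo().handle("drag", "window-1", "110", "114");
    }
}
```

In a real implementation the animation ticks would be driven by the window manager's frame callbacks and the coverage threshold could be less than 100% (the "selected extent" above), but the ordering of the three steps is the point of the sketch.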
Exemplary systems and methods of the present disclosure have been described in relation to communication devices. However, to avoid unnecessarily obscuring the present disclosure, the above description omits many known structures and devices. This omission is not to be construed as a limitation of the scope of the claims. Specific details are set forth in order to provide an understanding of the present disclosure. However, it should be appreciated that the present disclosure may be practiced in various ways beyond the specific details set forth herein.
Moreover, while the exemplary aspects, embodiments, and/or configurations illustrated herein represent various components of a configured system, certain components of the system may be located remotely, at remote locations on a distributed network, such as a LAN and/or the Internet, or within a dedicated system. It will thus be appreciated that the components of the system may be incorporated into one or more devices, such as a communications device, or deployed on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the foregoing description, and for the sake of computational efficiency, that the components of the system may be arranged at any location within a distributed network of components without affecting the operation of the system. For example, the different components may be located in a switch, such as a PBX and media server, a gateway, in one or more communication devices, at one or more user premises, or some combination thereof. Likewise, one or more functional portions of the system may be distributed between the remote communication device and the associated computing device.
Further, it should be appreciated that the various links connecting the elements may be wired or wireless links, or any combination thereof, or any other known or later-developed element capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links may also be secure links and may be capable of communicating encrypted information. The transmission media used as links, for example, can be any suitable carrier of electrical signals, including coaxial cables, copper wire and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications.
Also, while flow diagrams have been discussed and illustrated with respect to a particular temporal sequence, it should be appreciated that changes, additions, and omissions to this sequence may occur without materially affecting the operation of the disclosed embodiments, configurations, and aspects.
A number of variations and modifications of the present disclosure may be used. It will be possible to provide some of the features of the present disclosure without providing others.
For example, in an alternative embodiment, the prior movement of the transition indicator previews movement of displayed images other than windows and desktops.
In other embodiments, the transition indicator previews maximization of a window from a single touch-sensitive display to cover both touch-sensitive displays.
In another alternative embodiment, the transition indicator covers the entire touch-sensitive display during a transition when the device 100 is closed and only the primary screen is active, and/or covers both touch-sensitive displays when the device 100 is open. The latter situation may occur, for example, when a window is maximized or opened to cover both touch-sensitive displays, or when a transition affects both the primary and secondary screens. The transition indicator may sweep across the touch-sensitive display from an edge when a window is maximized on the single touch-sensitive display of the closed device or across both touch-sensitive displays of the open device.
In other embodiments, the present disclosure is applicable to display image transitions other than window movement. In such a transition, the touch-sensitive display changes at least a portion of the displayed image. After the previous displayed image is removed and before the new displayed image is presented, the change is indicated by overlaying at least a portion of the display with the transition indicator.
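As a rough illustration of that generalized ordering, under the assumption of a hypothetical Display type with a simple "present" operation, the old image is covered by the transition indicator before the new image appears:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericTransitionDemo {

    /** A display that records every image it presents, most recent last. */
    static final class Display {
        String current = "old-image";
        final List<String> history = new ArrayList<>();
        void present(String image) { current = image; history.add(image); }
    }

    /** The old image is covered by the indicator before the new image is presented. */
    static void transition(Display display, String newImage) {
        display.present("transition-indicator");   // covers at least part of the display
        display.present(newImage);                 // new image replaces the indicator
    }

    public static void main(String[] args) {
        Display d = new Display();
        transition(d, "new-image");
        System.out.println("active: " + d.current + ", sequence: " + d.history);
    }
}
```

The same ordering applies whether the new image is a window, a desktop, or any other displayed image.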
In yet another embodiment, the disclosed systems and methods may be implemented in conjunction with a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, a special-purpose computer, any comparable means, or the like. In general, any device or apparatus capable of performing the methods illustrated herein may be used to implement the various aspects of this disclosure. Exemplary hardware that may be used for the disclosed embodiments, configurations, and aspects includes computers, handheld devices, telephones (e.g., cellular, Internet-enabled, digital, analog, hybrid, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, non-volatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to perform the methods described herein.
In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code usable on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement a system in accordance with this disclosure depends on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
In yet another embodiment, the disclosed methods may be implemented partially in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special-purpose computer, a microprocessor, or the like. In these instances, the disclosed systems and methods can be implemented as a program embedded on a personal computer, such as an applet, JAVA or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, as a system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
Although this disclosure describes components and functions implemented in connection with particular standards and protocols, the aspects, embodiments and/or configurations are not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are also present and are considered to be included in the present disclosure. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein may be periodically replaced by faster or more effective equivalents having substantially the same functionality. Such replacement standards and protocols having the same functions are considered equivalents included in this disclosure.
The present disclosure, in various aspects, embodiments, and/or configurations, includes components, methods, steps, systems and/or apparatus substantially as depicted and described herein, including various aspects, embodiments, configurations, sub-combinations, and/or subsets thereof. Those of skill in the art will understand how to make and use the disclosed aspects, embodiments, and/or configurations after understanding the present disclosure. The present disclosure, in various aspects, embodiments, and/or configurations, includes providing devices and steps in the absence of items not depicted and/or described herein or in various aspects, embodiments, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or steps, e.g., for improving performance, easing implementation, and/or reducing implementation costs.
The foregoing discussion has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. For example, in the foregoing detailed description, various features of the disclosure may be grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. Features of aspects, embodiments and/or configurations of the present disclosure may be incorporated into alternative aspects, embodiments and/or configurations to those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Moreover, although the description includes a description of one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

Claims (16)

1. A method for moving a window between multiple display devices, the method comprising:
there is provided a multi-display device including:
a first touch sensitive display, and
a second touch sensitive display;
receiving, by the first touch-sensitive display, a drag gesture indicating movement of the window from the first touch-sensitive display to the second touch-sensitive display; and
in response to receiving the drag gesture and prior to movement of the window to the second touch-sensitive display:
displaying, by a microprocessor, movement of at least a portion of a transition indicator from the first touch-sensitive display to the second touch-sensitive display, wherein the transition indicator is displayed to move to a selected location along a travel path of the window at a speed that is dependent on a speed of movement of the drag gesture to preview movement of the window,
continuing to display the window on the first touch-sensitive display while displaying movement of at least a portion of the transition indicator from the first touch-sensitive display to the second touch-sensitive display, and
moving, by the microprocessor, the window from the first touch sensitive display to the second touch sensitive display to overlay the transition indicator when display of the transition indicator reaches a predetermined point on the second touch sensitive display.
2. The method of claim 1, wherein the transition indicator is substantially the same size and substantially the same shape as the window.
3. The method of claim 1, wherein the transition indicator is unable to receive or provide dynamic user input or output, respectively, wherein the transition indicator has a different appearance than the window, and wherein the transition indicator covers only a portion of the second touch sensitive display before the window is moved to fully cover the second touch sensitive display.
4. The method of claim 1, wherein the transition indicator is not in a display image stack associated with the first touch sensitive display and second touch sensitive display when the gesture is received, wherein the window and the transition indicator are concurrently in an active display position of the first touch sensitive display and the second touch sensitive display, respectively, prior to initiating movement of the window, and wherein the transition indicator comprises a user-configured color, pattern, design, and/or photograph.
5. The method of claim 1, wherein the transition indicator has graphical affordance, wherein the window is controlled by a multi-display application, and wherein the transition indicator does not respond to user commands or requests during movement from the first touch sensitive display to the second touch sensitive display.
6. A non-transitory computer-readable storage medium having stored thereon instructions, which when executed by a processor, are configured to move a window by performing at least the following:
receiving, by a first region of a dual-screen communication device, a fling gesture indicating moving the window, wherein the fling gesture has a direction of movement and a speed of movement, wherein the dual-screen communication device comprises at least a first screen and a second screen, wherein the first screen comprises a first touch-sensitive display, wherein the second screen comprises a second touch-sensitive display, and wherein the fling gesture indicates moving the window from the first touch-sensitive display to the second touch-sensitive display;
displaying movement of a transition indicator from the first touch-sensitive display to the second touch-sensitive display, wherein the transition indicator is displayed to move to a selected location along a travel path of the window at a speed that depends on a speed of movement of the fling gesture to preview movement of the window;
continuing to display the window on the first touch-sensitive display while displaying movement of at least a portion of the transition indicator from the first touch-sensitive display to the second touch-sensitive display, and
moving the window from the first touch sensitive display to the second touch sensitive display to cover the transition indicator when the display of the transition indicator reaches a predetermined point on the second touch sensitive display.
7. The medium of claim 6, wherein the transition indicator is substantially the same size and substantially the same shape as the window.
8. The medium of claim 6, wherein the transition indicator is unable to receive or provide, respectively, dynamic user input or output, wherein the transition indicator has a different appearance than the window, and wherein the transition indicator covers only a portion of the second touch sensitive display before the window is moved to fully cover the second touch sensitive display.
9. The media of claim 6, wherein the transition indicator is not in a display image stack associated with the first touch sensitive display and the second touch sensitive display when the gesture is received, wherein the window and the transition indicator are concurrently in an active display position of the first touch sensitive display and the second touch sensitive display, respectively, prior to initiating movement of the window, and wherein the transition indicator comprises a user-configured color, pattern, design, and/or photograph.
10. The media of claim 8, wherein the transition indicator has graphical affordances, wherein the window is controlled by a multi-screen application, and wherein the transition indicator is not responsive to user commands or requests during movement from the first touch-sensitive display to the second touch-sensitive display.
11. A dual display communication device, comprising:
a first touch sensitive display operable to display a display image, wherein the display image is a window of an application;
a second touch sensitive display operable to display a display image;
middleware operable to perform at least one of the following:
receiving, by the first touch-sensitive display, a drag gesture indicating an expansion of a window from the first touch-sensitive display to cover at least a majority of the second touch-sensitive display, and
in response to receiving the drag gesture and prior to expansion of the window to the second touch-sensitive display:
displaying an expansion of a transition indicator to cover at least a majority of the second touch-sensitive display, wherein the transition indicator moves according to a direction and speed associated with the drag gesture, wherein the transition indicator moves to a selected location along a path of travel of the window to preview the expansion of the window,
continuing to display the window on the first touch-sensitive display while displaying the expansion of the transition indicator to cover at least a majority of the second touch-sensitive display, and
displaying an extension of the window to cover at least a portion of the second touch sensitive display when the transition indicator reaches a predetermined point on the second touch sensitive display.
12. The dual display communication device of claim 11, wherein the displayed image is a minimized window, wherein the drag gesture is received through a gesture capture area, wherein the window is maximized to cover at least a majority of the first touch sensitive display and second touch sensitive display.
13. The dual display communication device of claim 11, wherein the transition indicator is substantially the same size as the window and the same shape as the window.
14. The dual display communication device of claim 11, wherein the transition indicator is unable to receive or provide dynamic user input or output, respectively, and wherein the transition indicator has a different appearance than the window.
15. The dual display communication device of claim 11, wherein the transition indicator is not in a displayed image stack associated with the first touch sensitive display and second touch sensitive display when the drag gesture is received, wherein the window and the transition indicator are simultaneously in an active display position of the first touch sensitive display and the second touch sensitive display, respectively, prior to initiating expansion of the window, and wherein the transition indicator comprises a user configured color, pattern, design, and/or photograph.
16. The dual display communication device of claim 11, wherein the transition indicator has graphical affordances, wherein the window is controlled by a multi-display application, and wherein the transition indicator is not responsive to user commands or requests during expansion from the first touch sensitive display to the second touch sensitive display.
CN201810310376.0A 2011-09-01 2012-09-03 Method for moving window between multi-screen devices and dual-display communication device Active CN108228035B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US13/223,778 2011-09-01
US13/223,778 US20120081309A1 (en) 2010-10-01 2011-09-01 Displayed image transition indicator
US38911710A 2011-10-01 2011-10-01
US38908710A 2011-10-01 2011-10-01
US38900010A 2011-10-01 2011-10-01
CN201210458810.2A CN103116460B (en) 2011-09-01 2012-09-03 The method of moving window and dual screen communicator between multi-screen device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201210458810.2A Division CN103116460B (en) 2011-09-01 2012-09-03 The method of moving window and dual screen communicator between multi-screen device

Publications (2)

Publication Number Publication Date
CN108228035A CN108228035A (en) 2018-06-29
CN108228035B true CN108228035B (en) 2021-05-04

Family

ID=48428813

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810310376.0A Active CN108228035B (en) 2011-09-01 2012-09-03 Method for moving window between multi-screen devices and dual-display communication device
CN201210458810.2A Active CN103116460B (en) 2011-09-01 2012-09-03 The method of moving window and dual screen communicator between multi-screen device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201210458810.2A Active CN103116460B (en) 2011-09-01 2012-09-03 The method of moving window and dual screen communicator between multi-screen device

Country Status (1)

Country Link
CN (2) CN108228035B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10664162B2 (en) 2013-11-18 2020-05-26 Red Hat, Inc. Multiple display management
CN109375890B (en) * 2018-09-17 2022-12-09 维沃移动通信有限公司 Screen display method and multi-screen electronic equipment
US11157047B2 (en) * 2018-11-15 2021-10-26 Dell Products, L.P. Multi-form factor information handling system (IHS) with touch continuity across displays
CN110618769B (en) * 2019-08-22 2021-11-19 华为技术有限公司 Application window processing method and device
US20210216102A1 (en) * 2020-01-10 2021-07-15 Microsoft Technology Licensing, Llc Conditional windowing model for foldable computing devices

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994015276A1 (en) * 1992-12-23 1994-07-07 Taligent, Inc. Balloon help system
CN101827503A (en) * 2009-03-03 2010-09-08 Lg电子株式会社 Portable terminal
CN101847075A (en) * 2010-01-08 2010-09-29 宏碁股份有限公司 Multi-screen electronic device and image display method thereof
CN101957720A (en) * 2009-07-16 2011-01-26 索尼公司 Display device, display packing and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7739604B1 (en) * 2002-09-25 2010-06-15 Apple Inc. Method and apparatus for managing windows
US7176943B2 (en) * 2002-10-08 2007-02-13 Microsoft Corporation Intelligent windows bumping method and system
US20090278806A1 (en) * 2008-05-06 2009-11-12 Matias Gonzalo Duarte Extended touch-sensitive control area for electronic device
US8860632B2 (en) * 2008-09-08 2014-10-14 Qualcomm Incorporated Multi-panel device with configurable interface

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994015276A1 (en) * 1992-12-23 1994-07-07 Taligent, Inc. Balloon help system
CN101827503A (en) * 2009-03-03 2010-09-08 Lg电子株式会社 Portable terminal
CN101957720A (en) * 2009-07-16 2011-01-26 索尼公司 Display device, display packing and program
CN101847075A (en) * 2010-01-08 2010-09-29 宏碁股份有限公司 Multi-screen electronic device and image display method thereof

Also Published As

Publication number Publication date
CN103116460A (en) 2013-05-22
CN103116460B (en) 2018-05-04
CN108228035A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
US11537259B2 (en) Displayed image transition indicator
JP5980784B2 (en) Migrating application display between single and multiple displays
JP6073792B2 (en) Method and system for viewing stacked screen displays using gestures
JP5998146B2 (en) Explicit desktop by moving the logical display stack with gestures
US9122440B2 (en) User feedback to indicate transitions between open and closed states
CN108228035B (en) Method for moving window between multi-screen devices and dual-display communication device
JP6073793B2 (en) Desktop display simultaneously with device release

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant