CN117914979A - Electronic device and method with translating flexible display for automatic position change


Info

Publication number
CN117914979A
Authority
CN
China
Prior art keywords
electronic device
imager
blade assembly
document
processors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311312203.XA
Other languages
Chinese (zh)
Inventor
阿米特·库马尔·阿格拉沃尔
丹尼尔·P·格勒贝
马尔切洛·苏福
杰弗里·T·斯诺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Motorola Mobility LLC
Original Assignee
Motorola Mobility LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 18/088,705 (US 11838433 B1)
Application filed by Motorola Mobility LLC filed Critical Motorola Mobility LLC
Publication of CN117914979A publication Critical patent/CN117914979A/en
Pending legal-status Critical Current

Landscapes

  • Devices For Indicating Variable Information By Combining Individual Elements (AREA)

Abstract

The invention relates to an electronic device and method with a translating flexible display that automatically changes position. An electronic device includes a device housing, a forward-facing imager, one or more sensors, and a blade assembly carrying a blade. The blade assembly is slidably coupled to the device housing and is operable to slidably transition between an extended position in which the blade extends beyond an edge of the device housing, a retracted position in which a major surface of the blade abuts a major surface of the device housing, and a peek position in which the blade reveals the forward-facing imager. The electronic device includes one or more processors. The one or more processors cause the blade assembly to transition to the peek position in response to a document scanning application operating on the one or more processors invoking an image capture operation and the one or more sensors determining that the forward-facing imager is oriented toward a document.

Description

Electronic device and method with translating flexible display for automatic position change
Cross Reference to Related Applications
The present application claims priority to, and the benefit under 35 U.S.C. § 119(e) of, the following U.S. provisional applications: U.S. Provisional Application No. 63/416,925, filed October 17, 2022, and U.S. Provisional Application No. 63/419,994, filed October 27, 2022, each of which is incorporated by reference for all purposes.
Technical Field
The present disclosure relates generally to electronic devices, and more particularly to electronic devices having flexible displays.
Background
Portable electronic communication devices, particularly smartphones, have become ubiquitous. People around the world use such devices to stay in touch. These devices come in various mechanical configurations. The first configuration, known as a "bar-type" device, is generally rectangular, has a rigid form factor, and has a display disposed along a major face of the electronic device. In contrast, a "flip-type" device has a mechanical hinge that allows one housing to pivot relative to the other. A third type of electronic device is the "slider," in which one of two different device housings slides relative to the other.
Some consumers prefer bar-type devices, while others prefer flip-type devices. Still others prefer the slider type. The latter two types of devices are convenient because they are smaller in the closed position than in the open position, and thus easier to place in the pocket. While flip-type and slider-type devices are relatively mechanically simple, they tend to be bulky when in the closed position due to the need for two device housings. Accordingly, there is a need for an improved electronic device that not only provides a compact geometric form factor, but also allows for the use of a larger display surface area.
Disclosure of Invention
An electronic device according to the present invention includes: a device housing; a forward-facing imager; one or more sensors; a blade assembly carrying a blade, the blade assembly slidably coupled to the device housing and operable to slidably transition between an extended position, a retracted position, and a peek position, wherein in the extended position the blade extends beyond an edge of the device housing, in the retracted position a major surface of the blade abuts a major surface of the device housing, and in the peek position the blade reveals the forward-facing imager; and one or more processors; wherein the one or more processors cause the blade assembly to transition to the peek position in response to a document scanning application operating on the one or more processors invoking an image capture operation and the one or more sensors determining that the forward-facing imager is oriented toward a document.
A method in an electronic device according to the invention includes: detecting, by one or more processors, an invocation by a document scanning application operating on the one or more processors requesting service-mode use of an imager by the application; determining, by one or more sensors of the electronic device, whether a front side of a device housing of the electronic device or a rear side of the device housing faces a document; and causing, by the one or more processors, a translation mechanism to translate a blade assembly slidably coupled to the electronic device from a retracted position to a peek position, in which a forward-facing imager covered by the blade assembly in the retracted position is revealed, when the front side of the device housing faces the document.
An electronic device according to the present invention includes: a device housing; a blade assembly slidably coupled to the device housing and slidable between an extended position, a retracted position, and a peek position; a forward-facing imager that is covered when the blade assembly is in the extended position or the retracted position and revealed when the blade assembly is in the peek position; one or more sensors; and one or more processors operable with a translation mechanism to cause translation of the blade assembly about the device housing; wherein the one or more processors cause the translation mechanism to translate the blade assembly to the peek position when an imager invocation request is received from a document scanning application operating on the one or more processors and the one or more sensors determine that the forward-facing imager is oriented toward a document.
Drawings
FIG. 1 shows an illustrative electronic device in accordance with one or more embodiments of the present disclosure.
Fig. 2 shows one illustrative electronic device having a translating display moved to a first sliding position in which portions of the translating display extend distally away from a device housing of the electronic device.
Fig. 3 shows the illustrative electronic device of fig. 2 with the translating display moved to a second sliding position in which the translating display is wrapped around and abutted against the device housing of the electronic device.
Fig. 4 shows the electronic device of fig. 3 seen from the rear.
FIG. 5 shows the illustrative electronic device of FIG. 2 with the translating display moved to a third slide position, referred to as the "peek" position, exposing an image capture device that is positioned beneath the translating display when the translating display is in either the first slide position or the second slide position.
Fig. 6 shows one or more illustrative physical sensors suitable for use alone or in combination in an electronic device in accordance with one or more embodiments of the present disclosure.
FIG. 7 shows one or more illustrative context sensors suitable for use alone or in combination in an electronic device in accordance with one or more embodiments of the present disclosure.
Fig. 8 shows an exploded view of portions of an illustrative display assembly in accordance with one or more embodiments of the present disclosure.
Fig. 9 shows an exploded view of portions of an illustrative display assembly in accordance with one or more embodiments of the present disclosure.
Fig. 10 shows an exploded view of portions of an illustrative display assembly in accordance with one or more embodiments of the present disclosure.
FIG. 11 shows an illustrative display component in accordance with one or more embodiments of the present disclosure.
FIG. 12 shows one illustrative display assembly in an undeformed state.
Fig. 13 shows the illustrative display assembly of fig. 12 in a deformed state.
Fig. 14 shows the illustrative display assembly of fig. 12 in another deformed state, with an exploded view of the deformable portion of the display assembly shown in an enlarged view.
Fig. 15 shows a front elevation view of an illustrative electronic device with a blade assembly in an extended position in accordance with one or more embodiments of the present disclosure.
Fig. 16 shows a left side elevation view of an illustrative electronic device with a blade assembly in an extended position in accordance with one or more embodiments of the present disclosure.
Fig. 17 shows a rear elevation view of an illustrative electronic device with a blade assembly in an extended position in accordance with one or more embodiments of the present disclosure.
FIG. 18 shows a front elevation view of an illustrative electronic device with a blade assembly in a retracted position in accordance with one or more embodiments of the present disclosure.
FIG. 19 shows a left side elevation view of an illustrative electronic device with a blade assembly in a retracted position in accordance with one or more embodiments of the present disclosure.
FIG. 20 shows a rear elevation view of an illustrative electronic device with a blade assembly in a retracted position in accordance with one or more embodiments of the present disclosure.
Fig. 21 shows a front elevation view of an illustrative electronic device with a blade assembly in a peek position revealing a forward-facing image capture device in accordance with one or more embodiments of the present disclosure.
Fig. 22 shows a rear elevation view of an illustrative electronic device with a blade assembly in a peek position revealing a forward-facing image capture device in accordance with one or more embodiments of the present disclosure.
FIG. 23 shows one illustrative method in accordance with one or more embodiments of the present disclosure.
Fig. 24 illustrates one or more method steps in accordance with one or more embodiments of the present disclosure.
FIG. 25 shows another illustrative method in accordance with one or more embodiments of the present disclosure.
FIG. 26 shows yet another illustrative method in accordance with one or more embodiments of the present disclosure.
FIG. 27 shows yet another illustrative method in accordance with one or more embodiments of the present disclosure.
Fig. 28 illustrates one or more method steps in accordance with one or more embodiments of the present disclosure.
Fig. 29 illustrates various embodiments of the present disclosure.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve the understanding of the embodiments of the present disclosure.
Detailed Description
Before describing in detail embodiments that are in accordance with the present disclosure, it should be observed that the embodiments reside primarily in combinations of method steps and apparatus components related to a flexible display, automatically translatable about a single device housing between an extended position, a retracted position, and a peek position, that transitions to the peek position when the forward-facing imager needs to scan a document. Any process descriptions or blocks in a flowchart should be understood to represent modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process.
Alternative embodiments are included, and it will be apparent that the functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved. Accordingly, where appropriate, apparatus components and method steps have been represented by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. Moreover, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such methods and devices with minimal experimentation.
Embodiments of the present disclosure will now be described in detail. Referring to the drawings, like numbers indicate like parts throughout the views. As used in the description herein and throughout the claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise: the meaning of "a," "an," and "the" includes plural reference, and the meaning of "in" includes "in" and "on."
Relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. As used herein, components may be "operatively coupled" when information can be sent between them, even though there may be one or more intervening or intermediate components between or along the connection paths.
The terms "substantially," "about," or any other form thereof are defined as being approximately as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined as being within ten percent, within five percent in another embodiment, within one percent in another embodiment, and within five percent in another embodiment. The term coupled, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. In addition, reference numerals shown in parentheses herein indicate components shown in figures other than the presently discussed figures. For example, talking about a device (10) when discussing figure a will refer to an element 10 shown in a figure other than figure a.
Embodiments of the present disclosure provide an electronic device that includes a single device housing. In one or more embodiments, the flexible display is then incorporated into a "blade" assembly that is wrapped around the single device housing. In one or more embodiments, the blade assembly accomplishes this by way of a translation mechanism coupled to the single device housing.
The translation mechanism is operable to transition the blade assembly about the surface of the device housing between an extended position, in which the blade of the blade assembly extends distally from the device housing; a retracted position, in which the blade assembly abuts the device housing with the flexible display wrapped about the surface of the device housing; and a peek position between these positions, in which movement of the blade assembly reveals an image capture device situated beneath the blade assembly on the front of the single device housing.
For example, in one illustrative embodiment, the blade assembly slides around the single device housing such that the blade slides off the single device housing to change the overall length of the flexible display presented on the front of the electronic device. In other embodiments, the blade assembly may slide around the single device housing in the opposite direction to a retracted position in which the amounts of flexible display visible on the front side and the rear side of the electronic device are similar. Thus, in one or more embodiments, an electronic device includes a single device housing and a blade assembly coupled to both major surfaces of the single device housing and wrapped around at least one minor surface of the electronic device, with a translation mechanism positioned such that the blade assembly can slide around, and relative to, the single device housing between a retracted position, an extended position, and a peek position revealing a forward-facing image capture device.
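Although the disclosure defines these positions structurally, the sliding behavior reduces to a small state model. The following Kotlin sketch is illustrative only; every type and member name is an assumption, not anything defined by the patent.

// Hypothetical sketch of the three blade positions described above.
enum class BladePosition { EXTENDED, RETRACTED, PEEK }

class TranslationMechanism {
    var position: BladePosition = BladePosition.RETRACTED
        private set

    // Drive the actuator until the blade reaches the target position.
    fun translateTo(target: BladePosition) {
        // Motor control (drive screws, rack and pinion, or a dual-shaft
        // motor, as described later) would run here; this sketch only
        // records the resulting state.
        position = target
    }
}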
In one or more embodiments, a flexible display is coupled to the blade assembly. In one or more embodiments, the flexible display is also surrounded by a silicone bezel that is co-molded onto the blade substrate and protects the side edges of the flexible display. In one or more embodiments, the blade assembly engages at least one rotor of the translation mechanism at one end of the single device housing. When the translation mechanism located in the single device housing drives an element coupled to the blade assembly, the flexible display wraps around the rotor and moves so as to extend the blade of the blade assembly farther from, or retract it toward, the single device housing.
In one or more embodiments, one end of the flexible display is fixedly coupled to the blade assembly. Meanwhile, the other end of the flexible display is coupled to the tensioner via a flexible substrate that protrudes beyond the terminal edge of the flexible display. In one or more embodiments, the flexible substrate is a stainless steel substrate, although other materials may be used.
For example, in one or more embodiments, the flexible substrate of the flexible display is longer along its major axis than the flexible display in at least one dimension. Thus, at least a first end of the flexible substrate protrudes distally beyond at least one terminal end of the flexible display. This allows the first end of the flexible substrate to be rigidly coupled to the tensioner. In one or more embodiments, an adhesive is used to couple one end of the flexible display to the blade assembly, while one or more fasteners couple the second end of the flexible display to a tensioner carried by the blade assembly.
In one or more embodiments, the translation mechanism includes an actuator that causes a portion of the blade assembly abutting the first major surface of the single device housing and another portion of the blade assembly abutting the second major surface of the single device housing to slide symmetrically in opposite directions along the single device housing as the blade assembly transitions between the extended position, the retracted position, and the peek position.
In one or more embodiments, the one or more processors of the electronic device cause the blade assembly to automatically transition to the peek position in response to two input conditions: a document scanning application operating on the one or more processors invokes an image capture operation, and one or more sensors of the electronic device determine that the forward-facing imager is oriented toward the document. Since, in one or more embodiments, transitioning the blade assembly to the peek position is the only way to expose the forward-facing imager, automatically transitioning based on the forward-facing imager being invoked by the document scanning application and the sensors determining that the front of the electronic device faces the document saves the user time and effort by eliminating the need to manually transition the blade assembly to the peek position. In addition, the automatic transition allows forward-facing images to be captured more quickly.
In one or more embodiments, the forward-facing imager is hidden under the blade assembly when the blade assembly is in the retracted position, the extended position, or any position therebetween. The forward-facing imager is exposed only when the blade assembly transitions to the peek position. In one or more embodiments, the one or more processors determine that a trigger causes the forward-facing imager to be invoked in a service mode. A "service mode" invocation refers to an application invoking the forward-facing imager, rather than the user manually actuating it. For example, a document scanning application may invoke the image capture device when it requests to scan a document using that device. Embodiments of the present disclosure assume that a user who manually actuates an imager simply to take a picture or capture video is aware of which imager (front-facing or rear-facing) to utilize and will hold the device accordingly. However, when a service-mode invocation occurs, embodiments of the present disclosure automatically transition the blade assembly to the peek position if the front side of the electronic device is oriented toward the document.
Once the trigger is detected, one or more sensors of the electronic device can determine which major surface of the electronic device (the front major surface or the rear major surface) is oriented toward the document. The one or more sensors used for this task may include passive imager operation of the exposed imager, grip detection, finger detection, touch detection, and other techniques. When the one or more processors of the electronic device determine, upon detection of the trigger, that the document faces the front major surface of the electronic device, in one or more embodiments the one or more processors automatically cause the blade assembly to transition to the peek position.
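Taken together, the trigger logic of the last three paragraphs can be summarized in a short sketch. The sensor-fusion interface and all names below are hypothetical stand-ins; how the sensors actually combine grip, touch, finger, and passive-imager cues is left open by the disclosure.

// Hypothetical sketch of the two-condition trigger: a service-mode
// imager call from a document scanning application, plus sensors
// reporting that the front major surface faces the document.
enum class Surface { FRONT, REAR }

interface OrientationSensors {
    // Fuses passive imager operation, grip, finger, and touch detection
    // (as listed above) into a judgment of which surface faces the document.
    fun surfaceFacingDocument(): Surface
}

class PeekTrigger(
    private val sensors: OrientationSensors,
    private val transitionToPeek: () -> Unit, // drives the translation mechanism
) {
    // Invoked on a service-mode imager call, i.e., an application rather
    // than the user actuating the imager.
    fun onImagerInvocation(fromDocumentScanningApp: Boolean) {
        if (fromDocumentScanningApp &&
            sensors.surfaceFacingDocument() == Surface.FRONT
        ) {
            transitionToPeek()
        }
    }
}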
In one or more embodiments, an electronic device includes a device housing, a forward-facing imager, one or more sensors, and one or more processors. The electronic device also includes a blade assembly carrying a blade, wherein the blade assembly is slidably coupled to the device housing and is operable to slidably transition between an extended position in which the blade extends beyond an edge of the device housing, a retracted position in which a major surface of the blade abuts a major surface of the device housing, and a peek position in which the blade reveals the forward-facing imager. In one or more embodiments, the one or more processors cause the blade assembly to automatically transition to the peek position in response to a document scanning application operating on the one or more processors invoking an image capture operation and the one or more sensors determining that the forward-facing imager is oriented toward the document.
Advantageously, embodiments of the present disclosure provide an improved sliding mechanism for a flexible display integrated into a blade assembly in a sliding electronic device having a single device housing, eliminating the need to manually transition the blade assembly to the peek position in response to a document scanning application presenting a prompt such as "this application needs to use the front-facing imager to scan. Would you please enable it by transitioning the blade assembly to the peek position?" In one or more embodiments, a method for an electronic device includes detecting, by one or more processors, a document scanning application operating on the one or more processors requesting invocation of an imager for service-mode use by the application. In one or more embodiments, one or more sensors of the electronic device determine whether a front side of a device housing of the electronic device or a rear side of the device housing faces a document. In one or more embodiments, when the front side of the device housing faces the document and a blade assembly slidably coupled to the electronic device is in a retracted position, the one or more processors cause the translation mechanism to translate the blade assembly from the retracted position to the peek position.
The actuator of the translation mechanism may take a variety of forms. In some embodiments, the translation mechanism may include a dual-shaft motor. In one or more embodiments, the dual-shaft motor may be threaded so as to move the translators of the translation mechanism in equal and opposite directions. In other embodiments, the dual-shaft motor may be coupled to at least one timing belt.
In another embodiment, the actuator includes a first drive screw and a second drive screw. The drive screws may be coupled together by a gear assembly. When a first portion of the blade assembly is coupled to a translator positioned about the first drive screw and a second portion of the blade assembly is coupled to another translator positioned about the second drive screw, actuation of either causes the first portion of the blade assembly that abuts the first major surface of the single device housing and the second portion of the blade assembly that abuts the second major surface of the single device housing to move symmetrically in opposite directions as the first and second drive screws rotate.
In still other embodiments, the actuator includes a first rack, a second rack, and a pinion. The first rack may be coupled to a first portion of the blade assembly and the second rack to a second portion of the blade assembly. When the pinion engages both the first rack and the second rack, actuation of either causes the first portion of the blade assembly, abutting the first major surface of the single device housing, and the second portion of the blade assembly, abutting the second major surface of the single device housing, to move symmetrically in opposite directions as the first rack and the second rack move symmetrically in opposite directions. Other configurations of the actuator are described below. Still other embodiments will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
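For a sense of the kinematics shared by these actuator styles, consider the sketch below. The lead, pitch radius, and travel figures are assumed values for illustration, not parameters from the disclosure.

// A lead screw advances its translator by (lead x revolutions); a rack
// advances by (pinion pitch radius x rotation angle). The two blade
// portions move by equal amounts in opposite directions.
import kotlin.math.PI

fun screwTravelMm(leadMmPerRev: Double, revolutions: Double): Double =
    leadMmPerRev * revolutions

fun rackTravelMm(pinionPitchRadiusMm: Double, rotationRadians: Double): Double =
    pinionPitchRadiusMm * rotationRadians

fun main() {
    // With a hypothetical 2 mm lead, ten turns move each blade portion
    // 20 mm, one portion forward and the other rearward.
    println(screwTravelMm(2.0, 10.0))  // 20.0
    println(rackTravelMm(3.0, 2 * PI)) // ~18.85 mm per pinion revolution
}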
In one or more embodiments, the blade assembly is coupled to a translator of the translation mechanism. When the translator is actuated, a first portion of the blade assembly abutting a first major surface of the single device housing and a second portion of the blade assembly abutting a second major surface of the single device housing move symmetrically in opposite directions.
Advantageously, embodiments of the present disclosure provide an improved sliding mechanism for a flexible display in an electronic device. A flexible display and rotor sliding assembly constructed in accordance with embodiments of the present disclosure maintains a J-shaped flat upper portion defined by the flexible display and/or blade assembly while maintaining operability and functionality of the flexible display during a sliding operation.
Embodiments of the present disclosure contemplate that in such electronic devices having a translating display, a user is typically required to manually select whether the display transitions to the extended position, the retracted position, or the peek position. For example, the user may have to press a button once to cause the translating display to transition to the extended position, while pressing twice causes it to transition to the retracted position. A "long press" of the button may be required to cause the translating display to transition to the peek position, and so forth.
Such manual actuation requires the user to take deliberate action to change the state of the electronic device. Furthermore, it potentially delays the usability of the electronic device in the new state, owing to the time it takes to manually "inject" the trigger by pressing a button before the translating display transitions.
Advantageously, embodiments of the present disclosure provide systems and methods that automatically and preemptively move the translating display to the peek position based on an imager being invoked by a document scanning application and the orientation of the device relative to the document to be scanned. For example, in one or more embodiments, one or more processors of the electronic device may transition the translating display to the peek position when an imager invocation request is received from a document scanning application operating on the one or more processors and one or more sensors of the electronic device determine that the forward-facing imager is oriented toward the document. As described above, in one or more embodiments, the imager invocation request is a service-mode invocation request.
In one or more embodiments, an artificial intelligence classifier may also be used to determine the device orientations that cause an automatic transition to the peek position. This may be based on specific user preferences identified from the operating context at the time the imager invocation request is received. In one or more embodiments, the artificial intelligence model is trained using the following weighted variable inputs: the current foreground application; the device orientation in three-dimensional space; the type of application operating on the one or more processors (e.g., whether the application is a gaming application, video productivity application, media application, etc.); and the application display mode (e.g., whether the display is used in an immersive or non-immersive mode). The trained model then causes the translating display to transition to the peek position when an imager invocation request is received.
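The disclosure does not specify a model form. As one plausible reading of "weighted variable inputs," the following sketch scores the listed context features with a logistic model; every weight, feature encoding, and the 0.8 threshold are invented for illustration.

import kotlin.math.exp

// Hypothetical encoding of the weighted variable inputs listed above;
// the weights are placeholders that training would supply.
data class PeekContext(
    val foregroundIsDocScanner: Double, // 1.0 if true, else 0.0
    val frontFacesDocument: Double,     // device orientation cue
    val mediaAppRunning: Double,
    val immersiveMode: Double,
)

fun peekProbability(c: PeekContext): Double {
    val z = 2.1 * c.foregroundIsDocScanner +
            1.4 * c.frontFacesDocument -
            0.6 * c.mediaAppRunning -
            0.9 * c.immersiveMode - 1.0
    return 1.0 / (1.0 + exp(-z)) // logistic squashing to [0, 1]
}

// Transition automatically only when the classifier is confident.
fun shouldAutoPeek(c: PeekContext, threshold: Double = 0.8): Boolean =
    peekProbability(c) >= threshold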
In one or more embodiments, the artificial intelligence classifier can continuously learn user preferences for the extended position based on user actions. In one or more embodiments, the artificial intelligence classifier may automatically trigger movement of the translating display to the peek position in response to the imager invocation request. Other advantages are described below. Still other advantages will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
Turning now to fig. 1, one illustrative electronic device 100 constructed in accordance with one or more embodiments of the present disclosure is shown. The electronic device 100 of fig. 1 is a portable electronic device. For illustration purposes, the electronic device 100 is shown as a smartphone. However, the electronic device 100 may be any number of other devices including a tablet computer, gaming device, multimedia player, and the like. Other types of electronic devices may also be constructed in accordance with one or more embodiments of the present disclosure, as will be readily appreciated by those of ordinary skill in the art having the benefit of the present disclosure.
The electronic device 100 includes a single device housing 101. In one or more embodiments, the blade assembly 102 carrying the flexible display 104 is wrapped around a single device housing 101. As will be described in more detail below, in one or more embodiments, the blade assembly 102 is configured to "slide" along a first major surface (which is covered by a flexible display in a front view of the electronic device 100 on the left side of fig. 1) and a second major surface 103 (located on the rear side of the single device housing 101) of the single device housing 101.
In one or more embodiments, the single device housing 101 is fabricated from a rigid material such as a rigid thermoplastic, metal, or composite material, although other materials may be used. For example, in one illustrative embodiment, a single device housing 101 is fabricated from aluminum. Other constructions will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In the illustrative embodiment of fig. 1, the blade assembly 102 carries a flexible display 104. The flexible display 104 may optionally be touch sensitive. The user may communicate user input to the flexible display 104 of such an embodiment by communicating touch input from a finger, stylus, or other object disposed proximate to the flexible display 104.
In one embodiment, the flexible display 104 is configured as an Organic Light Emitting Diode (OLED) display fabricated on a flexible plastic substrate. Blade assembly 102 is also fabricated on a flexible substrate. This allows the blade assembly 102 and flexible display 104 to deform around the display roller mechanism 105 when the first portion 106 of the blade assembly 102 abutting the first major surface of the single device housing 101 and the second portion 107 of the blade assembly 102 abutting the second major surface 103 of the single device housing 101 move symmetrically around the single device housing 101 in opposite directions. In one or more embodiments, the blade assembly 102 and the flexible display 104 are each constructed on a flexible metal substrate that may allow each to bend at various bending radii around the display roller mechanism 105.
In one or more embodiments, the flexible display 104 may be formed from multiple layers of flexible material, such as a flexible polymer sheet or other material. In the illustrative embodiment, flexible display 104 is fixedly coupled to blade assembly 102, and blade assembly 102 is wrapped around display roller mechanism 105.
Features may be incorporated into a single device housing 101. Examples of such features include one or more cameras or image capture devices 108 or optional speaker ports. In this illustrative embodiment, the user interface components 109, 110, 111 (which may be buttons, fingerprint sensors, or touch sensitive surfaces) may also be disposed along a surface of the single device housing 101. Any of these features are shown as being provided on a side surface of the electronic device 100, but may be located elsewhere. In other embodiments, these features may be omitted.
Also shown in fig. 1 is a schematic block diagram 112 of electronic device 100. The schematic block diagram 112 includes one or more electronic components that may be coupled to a printed circuit board assembly disposed within the single device housing 101. Alternatively, the electronic components may be carried by the blade assembly 102. For example, in one or more embodiments, the electronic components may be positioned under a "backpack" 113 carried by the blade assembly 102.
The components of schematic block diagram 112 may be electrically coupled together by conductors or buses disposed along one or more printed circuit boards. For example, some of the components of schematic block diagram 112 may be configured as first electronic circuitry fixedly located within a single device housing 101, while other components of schematic block diagram 112 may be configured as second electronic circuitry in backpack 113 carried by the blade assembly. The flexible substrate may then extend from the first electronic circuitry in the single device housing 101 to the second electronic circuitry in the backpack 113 carried by the blade assembly to electrically couple the first electronic circuitry to the second electronic circuitry.
The illustrative schematic block diagram 112 of fig. 1 includes many different components. Embodiments of the present disclosure contemplate that the number and arrangement of these components may vary depending on the particular application. Accordingly, an electronic device constructed in accordance with embodiments of the present disclosure may include some components not shown in fig. 1, and other components not shown may not be required, and thus may be omitted.
In one or more embodiments, the electronic device 100 includes one or more processors 114. In one embodiment, the one or more processors 114 may include an application processor and optionally one or more auxiliary processors. One or both of the application processor or the auxiliary processor may include one or more processors. One or both of the application processor or the auxiliary processor may be a microprocessor, a set of processing elements, one or more ASICs, programmable logic, or other types of processing devices.
The application processor and the auxiliary processor may operate with various components of the electronic device 100. Each of the application processor and the auxiliary processor may be configured to process and execute executable software code to perform various functions of the electronic device 100. A storage device, such as memory 115, may optionally store executable software code used by the one or more processors 114 during operation.
In one embodiment, the one or more processors 114 are responsible for running the operating system environment of the electronic device 100. The operating system environment may include a kernel and one or more drivers, an application services layer, and an application layer. The operating system environment may be configured as executable code that operates on one or more processors or control circuits of the electronic device 100. The application layer may be responsible for executing application service modules. An application service module may support one or more applications or "apps". An application of the application layer may be configured as a client of the application service layer to communicate with a service through an Application Program Interface (API), message, event, or other interprocess communication interface. Where auxiliary processors are used, they may be used to perform input/output functions, activate user feedback devices, and so forth.
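To make this layering concrete, here is a schematic sketch of an application reaching the imager through an application-service interface rather than touching hardware directly, the kind of path a service-mode invocation would take. All names are hypothetical; nothing below corresponds to a real platform API.

// Schematic only: an app talks to an application-service module
// through an interface supplied by the application services layer.
enum class CapturePurpose { DOCUMENT_SCAN, PHOTO, VIDEO }

interface ImagerService {
    // A service-mode call: the application, not the user, requests capture.
    fun requestCapture(purpose: CapturePurpose, onImage: (ByteArray) -> Unit)
}

class DocumentScannerApp(private val imager: ImagerService) {
    fun scanPage() {
        imager.requestCapture(CapturePurpose.DOCUMENT_SCAN) { _ ->
            // Rectification and OCR of the captured page would run here.
        }
    }
}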
In this illustrative embodiment, the electronic device 100 also includes a communication device 116 that may be configured for wired or wireless communication with one or more other devices or networks. The networks may include a wide area network, a local area network, and/or a personal area network. The communication device 116 may also communicate using wireless technologies such as, but not limited to, point-to-point or ad hoc communication (e.g., HomeRF, Bluetooth, and IEEE 802.11) as well as other forms of wireless communication (e.g., infrared technology). The communication device 116 may include one of a wireless communication circuit, a receiver, a transmitter, or a transceiver, and one or more antennas 117.
In one embodiment, the one or more processors 114 may be responsible for executing the primary functions of the electronic device 100. For example, in one embodiment, the one or more processors 114 include one or more circuits operable by one or more users. An interface device, which may include a flexible display 104, presents images, video, or other presentation information to a user. Executable software code used by the one or more processors 114 may be configured as one or more modules 118 operable with the one or more processors 114. Such modules 118 may store instructions, control algorithms, logic steps, and the like.
In one embodiment, the one or more processors 114 may generate commands or perform control operations based on information received from various sensors of the electronic device 100. As shown in fig. 1, these sensors may be classified into physical sensors 120 and context sensors 121.
In general, physical sensor 120 includes a sensor configured to sense or determine a physical parameter indicative of a condition in the environment surrounding electronic device 100. For example, physical sensor 120 may include sensors for determining, for example, motion, acceleration, orientation, proximity to people and other objects, lighting, capturing images, and the like. The physical sensors 120 may include various combinations of microphones, position detectors, temperature sensors, barometers, proximity sensor components, proximity detector components, health sensors, touch sensors, cameras, audio capturing devices, and the like. Many examples of physical sensors 120 are described below with reference to fig. 6. Other sensors will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In contrast, the context sensors 121 do not measure physical conditions or parameters. Instead, they infer context from device data. For example, when the physical sensors 120 include a camera or smart imager, a context sensor 121 may use the data captured in an image to infer contextual cues. An emotion detector may be used to analyze data from the captured image to determine an emotional state. The emotion detector may identify facial expressions, such as smiles or frowns, to infer the emotional state a person is silently conveying, e.g., happy, angry, depressed, and so forth. Other context sensors 121 may analyze other data to infer context, including calendar events, user profiles, device operating states, energy storage within the battery, application data, data from third parties (e.g., web services and social media servers), alarm clocks, time of day, repeated user behaviors, and other factors.
The context sensor 121 may be configured as a hardware component or alternatively as a combination of hardware and software components. The context sensor 121 may be configured to collect and analyze non-physical parameter data.
Examples of physical sensors 120 and context sensors 121 are shown in fig. 6 and 7. These examples are merely illustrative, as other physical sensors 120 and context sensors 121 will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
Turning briefly to fig. 6, various examples of physical sensors 120 are shown. In one or more embodiments, a physical sensor 120 senses or determines a physical parameter indicative of a condition in the environment surrounding the electronic device. Fig. 6 shows several example physical sensors 120. It should be noted that the sensors shown in fig. 6 are not comprehensive, as other physical sensors will be apparent to those of ordinary skill in the art having the benefit of this disclosure. In addition, it should be noted that the various physical sensors 120 shown in fig. 6 may be used alone or in combination. Thus, many electronic devices will employ only a subset of the physical sensors 120 shown in fig. 6, with the particular subset selected being defined by the device application.
A first example of a physical sensor is a touch sensor 601. Touch sensor 601 may include a capacitive touch sensor, an infrared touch sensor, a resistive touch sensor, or another touch sensitive technology. The capacitive touch sensitive device includes a plurality of capacitive sensors, such as electrodes, disposed along a substrate. Each capacitive sensor is configured in conjunction with associated control circuitry (e.g., the one or more processors (114)) to detect objects proximate to or touching a surface of a display of the electronic device or a housing of the electronic device by establishing electric field lines between pairs of capacitive sensors and then detecting perturbations of those electric field lines.
The electric field lines may be established from periodic waveforms, such as square waves, sine waves, triangular waves, or other periodic waveforms emitted by one sensor and detected by another sensor. For example, a capacitive sensor may be formed by disposing indium tin oxide patterned into electrodes on a substrate. Indium tin oxide is useful for such systems because it is transparent and electrically conductive. Furthermore, it can be deposited in thin layers by a printing process. Capacitive sensors may also be deposited on the substrate by electron beam evaporation, physical vapor deposition, or other various sputter deposition techniques.
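As a rough illustration of the field-perturbation detection just described, the sketch below scans a grid of mutual-capacitance readings for drops below a baseline. The grid layout and threshold are invented for illustration, not taken from the disclosure.

// Illustrative only: a touch appears as a perturbation of baseline
// readings between electrode pairs.
fun detectTouches(
    baseline: Array<DoubleArray>,  // per-electrode-pair readings, no touch
    current: Array<DoubleArray>,   // latest readings
    threshold: Double = 0.15,
): List<Pair<Int, Int>> {
    val touched = mutableListOf<Pair<Int, Int>>()
    for (row in baseline.indices) {
        for (col in baseline[row].indices) {
            // A nearby finger perturbs the field lines, lowering the
            // measured coupling between an electrode pair.
            if (baseline[row][col] - current[row][col] > threshold) {
                touched += row to col
            }
        }
    }
    return touched
}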
Another example of a physical sensor 120 is a geolocator serving as a location detector 602. In one embodiment, the location detector 602 is operable to determine location data at the time an image is captured from a constellation of one or more earth-orbiting satellites, or to determine an approximate location from a network of terrestrial base stations. Examples of satellite positioning systems suitable for use with embodiments of the present disclosure include, among others, the Navigation System with Timing and Ranging (NAVSTAR) Global Positioning System (GPS) in the United States of America. The location detector 602 may make location determinations autonomously or with assistance from terrestrial base stations, such as those associated with a cellular communication network or other terrestrial-based network, or as part of a Differential Global Positioning System (DGPS), as is well known to those of ordinary skill in the art. The location detector 602 is also capable of determining location by locating or triangulating from terrestrial base stations of a conventional cellular network or from another local area network (e.g., a Wi-Fi network).
Another physical sensor 120 is a near field communication circuit 603. Near field communication circuitry 603 may be included for communicating with a local area network to receive information regarding the context of the environment in which the electronic device is located. For example, the near field communication circuit 603 may obtain information such as weather information and location information. If a user is at a museum, for instance, they may be standing near an exhibit that can be identified by near field communication. The identification may indicate that the electronic device is indoors and at a museum. Thus, if the user asks for additional information about an artist or a painting, the likelihood is high that the question is a device command requiring the one or more processors (114) to search with a web browser for that information. Alternatively, near field communication circuitry 603 may be used to receive contextual information from kiosks and other electronic devices. The near field communication circuitry 603 may also be used to obtain images or other data from a social media network. Examples of suitable near field communication circuits include Bluetooth communication circuits, IEEE 802.11 communication circuits, infrared communication circuits, magnetic field modulation circuits, and Wi-Fi circuits.
Another example of a physical sensor 120 is a motion detector 604. For example, an accelerometer, gyroscope, or other device may be used as the motion detector 604 in the electronic device. Using an accelerometer as an example, an accelerometer may be included to detect movement of an electronic device. In addition, accelerometers may also be used to sense some gestures of the user, such as hand movements while speaking, running, or walking.
The motion detector 604 may also be used to determine the spatial orientation of the electronic device in three-dimensional space by detecting the direction of gravity. An electronic compass may be included, in addition to or in place of the accelerometer, to detect the spatial orientation of the electronic device with respect to the earth's magnetic field. Similarly, one or more gyroscopes may be included to detect rotational motion of the electronic device.
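This gravity-based orientation check is the kind of cue that lets the device judge whether its front faces a document lying below it. A minimal sketch follows, assuming an Android-style axis convention in which z is normal to the display; the 0.8 cosine threshold is an assumption.

import kotlin.math.sqrt

// Assumed convention: a device resting screen-up reads az of roughly
// +9.81 m/s^2, and screen-down reads roughly -9.81 m/s^2.
fun frontFacesDown(ax: Double, ay: Double, az: Double): Boolean {
    val g = sqrt(ax * ax + ay * ay + az * az)
    if (g < 1.0) return false  // free fall or an unusable sample
    // Gravity projecting strongly onto -z means the front face points
    // downward, e.g., toward a document lying on a table.
    return az / g < -0.8       // within roughly 37 degrees of face-down
}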
Another example of a physical sensor 120 is a force sensor 605. The force sensor may take various forms. For example, in one embodiment, the force sensor includes a resistive switch or force switch array configured to detect contact with a display or housing of the electronic device. The resistive switch array may be used as a force sensing layer because a change in impedance of any switch may be detected when in contact with a surface of a display of the electronic device or a housing of the electronic device. The switch array may be any of a resistive sense switch, a membrane switch, a force sense switch such as a piezoelectric switch, or other equivalent type of technology. In another embodiment, the force sensor may be capacitive. In yet another embodiment, the piezoelectric sensor may also be configured to sense a force. For example, in the case of coupling with a lens of a display, a piezoelectric sensor may be configured to detect an amount of displacement of the lens to determine the force. The piezoelectric sensor may also be configured to determine a contact force to a housing of the electronic device, rather than to the display.
Another example of a physical sensor 120 includes a proximity sensor. Proximity sensors belong to one of two camps: active proximity sensors and "passive" proximity sensors. These are shown in fig. 6 as a proximity detector component 606 and a proximity sensor component 607. The proximity detector component 606 or the proximity sensor component 607 can generally be employed for gesture control and other user interface protocols, some examples of which are described in more detail below.
As used herein, a "proximity sensor component" includes only a signal receiver that does not include a corresponding transmitter for transmitting a signal to reflect from an object to the signal receiver. Since the body of the user or other heat generating object external to the device (e.g. a wearable electronic device worn by the user) acts as a transmitter, only a signal receiver can be used. For example, in one example, the proximity sensor component 607 includes a signal receiver to receive a signal from an object external to the housing of the electronic device. In one embodiment, the signal receiver is an infrared signal receiver for receiving infrared emissions from an object such as a person when the person is in proximity to the electronic device. In one or more embodiments, the proximity sensor component is configured to receive infrared wavelengths of about four microns to about ten microns. This wavelength range is advantageous in one or more embodiments because it corresponds to the wavelength of the heat emitted by the human body.
In addition, wavelengths within this range can be detected from greater distances than, for example, detection of reflected signals from emitters in close proximity to the detector assembly. In one embodiment, the proximity sensor component 607 has a relatively long detection range to detect heat emitted from a person's body when the person is within a predefined heat receiving radius. For example, in one or more embodiments, the proximity sensor component is capable of detecting a person's body heat from a distance of approximately ten feet. The ten foot size can be extended depending on the optics designed, the sensor active area, gain, lens gain, etc.
The proximity sensor component 607 is sometimes referred to as a "passive IR system" due to the fact that the person is an active emitter. Thus, the proximity sensor component 607 does not require a transmitter, as an object disposed outside the housing conveys the emissions received by the infrared receiver. Since no transmitter is required, each proximity sensor component 607 can operate at very low power levels.
In one embodiment, the signal receiver of each proximity sensor component 607 may operate at various sensitivity levels such that at least one proximity sensor component 607 is operable to receive infrared emissions from different distances. For example, the one or more processors (114) may cause each proximity sensor component 607 to operate at a first "effective" sensitivity to receive infrared emissions from a first distance. Similarly, the one or more processors (114) may cause each proximity sensor component 607 to operate at a second sensitivity that is less than the first sensitivity in order to receive infrared emissions from a second distance that is less than the first distance. The sensitivity change may be accomplished by having the one or more processors (114) interpret readings from the proximity sensor component 607 in different ways.
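A minimal sketch of this dual-sensitivity idea: the same raw infrared reading is interpreted against two thresholds rather than reconfiguring the receiver. The count values below are assumptions for illustration.

// Invented thresholds: the "effective" sensitivity detects near
// emissions; the reduced sensitivity only registers stronger readings.
enum class ProximityRange { NEAR, FAR, NONE }

fun classifyProximity(irCounts: Int): ProximityRange = when {
    irCounts > 900 -> ProximityRange.NEAR // first, "effective" sensitivity
    irCounts > 300 -> ProximityRange.FAR  // second, reduced sensitivity
    else -> ProximityRange.NONE
}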
In contrast, the proximity detector component 606 includes a signal transmitter and a corresponding signal receiver. Although each proximity detector component 606 can be any of a variety of types of proximity sensors, such as, but not limited to, capacitive, magnetic, inductive, optical/optoelectronic, imager, laser, acoustic/acoustic, radar-based, doppler-based, thermal and radiation-based proximity sensors, in one or more embodiments, the proximity detector component 606 includes an infrared emitter and receiver. In one embodiment, the infrared emitter is configured to emit an infrared signal having a wavelength of about 860 nanometers that is one to two orders of magnitude shorter than the wavelength received by the proximity sensor component. The proximity detector component may have a signal receiver that receives a similar wavelength (i.e., approximately 860 nanometers).
In one or more embodiments, each proximity detector component 606 can be an infrared proximity sensor set that uses a signal emitter to emit an infrared beam that reflects from nearby objects and is received by a corresponding signal receiver. The proximity detector component 606 can be employed to calculate a distance to any nearby object, for example based on characteristics associated with the reflected signal. The reflected signals are detected by corresponding signal receivers, which may be infrared photodiodes that detect reflected light emitting diode (LED) light, respond to modulated infrared signals, and/or perform triangulation of received infrared signals.
Another example of a physical sensor is a moisture detector 608. The moisture detector 608 may be configured to detect an amount of moisture on or around a display or housing of the electronic device. This may indicate various forms of context. It may, for example, indicate that rain or drizzle is falling in the environment surrounding the electronic device. Thus, if the user frantically exclaims "Call a cab!", the fact that moisture is present increases the likelihood that the request is a device command. The moisture detector 608 may be implemented in the form of an impedance sensor that measures impedance between electrodes. Since moisture may be due to external conditions (e.g., rain) or user conditions (e.g., perspiration), the moisture detector 608 may operate in tandem with an ISFET or electrical sensor 609 configured to measure the pH or amount of NaOH in the moisture, to determine not only the amount of moisture but also whether the moisture is due to external factors, perspiration, or a combination thereof.
The smart imager 610, configured as an imager or image capture device, may be configured to capture an image of an object and determine whether the object matches a predetermined criteria. For example, the smart imager 610 functions as a recognition module configured with optical recognition (e.g., including image recognition, character recognition, visual recognition, facial recognition, color recognition, shape recognition, etc.). Advantageously, the smart imager 610 may be used as a facial recognition device to determine the identity of one or more persons detected around an electronic device.
For example, in one embodiment, when the one or more proximity sensor components 607 detect a person, the smart imager 610 may capture a photograph of that person. The smart imager 610 may then compare the image to a reference file stored in memory (115) to confirm that the person's face sufficiently matches the reference file beyond a threshold authenticity probability. Advantageously, this optical recognition allows the one or more processors (114) to perform a control operation only when a person detected around the electronic device is sufficiently identified as the owner of the electronic device.
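For illustration, this gating reduces to a threshold comparison; the 0.9 value and the callable names below are assumptions, not values from the disclosure.

```python
# Minimal sketch: allow a control operation only when the captured face
# matches the stored reference file beyond a threshold authenticity
# probability. The threshold and names are assumptions.
AUTHENTICITY_THRESHOLD = 0.9

def maybe_perform_control_operation(match_probability: float,
                                    control_operation) -> bool:
    """Run control_operation only for a sufficiently recognized owner."""
    if match_probability > AUTHENTICITY_THRESHOLD:
        control_operation()
        return True
    return False
```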
The smart imager 610 may function in ways other than capturing photographs. For example, in some embodiments the smart imager 610 may capture multiple successive pictures to gather additional information that can be used to determine social cues. Alternatively, the smart imager 610 may capture video frames, with or without accompanying metadata such as motion vectors. This additional information captured by the smart imager 610 can be used to detect the richer social cues that may be inferred from the captured data.
Barometer 611 may sense changes in barometric pressure due to environmental and/or weather changes. In one embodiment, barometer 611 includes a cantilevered mechanism made of piezoelectric material and disposed within a chamber. The cantilevered mechanism acts as a pressure-sensitive valve, flexing as the pressure differential between the chamber and the environment changes. Deflection of the cantilever ceases when the pressure differential between the chamber and the environment is zero. Since the cantilever material is piezoelectric, the deflection of the material can be measured as an electrical signal.
Gaze detector 612 may include sensors for detecting the user's gaze point. Gaze detector 612 may optionally include sensors for detecting the alignment of the user's head in three-dimensional space. Electronic signals can then be delivered from the sensors to a gaze detection process for computing the direction of the user's gaze in three-dimensional space. Gaze detector 612 may further be configured to detect a gaze cone corresponding to the detected gaze direction, which is a field of view within which the user may easily see without diverting their eyes or head from the detected gaze direction. Alternatively, gaze detector 612 may estimate gaze direction by inputting images representing photographs of a selected area near or around the eyes into a gaze detection process. It will be apparent to those of ordinary skill in the art having the benefit of this disclosure that these techniques are merely illustrative, as other modes of detecting gaze direction may be substituted in the gaze detector 612 of fig. 6.
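For illustration only, the gaze-cone test described above reduces to an angle comparison between the detected gaze direction and the direction to a point of interest; the 15-degree half-angle below is an assumption.

```python
import math

# Minimal sketch: a point lies within the gaze cone when the angle between
# the gaze direction and the direction to the point is within an assumed
# half-angle. Both inputs are taken to be unit-length 3-D vectors.

def within_gaze_cone(gaze_dir, to_point, half_angle_deg: float = 15.0) -> bool:
    dot = sum(g * p for g, p in zip(gaze_dir, to_point))
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle_deg <= half_angle_deg
```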
The light sensor 613 can detect changes in light intensity, color, or shading in the environment of the electronic device. This can be used to make inferences about weather or other cues. For example, if the light sensor 613 detects low-light conditions at noon when the position detector 602 indicates that the electronic device is outdoors, this may be due to cloudy conditions, fog, or haze. An infrared sensor may be used in conjunction with, or in place of, the light sensor 613. The infrared sensor may be configured to detect thermal emissions from the environment around the electronic device. For example, where the infrared sensor detects heat on a warm day but the light sensor detects low-light conditions, this may indicate that the electronic device is in a room where an air conditioner is not properly set. Similarly, the temperature sensor 614 can be configured to monitor the temperature around the electronic device.
The physical sensor 120 may also include an audio capture device 615. In one embodiment, the audio capture device 615 includes one or more microphones to receive acoustic input. While the one or more microphones may be used to sense voice inputs, voice commands, and other audio inputs, in some embodiments they may be used as environmental sensors to sense environmental sounds such as rain, wind, and the like.
In one embodiment, the one or more microphones comprise a single microphone. However, in other embodiments, the one or more microphones may include two or more microphones. Where multiple microphones are included, they may be used for selective beam steering, for example to determine from which direction sound emanates. For example, a first microphone may be located on a first side of the electronic device for receiving audio input from a first direction, and a second microphone may be located on a second side of the electronic device for receiving audio input from a second direction. The one or more processors (114) may then select between the first microphone and the second microphone to direct the audio receive beam to the user. Alternatively, the one or more processors (114) may process and combine signals from two or more microphones to perform beam steering.
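The two-microphone selection and beam steering described above might look like the following sketch; the numpy usage and the wrap-around simplification are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

# Minimal sketch: pick the louder channel (the side facing the talker),
# or delay-and-sum both channels to steer the receive beam. The sample
# delay would come from the assumed direction of arrival.

def select_microphone(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Choose the channel with more signal energy."""
    return first if np.sum(first ** 2) >= np.sum(second ** 2) else second

def delay_and_sum(first: np.ndarray, second: np.ndarray,
                  delay_samples: int) -> np.ndarray:
    """Align the second channel, then average (wrap-around ignored)."""
    return 0.5 * (first + np.roll(second, -delay_samples))
```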
In one embodiment, the audio capture device 615 includes a "always on" audio capture device. In this way, the audio capturing device 615 is able to capture audio input at any time the electronic device is operating properly. As described above, in one or more embodiments, the one or more processors, which may include a digital signal processor, may identify whether one or more device commands are present in the audio input captured by the audio capture device 615.
Yet another example of a physical sensor 120 is a hygrometer 616. Hygrometer 616 may be used to detect humidity, which may indicate that the user is outdoors or is perspiring. As noted above, the illustrative physical sensors of fig. 6 are not comprehensive; many other physical sensors may be added. For example, a wind speed monitor may be included to detect wind. Accordingly, the physical sensors 120 of fig. 6 are merely illustrative, as many other physical sensors will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
Turning briefly now to FIG. 7, various examples of context sensors 121 are illustrated. As with fig. 6, the example shown in fig. 7 does not constitute a comprehensive list. Many other context sensors 121 will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In one embodiment, mood detector 701 may infer a person's mood based on contextual information received from the physical sensors (120). For example, if the smart imager (610) captures a picture, a series of successive pictures, video, or other information identifying a person as the owner of the electronic device, and she is crying in that picture, series of pictures, video, or other information, the mood detector 701 may infer that she is sad or upset. Similarly, if the audio capture device captures the user's voice and the user is shouting or cursing, mood detector 701 may infer that the user is likely angry or upset.
Emotion detector 702 may function in a similar manner to infer a person's emotional state from contextual information received from the physical sensors (120). For example, if the smart imager (610) captures a picture, a series of successive pictures, video, or other information relating to the owner of the electronic device, the emotion detector 702 may infer the emotional state they are silently conveying, e.g., happy, angry, frustrated, and so forth. This can be inferred, for example, from facial expressions such as raised eyebrows or a smile. In one or more embodiments, such emotional cues may indicate that the user intends to issue a command to the electronic device. Alternatively, emotion may be detected through changes in voice or in the words used. For example, if someone is screaming "I am mad at you", a negative emotional state may be inferred.
Calendar information and events 720 can be used to detect social cues. For example, if a calendar event indicates that a birthday party is occurring, this may signal a festive, happy social cue. By contrast, if a funeral is occurring, the user is unlikely to be issuing device commands to the electronic device, as funerals tend to be quiet, solemn social events.
Health information 703 can be used to detect social cues. For example, if the health information 703 indicates that a person's heart rate is elevated and they are perspiring, the location information 715 indicates that the person is in a city alley, and the time-of-day information 708 indicates that it is 3 a.m., the person may be in a stressful situation. Accordingly, the command "Call 911" is likely to be a device command.
Alarm clock information 704 can be used to detect social cues. If the alarm clock just sounded at 6:00 a.m., the command "snooze" is likely to be a device command. Personal identity information 705 may also be used to detect social cues. If a person is diabetic and a health sensor shows them to be in a cold sweat, this may be due to low insulin levels. Accordingly, the command "Call 911" is likely to be a device command.
The device usage data 706 may indicate social cues. If a person is browsing the web and an incoming call is received, the command "reject" is likely to be a device command. An energy store 707 within the electronic device may be used to indicate social cues, and the device operating mode information 709 may be used in a similar manner. For example, when the energy store drops to ten percent, the command "shut down all non-critical apps" may well be a device command.
Consumer purchase information 711 can, of course, indicate social cues. For example, if a person is a wine connoisseur who frequently purchases wine, then when they are looking at a web browser and find a bottle of 1982 Lafite for under $1,000, the command "buy that wine now" may be a device command.
The device usage profile 712 can also be used to infer social cues. For example, if a person never uses the electronic device between 10:00 p.m. and 6:00 a.m. because they are sleeping, then if they happen to say "Order a pizza. I'm starving." while dreaming, it is unlikely to be a device command.
An organization may have formal rules and policies 710, e.g., that meetings cannot last more than one hour without a break, that lunch occurs between noon and 2:00 p.m., and that brainstorming sessions take place between 9:00 and 10:00 a.m. each day. Similarly, a family may have comparable rules and policies 713, e.g., that dinner occurs between 6:00 and 7:00 p.m. This information can be used to infer social cues, such as whether a person is likely to be talking with others. In such cases, a verbal question is unlikely to be a device command. By contrast, when the user is likely to be alone, a verbal command is more likely to be a device command.
The application data 734 may indicate social cues. If a person frequently interacts with a word processing application during the day, the commands "cut" and "paste" are more likely to be device commands for that person than for someone playing a bird-themed video game. The device settings 716 may also indicate social cues. If the user has set their electronic device to an alarm clock mode, they may be sleeping rather than issuing device commands.
Social media information 718 may indicate social cues. For example, in one embodiment, multi-modal social cue information about the environment around the electronic device may be inferred by retrieving information from a social media server. For example, a real-time search of a social media service (which may be a keyword search, an image search, or another kind of search) may find images, posts, and comments related to the location determined by the location information 715. Images captured at the same location and published on a social media service server may reveal multi-modal social cues. Alternatively, comments about that location may suggest social cues. Information from a third-party server 717 may be used in the same manner.
Yet another example of a context sensor 121 is repetitive behavior information 719. For example, if a person always stops at a coffee shop between 8:00 and 8:15 in the morning on the way to work, the command "Pay for the coffee" may well be a device command. As with fig. 6 above, the context sensors of fig. 7 do not form a comprehensive list. The context sensor 121 may be any type of device that infers context from data of the electronic device. The context sensor 121 may be configured as a hardware component or, alternatively, as a combination of hardware and software components. The context sensor 121 can analyze information to not only detect a user, but also to determine the social cues and emotional states of others in the vicinity of the electronic device, further informing which inferences about user intent, and which executable control commands, are appropriate given such a compound social context.
The context sensor 121 may be configured to collect and analyze non-physical parameter data. Although several are shown in fig. 7, many others could be added. Accordingly, the context sensors 121 of fig. 7 are merely illustrative, as numerous other sensors will be apparent to those of ordinary skill in the art having the benefit of this disclosure. It should be noted that when the physical sensors (120) and context sensors 121 are used in combination, one or both of the physical sensors (120) or context sensors 121 may be cascaded in a predefined order to detect a plurality of multi-modal social cues in order to determine whether a device command is intended for the electronic device.
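One way to picture such a predefined-order cascade is the short sketch below; the cue functions, the neutral prior, and the cut-off values are all illustrative assumptions, not the disclosed method.

```python
# Minimal sketch (assumed values): cascade sensor cues in a predefined
# order, nudging a running likelihood that captured speech was intended
# as a device command, and short-circuit once the estimate is decisive.

def command_intended(cues) -> bool:
    likelihood = 0.5  # neutral prior (assumed)
    for cue in cues:  # e.g., proximity, gaze, calendar, location cues
        likelihood = max(0.0, min(1.0, likelihood + cue()))
        if likelihood >= 0.9:
            return True    # confidently a device command
        if likelihood <= 0.1:
            return False   # confidently not addressed to the device
    return likelihood >= 0.5
```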
Returning now to FIG. 1, in one or more embodiments, the heuristic sensor processor 119 is operable with both the physical sensors 120 and the context sensors 121 to detect, infer, capture, and otherwise determine when multi-modal social cues are occurring in the environment around the electronic device. In one embodiment, the heuristic sensor processor 119 uses adjustable algorithms employing context assessment of information, data, and events to determine assessed contexts and frameworks from one or both of the physical sensors 120 or the context sensors 121. These assessments may be learned through repeated data analysis. Alternatively, a user may employ the user interface of the electronic device 100 to enter various parameters, configurations, rules, and/or examples that instruct or otherwise guide the heuristic sensor processor 119 in detecting multi-modal social cues, emotional states, moods, and other contextual information. In one or more embodiments, heuristic sensor processor 119 may include an artificial neural network or other similar technology.
In one or more embodiments, the heuristic sensor processor 119 may be capable of operating with the one or more processors 114. In some embodiments, the one or more processors 114 may control a heuristic sensor processor 119. In other embodiments, heuristic sensor processor 119 may operate independently to communicate information collected from detecting multi-modal social cues, emotional states, moods, and other contextual information to the one or more processors 114. Heuristic sensor processor 119 may receive data from one or both of physical sensor 120 or context sensor 121. In one or more embodiments, the one or more processors 114 are configured to perform the operations of the heuristic sensor processor 119.
In one or more embodiments, the schematic block diagram 112 includes a speech interface engine 122. In one embodiment, the speech interface engine 122 may include hardware, executable code, and speech monitor executable code. The speech interface engine 122 may include a basic speech model stored in the memory 115, a trained speech model, or other modules used by the speech interface engine 122 to receive and recognize speech commands received with audio input captured by an audio capture device. In one embodiment, the speech interface engine 122 may comprise a speech recognition engine. Regardless of the particular implementation used in the various embodiments, the speech interface engine 122 may access various speech models to recognize speech commands.
In one embodiment, the voice interface engine 122 is configured to implement the voice control features that allow a user to speak specific device commands to cause the one or more processors 114 to execute control operations. For example, the user may ask, "How tall is the Willis Tower?" This device command may cause the one or more processors 114 to access an application module (e.g., a web browser) to search for the answer, which is then delivered as audible output via the audio output of the other components 124. In short, in one embodiment the voice interface engine 122 listens for voice commands, processes those commands, and, in conjunction with the one or more processors 114, returns an output that is the result of the user's intent.
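This listen-process-respond loop could be organized as in the sketch below; the engine, browser, and audio-output objects are hypothetical stand-ins rather than the disclosed architecture.

```python
# Minimal sketch: recognize a spoken device command, let an application
# module find the answer, and return it as audible output. All object
# and method names here are hypothetical.

def handle_voice_input(audio, speech_engine, web_browser, audio_output):
    text = speech_engine.recognize(audio)       # e.g., "How tall is the Willis Tower?"
    if speech_engine.is_device_command(text):
        answer = web_browser.search(text)       # consult an application module
        audio_output.speak(answer)              # deliver the audible result
```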
Schematic block diagram 112 may also include an image/gaze detection processing engine 123. The image/gaze detection processing engine 123 may operate with the physical sensors 120, such as a camera or smart imager, to process information to detect the user's gaze point. The image/gaze detection processing engine 123 may optionally include sensors for detecting the alignment of the user's head in three-dimensional space. Electronic signals can then be delivered from those sensors to the image/gaze detection processing engine 123 for computing the direction of the user's gaze in three-dimensional space. In one or more embodiments, the one or more processors 114 may cause the blade assembly 102 to transition to the peeping position when an imager invocation request is received and the image/gaze detection processing engine 123 determines that the user is gazing at the front surface of the electronic device 100. The image/gaze detection processing engine 123 may also be configured to detect a gaze cone corresponding to the detected gaze direction, which is a field of view within which the user may easily see without diverting their eyes or head from the detected gaze direction. Alternatively, the image/gaze detection processing engine 123 may estimate gaze direction by inputting images representing photographs of a selected area near or around the eyes.
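A compact sketch of that gaze-gated transition, with hypothetical object and method names:

```python
# Minimal sketch: transition the blade assembly to the peeping position
# only when an imager invocation request coincides with the user gazing
# at the front surface of the device. Names are assumptions.

def on_imager_invocation_request(gaze_engine, translation_mechanism) -> None:
    if gaze_engine.user_gazing_at_front_surface():
        translation_mechanism.transition_to("peeping")
```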
The one or more processors 114 may also generate commands or perform control operations based on information received from a combination of physical sensors 120, context sensors 121, flexible display 104, other components 124, and/or other input devices. Alternatively, the one or more processors 114 may generate commands or perform control operations based on information received from one or more sensors or only from the flexible display 104. Further, the one or more processors 114 may process the received information alone or in combination with other data (e.g., information stored in the memory 115). This information may be used to automatically transition the blade assembly 102 to the peep position, as will be explained in more detail below with reference to fig. 23-28.
Other components 124 that are capable of operating with the one or more processors 114 may include output components such as video output, audio output, and/or mechanical output. Examples of output components include audio outputs such as speaker ports, earphone speakers or other alarms and/or buzzers and/or mechanical output components such as vibration or motion-based mechanisms. Other components will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
As described above, in one or more embodiments, the blade assembly 102 is coupled to the flexible display 104. In contrast to a sliding device that includes multiple device housings, the electronic device 100 of fig. 1 includes a single device housing 101, with the blade assembly 102 coupled to the single device housing 101. The blade assembly 102 is configured as a mechanical chassis that allows the flexible display 104 to translate along a translation surface defined by the major and minor surfaces of the single device housing 101. In one or more embodiments, when the blade assembly 102 and flexible display 104 are in the extended position shown in fig. 1, the blade assembly 102 also provides mechanical support for the portion 130 of the flexible display 104 that protrudes beyond the top edge 131 of the single device housing 101. When actuated, the display roller mechanism 105 causes the blade assembly 102 and flexible display 104 to translate along the rear major surface 103, the bottom minor surface, and the front major surface between the extended position shown in fig. 1, the retracted position shown in fig. 3, and the peeping position shown in fig. 5.
The blade assembly 102 may include a blade substrate 125, the blade substrate 125 including a flexible portion and a rigid portion and being positioned between the flexible display 104 and a translation surface defined by the single device housing 101. The blade assembly 102 may also include a silicone bezel 127 that surrounds and protects the edges of the flexible display 104. In one or more embodiments, the blade substrate 125 includes a steel backing plate, and the silicone bezel 127 is co-molded around the perimeter of the steel backing plate. In one or more embodiments, the low-friction dynamic bending lamination stack 128 and the blade 126 are positioned between the blade assembly 102 and the translation surface defined by the single device housing 101.
In one or more embodiments, the blade substrate 125 is partially rigid and partially flexible. For example, the portions of the blade substrate 125 that slide along the major surfaces of the single device housing 101 are configured to be substantially rigid, while the portions of the blade substrate 125 that wrap around the minor surfaces of the single device housing 101 are configured to be flexible so that they can curl around those minor surfaces. In one or more embodiments, some portions of the blade substrate 125 abut the translation surface defined by the single device housing 101, while other portions abut the display roller mechanism 105, which in this illustrative embodiment is located at the bottom minor surface of the single device housing 101.
In one or more embodiments, the blade 126 and the low-friction dynamic bending lamination stack 128 are positioned between the blade assembly 102 and the translation surface defined by the single device housing 101. When the blade assembly 102 transitions to the extended position shown in fig. 11, the blade 126 supports the blade assembly 102 and the portion of the flexible display 104 extending beyond the top edge 131 of the single device housing 101. Because the blade 126 must be rigid to support those portions of the blade assembly 102 and flexible display 104, it cannot bend around the display roller mechanism 105. To prevent a gap or step from occurring where the blade 126 terminates, in one or more embodiments the low-friction dynamic bending lamination stack 128 spans the remainder of the blade assembly 102 and abuts the translation surface defined by the single device housing 101.
The blade assembly 102 may be fixedly coupled to the flexible display 104 by an adhesive or another coupling mechanism, with the blade substrate 132 defining both rigid and flexible portions. The blade substrate 132 may define a first, rigid section that extends along a major surface of the single device housing 101 and a second, flexible section configured to wrap around the minor surface of the single device housing 101 where the display roller mechanism 105 is situated.
In one or more embodiments, the blade assembly 102 defines a mechanical assembly that provides a slider frame allowing the flexible display 104 to move between the extended position of fig. 1, the retracted position of fig. 3, and the peeping position of fig. 5. As used herein, the term "frame" takes its plain English meaning, namely a mechanical support structure that carries the other components coupled to the slider frame. These components may include the blade 126, the silicone bezel 127, and the low-friction dynamic bending lamination stack 128. Other components may be included as well, such as electronic circuitry for powering the flexible display 104. In one or more embodiments, the slider frame may also include a tensioner that keeps the flexible display 104 flat against the single device housing 101 as it translates.
In one or more embodiments, the display roller mechanism 105 causes a first portion of the blade assembly 102 and flexible display 104 (shown on the rear side of the electronic device 100 in fig. 1) and a second portion of the blade assembly 102 and flexible display 104 (located on the front side of the electronic device 100 in fig. 1) to slide symmetrically in opposite directions along the translation surface defined by the single device housing 101.
Thus, the electronic device 100 of fig. 1 includes a single device housing 101 with the flexible display 104 incorporated into the blade assembly 102. The blade assembly 102 is in turn coupled to a translation mechanism, defined by the display roller mechanism 105, located within the single device housing 101. In the illustrative embodiment of fig. 1, the display roller mechanism 105 is located at the bottom edge of the single device housing 101.
In one or more embodiments, the one or more processors 114 invoke image capture operations in response to a document scanning application operating on the one or more processors 114 and the one or more sensors 120, 121 determine that the forward imager is oriented toward the document, thereby causing the blade assembly 102 to transition to the peeping position.
In one or more embodiments, the electronic device 100 includes a forward imager (shown in fig. 5 below) and a rearward imager 108. In one or more embodiments, the rearward imager 108 is exposed regardless of whether the blade assembly 102 is in the extended position, the retracted position, or the peeping position, as shown in fig. 1. By contrast, the forward imager is exposed only when the blade assembly 102 is in the peeping position; when the blade assembly 102 is in the retracted position, the extended position, or a position in between, the forward imager is concealed, as is also shown in fig. 1.
The one or more sensors 120, 121 may determine that the forward imager is oriented toward the document in a variety of ways. For example, in one or more embodiments, the one or more processors 114 cause the rearward imager 108 to capture at least one image in response to an application operating on the one or more processors 114 invoking an image capture operation. The one or more processors 114 determine that the forward imager is oriented toward the document if the at least one image matches predefined criteria.
The predefined criteria may also vary. In one or more embodiments, the predefined criteria include the at least one image depicting a document within the field of view of the rearward imager. In other embodiments, the predefined criteria include the at least one image depicting the document with a size exceeding a predefined image area threshold. Other predefined criteria are described below, and still others will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
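For the first of these embodiments, the check might be sketched as below; the detector callable and the 30% area threshold are assumptions for illustration, standing in for whatever image analysis an implementation actually uses.

```python
# Minimal sketch: decide whether the forward imager faces the document
# from one image captured by the rearward imager. detect_document is a
# hypothetical callable returning the document's fractional frame area,
# or None if no document is depicted; 0.30 is an assumed threshold.
IMAGE_AREA_THRESHOLD = 0.30

def forward_imager_faces_document(rear_image, detect_document) -> bool:
    area = detect_document(rear_image)
    return area is not None and area >= IMAGE_AREA_THRESHOLD
```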
In other embodiments, the one or more sensors 120, 121 determine that the forward imager is oriented toward the document when the one or more processors 114 cause the rearward imager 108 to capture at least one image in response to a document scanning application operating on the one or more processors invoking the image capture operation and the at least one image fails to match predefined criteria. In one or more embodiments, those predefined criteria include one or more of the at least one image depicting a human hand, a finger, or an inanimate object. Other predefined criteria are described below, and still others will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In other embodiments, as will be described below, translation of the blade assembly may be initiated by operation of the user interface component 110. Embodiments of the present disclosure contemplate that, in such an electronic device 100, requiring actuation of the user interface component 110 to manually "inject" the trigger that causes the blade assembly 102 and flexible display 104 to transition potentially delays the usability of the electronic device 100 in its new state.
Advantageously, embodiments of the present disclosure provide systems and methods for automatically and proactively moving the flexible display 104 to an optimal state based on a document scanning application's service mode invocation of an image capture device and the orientation of the device, rather than requiring operation of the user interface component 110. For example, in one or more embodiments, the one or more processors 114 of the electronic device 100 detect that a document scanning application operating on the one or more processors 114 is requesting invocation of an imager for service mode use by the application. The one or more sensors 120, 121 of the electronic device determine whether the front side of the device housing 101 of the electronic device 100 or the rear side of the device housing 101 of the electronic device 100 faces the document. When the front side of the device housing 101 faces the document, the one or more processors 114 cause the translation mechanism to translate the blade assembly 102 to the peeping position, revealing the forward imager that is covered by the blade assembly 102 when the blade assembly 102 is in the retracted position, the extended position, or a position between the retracted position and the extended position.
Advantageously, embodiments of the present disclosure provide intuitive operation of a translating display in an electronic device 100. With automatic translation of the translating display enabled, the only user actions required to move the translating display to the peeping position are the document scanning application's service mode invocation of the imager and the orientation of the forward imager of the electronic device 100 toward the document. Thereafter, the device automatically changes to the position the user likely desires.
As shown in fig. 1, the blade assembly 102 is capable of sliding around the single device housing 101 such that the blade 126 slides away from the single device housing 101 to change the apparent overall length of the flexible display 104 when viewed from the front of the electronic device 100. By contrast, in other states (e.g., the state shown in fig. 3), the blade assembly 102 can slide around the single device housing 101 in the opposite direction to a retracted position, in which a similar amount of the flexible display 104 is visible on the front side of the electronic device 100 and the rear side of the electronic device 100.
In fig. 1, the electronic device 100 includes a single device housing 101 having a blade assembly 102 coupled to both major surfaces of the single device housing 101 and wrapped around at least one minor surface of the electronic device 100 where the display roller mechanism 105 is disposed. This allows the blade assembly 102 to slide relative to the single device housing 101 between the retracted position of fig. 3, the extended position of fig. 1, and the peeping position of fig. 5, which reveals the forward-facing image capture device.
It should be understood that fig. 1 is provided for illustrative purposes only and is used to illustrate the components of one electronic device 100 according to embodiments of the present disclosure, and is not intended to be a complete schematic diagram of the various components required for an electronic device. Thus, other electronic devices according to embodiments of the present disclosure may include various other components not shown in fig. 1 or may include a combination of two or more components or divide a particular component into two or more separate components and still be within the scope of the present disclosure.
Turning now to fig. 2, the electronic device 100 is shown in the extended position 200, which also appears in fig. 1. In the extended position 200, the blade (126) slides outward and away from the single device housing 101, revealing more and more of the flexible display 104. In this configuration, portions of the flexible display 104 that pass around the display roller mechanism (105) straighten to a flat position as they travel along the translation surface defined by the front of the single device housing 101.
Turning now to fig. 3-4, the electronic device 100 is shown with the flexible display 104 in a retracted position 300. Fig. 3 shows the front side of the electronic device 100, while fig. 4 shows the rear side.
In this state, the blade (126) slides back toward, and then along, the translation surface defined by the single device housing 101. This causes the apparent overall length of the flexible display 104 to become shorter as more and more of the flexible display 104 passes around the display roller mechanism (105) at the bottom of the single device housing 101 and along the translation surface defined by the rear side of the single device housing 101.
Turning now to fig. 5, the electronic device 100 is shown with the flexible display in the peeping position 500. When in the peeping position, the blade assembly 102 and flexible display 104 have translated past the retracted position (300). In one or more embodiments, when this occurs, the blade assembly 102 and flexible display 104 reveal an image capture device 501, or forward imager, which sits beneath the blade assembly 102 and flexible display 104 and is hidden by them when they are in the retracted position (300) of fig. 3. In this illustrative embodiment, a speaker 502 is also revealed.
Advantageously, positioning the image capture device 501 beneath the blade assembly 102 and flexible display 104 when they are in the retracted position (300) of figs. 3-4 or the extended position (200) of fig. 2 ensures the privacy of the user of the electronic device 100, because the image capture device 501 cannot see through the blade (126) of the blade assembly 102. Thus, even if the electronic device 100 is compromised by a hacker or other malicious party, the user can be assured that the image capture device 501 cannot capture images or video while the blade assembly 102 and flexible display 104 are in the retracted position (300), the extended position (200), or a position in between. Only when the blade assembly 102 and flexible display 104 transition to the peeping position 500, thereby revealing the image capture device 501, can the image capture device 501 capture forward-facing images or video.
Referring collectively to fig. 2-5, it can be seen that the electronic device 100 includes a single device housing having a flexible display 104 incorporated into the blade assembly 102. The blade assembly 102 is coupled to a translation mechanism within the single device housing 101 as described above.
In response to actuation of a user interface device (one example of which is a button positioned along a side of the single device housing 101), or alternatively and automatically in response to a service mode invocation of an image capture device combined with the device orientation (as described below with reference to figs. 23-28), the translation mechanism is operable to transition the blade assembly 102 around the surfaces of the single device housing 101 between the extended position 200, in which the blade (126) of the blade assembly 102 extends distally from the single device housing 101, the retracted position 300, in which the blade assembly 102 abuts the single device housing 101, and the peeping position 500, in which the blade assembly 102 reveals the image capture device 501 (and, in this example, the speaker 502) positioned beneath the blade assembly 102 on the front side of the single device housing 101.
Another feature that can be seen when reviewing figs. 2-5 collectively is how the presentation of content varies depending on the position of the blade assembly 102. Embodiments of the present disclosure contemplate that the position of the blade assembly 102 and flexible display 104 relative to the single device housing 101 changes how much of the flexible display is visible from the front, from the rear, and at the curved end. In other words, the apparent size of the flexible display 104 from each side of the electronic device 100 varies with the position of the blade assembly 102 relative to the single device housing 101. Advantageously, embodiments of the present disclosure provide applications, methods, and systems for dynamically resizing and adjusting interface layouts and content presentation.
This may be accomplished by resizing a main visible portion (e.g., the forward portion shown in fig. 2, 3, 5) of the flexible display 104. The application may be windowed over this main area of the flexible display 104 that will be resized as the flexible display 104 transitions between the extended position 200 of fig. 2, the retracted position 300 of fig. 3-4, and the peeping position of fig. 5.
In figs. 2-5, the one or more processors (114) of the electronic device 100 segment the flexible display 104 into three separate usable portions. These include the forward portion of the flexible display 104 shown in figs. 2, 3, and 5, the rearward portion of the flexible display 104 shown in fig. 4, and the curved portion of the flexible display 104 at the bottom of the electronic device 100, wrapped around the display roller mechanism (105), as shown in figs. 2-5. This curved portion of the flexible display 104 is sometimes referred to as the "curled" portion of the display.
In one or more embodiments, each of these available portions is dynamically remapped as the flexible display 104 changes position relative to the single device housing 101. In one or more embodiments, an application may request a window on which it is intended to present an available portion of content.
In one or more embodiments, when the flexible display 104 translates along the single device housing 101 from the extended position 200 shown in fig. 2 to the retracted position 300 shown in figs. 3-4 or the peeping position 500 of fig. 5, the orientations of the rearward and curled portions differ from the orientation of the forward portion. To address this, as can be seen by comparing figs. 3-4, in one or more embodiments the content presented on the rearward portion is rotated 180 degrees so that its "up" direction matches the "up" direction of the forward portion.
In one or more embodiments, the orientation of the content presented on the curled portion may vary based on the orientation of the electronic device 100. If, for example, the forward side is facing up, the content presented on the curled portion will have a first orientation. Conversely, if the rearward side is facing up, the same content presented on the curled portion will have a second orientation rotated 180 degrees relative to the first.
In one or more embodiments, any content presented on the forward portion of the flexible display 104 is oriented according to user preferences. In one or more embodiments, the forward portion is oriented according to the orientation of the electronic device 100 in three-dimensional space.
For the curled portion of the translating display, in one or more embodiments, this segment is oriented with the same orientation as the forward portion unless the electronic device 100 is oriented in three-dimensional space with the forward side facing the negative z-direction, in which case the content is rotated 180 degrees. In one or more embodiments, the curled portion is not subject to user preferences for display orientation or to automatic rotation based on device orientation.
In one or more embodiments, the content presented on the rearward portion of the flexible display 104 is always rotated 180 degrees relative to the content presented on the forward portion when the electronic device 100 is held upright, as shown here in figs. 3-4. In one or more embodiments, the rearward portion is likewise not subject to user preferences for display orientation or to automatic rotation based on device orientation.
Thus, in one or more embodiments, the one or more processors (114) of the electronic device 100 dynamically remap the plurality of translating display segments based on the position of the flexible display 104 relative to the single device housing 101. The one or more processors (114) may independently manage orientation and rotation on each of the segments of the flexible display 104, whether forward, rearward, or curled. In one or more embodiments, this management occurs independently based on which side of the electronic device 100 a segment is currently positioned, in combination with sensor input identifying whether the electronic device 100 is face down or face up.
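The per-segment management just described might be expressed as a small mapping; the function below is a sketch under assumed conventions (a z-axis in which the forward side faces negative z when the device is face down), not the disclosed implementation.

```python
# Minimal sketch: independent rotation per display segment, following the
# rules described above. Segment names and the z-axis convention are
# assumptions for illustration.

def segment_rotation_degrees(segment: str, face_down: bool,
                             user_pref_rotation: int = 0) -> int:
    if segment == "forward":
        return user_pref_rotation       # honors user preference/auto-rotate
    if segment == "rearward":
        return 180                      # always flipped relative to forward
    if segment == "curled":
        return 180 if face_down else 0  # flips only when forward faces -z
    raise ValueError(f"unknown segment: {segment}")
```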
As shown in fig. 2, the blade assembly 102 is operable to slide around the single device housing 101 such that the blade 126 slides away from the single device housing 101 to change the apparent overall length of the flexible display 104 when viewed from the front of the electronic device 100. As shown in figs. 3-4, the blade assembly 102 may slide (optionally in response to another sliding gesture) around the single device housing 101 in the opposite direction to the retracted position 300, in which a similar amount of the flexible display 104 can be seen on the front side of the electronic device 100 and the rear side of the electronic device 100.
Thus, in one or more embodiments, the electronic device 100 includes a single device housing 101 having a blade assembly 102 coupled to both major surfaces of the single device housing 101 and wrapped around at least one minor surface of the electronic device 100, such that the blade assembly 102 is slidable relative to the single device housing 101 between the retracted position 300, the extended position 200, and the peeping position 500 revealing the forward-facing image capture device 501.
Turning now to fig. 8, there is shown a flexible display 104 and blade assembly 102 shown in an exploded view. As shown in fig. 8, in one or more embodiments, the flexible display 104 includes one or more layers coupled or laminated together to complete the flexible display 104. In one or more embodiments, these layers include a flexible protective cover 801, a first adhesive layer 802, a flexible display layer 803, a second adhesive layer 804, and a flexible substrate 805. Other configurations of layers suitable for manufacturing the flexible display 104 will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
Starting from the top of the layer stack, in one or more embodiments, the flexible protective cover 801 comprises an optically transparent substrate. In one or more embodiments, the flexible protective cover 801 may be made of an optically transparent material, such as a thin film sheet of thermoplastic material. For example, in one embodiment, the flexible protective cover 801 is fabricated from an optically transparent polyamide layer having a thickness of about eighty microns. In another embodiment, the flexible protective cover 801 is fabricated from an optically transparent polycarbonate layer having a thickness of about eighty microns. Other materials suitable for manufacturing the flexible protective cover 801 will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In one or more embodiments, the flexible protective cover 801 functions as a panel by defining a cover for the flexible display layer 803. In one or more embodiments, the flexible protective cover 801 is optically transparent, wherein light may pass through the flexible protective cover 801 such that objects behind the flexible protective cover 801 may be clearly seen. The flexible protective cover 801 may optionally include an ultraviolet light barrier. In one or more embodiments, such a barrier may be used to improve the visibility of the flexible display layer 803.
Beneath the flexible protective cover 801 is a first adhesive layer 802. In one or more embodiments, the first adhesive layer 802 includes an optically clear adhesive. The optically clear adhesive may be applied to both sides of a thin, optically clear substrate, so that the first adhesive layer 802 functions as an optically clear layer with optically clear adhesive on both sides. So configured, in one or more embodiments, the first adhesive layer 802 has a thickness of about fifty microns. This optically clear version of "double-sided tape" may then be spooled out and applied between the flexible protective cover 801 and the flexible display layer 803 to couple the two together.
In other embodiments, the first adhesive layer 802 may instead be applied between the flexible protective cover 801 and the flexible display layer 803 as an optically clear liquid, a gel, a homogeneous adhesive layer, or in another medium. When so configured, the first adhesive layer 802 may optionally be cured by heat, ultraviolet light, or other techniques. Other examples of materials suitable for use as the first adhesive layer 802 will be apparent to those of ordinary skill in the art having the benefit of this disclosure. In one or more embodiments, the first adhesive layer 802 mechanically couples the flexible display layer 803 to the flexible protective cover 801.
In one or more embodiments, the flexible display layer 803 is located between the flexible substrate 805 and the flexible protective cover 801. In one or more embodiments, the flexible display layer 803, and thus the flexible display 104 itself, is longer along the major axis 806 than the image-producing portion 808 of the flexible display 104. For example, as shown in fig. 8, the flexible display layer 803 includes a T-shaped tongue 807 that protrudes beyond the image-producing portion 808 of the flexible display layer 803. As will be shown in fig. 10 below, in one or more embodiments, electronic circuit components, connectors, and other components configured to operate the image-producing portion 808 of the flexible display layer 803 may be coupled to the T-shaped tongue 807. Thus, in this illustrative embodiment, the T-shaped tongue 807 protrudes distally beyond the terminal ends of the other layers of the flexible display 104. While the tongue 807 is T-shaped in this illustrative embodiment, it will be apparent to those of ordinary skill in the art having the benefit of this disclosure that other shapes may be used.
The flexible display layer 803 may optionally be touch sensitive. In one or more embodiments, the flexible display layer 803 is configured as an Organic Light Emitting Diode (OLED) display layer. When coupled to the flexible substrate 805, the flexible display layer 803 may be bent according to various bending radii. For example, some embodiments allow a bend radius of between thirty and six hundred millimeters. Other substrates allow a bend radius of about five millimeters to provide a display that can be folded by active bending. Other displays may be configured to accommodate bending and folding.
In one or more embodiments, the flexible display layer 803 may be formed from multiple layers of flexible material, such as a flexible polymer sheet or other material. For example, the flexible display layer 803 may include a layer of optically transparent electrical conductors, a polarizer layer, one or more optically transparent substrates, and electronic control circuitry layers, such as thin film transistors to drive the pixels and one or more capacitors for storing energy. In one or more embodiments, flexible display layer 803 has a thickness of about 130 microns.
In one or more embodiments, to be touch sensitive, the flexible display layer 803 includes a layer that includes one or more optically transparent electrodes. In one or more embodiments, the flexible display layer 803 includes an organic light emitting diode layer configured to display images and other information to a user. The organic light emitting diode layer may comprise one or more pixel structures arranged in an array, wherein each pixel structure comprises a plurality of electroluminescent elements, such as organic light emitting diodes. These various layers may be coupled to one or more optically transparent substrates of the flexible display layer 803. Other layers suitable for inclusion in the flexible display layer 803 will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In one or more embodiments, the flexible display layer 803 is coupled to the flexible substrate 805 by a second adhesive layer 804. In other embodiments, the layers above flexible display layer 803 may be configured to have sufficient stiffness so that flexible substrate 805 is not required. For example, in embodiments where the flexible protective cover 801 is configured to have sufficient stiffness to provide sufficient protection for the flexible display 104 during bending, the flexible substrate 805 may be omitted.
In one or more embodiments, the flexible substrate 805 includes a thin steel layer. For example, in one or more embodiments, the flexible substrate 805 includes a steel layer having a thickness of approximately thirty microns. While thin flexible steel works well in practice, it will be apparent to those of ordinary skill in the art having the benefit of this disclosure that other materials may be used for the flexible substrate 805. For example, in another embodiment, the flexible substrate 805 is fabricated from a thin layer of thermoplastic material.
In one or more embodiments, to simplify manufacturing, the second adhesive layer 804 is the same as the first adhesive layer 802 and includes an optically clear adhesive. However, since the second adhesive layer 804 is coupled between the flexible display layer 803 and the flexible substrate 805, i.e., under the flexible display layer 803, an optically transparent adhesive is not required. In other embodiments, the second adhesive layer 804 may be partially optically clear or not optically clear at all.
Regardless of whether the second adhesive layer 804 is optically clear, in one or more embodiments, the adhesive of the second adhesive layer 804 is applied to both sides of a thin flexible substrate. When so configured, in one or more embodiments, the second adhesive layer 804 has a thickness of about fifty microns. This extremely thin version of the "double sided tape" may then be wound and applied between the flexible display layer 803 and the flexible substrate 805 to couple the two together.
In other embodiments, as with the first adhesive layer 802, the second adhesive layer 804 may instead be applied between the flexible display layer 803 and the flexible substrate 805 as a liquid, a gel, a homogeneous layer, or in another medium. When so configured, the second adhesive layer 804 may optionally be cured by heat, ultraviolet light, or other techniques. Other examples of materials suitable for use as the second adhesive layer 804 will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
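Gathering the approximate layer thicknesses quoted in the preceding paragraphs into one place, the sketch below models the lamination stack as a simple data structure; the values are restated from the text above, and the total is merely their sum.

```python
# Minimal sketch: the five-layer stack of fig. 8, top to bottom, with the
# approximate thicknesses stated in the surrounding paragraphs (microns).

FLEXIBLE_DISPLAY_STACK = [
    ("flexible protective cover 801", 80),   # polyamide or polycarbonate
    ("first adhesive layer 802",      50),   # optically clear adhesive
    ("flexible display layer 803",   130),   # OLED display layer
    ("second adhesive layer 804",     50),
    ("flexible substrate 805",        30),   # thin steel
]

total_um = sum(t for _, t in FLEXIBLE_DISPLAY_STACK)  # about 340 microns
```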
In the illustrative embodiment, the flexible display 104 is supported not only by the flexible substrate 805, but also by the blade assembly 102. As previously described, in one or more embodiments, the blade assembly 102 includes a blade substrate 125. In one or more embodiments, the blade substrate 125 includes a steel layer. In one or more embodiments, the blade substrate 125 is thicker than the flexible substrate 805. For example, in one or more embodiments, when the flexible substrate 805 includes a steel layer having a thickness of about thirty microns, the blade substrate 125 includes a steel layer having a thickness of about one hundred microns.
In one or more embodiments, the blade substrate 125 includes a rigid, substantially planar support layer. For example, in one or more embodiments, the blade substrate 125 may be made of stainless steel. In another embodiment, the blade substrate 125 is made of a thin, rigid thermoplastic sheet. Other materials may also be used to fabricate the blade substrate 125. For example, Nitinol, a nickel-titanium alloy, may be used to fabricate the blade substrate 125. Other rigid, substantially planar materials will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
Thus, the blade substrate 125 defines another mechanical support for the flexible display 104. In one or more embodiments, the blade substrate 125 is the hardest layer in the overall assembly of fig. 8. In one or more embodiments, the blade substrate 125 is made of stainless steel having a thickness of about one hundred microns. In another embodiment, the blade substrate 125 is made of a flexible plastic. Other materials capable of manufacturing the blade substrate 125 will be apparent to those of ordinary skill in the art having the benefit of this disclosure. For example, in another embodiment, the blade substrate 125 is made of carbon fiber or the like. In one or more embodiments, the blade substrate 125 includes a reinforcing bezel that includes a thicker layer of material to further protect the flexible display 104 when the blade assembly 102 is in the extended position (200).
In one or more embodiments, the flexible substrate 805 is slightly longer along its major axis than the image-producing portion 808 of the flexible display 104. Because the tongue 807 is T-shaped, one or more holes 809 are left exposed on either side of the base of the T-shaped tongue 807. In one or more embodiments, the additional length along the major axis provided by the flexible substrate 805 allows one or more fasteners to rigidly couple the first end of the flexible substrate 805 to a tensioner.
Embodiments of the present disclosure contemplate that some of the layers comprising flexible display 104 are stiffer than others. Similarly, some other layers of the flexible display 104 are softer than others. For example, when the flexible substrate 805 is made of metal (one example of the metal is stainless steel), the layer is harder than the first adhesive layer 802 or the second adhesive layer 804. In one or more embodiments, the stainless steel is also harder than the flexible display layer 803. In one or more embodiments, the flexible substrate 805 is the hardest layer in the flexible display 104, and the first adhesive layer 802 and the second adhesive layer 804 are the softest layers in the flexible display 104. In one or more embodiments, the hardness of the flexible protective cover 801 and the flexible display layer 803 fall between the hardness of the flexible substrate 805 and the adhesive layer.
In one or more embodiments, the various layers of the flexible display 104 are laminated together in a substantially planar configuration. In other words, in one or more embodiments, the flexible substrate 805 is configured as a substantially planar substrate. A second adhesive layer 804 may be attached to the substantially planar substrate, and then a flexible display layer 803 is attached to the second adhesive layer 804. The first adhesive layer 802 may be attached to the flexible display layer 803, and the flexible protective cover 801 is attached to the first adhesive layer 802.
To ensure proper coupling, the resulting laminated stack may be cured, for example in an autoclave at a predetermined temperature for a predetermined duration. When such curing is employed, any bubbles or other defects in the layers may be corrected. In one or more embodiments, because the flexible substrate 805 is configured as a substantially planar substrate, the resulting flexible display 104 is also substantially planar.
In one or more embodiments, the blade substrate 125 of the blade assembly 102 includes a flexible portion 810 and a rigid portion 811. Since in one or more embodiments the blade substrate 125 is made of metal (one example of which is steel having a thickness of 100 microns), the rigidity of the rigid portion 811 comes from the material from which it is made. For example, if the blade substrate 125 is made of a thermoplastic material, in one or more embodiments, the thermoplastic material will be sufficiently rigid that the rigid portion 811 will be rigid. Since the rigid portion 811 slides only along the flat main surface of the translation surface defined by the single device housing (101), it does not need to be bent. Furthermore, the rigidity helps to protect the portion of the flexible display 104 that protrudes beyond the end of the single device housing (101).
In contrast, the flexible portion 810 needs to wrap around the facet of the single device housing (101) where the display roller mechanism (105) is located. Because the flexible portion 810 is made of the same material as the rigid portion 811 when the blade substrate 125 is manufactured as a single, unitary component, in one or more embodiments the flexible portion 810 includes a plurality of holes cut through the blade substrate 125, allowing the material to flex. For example, in one or more embodiments where the blade substrate 125 is fabricated from steel, a plurality of chemically or laser etched holes may allow the flexible portion 810 to be tightly wrapped around the facet of the single device housing (101) where the display roller mechanism (105) is located.
Thus, in one or more embodiments, the blade substrate 125 is partially rigid and partially flexible. The portions of the blade substrate 125 that slide along the major surfaces of the single device housing (101) are configured to be substantially rigid, while the portions of the blade substrate 125 that bypass the small surfaces of the single device housing (101) are configured to be flexible so that they can curl around these small surfaces.
In one or more embodiments, the blade assembly 102 further includes a silicone bezel 127 positioned around the perimeter of the blade substrate 125. In one or more embodiments, when the flexible display 104 is attached to the blade substrate 125 of the blade assembly 102, the silicone bezel 127 surrounds and protects the edges of the flexible display 104. In one or more embodiments, the silicone bezel 127 is co-molded around the perimeter of the blade substrate 125.
In one or more embodiments, the rigid portion 811 of the blade substrate 125 can define one or more apertures. These holes may be used for a variety of purposes. For example, some holes may be used to rigidly secure the blade assembly 102 to a translation mechanism, one example of which is the display roller mechanism (105) of fig. 1. In addition, some of the holes may contain magnets. Hall effect sensors located in a single device housing (101) to which the blade assembly 102 is coupled may then detect the positions of these magnets so that the one or more processors (114) may determine whether the blade assembly 102 and the flexible display 104 are in the extended position (200), the retracted position (300), the peeping position (500), or somewhere in between.
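By way of illustration only, the magnet-and-Hall-effect-sensor arrangement described above admits a straightforward position-decoding routine. The following Python sketch is a hypothetical rendering, not part of the disclosed embodiments; the sensor count, the threshold value, and all identifiers are assumptions introduced for clarity.

```python
from enum import Enum, auto

class BladePosition(Enum):
    EXTENDED = auto()      # blade extends beyond the housing edge (200)
    RETRACTED = auto()     # blade major surface abuts the housing (300)
    PEEPING = auto()       # blade reveals the forward imager (500)
    INTERMEDIATE = auto()  # somewhere in between

# Hypothetical field-strength threshold; a real device would calibrate
# this to the magnets seated in the apertures of the blade substrate.
NEAR_FIELD_T = 0.02

def decode_blade_position(hall_upper: float, hall_lower: float) -> BladePosition:
    """Map two assumed Hall-effect sensor readings to a blade position.

    A magnet carried by the blade assembly passes each sensor as the
    blade translates, so which sensor reads a strong field localizes
    the blade assembly relative to the device housing.
    """
    upper_near = hall_upper >= NEAR_FIELD_T
    lower_near = hall_lower >= NEAR_FIELD_T
    if upper_near and not lower_near:
        return BladePosition.EXTENDED
    if lower_near and not upper_near:
        return BladePosition.PEEPING
    if upper_near and lower_near:
        return BladePosition.RETRACTED
    return BladePosition.INTERMEDIATE
```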
In one or more embodiments, the flexible display 104 is coupled to the blade substrate 125 of the blade assembly 102 within the confines of the silicone bezel 127. For example, in one or more embodiments, the first end of the flexible display 104 is adhesively coupled to the rigid portion 811 of the blade substrate 125 of the blade assembly 102. The other end of the flexible display 104 may be rigidly coupled to the tensioner by passing fasteners through the holes 809 in the flexible substrate 805.
Turning now to fig. 9, the blade substrate 125 and the silicone bezel 127 are shown in an exploded view. As shown, the silicone bezel 127 defines a single, continuous, unitary piece of silicone. In the illustrative embodiment of fig. 9, the silicone bezel 127 surrounds three sides 901, 902, 903 of the blade substrate 125 and extends beyond the short side 904 to define a receiving recess 905. The receiving recess 905 can house mechanical and electrical components, such as electronic circuit components that power and control the flexible display (104), which will be located within the perimeter defined by the silicone bezel 127, as well as tensioners, flexible circuits, and other components that keep the flexible display (104) flat against the flexible portion 810 of the blade substrate 125.
In this illustrative embodiment, the portions 906, 907, 908 of the silicone bezel 127 surrounding the receiving recess 905 that protrude beyond the short side 904 of the blade substrate 125 are thicker than the other portions of the silicone bezel 127 that will surround the flexible display (104). This allows for placement of components within the receiving recess 905.
Turning now to fig. 10, there is shown a flexible display 104 and blade assembly 102 with a silicone bezel 127 overmolded onto a blade substrate 125. As shown, the silicone bezel 127 surrounds the three sides 901, 902, 903 of the blade substrate 125 and extends beyond the short side 904 to define a receiving recess 905 that can accommodate mechanical and electrical components.
Electronic circuitry 1001 operable to power and control the flexible display 104 has been coupled to the T-tongue 807 of the flexible display layer (803). In addition, a mechanical connector 1002 has been attached to the top of the T of the T-tongue 807. In this illustrative embodiment, the flexible substrate 805 protrudes beyond the distal end of the flexible display layer (803) such that the holes 809 defined therein may be coupled to a tensioner to ensure that the flexible display 104 remains flat against the flexible portion 810 of the blade substrate 125 as the flexible portion 810 wraps around the rotor at the end of the single device housing (101).
In one or more embodiments, the blade assembly 102 may be fixedly coupled to the flexible display 104. For example, where the blade substrate 125 defines both a rigid portion 811 and a flexible portion 810, in one or more embodiments, the flexible display 104 is coupled to the rigid portion 811 by an adhesive or other coupling mechanism. The tensioner may then be positioned in the receiving recess 905. In one or more embodiments, the tensioner is rigidly coupled to the aperture 809 of the flexible substrate 805 with fasteners to keep the flexible display 104 flat on the flexible portion 810, regardless of how the flexible portion 810 bends around a small surface of a single device housing or its corresponding rotor.
Turning now to fig. 11, the flexible display 104 is shown after being coupled to the blade assembly 102. As shown, a silicone bezel 127 surrounds the flexible display 104, wherein the silicone bezel 127 surrounds and abuts three sides of the flexible display layer (803).
The flexible substrate is then connected to the electronic circuit 1001 carried by the T-tongue 807. In addition, a tensioner may be coupled to the flexible substrate 805. Thereafter, a cover 1101 is attached to the silicone bezel 127 atop the electronic circuit 1001 and the other components located on or around the T-tongue 807. The portion of the blade assembly 102 where components are stored under the cover 1101 is referred to as the "backpack". Turning to fig. 12, a fully constructed blade assembly 102 with its backpack 1201 is shown.
In one or more embodiments, the flexible display 104 and blade assembly 102 are configured to wrap around a small surface of the device housing where the display roller mechanism resides. In one or more embodiments, the display roller mechanism includes a rotor positioned within the curved section of the flexible display 104 and the blade assembly 102. When placed within the device housing of an electronic device, translation of the translation mechanism results in translation of the blade assembly 102, which in turn results in rotation of the rotor. The result is that by pulling the flexible display 104 and the blade assembly 102 around the rotor, the flexible display 104 and the blade assembly 102 translate linearly across the translating surface of the device housing.
The blade substrate (125) of the blade assembly 102 includes a flexible portion (810) that allows the blade assembly 102 and the flexible display 104 to deform around a device housing, an example of which is the single device housing (101) of fig. 1. For example, turning now to fig. 13-14, the blade assembly 102 and the flexible display 104 are shown deformed to produce a curved section 1301 and two linear sections 1302, 1303. In fig. 13, the flexible display 104 and the blade assembly 102 are shown in the retracted position 300. In fig. 14, the flexible display 104 and the blade assembly 102 are shown in the extended position 200. The enlarged view 1401 of fig. 14 shows how the holes chemically etched through the blade substrate 125 allow the blade substrate 125 to bend easily around the curved section 1301 while maintaining a rigid support structure under the flexible display 104 in the two linear sections 1302, 1303.
In one or more embodiments, the first linear section 1302 and the second linear section 1303 are configured to slide between the retracted position 300 of fig. 13 and the extended position 200 of fig. 14. The flexible display 104 is coupled to the blade assembly 102 and thus translates along a translation surface defined by the device housing of the electronic device with the blade assembly 102.
In one or more embodiments, the linear sections 1302, 1303 of the blade assembly 102 are positioned between the flexible display 104 and the translating surface. The rotor is then positioned within the curved section 1301 of the blade assembly 102. As the translation mechanism causes the linear sections 1302, 1303 of the blade assembly 102 to move across the translation surface defined by the device housing, the rotor rotates, with the flexible portion 810 traveling around the rotor as it rotates.
As shown in fig. 13-14, in one or more embodiments, the cross-section of both the blade assembly 102 and the flexible display 104 define a J-shape, wherein the curved portion of the J-shape defined by the curved section 1301 is configured to wrap around the rotor, while the upper portion of the J-shape defined by the linear section 1302 travels across the translating surface defined by the device housing. As the translator of the translation mechanism drives the blade assembly 102, the upper portion of the J-shape becomes longer as the flexible display 104 translates around the rotor, with the blade assembly 102 protruding farther from the device housing. This can be seen in fig. 13-14 by comparing the extended position 200 of fig. 14 with the retracted position 300 of fig. 13.
When the translator of the translation mechanism drives the blade assembly 102 in the opposite direction, for example, the blade assembly 102 is driven from the extended position 200 of fig. 14 to the retracted position 300 of fig. 13, the upper portion of the J-shape becomes shorter as this reverse operation occurs. Thus, when the translation mechanism drives the blade assembly 102 carrying the flexible display 104, the flexible display 104 deforms at different positions as the flexible display 104 wraps around and bypasses the rotor.
It should be appreciated that the more conventional "J-shape" is generally defined when the blade assembly 102 is transitioned to the extended position 200 of fig. 14. Depending on the length of the blade assembly 102 and flexible display 104, and the amount by which the blade assembly 102 can be slid around the rotor in conjunction with the translation mechanism, the J-shape can also be converted to other shapes, including a U-shape, wherein the upper and lower portions of the blade assembly 102 and/or flexible display 104 are substantially symmetrical. This U-shape is formed when the blade assembly is in the peeping position, and is substantially formed in the retracted position (300). In other embodiments, depending on the configuration, the blade assembly 102 may even transition to an inverted J-shape, wherein an upper portion of the blade assembly 102 and/or flexible display 104 is shorter than a lower portion of the blade assembly 102 and/or flexible display 104, and so forth.
In one or more embodiments, the translator and rotor of the translation mechanism not only facilitate the "extension" of the flexible display 104 that occurs during an extend or "lift" operation, but also serve to improve the reliability and usability of the flexible display 104. This is true because the rotor defines a service ring 1304 in the curved section 1301, the service ring 1304 having a relatively large radius compared to the minimum bend radius of the flexible display 104. The service ring 1304 prevents the flexible display 104 from being damaged or developing memory in the flexed state when the flexible display 104 is deformed around the curved section 1301 by the rotor in the extended position 200, the retracted position 300, and the peeping position (500).
With such a mechanical assembly, the flexible display 104 retains a flat upper portion of the J-shape, defined by the first linear section 1302, when sliding. In addition, the flexible display 104 is tightly wrapped around the rotor, with the lower portion of the J-shape, defined by the second linear section 1303, also remaining flat against the lower surface of the device housing. The combination of the blade assembly 102, which is rigidly fixed to the translation mechanism, and the tensioner prevents the flexible display 104 from wrinkling or bunching as it slides around the device housing between the extended position 200, the retracted position 300, and the peeping position (500). This rigid coupling, in combination with the moving tensioner, ensures that the flexible display 104 translates linearly and lies flat across the first major surface of the electronic device, around the rotor positioned at the small surface of the device housing, and across the second major surface of the electronic device.
In one or more embodiments, additional support members may be attached to the blade assembly 102 to provide additional support to the flexible display 104, facilitate translation of the blade assembly 102 about the device housing, or a combination thereof.
As described above, in one or more embodiments, the blade assembly 102 is coupled to the flexible display 104. In contrast to slider devices that include multiple device housings, embodiments of the present disclosure provide an electronic device with a sliding display that includes only a single device housing. The blade assembly 102 is configured as a mechanical chassis that allows the flexible display 104 to translate along a translation surface defined by the major and minor surfaces of the single device housing.
In one or more embodiments, the blade assembly 102 also provides mechanical support for the portion of the flexible display 104 that protrudes beyond the top edge of the single device housing when the blade assembly 102 and flexible display 104 are in the extended position. The blade assembly 102 may include a unitary blade substrate (125), but the blade substrate (125) defines both a flexible portion and a rigid portion. The blade substrate (125) may include a silicone bezel 127 that surrounds and protects the edges of the flexible display 104.
The low friction dynamic bending lamination stack (128) and the blade (126) may be positioned between the blade assembly 102 and the translation surface defined by the single device housing (101) to which the blade assembly 102 is attached.
The blade (126) supports the blade assembly 102 and the portion of the flexible display 104 that protrudes beyond the top edge of the device housing when the blade assembly 102 transitions to the extended position. Since the blade (126) needs to be rigid to support those portions of the blade assembly 102 and the flexible display 104, it cannot bend around the flexible portion of the blade substrate (125) of the blade assembly 102. To prevent gaps or steps from occurring where the blade (126) terminates, in one or more embodiments, the low friction dynamic bending lamination stack (128) spans the remainder of the blade assembly 102 and abuts the translation surface defined by the single device housing.
In one or more embodiments, the blade (126) includes a steel layer. In one or more embodiments, the thickness of the blade (126) is greater than the thickness of the blade substrate (125) of the blade assembly 102 or the flexible substrate (805) of the flexible display 104. For example, in one or more embodiments, the blade (126) includes a steel layer having a thickness of five hundred micrometers or 0.5 millimeters.
In one or more embodiments, the blade (126) includes a rigid, substantially planar support layer. For example, in one or more embodiments, the blade (126) may be made of aluminum, steel, or stainless steel. In another embodiment, the blade (126) is made of a rigid thermoplastic sheet. Other materials may also be used to fabricate the blade (126). For example, nitinol may also be used to fabricate the blade (126).
In one or more embodiments, the blade (126) is the hardest layer in the overall assembly. In one or more embodiments, the blade (126) is made of stainless steel having a thickness of about five hundred microns. In another embodiment, the blade (126) is made of carbon fiber. Other materials from which the blade (126) can be made will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In one or more embodiments, the low friction dynamic bending lamination stack (128) includes a plurality of layers. When assembled, the low friction dynamic bending laminate stack (128) adds a layer to the blade assembly 102 that improves the lubricity of the entire assembly to allow smooth movement of the blade assembly 102 and flexible display 104 across the translating surface of the device housing. Additionally, when abutted against the blade (126), the low friction dynamic bending lamination stack (128) prevents features on other layers of the assembly from reducing the ability of the blade assembly 102 and flexible display 104 to translate across those translating surfaces.
In one or more embodiments, the low friction dynamic bending lamination stack (128) allows for "low friction" sliding across static surfaces and the ability to circulate bending and/or rolling around the rotor. In one or more embodiments, a low friction dynamic bending lamination stack (128) engages and abuts the blade (126) to improve lubricity.
In one or more embodiments, the uppermost layer of the low friction dynamic bending lamination stack (128) is a pressure sensitive adhesive layer. The pressure sensitive adhesive layer allows the low friction dynamic bending lamination stack (128) to adhere to the underside of the blade assembly 102.
In one or more embodiments, below the pressure sensitive adhesive layer is a strain resistant foam layer. Examples of strain resistant foams suitable for use as the strain resistant foam layer include silicone, low density polyethylene, or other materials that provide sufficient thickness to allow the low friction dynamic bending lamination stack (128) to match the thickness of the blade (126) while reducing internal stress and allowing bending.
In one or more embodiments, below the strain resistant foam layer is another pressure sensitive adhesive layer. The pressure sensitive adhesive layer is coupled to a flexible substrate having a strain relief cut pattern formed therein. The flexible substrate may be made of metal or plastic or other materials. For example, in one or more embodiments, the flexible substrate includes a steel layer having a thickness of about thirty microns. While thin flexible steel works well in practice, it will be apparent to those of ordinary skill in the art having the benefit of this disclosure that other materials may be used for the flexible substrate. For example, in another embodiment, the flexible substrate is fabricated from a thin layer of thermoplastic material.
In one or more embodiments, another pressure sensitive adhesive layer then couples the flexible substrate to the low friction layer. In one or more embodiments, the low friction layer includes a substrate to which Teflon™ is attached. In another embodiment, the low friction layer comprises a layer of polytetrafluoroethylene, a synthetic fluoropolymer of tetrafluoroethylene. Such materials are known for their non-stick properties and add lubricity to the low friction dynamic bending lamination stack (128), allowing the entire assembly to slide smoothly. Furthermore, the low friction layer prevents the strain relief cut pattern in the flexible substrate from catching on surface imperfections and transitions where components attach to the device housing. In short, the low friction layer greatly improves the lubricity of the overall assembly.
Fig. 15-20 illustrate the fully assembled electronic device 100 of fig. 1 in the extended position 200 and the retracted position 300. Embodiments of the present disclosure contemplate that electronic devices constructed in accordance with embodiments of the present disclosure have distinctly unique ornamental features in addition to distinctly unique utilitarian features. Many of these ornamental features are visible in fig. 15-20.
Fig. 15 shows a front elevation view of the electronic device 100 in the extended position 200, while fig. 16 shows a side elevation view of the electronic device 100 in the extended position 200. Fig. 17 then also provides a rear elevational view of the electronic device 100 in the extended position 200.
Fig. 18 shows a front elevation view of electronic device 100 in retracted position 300, while fig. 19 shows a side elevation view of electronic device 100 in retracted position 300. Fig. 20 then also provides a rear elevation view of electronic device 100 in retracted position 300.
As can be seen by comparing these figures, the blade assembly 102 is capable of sliding around the single device housing 101 such that the blade 126 slides off the single device housing 101, thereby changing the apparent overall length of the flexible display 104 as viewed from the front of the electronic device 100. The blade assembly 102 may also be slid around the single device housing 101 in the opposite direction to the retracted position 300, in which a similar amount of the flexible display 104 is visible on the front side of the electronic device 100 and on the back side of the electronic device 100. Graphics, images, user actuation targets, and other indicia may be presented anywhere on the flexible display 104, including on the front side of the electronic device 100, the back side of the electronic device 100, or the lower end of the electronic device 100.
To date, while much attention has been paid to the unique translation of the blade assembly and flexible display between the extended and retracted positions, another truly unique feature provided by embodiments of the present disclosure occurs when the blade assembly and flexible display transition to the peeping position. Turning now to fig. 21-22, the electronic device 100 is shown in the peeping position 500.
As shown in fig. 21, in one or more embodiments, when the blade assembly 102 and flexible display 104 transition to the peeping position 500, the backpack 1201 moves beyond the retracted position (300) toward the rearward facing image capture device 108. When this occurs, the upper edge 2101 of the blade assembly 102 moves below the upper edge 2102 of the single device housing 101. In one or more embodiments, this reveals a forward-facing image capture device 501 or imager, wherein the forward-facing image capture device 501 or imager is located below the blade assembly 102 when the blade assembly 102 is in the retracted position (300).
In one or more embodiments, translation of the blade assembly 102 and flexible display 104 to the peeping position 500 occurs automatically. For example, in one or more embodiments, when the forward-facing image capture device 501 is actuated, the one or more processors (114) of the electronic device 100 translate the blade assembly 102 to the peeping position 500, revealing the image capture device 501. As shown in fig. 21-22, transitioning to the peeping position 500 also reveals a speaker 502. Once the image capture operation with the image capture device 501 is complete, the one or more processors (114) may cause the blade assembly 102 to transition back to the retracted position, again covering and concealing the image capture device 501.
In other embodiments, the transition to the peeping position 500 is initiated manually by actuation of a button or other user interface control. For example, a single press of button 2103 may transition the blade assembly 102 to the extended position (200), while a double press of button 2103 returns the blade assembly 102 to the retracted position (300). A long press of button 2103 may cause the blade assembly 102 to transition to the peeping position 500 of fig. 5, and so on. Other button operation schemes will be apparent to those of ordinary skill in the art having the benefit of this disclosure. In other embodiments, user input in the form of a slide gesture delivered to the flexible display 104 may also be used to cause a transition to the peeping position.
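By way of illustration only, the button operation scheme described above reduces to a simple dispatch table. The Python sketch below is a hypothetical rendering; the enumerations and the mapping are assumptions, not disclosed implementation details.

```python
from enum import Enum, auto

class Press(Enum):
    SINGLE = auto()
    DOUBLE = auto()
    LONG = auto()

class BladePosition(Enum):
    EXTENDED = auto()   # (200)
    RETRACTED = auto()  # (300)
    PEEPING = auto()    # (500)

# Illustrative rendering of the scheme described above for button 2103.
BUTTON_SCHEME = {
    Press.SINGLE: BladePosition.EXTENDED,   # single press extends
    Press.DOUBLE: BladePosition.RETRACTED,  # double press retracts
    Press.LONG: BladePosition.PEEPING,      # long press reveals the imager
}

def on_button_press(press: Press) -> BladePosition:
    """Return the target blade position for a detected press type."""
    return BUTTON_SCHEME[press]
```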
By positioning the forward image capture device 501 under the blade assembly 102 and its corresponding opaque blade (126) during normal operation, embodiments of the present disclosure provide privacy guarantees to a user of the electronic device 100. In other words, by positioning the image capture device 501 below the blade assembly 102 and the flexible display 104 when in the retracted (300) or extended (200) position, the user of the electronic device 100 is mechanically assured of privacy due to the fact that it is physically impossible for the image capture device 501 to perform image capture operations through the opaque blade (126) of the blade assembly 102.
Thus, even if the electronic device 100 is accessed by a hacker or other malicious party, the user may be assured that the image capture device 501 cannot capture images or video when the blade assembly 102 and flexible display 104 are in the retracted position (300), the extended position (200), or a position therebetween. Only when the blade assembly 102 and flexible display 104 are transitioned to the peeping position 500 to reveal the image capturing device 501, the image capturing device 501 can capture a forward-facing image or forward-facing video.
Various hardware components having been described, attention is now directed to methods of using an electronic device in accordance with one or more embodiments of the present disclosure, operational steps performed by an electronic device in accordance with one or more embodiments of the present disclosure, and advantages, features, and benefits provided by an electronic device constructed in accordance with embodiments of the present disclosure. Attention is directed in particular to a method for using the electronic device described above, and more particularly to automatically moving the flexible display 104 and blade assembly 102 in response to an imager invocation request from a document scanning application and the orientation of the device, in accordance with one or more embodiments of the present disclosure.
Turning now to fig. 23, one illustrative method 2300 in accordance with one or more embodiments of the present disclosure is shown. The method 2300 of fig. 23 is intended for an electronic device having: an equipment housing; a blade assembly carrying a blade and a flexible display, wherein the blade assembly is slidably coupled to the device housing; a translation mechanism operable to slide the blade assembly relative to the device housing between at least an extended position, a retracted position, and a peeping position; and one or more processors capable of cooperating with the translation mechanism.
At step 2301, method 2300 determines that the electronic device has an image capture device hidden under a blade assembly carrying a blade, the blade assembly being slidably coupled to the device housing and operable to slidably transition between an extended position in which the blade extends beyond an edge of the device housing, a retracted position in which a major surface of the blade abuts a major surface of the device housing, and a peeping position in which the blade reveals the forward-facing imager. At step 2302, method 2300 determines a trigger that causes the image capture device to be invoked.
Decision 2303 determines whether the image capture device call is a regular call or a service mode call. Examples of service mode invocations include instances when the document scanning application 2309 requires scanning a document using an image capture device. Other service mode invocations will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
If the image capture device invocation is a regular invocation, e.g., a user-initiated actuation 2312 of the image capture device, embodiments of the present disclosure assume that the user is aware of which major surface of the device housing of the electronic device is oriented toward the subject of the image capture operation. Thus, step 2308 actuates the image capture device identified in the regular invocation.
In contrast, if the image capture device invocation is a service mode invocation made by the document scanning application 2309, step 2304 uses one or more sensors of the electronic device to determine whether the forward imager of the electronic device is oriented toward the document. This is because, assuming the method 2300 of fig. 23 is being used with the electronic device (100) of fig. 1, there are two basic modes of operation for the forward imager. The first is that the forward imager is exposed 2313 when the blade assembly is in the peeping position. The other is when the blade assembly is in any other position, whether in the retracted position, the extended position, or a position between the extended position and the retracted position, in which the rearward imager is exposed 2314 but the forward imager is hidden.
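By way of illustration only, the dispatch performed at decision 2303, together with the sensor determination of step 2304, may be sketched in software as follows. All identifiers are hypothetical assumptions; the sketch merely restates the described flow and is not a disclosed implementation.

```python
from enum import Enum, auto

class CallType(Enum):
    REGULAR = auto()       # user-initiated actuation 2312
    SERVICE_MODE = auto()  # e.g., the document scanning application 2309

def handle_imager_call(call_type: CallType,
                       requested_imager: str,
                       forward_imager_faces_document: bool) -> str:
    """Sketch of decision 2303 and steps 2304, 2306, 2307, and 2308.

    Regular calls actuate the requested imager directly, since the user
    is assumed to know which surface faces the subject. Service mode
    calls rely on sensor input to decide between the hidden forward
    imager (requiring the peeping position) and the always-exposed
    rearward imager.
    """
    if call_type is CallType.REGULAR:
        return f"actuate {requested_imager}"  # step 2308
    if forward_imager_faces_document:
        # step 2307: translate blade assembly to the peeping position
        return "translate to peeping position; actuate forward imager"
    return "actuate rearward imager"          # step 2306
```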
Step 2304 may occur in a variety of ways. Turning briefly to fig. 24, some techniques are illustrated, many of which will be described in more detail below with reference to fig. 25-28. For example, grip detection 2401 may be used to determine how a user holds the electronic device. If the user is a right-handed user, and if the grip sensor determines that the user's thumb is on the right side of the device and the fingers are on the left side, this may indicate, for example, that the user is holding the electronic device with the right hand, and that the rearward portion of the flexible display is facing the document.
Face detection 2402 using the backward imager may also be used to determine which side of the electronic device the document is facing. If, for example, the backward imager, which is always exposed regardless of the blade assembly position, captures an image depicting the face of the user, it is clear that the user is facing the back side of the electronic device. This may mean that the forward imager is facing the document. In contrast, if the backward imager captures an image in which the document is depicted, the probability that the user is facing the front side of the electronic device is high, and so on.
Finger touch position 2403 may be used to determine how the user is holding the electronic device. An example of this will be described below with reference to fig. 26. A comparison 2405 of the side touch portions may also be used, as will be described below with reference to fig. 27.
Comparison of the coverage 2404 of the forward and rearward portions of the flexible display may also be used to determine which side of the electronic device is facing the document. If, for example, the rearward surface of the flexible display is substantially covered and the forward portion of the flexible display is substantially uncovered, this may be an indication that the palm of the user is wrapped around the electronic device and supporting the rear surface. This may be an indication that the document does face the forward portion of the flexible display.
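A minimal sketch of this coverage comparison, assuming coverage fractions already computed from the touch sensor data, might look as follows; the half-coverage threshold is an assumption rather than a disclosed parameter.

```python
def document_faces_front(rear_covered: float, front_covered: float,
                         threshold: float = 0.5) -> bool:
    """Coverage comparison 2404, sketched with assumed inputs in [0, 1].

    A substantially covered rearward display portion paired with a
    substantially uncovered forward portion suggests a palm wrapped
    around the back of the device, i.e., the document faces the
    forward portion of the flexible display.
    """
    return rear_covered >= threshold and front_covered < threshold
```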
The list of techniques for determining whether a user is facing the front side of an electronic device or the back side of an electronic device shown in fig. 24 is merely illustrative. Other techniques will be described below with reference to fig. 25 to 28. Still other techniques will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
Turning back now to fig. 23, method 2300 advantageously automatically and proactively moves the translating display to an optimal state based on the sensed trigger detected at step 2302, rather than requiring the user to manually select whether the display should transition to the peeping position, such as by making a "long press" of a button to cause the translating display to transition to the peeping position. Decision 2305 then determines whether the peeping position is required. In one or more embodiments, this decision 2305 determines that the peeping position is required when a document scanning application operating on one or more processors of the electronic device invokes an image capture operation and one or more sensors of the electronic device determine that the forward imager of the electronic device is oriented toward the document.
If decision 2305 determines from the service mode call and the device orientation of the imager that a peep position is required, the one or more processors cause the translation mechanism to transition the blade assembly to the peep position at step 2307. In one or more embodiments, step 2307 further includes actuating the forward imager.
In contrast, if decision 2305 determines from the service mode call and the device orientation of the imager that the backward imager should be used, step 2306 includes actuating the backward imager. As described above, in one or more embodiments, the rearward imager is always exposed regardless of whether the blade assembly is in the retracted, extended, or peeping position. Thus, in one or more embodiments, step 2306 also includes omitting translation of the blade assembly around the device housing, as no translation is required to actuate the backward imager.
In other embodiments, step 2306 may include translating the blade assembly toward the extended position in addition to actuating the backward imager. For example, in one or more embodiments, the one or more sensors of the electronic device may detect at step 2304 that the orientation of the electronic device has transitioned to a landscape orientation while a foreground application operating on the one or more processors enters a full-screen, immersive mode for capturing images with the image capture device. In response, step 2306 transitions the translating display to the extended position.
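This condition reduces to a simple conjunction, sketched below with hypothetical parameter names for illustration only.

```python
def should_transition_to_extended(landscape: bool,
                                  fullscreen_immersive: bool,
                                  capturing_images: bool) -> bool:
    """Sketch of the sensed condition described above for step 2306:
    extend the translating display when the device is in a landscape
    orientation and the foreground application enters a full-screen,
    immersive image-capture mode."""
    return landscape and fullscreen_immersive and capturing_images
```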
In other embodiments, the one or more processors may cause, at step 2306, the translating display to transition to the extended position when the user opens the input method editor, for example, to create a social media post containing an image captured by the backward imager.
Similarly, when the decision 2305 determines that the operating condition would benefit from a peeping position, such as when the user desires to scan a document using a forward-facing imager, the one or more processors of the electronic device may cause the blade assembly and flexible display to transition to the peeping position at step 2307. For example, if an imager call request is received from a document scanning application, as detected at step 2302, step 2307 may include one or more processors of the electronic device automatically transitioning the blade assembly and flexible display to a peep position, and so on.
Turning now to fig. 28, some of the operating conditions that would benefit from either the peeping position or the extended position, as determined at decision 2504, are shown. Beginning with the extended position, in one or more embodiments, when the one or more sensors detect that an application operating on the one or more processors of the electronic device satisfies the predefined criteria 2801 by entering a full-screen, immersive mode while the electronic device is oriented in a landscape mode, decision 2504 will determine that the electronic device may benefit from being in the extended position.
While fig. 23 shows several illustrative service mode invocations that would benefit from use of a forward imager, embodiments of the present disclosure are not so limited. With specific reference to the peeping position, in one or more embodiments, when one or more sensors detect a condition requiring an earpiece speaker, decision 2305 will determine that the electronic device may also benefit from being in the peeping position. If a situation arises requiring another component that is positioned under the blade assembly when the blade assembly is in the retracted position, decision 2305 will likewise determine that the electronic device may benefit from being in the peeping position. These conditions are merely examples, as other conditions will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
Having now understood the general approach, attention is directed to more specific method examples that more particularly illustrate and describe many of the benefits provided by embodiments of the present disclosure. Turning first to fig. 25, one illustrative method 2500 for using the electronic device 100 of fig. 1 is shown for quickly, easily, and simply ensuring that the blade assembly 102 transitions to the peeping position when the forward-facing imager 501 is desired. Method 2500 of fig. 25 shows one illustrative electronic device 100 performing one or more illustrative operational steps in accordance with one or more embodiments of the present disclosure. Other methods will be described hereinafter with reference to fig. 26-28.
Beginning at step 2501, a user 2520 is holding the electronic device 100 described above with reference to fig. 1. In one or more embodiments, the electronic device 100 includes a device housing 101, a forward imager 501 (shown at step 2506), one or more sensors (120, 121), and one or more processors (114). The electronic device 100 also includes a blade assembly 102 that carries a blade (126). The blade assembly 102 is slidably coupled to the device housing 101 and is operable to slidably transition between the retracted position 300 of step 2501, the extended position (200) shown above in fig. 2, and the peeping position 500 shown at step 2506. In one or more embodiments, the extended position (200) occurs when the blade (126) extends beyond an edge of the device housing 101. The retracted position 300 occurs when a major surface of the blade abuts a major surface of the device housing 101. As shown at step 2506, the peeping position 500 reveals the forward imager 501.
At step 2501, a document scanning application 2507 is operating on the one or more processors (114) of the electronic device 100. As shown at step 2501, the document scanning application 2507 initiates a service mode call 2508 of the imager by prompting the user 2520 as to whether they wish to scan a document. This is a service mode call 2508 because, if the user 2520 wants to scan a document, it will be necessary for an image capture device to capture an image of the document. As shown at step 2501, the user 2520 confirms the service mode call 2508 by interacting with the prompt to initiate a scanning operation.
According to an automated embodiment of the present disclosure, one or more sensors (120, 121) of the electronic device 100 must now determine whether to invoke the forward imager 501 or the backward imager 108. As shown at step 2502, the electronic device 100 further includes at least one backward imager 108. As previously described, the rearward imager 108 is exposed regardless of whether the blade assembly 102 is in the retracted position 300, the extended position (200), or the peeping position 500.
As described above, the one or more sensors (120, 121) of the electronic device 100 may determine whether the orientation of the forward imager 501 is toward the document in a variety of ways. These include grip detection, touch detection, display coverage and image analysis, as described above with reference to fig. 24. It is important to note that these techniques may be used in any combination. For simplicity, method 2500 of fig. 25 uses image analysis.
Specifically, at step 2502, the one or more processors (114) of the electronic device cause the backward imager 108 to capture at least one image 2509 in response to the document scanning application 2507 operating on the one or more processors (114) invoking an image capture operation. Once the at least one image 2509 has been captured, the one or more processors (114) perform image analysis on the image at step 2503. Decision 2504 then determines whether the at least one image 2509 matches a predefined criteria. If the at least one image 2509 matches the predefined criteria, as determined at decision 2504, in one or more embodiments this means that the orientation of the backward imager 108 is toward the document. Thus, as shown at step 2505, the one or more processors (114) of the electronic device 100 scan the document using the backward imager 108, thereby presenting an image of the scanned document on a backward portion of the flexible display 104. Otherwise, the one or more processors (114) cause the blade assembly 102 to automatically transition to the peeping position 500 to reveal the forward-facing imager 501. The forward imager 501 may then capture an image of the document for use in the scanning operation at step 2506.
In one or more embodiments, the predefined criteria includes the at least one image 2509 depicting a document within the field of view of the backward imager 108. In this case, as determined at decision 2504, method 2500 moves to step 2505 where the backward imager 108 may capture an image of the document shown.
In one or more embodiments, step 2503 determines whether a document at close range is depicted in the at least one image 2509. If so, the one or more processors (114) cause the backward imager 108 to capture an image of the document at step 2505. In contrast, if the one or more processors (114) detect a face depicted in the at least one image 2509 at step 2503, the one or more processors (114) cause the blade assembly 102 to move to the peeping position 500 at step 2506, revealing the forward-facing imager 501 and allowing the forward-facing imager 501 to capture the document, because the electronic device 100 is likely located between the user 2520 and the document.
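By way of illustration only, the selection logic of steps 2503 through 2506 may be sketched as follows; the enumeration of image-analysis outcomes is a simplifying assumption standing in for whatever detector the one or more processors (114) actually employ.

```python
from enum import Enum, auto

class RearViewFinding(Enum):
    DOCUMENT_AT_CLOSE_RANGE = auto()  # a scannable document is depicted
    FACE = auto()                     # the user's face is depicted
    OTHER = auto()                    # hands, fingers, inanimate objects

def choose_scanning_imager(finding: RearViewFinding) -> str:
    """Sketch of decision 2504 as applied in method 2500.

    A close-range document in the rearward imager's view selects the
    rearward imager (step 2505); a face, or anything else suggesting
    the user is behind the device, selects the forward imager via the
    peeping position (step 2506).
    """
    if finding is RearViewFinding.DOCUMENT_AT_CLOSE_RANGE:
        return "scan with rearward imager"  # step 2505
    # FACE and OTHER both imply the document faces the front side.
    return "transition to peeping position; scan with forward imager"  # step 2506
```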
Step 2506 may even confirm that the document does face the front surface of the electronic device 100. For example, the one or more processors (114) may cause the forward imager 501 to capture at least one image. If the image captured by the forward imager 501 located on the front major surface of the electronic device 100 depicts a document, in one or more embodiments, the one or more processors (114) confirm that the peeping position 500 is indeed required.
The one or more processors (114) may also determine by other means whether the document is facing the front side of the electronic device 100 shown at step 2506 or the back side of the electronic device 100 shown at step 2505. For example, if the at least one image 2509 captured at step 2502 depicts only non-document inanimate objects, such as streets or distant automobiles, as determined at step 2503, then in one or more embodiments the one or more processors (114) conclude that the document is facing the front side of the electronic device 100 and thus transition the blade assembly 102 to the peeping position 500 at step 2506. Similarly, if only a human hand (rather than a face) is depicted in the at least one image 2509 captured at step 2502, the one or more processors (114) conclude that the document is facing the front side of the electronic device 100 and thus transition the blade assembly 102 to the peeping position 500 at step 2506.
Embodiments of the present disclosure contemplate that, if a document is depicted in the at least one image 2509 captured at step 2502, the size of the detected document may also be important. For example, in one or more embodiments, the one or more processors (114) may perform image analysis on the at least one image 2509 (and in particular on the size of the document depicted in the at least one image 2509) at step 2503 to determine whether the document is reasonably within the field of view of the image capture device such that a scanning operation may be performed. Such a mechanism prevents remote documents, such as posters, billboards, and wall graffiti, that merely happen to be within the field of view of the backward imager 108 from being confused with the document to be scanned.
After performing image analysis on at least one image 2509 at step 2503, when the one or more processors (114) detect a document in the image, the one or more processors (114) scan using the backward imager 108 at step 2505. In contrast, in the event that no document is detected as being depicted in the image, the one or more processors (114) conclude that the document is facing the front side of the electronic device 100 and thus transition the blade assembly 102 to the peeping position 500 for scanning using the forward-facing imager 501 at step 2506.
At step 2502, in response to the one or more processors (114) identifying a predefined event, such as the document scanning application initiating a service mode call of the image capture device, the one or more processors (114) cause the backward imager 108 to capture at least one image 2509. At step 2503, the one or more processors (114) of the electronic device 100 perform image analysis operations on the at least one image 2509. Decision 2504 then determines whether the at least one image 2509 matches predefined criteria. When the at least one image 2509 matches a first predefined criteria, as determined at decision 2504, the one or more processors (114) of the electronic device 100 use the backward imager 108 for the scanning operation invoked by the document scanning application (2309) and cause content to be presented on a backward portion of the electronic device 100 at step 2505. In contrast, when the at least one image 2509 matches a second predefined criteria, as determined at decision 2504, the one or more processors (114) of the electronic device 100 transition the blade assembly to the peeping position 500 and capture an image for the scanning operation using the forward-facing imager 501 while causing content to be presented on a forward-facing portion of the flexible display 104.
The predefined criteria that the at least one image 2509 must meet may vary. Turning briefly to fig. 28, several example criteria suitable for use in the determination at decision 2504 are illustrated. The criteria set forth in fig. 28 may be used for either the first predefined criteria or the second predefined criteria. Furthermore, the example criteria shown in fig. 28 are merely illustrative and are not intended to form an exhaustive list. Many other criteria suitable for use in the determination at decision 2504 will be apparent to those of ordinary skill in the art having the benefit of this disclosure.
In one or more embodiments, the predefined criteria 2801 includes whether a face is depicted in the at least one image (2509). In one or more embodiments, when a face is depicted in the at least one image (2509), the one or more processors (114) of the electronic device 100 use the forward-facing imager (501) after transitioning the blade assembly (102) to the peeping position (500) in response to a service mode call of the image capture device. In contrast, when no face is depicted in the at least one image (2509) but a document is depicted, the one or more processors (114) perform a scanning operation using the backward imager (108).
In another embodiment, the predefined criteria 2802 includes whether the at least one image (2509) depicts that an authorized user of the electronic device (100) is staring at a rear major surface of the electronic device (100). In one or more embodiments, when the at least one image (2509) depicts an authorized user staring at a rear major surface of the electronic device (100), the one or more processors (114) of the electronic device (100) cause the blade assembly (102) to transition to the peep position (500) using the forward-facing imager (501) for service mode invocation of the image capture device. In contrast, when the at least one image (2509) fails to depict an authorized user gazing at the rear major surface, in one or more embodiments, the one or more processors (114) of the electronic device (100) use the backward imager (108) in response to a service mode call of the image capture device.
In another embodiment, the predefined criteria 2803 includes that the at least one image (2509) depicts a document having a size that exceeds a predefined image area threshold. Embodiments of the present disclosure contemplate that the size of the document may be important when the at least one image (2509) depicts the document. A remote document is likely not the document being scanned using the electronic device (100). In contrast, a document filling the frame of the at least one image (2509) is likely the document being scanned using the electronic device (100).
Thus, in one or more embodiments, predefined criteria 2803 includes that the depicted document exceeds a predefined image area threshold. The predefined image area threshold will vary according to the field of view of the backward imager (108), but in one or more embodiments the predefined image area threshold is at least twenty-five percent of the image. Thus, in one or more embodiments, the one or more processors (114) of the electronic device 100 use the backward imager (108) in response to a service mode call of the image capture device when the at least one image (2509) depicts a document having a size that exceeds a predefined image area threshold. In contrast, when the at least one image (2509) fails to depict a document having a size that exceeds a predefined image area threshold, in one or more embodiments, the one or more processors (114) of the electronic device 100 cause the blade assembly (102) to transition to the peep position (500) using the forward-facing imager (501) for service mode invocation of the image capture device.
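Assuming the depicted document is approximated by a bounding box, the predefined image area threshold test of criteria 2803 may be sketched as follows; the bounding-box approximation and the parameter names are assumptions introduced for illustration.

```python
def document_exceeds_area_threshold(doc_w: int, doc_h: int,
                                    frame_w: int, frame_h: int,
                                    threshold: float = 0.25) -> bool:
    """Return True when the depicted document occupies at least the
    predefined fraction (here, twenty-five percent) of the frame."""
    if frame_w <= 0 or frame_h <= 0:
        raise ValueError("frame dimensions must be positive")
    return (doc_w * doc_h) / (frame_w * frame_h) >= threshold
```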
In yet another embodiment, the predefined criteria 2804 includes the at least one image (2509) failing to depict a document, optionally within a predefined distance from the electronic device (100). In one or more embodiments, the one or more processors (114) of the electronic device 100 use the backward imager (108) in response to a service mode call of the image capture device when a document is depicted in the at least one image (2509). In contrast, when no document is depicted in the at least one image (2509), in one or more embodiments, the one or more processors (114) of the electronic device 100 cause the blade assembly (102) to transition to the peeping position (500) so that the forward-facing imager (501) is used for the service mode call of the image capture device.
In one or more embodiments, the predefined criteria 2805 includes whether a human hand is depicted in the at least one image (2509). In one or more embodiments, when a human hand is depicted in the at least one image (2509), this is an indication that the backward imager is being covered (or partially covered) by the user (2520) holding the electronic device (100) while the document is facing the other side of the flexible display (104), i.e., the forward portion of the flexible display (104). Thus, the one or more processors (114) of the electronic device 100 cause the blade assembly (102) to transition to the peeping position (500) so that the forward imager (501) is used for the service mode call of the image capture device. In contrast, when no human hand is depicted in the at least one image (2509), in one or more embodiments, the one or more processors (114) of the electronic device 100 use the backward imager (108) in response to a service mode call of the image capture device.
In one or more embodiments, the predefined criteria 2806 includes whether a finger is depicted in the at least one image (2509). In one or more embodiments, when a finger is depicted in the at least one image (2509), this indicates that the backward imager (108) is covered (or partially covered) by a user (2520) holding the electronic device (100) while staring at the other side of the flexible display (104). Thus, the one or more processors (114) of the electronic device 100 cause the blade assembly (102) to transition to the peep position (500) using the forward imager (501) for service mode invocation of the image capture device. In contrast, when a finger is not depicted in the at least one image (2509), in one or more embodiments, the one or more processors (114) of the electronic device 100 use the backward imager (108) in response to a service mode call of the image capture device.
In one or more embodiments, the predefined criteria 2807 includes whether a non-document inanimate object, such as an automobile, boat, street lamp, house, or other inanimate object, is depicted in the at least one image (2509). In one or more embodiments, when a non-document inanimate object is depicted in the at least one image (2509), this indicates that the backward imager (108) is being oriented away from the document. Thus, the one or more processors (114) of the electronic device 100 cause the blade assembly (102) to transition to the peep position (500) using the forward imager (501) for service mode invocation of the image capture device. In contrast, when non-document inanimate objects are not depicted in the at least one image (2509), in one or more embodiments, the one or more processors (114) of the electronic device 100 use the backward imager (108) in response to a service mode call of the image capture device.
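The criteria 2801 through 2807 may be consolidated into a single selection routine. The sketch below is one possible consolidation, offered under the assumption that any cue indicating the backward imager (108) faces the user, or faces away from the document, selects the forward imager (501); the field names are illustrative, not disclosed identifiers.

```python
from dataclasses import dataclass

@dataclass
class RearFrameFindings:
    """Hypothetical image-analysis output covering criteria 2801-2807."""
    face_depicted: bool = False           # criteria 2801
    authorized_user_gazing: bool = False  # criteria 2802
    document_area_fraction: float = 0.0   # criteria 2803
    hand_depicted: bool = False           # criteria 2805
    finger_depicted: bool = False         # criteria 2806
    non_document_object: bool = False     # criteria 2807

def select_forward_imager(f: RearFrameFindings,
                          area_threshold: float = 0.25) -> bool:
    """Return True to transition to the peeping position and use the
    forward imager; False to use the always-exposed backward imager."""
    if f.document_area_fraction >= area_threshold:
        return False  # criteria 2803/2804: scan with the backward imager
    return (f.face_depicted or f.authorized_user_gazing or f.hand_depicted
            or f.finger_depicted or f.non_document_object)
```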
Turning back now to fig. 25, in addition to determining whether the at least one image 2509 matches one or more criteria, decision 2504 may also determine whether one or more conditions are met when the at least one image 2509 is captured. In one or more embodiments, when decision 2504 determines that a first condition is met, the one or more processors (114) may cause the blade assembly 102 to transition to the peeping position (500) so that the forward-facing imager 501 is used for the service mode call of the image capture device, or may alternatively use the backward imager 108 for the service mode call, depending on whether the at least one image 2509 matches the first or the second predefined criteria. However, in the event that decision 2504 determines that the first condition is not met, or alternatively that a second condition is met, the one or more processors (114) of the electronic device 100 may exclude use of either the forward imager 501 or the backward imager 108, even if an imager invocation request is received.
Thus, the method 2500 of fig. 25 includes step 2501, in which the one or more processors (114) of the electronic device 100 detect that an application operating on the one or more processors (114) is requesting service mode use of the imager. In this illustrative example, the document scanning application 2507 is requesting invocation of an image capture device for service mode use to scan a document.
One or more sensors (120, 121) of the electronic device 100 then determine whether the front side of the electronic device 100, shown at step 2501, or the back side of the electronic device 100, shown at step 2502, is facing the document. In the illustrative embodiment, the one or more sensors (120, 121) of the electronic device 100 determine whether the front side of the device housing 101 of the electronic device 100 is facing the user 2520 of the electronic device 100 by causing the back-facing imager 108 of the electronic device 100 to capture at least one image 2509 at step 2502.
The one or more processors (114) of the electronic device 100 then perform image analysis on the at least one image 2509 at step 2503. In one or more embodiments, when the at least one image 2509 depicts one or more of a finger, a human hand, or a non-document inanimate object, the one or more processors (114) determine that the front side of the device housing 101 is facing the document at decision 2504. In contrast, when the at least one image 2509 depicts the document itself, the one or more processors (114) may determine that the backside of the device housing 101 is facing the document at decision 2504.
When the former occurs, the one or more processors (114) may cause the translation mechanism to translate the blade assembly 102 to the peeping position 500 that reveals the forward-facing imager 501, which is otherwise covered by the blade assembly 102 when in the retracted position. The one or more processors (114) may also actuate the forward imager 501 for service mode use by the application after the blade assembly 102 is in the peeping position 500 at step 2506. In contrast, when the back side of the device housing 101 is facing the document, step 2505 may include actuating the backward imager 108 for service mode use by the application. In one or more embodiments, step 2505 further includes causing the translation mechanism to omit translation of the blade assembly 102 prior to actuating the backward imager 108, because the backward imager 108 is exposed wherever the blade assembly 102 is positioned.
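A minimal sketch of this method-2500 flow, under the assumption of simple helper objects (none of these names come from a real SDK), might read:

def handle_scan_request(rear_imager, forward_imager, blade, analyze):
    # Step 2502: capture at least one image with the backward imager.
    image = rear_imager.capture()
    # Step 2503: image analysis yields a coarse content label,
    # e.g. "finger", "hand", "inanimate_object", or "document".
    content = analyze(image)
    if content in ("finger", "hand", "inanimate_object"):
        # Decision 2504: the front side faces the document, so translate
        # the blade assembly to the peeping position (step 2506) and
        # actuate the forward imager.
        blade.translate_to("peeping")
        return forward_imager
    # Step 2505: the back side faces the document; blade translation is
    # omitted because the backward imager is exposed in every position.
    return rear_imager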
Turning now to fig. 26, illustrated therein is another method 2600 configured in accordance with one or more embodiments of the present disclosure. In the method 2600 of fig. 26, the selection of whether to use the backward imager 108 at step 2609, or to cause the blade assembly 102 to transition to the peeping position 500 to reveal the forward-facing imager 501 for use at step 2610, is based on touch input rather than on whether a captured image matches predefined criteria. Embodiments of the present disclosure contemplate that when the user 2520 is holding the electronic device 100, their hand and/or fingers tend to cover more area along one major surface than along the other major surface. For example, at step 2601, the user 2520 is holding the electronic device 100 with its first major surface facing the document and its second major surface facing their face.
Thus, in one or more embodiments, the method 2600 of fig. 26 measures the portions of the touch-sensitive display that receive touch input from the user 2520. In one or more embodiments, the one or more processors (114) then select the backward imager 108 or the forward imager 501 for use in response to a service mode call to the image capture device by the document scanning application. In other words, in one or more embodiments, upon receiving an imager invocation request from a document scanning application operating on the one or more processors (114), and upon determining that the forward-facing imager 501 is facing a document to be scanned, the one or more processors (114) cause the translation mechanism of the electronic device 100 to translate the blade assembly 102 to the peeping position 500 of step 2610.
As described in more detail below, in one or more embodiments, the method 2600 employs static touch to determine which image capture device will be used in response to the imager invocation request occurring at step 2601. A static touch is sustained, in contrast to the transient dynamic touches that occur when the user 2520 is actively interacting with the flexible display 104. In one or more embodiments, the one or more processors (114) of the electronic device 100 employ an artificial intelligence engine to distinguish between static touches and dynamic touches.
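One non-limiting way to make the static/dynamic distinction without an artificial intelligence engine is a simple duration threshold, sketched below; the 1.5-second figure and the event interface are assumptions for illustration only.

import time

class TouchTracker:
    def __init__(self, static_after_s=1.5):
        self.static_after_s = static_after_s
        self.down_times = {}  # touch id -> time the touch began

    def touch_down(self, touch_id):
        self.down_times[touch_id] = time.monotonic()

    def touch_up(self, touch_id):
        self.down_times.pop(touch_id, None)

    def is_static(self, touch_id):
        # A touch held longer than the threshold is treated as part of
        # the user's grip rather than a transient interaction.
        start = self.down_times.get(touch_id)
        return start is not None and (time.monotonic() - start) >= self.static_after_s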
Effectively, the method 2600 of fig. 26 measures the total "back touch" area and compares it to the total "front touch" area. If the back touch area is greater than the front touch area, then in one or more embodiments, the one or more processors (114) of the electronic device 100 scan the document using the backward imager 108, as embodiments of the present disclosure contemplate that the user typically uses the backward imager 108 during a scanning operation while staring at the forward portion of the flexible display 104. In contrast, if the back touch area is less than the front touch area, the one or more processors (114) transition the blade assembly 102 to the peeping position 500 of step 2610, revealing the forward-facing imager 501 and using the forward-facing imager 501 in response to an imager invocation request.
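The aggregate areas themselves might be accumulated from per-touch contact ellipses as in the sketch below; the major-axis/minor-axis fields are assumptions standing in for whatever contact geometry a particular touch sensor reports.

import math

def total_touch_area(touches):
    # Approximate each contact as an ellipse: area = pi * a * b,
    # where a and b are the semi-major and semi-minor axes.
    return sum(
        math.pi * (t["major_axis"] / 2) * (t["minor_axis"] / 2)
        for t in touches
    )

# Usage: partition static touches by display portion, then compare.
# use_backward_imager = total_touch_area(rear_touches) > total_touch_area(front_touches)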
In one or more embodiments, the method 2600 of fig. 26 allows the touch sensor of the flexible display 104 to determine which image capture device is used to scan a document. Consider the way the electronic device 100 is typically held. In a typical carry mode, an example of which is shown at step 2601, the user 2520 tends to touch the first major surface of the flexible display 104 (which is forward at step 2601) less than the second major surface of the flexible display 104 (which is rearward at step 2609). The user 2520 does so in order not to obscure the view of the forward portion of the flexible display 104. The rearward portion of the flexible display 104 receives more touches because the four fingers of the human hand oppose the thumb, with the hand extending alongside the rearward portion of the flexible display 104. The method 2600 of fig. 26 advantageously selects, in response to an imager invocation request, the image capture device oriented toward the document rather than the image capture device oriented toward the user's face.
As shown at step 2601, user 2520 is holding electronic device 100 in their hand. As described above, the electronic device 100 includes a forward portion of the flexible display 104 positioned on the first major surface of the electronic device 100. In this example, the flexible display 104 includes a touch sensor and is a touch sensitive display. As shown at step 2609, the electronic device 100 further includes a rearward portion of the flexible display 104 positioned on the second major surface of the electronic device 100.
As shown at step 2601, the one or more processors (114) of the electronic device 100 are receiving an imager invocation request 2612 from an application operating on the one or more processors (114). In this illustrative example, the imager invocation request 2612 includes a service mode invocation request because the document scanning application (2507) requests scanning of a document using an image capture device.
As shown at step 2601, the flexible display 104 is receiving user input. In this illustrative embodiment, the user input includes touch input because the user 2520 is holding the electronic device 100 and touching a portion of the flexible display 104. However, in other embodiments, the user input may include a proximity input, such as may occur when a user's finger (or another object, such as a stylus) is sufficiently close to but does not touch the flexible display 104. Such proximity input may be detected by an imager, a touch sensor of the flexible display 104, one or more proximity sensors, or by other sensors.
The user input may also include gesture input. Gesture input may include translating and/or rotating the electronic device 100 in three-dimensional space, or may occur while the electronic device 100 remains stationary as the user 2520 moves their hand (or another object) in the vicinity of the electronic device 100. Such gesture input may be detected by an imager, by the touch sensor of the flexible display 104, by one or more proximity sensors, or by other sensors. At step 2602, the one or more processors (114) of the electronic device 100 identify a first touch-sensitive display portion of the forward portion of the flexible display 104 that receives the user input. For the illustrative embodiment of fig. 26, the first touch-sensitive display portion is shown at step 2603.
As also shown at step 2601, a rearward portion of the flexible display 104 is receiving a second user input. In this illustrative embodiment, the second user input, like the first user input, includes a touch input. However, in other embodiments, the second user input may include a proximity input, a gesture input, or other input. Such input may be detected by an imager, by the touch sensor of the flexible display 104, by one or more proximity sensors, or by other sensors.
At step 2604, the one or more processors (114) of the electronic device 100 identify that a second touch-sensitive display portion of the flexible display 104 is receiving the second user input along the rearward portion of the flexible display 104. For the illustrative embodiment of fig. 26, the second touch-sensitive display portion is shown at step 2605.
At step 2606, the one or more processors (114) of the electronic device 100 compare the sizes of the first touch-sensitive display portion and the second touch-sensitive display portion. In one or more embodiments, step 2606 includes the one or more processors (114) comparing the first touch-sensitive display portion and the second touch-sensitive display portion to determine which is larger. In another embodiment, step 2606 includes the one or more processors (114) comparing the first touch-sensitive display portion and the second touch-sensitive display portion to determine whether a difference therebetween exceeds a predefined threshold.
Embodiments of the present disclosure contemplate that the first touch-sensitive display portion may be larger than the second touch-sensitive display portion, the first touch-sensitive display portion may be smaller than the second touch-sensitive display portion, or the first touch-sensitive display portion and the second touch-sensitive display portion may be roughly equal. In one or more embodiments, the "approximately equal" determination is made using a predefined threshold, such as ten percent, grouping together any cases in which the first touch-sensitive display portion and the second touch-sensitive display portion are within ten percent of each other.
In one or more embodiments, whether this occurs is determined at optional decision 2607. In one or more embodiments, if the first touch-sensitive display portion and the second touch-sensitive display portion are equal, or within the predefined threshold, as determined at decision 2607, the method 2600 transitions to the method of fig. 27 below (2700). In contrast, in the event that the first touch-sensitive display portion and the second touch-sensitive display portion are unequal, or have a difference that exceeds the predefined threshold, the method 2600 moves to decision 2608.
Decision 2608 determines which of the first touch-sensitive display portion or the second touch-sensitive display portion is larger. In one or more embodiments, when the first touch-sensitive display portion is greater than the second touch-sensitive display portion, the one or more processors (114) of the electronic device 100 cause the blade assembly 102 to transition to the peeping position 500 that reveals the forward-facing imager 501, wherein the forward-facing imager 501 is used in response to the imager invocation request 2612 at step 2610. In contrast, when the second touch-sensitive display portion is greater than the first touch-sensitive display portion, the one or more processors (114) actuate the backward imager 108 in response to the imager invocation request 2612 at step 2609. In the illustrative embodiment of fig. 26, the first touch-sensitive display portion at step 2601 is smaller than the second touch-sensitive display portion, meaning that the method 2600 will end at step 2609.
In the case where decision 2607 is omitted, decision 2608 includes a direct comparison of the areas of the first touch-sensitive display portion and the second touch-sensitive display portion. If they are equal, the method 2600 may transition to the method of fig. 27 (2700). However, absolute equality is almost impossible in view of the high resolution of modern displays. Thus, where decision 2607 is omitted, step 2609 will occur when the first touch-sensitive display portion is smaller than the second touch-sensitive display portion. Similarly, step 2610 will occur when the first touch-sensitive display portion is larger than the second touch-sensitive display portion.
However, when decision 2607 is included, the one or more processors (114) cause step 2610 to occur only when the first touch-sensitive display portion is greater than the second touch-sensitive display portion by more than a predefined threshold. Similarly, the one or more processors (114) cause step 2609 to occur only when the second touch-sensitive display portion is greater than the first touch-sensitive display portion by more than a predefined threshold.
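Decisions 2607 and 2608 taken together might be sketched as follows, with the ten-percent figure of the example above as the predefined threshold; the return labels are illustrative only.

def choose_imager_by_touch(front_area, rear_area, threshold=0.10):
    larger = max(front_area, rear_area)
    # Optional decision 2607: areas equal or within the predefined
    # threshold of each other -> hand off to the method of fig. 27.
    if larger == 0 or abs(front_area - rear_area) / larger <= threshold:
        return "method_2700"
    # Decision 2608: compare which touch-sensitive display portion
    # received the larger static-touch area.
    if front_area > rear_area:
        return "forward"   # step 2610: blade translates to the peeping position
    return "backward"      # step 2609: blade translation omitted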
Turning now to fig. 27, a method 2700 is shown wherein the method 2700 accommodates situations in which the first touch sensitive display portion and the second touch sensitive display portion are equal (wherein decision 2607 is omitted from method 2600 of fig. 26) or in which the first touch sensitive display portion and the second touch sensitive display portion are within a predefined threshold (wherein decision 2607 is included in method 2600 of fig. 26).
Beginning at step 2701, the user 2520 is holding the electronic device 100 of fig. 1. As described above, the electronic device 100 includes a forward portion of the flexible display 104 positioned on a first major surface of the electronic device 100 and a rearward portion of the flexible display 104 positioned on a second major surface of the electronic device 100, the latter of which is shown at step 2708. In this example, the flexible display 104 includes a touch sensor and is a touch-sensitive display.
As shown at step 2701, a forward portion of the flexible display 104 is receiving a first user input. In this illustrative embodiment, the first user input comprises a touch input. However, in other embodiments, the user input may include proximity input, gesture input, or other input.
At step 2702, the one or more processors (114) of the electronic device 100 identify that a first touch sensitive display portion of the forward portion of the flexible display 104 is receiving a first user input. For the illustrative embodiment of FIG. 27, a first touch sensitive display portion is shown at step 2703.
As also shown at step 2701, a rearward portion of the flexible display 104 is receiving a second user input. In this illustrative embodiment, the second user input, like the first user input, includes a touch input. However, in other embodiments, the second user input may include a proximity input, a gesture input, or other input. Such input may be detected by an imager, by the touch sensor of the flexible display 104, by one or more proximity sensors, or by other sensors.
At step 2704, the one or more processors (114) of the electronic device 100 identify a second touch-sensitive display portion of the rearward portion of the flexible display 104 that is receiving the second user input. For the illustrative embodiment of fig. 27, the second touch-sensitive display portion is shown at step 2705.
At step 2706, the one or more processors (114) of the electronic device 100 compare the sizes of the first touch-sensitive display portion and the second touch-sensitive display portion. In one or more embodiments, step 2706 includes the one or more processors (114) comparing the first touch sensitive display portion and the second touch sensitive display portion to determine whether they are equal. In another embodiment, step 2706 includes the one or more processors (114) comparing the first touch sensitive display portion and the second touch sensitive display portion to determine whether a difference therebetween is within a predefined threshold, as described above.
The method (2600) of fig. 26 may be performed where the first touch-sensitive display portion and the second touch-sensitive display portion are not equal, or alternatively where the difference between the first touch-sensitive display portion and the second touch-sensitive display portion exceeds the predefined threshold. However, when the first touch-sensitive display portion and the second touch-sensitive display portion are equal or within the predefined threshold, as determined at decision 2707, in one or more embodiments this constitutes a predefined event.
Thus, in one or more embodiments, the one or more processors (114) cause the backward imager 108 to capture at least one image 2509 at step 2708. Image analysis may then be performed at step 2709, wherein decision 2710 determines whether the at least one image 2509 fails to match predefined criteria, which may be any of those predefined criteria described above with reference to fig. 24 or fig. 28.
As described above, when the at least one image 2509 fails to match the first predefined criteria, in one or more embodiments the one or more processors (114) of the electronic device 100 use the backward imager 108 in response to the imager invocation request at step 2712. In contrast, when the at least one image 2509 fails to match the second predefined criteria, the one or more processors (114) cause the blade assembly 102 to transition to the peeping position 500 that reveals the forward-facing imager 501, and use the forward-facing imager 501 in response to the imager invocation request at step 2711.
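The tie-breaker of method 2700 can likewise be sketched under the simplifying assumption of a single criteria predicate; the actual criteria may be any of those of fig. 24 or fig. 28, as noted above.

def tie_break(rear_imager, forward_imager, blade, matches_criteria):
    # Step 2708: the backward imager captures at least one image.
    image = rear_imager.capture()
    # Steps 2709-2710: image analysis against the predefined criteria.
    if matches_criteria(image):  # e.g. the image depicts the document itself
        return rear_imager       # step 2712: use the backward imager
    blade.translate_to("peeping")
    return forward_imager        # step 2711: use the revealed forward imager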
As illustrated and described above in the description of fig. 26 and 27, the methods and systems described herein allow one or more processors (114) to cause a translation mechanism to translate a blade assembly to a peeping position when an imager invocation request is received from a document scanning application operating on the one or more processors and one or more sensors determine that a forward imager is facing a document.
Turning now to fig. 29, various embodiments of the present disclosure are shown. The embodiments of fig. 29 are shown as labeled boxes because the various components of these embodiments have been illustrated in detail in figs. 1-28, which precede fig. 29. Accordingly, since these items have been illustrated and described previously, repeated illustration is no longer necessary for a proper understanding of the embodiments. Thus, the embodiments are shown as labeled boxes.
At 2901, an electronic device includes a device housing, a forward-facing imager, and one or more sensors. At 2901, the electronic device includes a blade assembly carrying a blade and slidably coupled to the device housing, the blade assembly operable to slidably transition between an extended position in which the blade extends beyond an edge of the device housing, a retracted position in which a major surface of the blade abuts a major surface of the device housing, and a peeping position in which the blade reveals the forward-facing imager. At 2901, the electronic device includes one or more processors. At 2901, the one or more processors cause the blade assembly to transition to the peeping position in response to a document scanning application operating on the one or more processors invoking an image capture operation and the one or more sensors determining that the forward imager is oriented toward a document.
At 2902, the one or more sensors of 2901 include a backward imager. At 2903, the backward imager of 2902 is exposed whether the blade assembly is in the extended position, the retracted position, or the peeping position.
At 2904, the one or more sensors of 2902 determine that the forward imager is oriented toward the document when the one or more processors cause the backward imager to capture at least one image in response to an application operating on the one or more processors invoking an image capture operation, and the at least one image matches predefined criteria. At 2905, the predefined criteria of 2904 include the at least one image depicting the document within a field of view of the backward imager. At 2906, the predefined criteria of 2904 include the at least one image depicting the document having a size that exceeds a predefined image area threshold.
At 2907, the one or more sensors of 2902 determine that the forward imager is oriented toward the document when the one or more processors cause the backward imager to capture at least one image in response to an application operating on the one or more processors invoking an image capture operation, and the at least one image fails to match predefined criteria. At 2908, the predefined criteria of 2907 include the at least one image depicting one or more of a human hand, a finger, or an inanimate object.
At 2909, the electronic device of 2908 further includes a flexible display that is touch-sensitive and carried by the blade assembly. At 2909, the flexible display defines a forward portion, a rearward portion, and a curvilinear portion spanning one end of the device housing between the forward portion and the rearward portion. At 2909, the predefined criteria include the rearward portion receiving less touch input than the forward portion.
At 2910, the document of 2901 includes text. At 2911, the document of 2901 includes an image.
At 2912, a method in an electronic device includes detecting, by one or more processors, a call by a document scanning application operating on the one or more processors requesting service mode use of an imager by the document scanning application. At 2912, the method includes determining, by one or more sensors of the electronic device, whether a front side of a device housing of the electronic device or a back side of the device housing of the electronic device faces a document. At 2912, when the front side of the device housing faces the document, the method includes causing, by the one or more processors, a translation mechanism to translate a blade assembly slidably coupled to the electronic device from a retracted position to a peeping position that reveals a forward-facing imager covered by the blade assembly in the retracted position.
At 2913, the method of 2912 further includes, after the blade assembly is in the peeping position, actuating the forward-facing imager for the service mode use of the document scanning application. At 2914, the method of 2912 further includes actuating a backward imager for the service mode use of the document scanning application when the back side of the device housing faces the document.
At 2915, the method of 2914 further includes causing, by the one or more processors, the translation mechanism to omit translation of the blade assembly prior to actuating the backward imager when the back side of the device housing faces the document. At 2916, the one or more sensors of 2912 determine whether the front side of the device housing of the electronic device or the back side of the device housing of the electronic device is facing the document by determining whether a forward portion of a flexible display carried by the blade assembly or a rearward portion of the flexible display is receiving more touch input.
At 2917, the one or more sensors of 2912 determine whether the front side of the device housing of the electronic device or the back side of the device housing of the electronic device is facing the document by causing the backward imager to capture at least one image. At 2918, the one or more sensors of 2917 determine that the front side of the device housing faces the document when the at least one image depicts one or more of a finger, a human hand, or an inanimate object, and the one or more sensors determine that the back side of the device housing faces the document when the at least one image depicts the document.
At 2919, an electronic device includes a device housing, a blade assembly slidably coupled to the device housing and slidable between an extended position, a retracted position, and a peeping position, and a forward-facing imager that is covered when the blade assembly is in the extended position and the retracted position and exposed when the blade assembly is in the peeping position. At 2919, the electronic device includes one or more sensors and one or more processors operable with a translation mechanism to cause translation of the blade assembly about the device housing.
At 2919, the one or more processors, upon receiving an imager invocation request from a document scanning application operating on the one or more processors and the one or more sensors determining that the forward imager is oriented toward the document, cause the translation mechanism to translate the blade assembly to the peeping position. At 2920, the imager invocation request of 2919 includes a service mode invocation request.
In the foregoing specification, specific embodiments of the disclosure have been described. However, it will be understood by those skilled in the art that various modifications and changes may be made without departing from the scope of the present disclosure as set forth in the appended claims. Thus, while the preferred embodiments of the present disclosure have been shown and described, it will be apparent that the present disclosure is not limited thereto. Numerous modifications, changes, variations, substitutions and equivalents will occur to those skilled in the art without departing from the spirit and scope of the present disclosure as defined by the appended claims.
Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all of the claims.

Claims (20)

1. An electronic device, comprising:
A device housing;
A forward imager;
One or more sensors;
A blade assembly carrying a blade and slidably coupled to the device housing and operable to slidably transition between an extended position, a retracted position, and a peeping position:
In the extended position, the blade extends beyond the edge of the device housing,
In the retracted position, the major surface of the blade abuts the major surface of the device housing, an
In the peeping position, the blade reveals the forward-facing imager; and
One or more processors;
Wherein the one or more processors cause the blade assembly to transition to the peeping position in response to a document scanning application operating on the one or more processors invoking an image capture operation and the one or more sensors determining that the forward imager is oriented toward a document.
2. The electronic device of claim 1, the one or more sensors comprising a backward imager.
3. The electronic device of claim 2, wherein the backward imager is exposed regardless of whether the blade assembly is in the extended position, the retracted position, or the peeping position.
4. The electronic device of claim 2, wherein the one or more sensors determine that the forward imager is oriented toward the document when:
The one or more processors cause the backward imager to capture at least one image in response to the document scanning application operating on the one or more processors invoking the image capture operation; and
The at least one image matches a predefined criterion.
5. The electronic device of claim 4, the predefined criterion comprising the at least one image depicting the document within a field of view of the backward imager.
6. The electronic device of claim 4, the predefined criterion comprising the at least one image depicting the document having a size exceeding a predefined image area threshold.
7. The electronic device of claim 2, wherein the one or more sensors determine that the forward imager is oriented toward the document when:
The one or more processors cause the backward imager to capture at least one image in response to the document scanning application operating on the one or more processors invoking the image capture operation; and
The at least one image fails to match a predefined criterion.
8. The electronic device of claim 7, wherein the predefined criterion includes the at least one image depicting one or more of a human hand, a finger, or an inanimate object.
9. The electronic device defined in claim 7 further comprising a flexible display that is touch-sensitive and carried by the blade assembly, the flexible display defining a forward portion, a rearward portion, and a curvilinear portion that spans one end of the device housing between the forward portion and the rearward portion, wherein the predefined criterion comprises the rearward portion receiving less touch input than the forward portion.
10. The electronic device of claim 1, wherein the document comprises text.
11. The electronic device of claim 1, wherein the document comprises an image.
12. A method in an electronic device, the method comprising:
Detecting, by one or more processors, a call by a document scanning application operating on the one or more processors requesting service mode use of an imager by the application;
Determining, by one or more sensors of the electronic device, whether a front side of a device housing of the electronic device or a rear side of the device housing of the electronic device faces a document; and
When the front side of the device housing faces the document, causing, by the one or more processors, a translation mechanism to translate a blade assembly slidably coupled to the electronic device from a retracted position to a peeping position in which a forward-facing imager covered by the blade assembly in the retracted position is revealed.
13. The method of claim 12, further comprising actuating the forward imager for the service mode use of the application after the blade assembly is in the peeping position.
14. The method of claim 12, further comprising actuating a backward imager for the service mode use of the application while the back side of the device housing faces the document.
15. The method of claim 14, further comprising causing, by the one or more processors, the translation mechanism to omit translation of the blade assembly prior to actuating the backward imager when the rear side of the device housing faces the document.
16. The method of claim 12, wherein the one or more sensors determine whether the front side of the device housing of the electronic device or the back side of the device housing of the electronic device is facing the document by determining whether a forward portion of a flexible display carried by the blade assembly or a backward portion of the flexible display is receiving more touch input.
17. The method of claim 12, wherein the one or more sensors determine whether the front side of the device housing of the electronic device or the back side of the device housing of the electronic device is facing the document by causing a backward imager to capture at least one image.
18. The method according to claim 17, wherein:
the one or more sensors determine that the front side of the device housing is facing the document when the at least one image depicts one or more of a finger, a human hand, or an inanimate object; and
The one or more sensors determine that the backside of the device housing is facing the document when the at least one image depicts the document.
19. An electronic device, comprising:
A device housing;
a blade assembly slidably coupled to the device housing and slidable between an extended position, a retracted position, and a peeping position; and
A forward-facing imager that is covered when the blade assembly is in the extended position and the retracted position and that is revealed when the blade assembly is in the peeping position;
One or more sensors; and
One or more processors operable in conjunction with a translation mechanism to cause translation of the blade assembly about the device housing;
wherein the one or more processors cause the translation mechanism to translate the blade assembly to the peeping position when:
receiving an imager call request from a document scanning application operating on the one or more processors; and
The one or more sensors determine that the forward imager is oriented toward a document.
20. The electronic device of claim 19, wherein the imager invocation request comprises a service mode invocation request.