WO2016071569A1 - UI control redundant touch - Google Patents


Info

Publication number
WO2016071569A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
input area
user input
sensors
area
Prior art date
Application number
PCT/FI2015/050759
Other languages
French (fr)
Inventor
Antti KERÄNEN
Mika LAAKSONEN
Original Assignee
Tacto Tek Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tacto Tek Oy
Publication of WO2016071569A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1637Details related to the display arrangement, including those related to the mounting of the display in the housing
    • G06F1/1643Details related to the display arrangement, including those related to the mounting of the display in the housing the display being associated to a digitizer, e.g. laptops that can be used as penpads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03547Touch pads, in which fingers can move on a surface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present invention concerns providing user input via an electronic user interface (UI). Particularly, however not exclusively, the invention pertains to a method of providing user input via multiple predefined input features of the UI, and to a device arranged to receive, monitor and detect input correspondingly.
  • UI electronic user interface
  • touch surfaces are commonly used together with digital displays in a wide array of mobile and desktop devices ranging from small handheld devices to large interactive touch surface tables and "walls".
  • the objective of the embodiments of the present invention is to at least alleviate one or more of the aforesaid drawbacks evident in the prior art arrangements particularly in the context of electronic input area arrangements and input methods that eliminate unintentional user input via a primary device input area.
  • the objective is generally achieved with a device and an input method in accordance with the present invention by having user input disabled (neglected, not registered, etc.) via a primary device input area unless an ongoing user input is detected via another determined user input area.
  • One of the many advantageous features of the present invention is that unintentional input may be avoided by having essentially at least functionally separate input areas that are arranged so that one is used to enable user input via the other.
  • one area may be used as a primary input area of the device optionally together with a display and another area may be used to enable user input (to be engendered) via said primary input area.
  • the device may be further configured to disable some of its features or functionalities when ongoing user input via determined input area is absent.
  • the suggested solution doesn't require or rely on multi-touch technology, whereupon the input areas used in the present invention may be simple in structure.
  • Another one of the many advantageous features of the present invention is that it doesn't require or rely on touch screen technology, i.e. including an electronic display together with an input surface, which anticipates that various embodiments of present device and method may be used in a wide range of applications that may omit displays or at least touch displays, such as in some wearable technology devices.
  • the input areas of the suggested device may be physically or spatially different so that the areas and according surfaces may overlap either partially or completely or they may not overlap at all allowing for having the surfaces in different locations, which is beneficial as it allows more freedom of design.
  • an electronic device comprises:
  • a first number of sensors configured to define a first input area upon a first substrate, arranged to detect essentially continuous user input provided via said first input area
  • a second number of sensors configured to define a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, arranged to detect user input provided via said second input area, wherein a controller entity is arranged, at least in a predefined functional state, and based on monitoring user input via said first and second input areas, to disable user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command.
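The gating rule described in the preceding item can be summarised in a short sketch. This is a minimal illustration only: the class names, the polling model and the gesture strings below are assumptions introduced for the example and are not taken from the patent; the sketch merely shows second-area input being ignored unless essentially continuous first-area input is ongoing, and being translated into a functional command otherwise.

```python
# Hedged sketch of the claimed gating rule. Names, the polling model and the
# gesture strings are illustrative assumptions, not taken from the patent.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class SensorSample:
    """One polled reading from an input area: is it currently being touched?"""
    touched: bool


class ControllerEntity:
    """Monitors both input areas and gates the second area on the first."""

    def __init__(self, on_command: Callable[[str], None]):
        self.on_command = on_command      # sink for translated functional commands
        self.first_area_ongoing = False   # is continuous first-area input present?

    def update(self, first_area: SensorSample, second_area: Optional[str]) -> None:
        """Called once per polling cycle.

        `second_area` is a decoded gesture name (e.g. "tap", "swipe_left")
        or None when no input is present on the second area.
        """
        # Track whether essentially continuous, ongoing input is present on
        # the first (enabling) input area.
        self.first_area_ongoing = first_area.touched

        if second_area is None:
            return
        if not self.first_area_ongoing:
            # Second-area input is disabled (neglected, not registered)
            # while first-area input is absent.
            return
        # Simultaneous input via the second area is translated into at least
        # one functional command.
        self.on_command(second_area)


if __name__ == "__main__":
    controller = ControllerEntity(on_command=lambda cmd: print("command:", cmd))
    controller.update(SensorSample(touched=False), "tap")   # ignored
    controller.update(SensorSample(touched=True), "tap")    # prints "command: tap"
```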
  • user input provided via said input areas may comprise means known from the state of the art such as static touch or (continuous) movement, optionally in contact with a surface.
  • Means of engendering user input, such as static touch and/or movement, may involve one or more fingers, other similarly suitable anatomical part(s) and/or stylus or other input elements/features.
  • the user input may comprise one or more input means being provided simultaneously.
  • the sensor(s) configured to define either of the input areas may be sensors capable of detecting input such as touch and/or continuous movement essentially on a surface.
  • the sensor(s) configured to define either of the input areas may be sensors capable of detecting three-dimensional input such as movement or presence inside a predetermined space optionally above and/or in reference to a surface.
  • the sensors configured to define either of the input areas may comprise a combination of sensors capable of detecting input on a surface and capable of detecting three-dimensional input.
  • either of the input areas may define or be essentially a surface such that the area may optionally be but need not entirely be bound with the physical dimensions of the surface.
  • either of the input areas may be essentially a three-dimensional space that is substantially predetermined in reference to physical boundaries such as a touch surface or a display of a device, but is not necessarily tied and/or limited by any physical boundaries and/or dimensions.
  • either of the input areas may be essentially a three-dimensional space or two-dimensional surface/projection of that space, which is predetermined in reference to physical boundaries such as a touch surface or a display of a device and is essentially limited by physical boundaries such as physical dimensions of the device.
  • either of the input areas may essentially include any combination of said surfaces and three-dimensional spaces.
  • the input areas may be defined by a number of different sensors so that each input area has at least some dedicated sensors that only detect input via their respective input areas.
  • a number of shared sensors may be used to define and serve both of the input areas.
  • the sensors may comprise a number of technology- wise essentially similar or different sensors.
  • technology-wise in this document is used to refer to components, elements and entities, particularly in the context of different user interface sensors, and it is meant to distinguish the components, elements and entities within a particular field of technology from the components, elements and entities belonging to other fields of technology.
  • components such as sensors that are used to produce capacitive user interfaces are technology-wise different from the sensors used to produce infrared (IR)-based user interfaces.
  • IR infrared
  • the substrate may comprise any kind of material suitable to be used together with aforementioned input area sensors, such as a flexible plastic film.
  • a number of the sensors may be laid essentially on the surface of substrate or essentially embedded in the substrate.
  • the sensors defining first input area and the sensors defining the second input area may be arranged essentially on the same substrate.
  • the sensors defining first input area and the sensors defining the second input area may be arranged essentially on different substrates, which substrates may be essentially physically connected, optionally with any other material such as non-input surface and/or coating residing in between and/or around them, by being essentially adjacent, overlapping, piled and/or enclosed within either, or said substrates may be physically disconnected, i.e., separate.
  • the first and second areas may be essentially separated by a functionally inactive area such as a non-touch surface area.
  • the sensors defining first input area and the sensors defining the second input area may be arranged essentially on the same substrate.
  • the input areas may be essentially adjacent, optionally with (empty) space between them.
  • the controller entity is arranged to monitor user input via first and second input areas. According to an embodiment of the invention the controller entity is arranged to disable user input via the second input area in the absence of user input detection via the first input area. According to an embodiment of the invention the controller entity is arranged to enable user input via the second input area when an ongoing user input via the first input area is detected.
  • the second input may optionally not be only enabled but also dependent on the first user input, such that for example the first user input determines, and/or anticipates by suggesting, what kind of user input may be given via second user input.
  • the electronic device structure may be used together or included in for example a variety of electronic apparatuses incorporating different user interfaces (UIs) such as wearable technology and wearable computing devices and/or terminal devices including mobile, desktop, laptop, palmtop, phablet and/or tablet/pad devices.
  • the electronic device may be configured with a display to implement a touch screen and/or a graphical user interface (GUI).
  • GUI graphical user interface
  • a method for manufacturing an embodiment of the electronic device comprises: attaching a first number of sensors on a first substrate, attaching a second number of sensors on said first or a second substrate, and providing a controller entity to at least functionally connect to the first number and second number of sensors.
  • the first and second number of sensors may comprise the same amount (count) or a different amount of sensors.
  • the substrates may be made flexible and comprise plastic, silicon, rubber, or a mixture of these.
  • the first number of sensors and second number of sensors may be attached on the same substrate.
  • the sensors may be manufactured, optionally directly on the substrates, by screen printing or by any other printing technique such as rotary screen printing, gravure printing, flexography, jet printing, tampo printing, etching, transferlaminating or thin-film deposition utilizing conductive inks.
  • any of the substrates may be used as an insert in injection molding to mold substantially on said substrates, wherein a preferred layer of material is attached on the surface of the film, optionally to create a housing for the sensors and substrates.
  • a method for obtaining user input through an electronic device comprises: detecting essentially continuous user input provided via a first input area upon a first substrate defined using a first number of sensors, detecting user input provided via a second input area defined using a second number of sensors, disabling user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, and translating simultaneous input via the second input area into at least one functional command.
  • the ongoing user input detected via the first input area may be used to enable input via the second input area.
  • the ongoing user input detected via the first input area may be also configured to invoke an interface, a feature or a plurality of features, such as a menu or one or more icons via a graphical user interface such as a display via the second input area.
  • the ongoing user input detected via the first input area may be used to disable a screen lock function on the second input area.
  • the ongoing user input detected via the first input area may be used to activate the second input area by e.g. turning said input area from inactivity state to active state.
  • the user input detected via the first input area may be used to restore the second input area and/or the electronic device from a sleep mode.
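As an illustration of the enabling effects listed in the preceding items (activating the second area, disabling a screen lock, waking from sleep, invoking a menu), the following hedged sketch registers hypothetical device-level actions that run when first-area input starts and are reversed when it ends; none of the names come from the patent.

```python
# Hedged sketch: hypothetical side effects of detecting / losing first-area input.
# The callback names only illustrate the kinds of actions the text lists.

class DeviceShell:
    """Stand-in for the host device's higher-level functions."""

    def __init__(self):
        self.screen_locked = True
        self.sleeping = True
        self.menu_visible = False

    # -- actions invoked when ongoing first-area input is detected ------------
    def wake_from_sleep(self):
        self.sleeping = False

    def disable_screen_lock(self):
        self.screen_locked = False

    def invoke_menu(self):
        self.menu_visible = True

    # -- actions invoked when first-area input ends ---------------------------
    def relock_and_hide(self):
        self.screen_locked = True
        self.menu_visible = False


def on_first_area_change(shell: DeviceShell, ongoing: bool) -> None:
    """Translate a change in first-area state into device-level actions."""
    if ongoing:
        shell.wake_from_sleep()
        shell.disable_screen_lock()
        shell.invoke_menu()
    else:
        shell.relock_and_hide()


if __name__ == "__main__":
    shell = DeviceShell()
    on_first_area_change(shell, ongoing=True)
    print(shell.screen_locked, shell.sleeping, shell.menu_visible)  # False False True
    on_first_area_change(shell, ongoing=False)
    print(shell.screen_locked, shell.menu_visible)                  # True False
```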
  • the first input area is arranged to determine when the device is in use, i.e., for example if a mobile device is being held in hand optionally in a particular predetermined manner or if a wearable technology apparatus is being worn. This may be attained by having the input area in a location in relation to the device where it interacts when in use with a specific user input means, such as for example the wrist in case of a wearable wrist apparatus or preferred fingers and/or palm in case of a handheld mobile device.
  • the first input area may so be in a bezel, encapsulation, cover and/or in any other part and location of the device, which interacts with the preferred user input means as the device is held or worn.
  • detection of absence or discontinuation in user input via the first input area may be for example used to deactivate one or more features, such as (computer) applications, functions, tasks, and/or a combination of them. This may advantageously be used to conserve energy and deactivate unneeded and idle tasks and functions when they aren't needed, such as when the device is not in use.
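One conceivable way to realise the power-saving behaviour just described is an inactivity timer over the first input area; in the sketch below the suspended subsystems (GPS, WiFi, idle applications) and the 30-second grace period are illustrative assumptions.

```python
# Hedged sketch of power management driven by first-area inactivity.
# Subsystem names and the grace period are illustrative assumptions.

import time

GRACE_PERIOD_S = 30.0          # assumed inactivity window before powering down
IDLE_SUBSYSTEMS = ["gps", "wifi", "idle_apps"]


class PowerManager:
    def __init__(self):
        self.last_first_area_input = time.monotonic()
        self.active = set(IDLE_SUBSYSTEMS)

    def notify_first_area_input(self) -> None:
        """Call whenever input is detected via the first input area."""
        self.last_first_area_input = time.monotonic()
        # Reactivate anything that was suspended while the device was unused.
        self.active = set(IDLE_SUBSYSTEMS)

    def tick(self) -> None:
        """Periodic check: suspend idle subsystems after the grace period."""
        idle_for = time.monotonic() - self.last_first_area_input
        if idle_for > GRACE_PERIOD_S and self.active:
            print(f"suspending: {sorted(self.active)}")
            self.active.clear()


if __name__ == "__main__":
    pm = PowerManager()
    pm.notify_first_area_input()   # device picked up / worn
    pm.tick()                      # nothing suspended yet
```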
  • the functional command translated by the user input detected via the second input area may comprise invoking an interface, a feature or a plurality of features, which is optionally graphical, such as a menu or one or more icons via said second input area. Even further, the functional command translated by the user input detected via the second input area may comprise giving commands via an interface such as choosing an icon or a menu function.
  • a computer program product embodied in a non-transitory computer readable carrier medium comprises computer code for causing the computer to execute:
  • the expression "a number of" may herein refer to any positive integer starting from one (1).
  • the expression "a plurality of" may refer to any positive integer starting from two (2), respectively.
  • the numerals “first” and “second” are herein used to distinguish various instances of mutually similar or different elements from each other. They do not indicate any particular priority, order or quantity of the elements unless otherwise explicitly specified.
  • the expression “engender”, which is mainly used together with giving user input, is herein used to refer to user action of giving input via any user interface, such as touch-based or three-dimensional user interface, which may be based on at least partially contactless user input technology.
  • exemplary refers herein to an example or example-like feature, not the sole or only preferable option.
  • Fig. 1 is a block diagram of one embodiment of an electronic device comprising entities in accordance with the present invention.
  • Fig. 2 illustrates exemplary configurations for input areas of an embodiment of an electronic device in accordance with the present invention.
  • Fig. 3 illustrates an embodiment of an electronic device in accordance with the present invention.
  • Fig. 4 illustrates another embodiment of an electronic device in accordance with the present invention.
  • Fig. 5 is a flow diagram disclosing one embodiment of a method in accordance with the present invention.
  • Figure 1 shows a block diagram of one feasible embodiment of an electronic device 100 in accordance with the present invention.
  • the electronic device 100 essentially comprises a first input area 102a, a second input area 102b, a first number of sensors 104a, a second number of sensors 104b, substrates 106a, 106b and a controller entity 108.
  • Optionally only one substrate 106a or 106b may be comprised in the device. Additional elements and means known to a person skilled in the art may be incorporated appropriately according to various embodiments.
  • the input areas 102a, 102b may be (at least functionally) defined by a number of different sensors 104a, 104b so that each input area 102a, 102b has at least some dedicated sensors 104a, 104b that only detect input via their respective input areas 102a, 102b.
  • a number of shared sensors 104a, 104b may be used to define both of the input areas 102a, 102b.
  • the sensors 104a, 104b may comprise a number of (mutually) technology-wise essentially similar or (mutually) technology-wise essentially different sensors 104a, 104b.
  • the sensors 104a, 104b are preferably chosen from sensors suitable for detecting user input such as continuous touches and/or gestures.
  • Such electronic and/or electromechanical components comprise camera-based technology, capacitive, frustrated total internal reflection ((F)TIR), IR (infrared), optical, resistive, strain gauge and surface acoustic wave sensors.
  • conductive electrodes for facilitating capacitive detection may be utilized as sensors 104a, 104b.
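For the capacitive case mentioned above, a common way to turn raw electrode readings into the "essentially continuous" touch signal used elsewhere in this document is thresholding with hysteresis plus a short debounce. The sketch below is a generic illustration; the threshold values and sample counts are assumptions, not values from the patent.

```python
# Hedged sketch: deriving a continuous-touch flag from raw capacitance counts.
# Thresholds and sample counts are illustrative assumptions, not device values.

TOUCH_THRESHOLD = 120     # raw counts above baseline that count as a touch
RELEASE_THRESHOLD = 80    # lower release threshold gives hysteresis
DEBOUNCE_SAMPLES = 3      # consecutive samples required to change state


class CapacitiveTouchDetector:
    def __init__(self, baseline: float):
        self.baseline = baseline
        self.touched = False
        self._streak = 0

    def feed(self, raw_counts: float) -> bool:
        """Feed one raw sample; return the debounced touch state."""
        delta = raw_counts - self.baseline
        # Once touched, the lower release threshold keeps the state stable.
        candidate = delta > (RELEASE_THRESHOLD if self.touched else TOUCH_THRESHOLD)

        if candidate != self.touched:
            self._streak += 1
            if self._streak >= DEBOUNCE_SAMPLES:
                self.touched = candidate
                self._streak = 0
        else:
            self._streak = 0
        return self.touched


if __name__ == "__main__":
    det = CapacitiveTouchDetector(baseline=500.0)
    for sample in [505, 640, 645, 650, 650, 560, 555, 550]:
        print(sample, det.feed(sample))
```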
  • additional components and/or elements necessary for the electronic device 100 construction may be used.
  • An embodiment of the device may incorporate technology-wise different sensors 104a, 104b for different input areas 102a, 102b enabling for technology-wise and optionally in terms of functionalities different input areas 102a, 102b.
  • the sensors 104a, 104b are preferably capable of detecting input such as touches and/or continuous movement, or swipes essentially upon and/or on a surface.
  • the sensors 104a, 104b may be capable of detecting three-dimensional input such as movement inside a predetermined space optionally above and/or in reference to a surface such as the substrates 106a, 106b or a plane, such as formed by the sensors' 104a, 104b locations relative to each other.
  • the sensors 104a, 104b may comprise a combination of sensors capable of detecting input essentially on and/or upon a surface and said sensors capable of detecting three-dimensional input.
  • the sensors 104a, 104b are preferably manufactured by screen printing or by any other printing technique preferably belonging to printed electronics technology, rotary screen printing, gravure printing, flexography, jet printing, tampo printing, etching, transferlaminating or thin-film deposition utilizing conductive inks.
  • the printing may be done directly on a substrate or on a different film which is then attached to a substrate.
  • the sensors 104a, 104b may be surface-mount technology (SMTs), through-hole, flip-chip or printed entities.
  • SMT, through-hole, flip-chip and printed entities may be attached using optionally substantially flexible means by anchoring, gluing or by other adhesive, such as an epoxy adhesive. Both conductive (for enabling electrical contact) and non-conductive (for mere fixing) adhesives may be utilized.
  • Said entities may be selected by their technology and functions as well as so as to withstand the pressure and temperature of the utilized manufacturing methods as well as the housing establishing process, such as injection molding process.
  • the additional (complementary) elements may be electronic, electro-optic, electroacoustic, piezoelectric, electric, and/or electromechanical by nature, or at least comprise such components.
  • such components may comprise tactile components and/or vibration elements such as piezoelectric actuators or vibration motors, light-emitting components such as Organic Light Emitting Diode (O)LEDs, light blocking elements or structures, sound-emitting and/or sound-receiving components such as microphones and speakers, cameras, conductors, wires, fastening means and encasing(s).
  • the configuration of the disclosed components may differ from the explicitly depicted one depending on the requirements of each intended use scenario and selected user interface technologies, wherein the present invention may be capitalized.
  • the substrates 106a, 106b are preferably chosen according to the sensors 104a, 104b and the feasible manufacturing methods, and the sensors 104a, 104b and the substrates 106a, 106b are used so that the intended input area technology is attained.
  • Such substrates 106a, 106b may further on be chosen, i.a., according to material properties such as flexibility, thickness, adhesion properties, optical properties, conductivity and malleability.
  • the substrates 106a, 106b may comprise different structures such as single sheet, laminated and/or otherwise combined, merged, melded, joined and/or integrated structures.
  • the substrates 106a, 106b may contain a number of recesses, cavities, or holes for accommodating electronics such as electronic circuits, conductors, or component leads and/or sockets, etc.
  • the substrates 106a, 106b may also contain overlays.
  • the substrates 106a, 106b may also comprise decorations and/or graphics produced for example by printing, in-mould labeling (IML), or in-mould decorating (IMD).
  • the substrates 106a, 106b may constitute a single (aggregate or composite) film for example such that the substrates are integrated or attached to each other. Optionally only one of the substrates 106a, 106b is used in the device.
  • suitable substrate 106a, 106b materials comprise preferably polycarbonate (PC), polyethylene terephthalate (PET), polyethylene naphthalate (PEN), PMMA (polymethyl methacrylate), polyimide (PI), liquid crystal polymer (LCP), polyethylene (PE), polypropylene (PP), and/or a mixture of these.
  • PC polycarbonate
  • PET polyethylene terephthalate
  • PEN polyethylene naphthalate
  • PMMA polymethyl methacrylate
  • PI polyimide
  • LCP liquid crystal polymer
  • PE polyethylene
  • PP polypropylene
  • the substrate material is preferably chosen so that the substrates 106a, 106b may be made flexible.
  • preferable overlay materials comprise PC (polycarbonate), PMMA (polymethyl methacrylate), PA (polyamide, nylon), COC (cyclo olefin copolymer), COP (cyclo olefin polymer), and/or a mixture of these.
  • PC polycarbonate
  • PMMA polymethyl methacrylate
  • PA polyamide, nylon
  • COC cyclo olefin copolymer
  • COP cyclo olefin polymer
  • other materials such as other plastics, glass and/or a mixture of these may be used.
  • the substrates 106a, 106b may be embedded and/or integrated in other materials such as rubber, fabric, synthetic fiber, polymer, composite and/or any other feasible material, such as any material from the vast amount of materials used in wearable technology devices.
  • PCB printed circuit board
  • PWB printed wiring board
  • the controller entity 108 comprises a computing entity, which monitors and controls (by processing data from various sources such as the sensors and memory) the user input areas and their functioning relative to each other.
  • the controller entity 108 comprises, e.g., a processing/controlling unit such as a microprocessor, a digital signal processor (DSP), a digital signal controller (DSC), a micro-controller or programmable logic chip(s), optionally comprising a plurality of co-operating or parallel (sub-)units.
  • DSP digital signal processor
  • DSC digital signal controller
  • the controller entity 108 is further on connected or integrated with a memory entity, which may be divided between one or more physical memory chips and/or cards.
  • the memory entity may comprise necessary code, e.g. in a form of a computer program/application, for enabling the control and operation of the device 100, and provision of the related control data.
  • the memory may comprise e.g. ROM (read only memory) or RAM-type (random access memory) implementations as disk storage or flash storage.
  • the memory may further comprise an advantageously detachable memory card/stick, a floppy disc, an optical disc, such as a CD-ROM, or a fixed/removable hard drive.
  • controller entity 108, memory entity and the other additional elements are preferably surface-mount technology (SMTs), through-hole, flip-chip or printed entities.
  • SMTs surface-mount technology
  • at least part of the said entities and elements may be printed.
  • SMT, through-hole, flip-chip and printed entities may be attached using optionally substantially flexible means by anchoring, laminating, molding, mechanically (screws, bolts, fingers, etc.), gluing or by other adhesive, such as an epoxy adhesive. Both conductive (for enabling electrical contact) and non-conductive (for mere fixing) adhesives may be utilized.
  • the electronic device 100 may also comprise a display panel such as an LCD (liquid crystal display), LED (light-emitting diode), organic light-emitting diode (OLED) or plasma display, for instance.
  • So-called flat display technologies such as the aforementioned LCD, LED or OLED are in typical applications preferred but in principle other technologies such as CRT (cathode ray tube) are feasible in the context of the present invention as well.
  • the display panel may also comprise a touch screen.
  • the device 100 may be used together, include or constitute for example a variety of electronic devices incorporating various user interfaces such as wearable technology devices and/or terminal devices including mobile, smartphone, desktop, laptop, palmtop, personal digital assistant (PDA), phablet and/or tablet/pad devices.
  • PDA personal digital assistant
  • the device 100 may also include or constitute a control device for industrial or other applications, a specific- or multi-purpose computer (desktop/laptop/palmtop), etc.
  • various elements of the device may be directly integrated in the same housing or provided at least with functional connectivity, e.g. wired or wireless connectivity, with each other.
  • One applicable method for manufacturing the electronic device preferably comprises attaching a first number of sensors 104a on a first substrate 106a, attaching a second number of sensors 104b on said first or a second substrate 106b, and providing a controller entity 108 to at least functionally connect to the first number and second number of sensors 104a, 104b.
  • the substrates may be used as an insert in injection molding to mold substantially on said substrates, wherein a preferred layer of material is attached on the surface of the film, optionally to create a housing for the sensors and substrates.
  • the injection molding may be chosen from at least one of the following: co-injection molding, fusible core injection molding, gas-assisted injection molding, injection compression molding, insert molding, outsert molding, in-mold labeling, in-mold decoration, lamellar injection molding, low-pressure injection molding, microinjection molding, microcellular molding, multicomponent injection molding (overmolding), multiple live-feed injection molding, push-pull injection molding, powder injection molding, reaction injection molding, resin transfer molding, rheomolding, structural foam injection molding, thin-wall molding, vibration gas injection molding or water assisted injection molding.
  • the thickness of the established electronic device 100 structure and housing as well as the installation depth of the elements, entities and sensors 104a, 104b in said structure may be varied according to the embodiment, use case and application so that they may form a part of the surface (inner or outer surface of the overall electronic device) thereof or be completely embedded, or 'hidden', inside the housing.
  • This enables customization of the toughness, elasticity, transparency, etc., of the constructed structure as a whole as well as customization of the maintenance capabilities and protection of the elements, entities and/or sensors 104a, 104b.
  • Embedding the elements, entities and/or sensors 104a, 104b completely inside the housing typically provides better protection.
  • Some elements, entities and/or sensors 104a, 104b may be embedded entirely, while other elements, entities and/or sensors 104a, 104b may be only partially embedded.
  • the electronic device 100 may comprise integrating the first input area 102a together with a device comprising a touch screen, wherein optionally said touch screen may be used as the second input area 102b.
  • Figure 2 illustrates various embodiments of the electronic device's 200 configurations.
  • the illustration depicts the electronic device 200, which comprises various exemplary first input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, and the second input area 202b housed together.
  • the various different elements, entities and sensors aren't depicted although they are in fact part of the structure.
  • the second input area 202b may comprise or be integrated with a display or a touch screen.
  • the illustrated embodiment of the electronic device 200 may so comprise or constitute a mobile device, such as a tablet, phablet or a (smart) phone or a wearable technology (computer) device, such as a wrist worn device incorporating e.g. a number of the input area 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b configurations.
  • the depicted exemplary input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af may be located on the predefined "front side" 202aa, 202ac, 202ae, 202af, i.e., on the side which may be understood as the main operating side, on the backside 202ab, i.e., on the side opposite/contraposed to the "front side", and/or on either side or "edges" 202ad of the device 200 and/or upon the said locations.
  • the first input area of 202aa, 202ab, 202ac, 202ad, 202ae, 202af and second input area 202b may be essentially adjacent, optionally with a space between them.
  • Any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may be essentially a surface such that the area, may optionally be, but needn't entirely be bound with the physical dimensions of the surface.
  • the depicted exemplary locations may comprise reference planes for three-dimensional input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b.
  • any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may essentially be (realized as) upon said reference planes as a three-dimensional input area.
  • Any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may be a three-dimensional space that is essentially predetermined in reference to physical boundaries such as a touch surface or a display of a device, but is not necessarily tied and/or limited by any physical boundaries and/or dimensions.
  • any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may be essentially a three-dimensional space or two- dimensional surface/projection of that space, which is predetermined in reference to physical boundaries such as a touch surface or a display of a device and is essentially limited by physical boundaries such as a device's physical dimensions.
  • a number of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may essentially include any combination of said input surfaces and three-dimensional input spaces.
  • user input provided via said input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may comprise techniques or gestures known from the state of the art such as static touch or continuous movement, optionally in contact with a surface.
  • Means of engendering user input, such as static touch and/or movement, may comprise one or more fingers, another similarly suitable anatomical part and/or stylus/other dedicated or separate input element.
  • the user input may comprise one or more input means being provided simultaneously on any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b.
  • any of the input area locations 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b but in particular the first input area locations 202ab, 202ac, 202ad, 202ae, 202af may be chosen in accordance with the embodiment.
  • the first input area 202ab, 202ac, 202ad, 202ae, 202af is arranged to determine when the device 200 is in use, i.e., for example if a mobile device is being held in hand in a preferred manner or if a wearable technology apparatus is being worn, the first input area 202ab, 202ac, 202ad, 202ae, 202af is provided in a location where it interacts when in use with a specific user input means, such as for example the wrist in case of a wearable wrist apparatus or preferred fingers and/or palm in case of a handheld mobile device.
  • the first input area may so be in the bezel, encapsulation, cover and/or in any other part and location of the electronic device, which interacts with the preferred user input means as the device is held or worn.
  • Figure 3 illustrates an embodiment of an electronic device 300 in accordance with the present invention.
  • the electronic device 300 may comprise one or more optionally integrated devices or elements, such as e.g. wearable wrist or "wristop" devices.
  • the device 300 further comprises a first input area 302a and a second input area 302b, which may be (either or both) optionally implemented in a strap (shown) and/or main housing of the device 300.
  • the first input area 302a is configured to allow user input via second input area 302b to control the device 300.
  • the device 300 preferably comprises a display screen as illustrated.
  • the first input area 302a is used to enable input via the display screen, which is optionally a touchscreen.
  • the first input area 302a may be used to enable user input via second input area 302b.
  • the input areas 302a, 302b may be configured and shaped in any feasible shape, so as to support ease-of-use and intuitive user input and/or user input gestures.
  • the first input area 302a may be configured to allow for and/or detect push user input in a continuous manner to enable swipe, circular, arched and/or other shaped movement user input (also presented hereinbefore) via the second input area 302b, the user input via second input area 302b being preferably used to control the host device functionalities and/or features.
  • the device 300 allows the input areas 302a, 302b to be essentially remote from an integrated host device, and optionally essentially remote from the controller entity, which may be comprised in the host device.
  • the input surfaces 302a, 302b may so be configured and built into existing devices and straps and/or any clothing incorporating and/or carrying a wearable technology device and/or terminal device, optionally such that the controller entity is comprised in the host device, e.g. the controller device being the host device's computing/controller entity.
  • Figure 4 illustrates an embodiment of an electronic device 400 in accordance with the present invention.
  • FIG. 4 depicts the electronic device 400, herein in the form of a wearable wrist or "wristop" device, which comprises a first input area 402a and a second input area 402b. Further on, as mentioned hereinbefore the second input area 402b may comprise or be integrated with a display or a touch screen.
  • the electronic device 400 components, elements and/or entities may be essentially hermetically housed in the device 400 so that the device 400 may be waterproof.
  • the input area sensors used herein may be such that they enable user input via the first input area 402a and via the second input area 402b under water. Accordingly, the sensors may be configured so that they detect the presence of water and/or air, or other fluid, around the device 400, such as essentially upon either of the input areas 402a, 402b, and calibrate the detection of user input via the input areas 402a, 402b accordingly.
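A purely hypothetical way to realise the water/air calibration described above is to classify the surrounding medium from the untouched (ambient) electrode reading and select a detection threshold accordingly; the classification rule and all numeric values below are assumptions made for illustration.

```python
# Hedged sketch: re-calibrating touch detection for the surrounding medium.
# The classification rule and threshold values are illustrative assumptions.

THRESHOLDS = {
    "air": 120.0,     # raw-count delta treated as a touch in air
    "water": 300.0,   # water loads the electrodes, so a larger delta is required
}


def classify_medium(ambient_capacitance: float) -> str:
    """Very rough stand-in for a medium detector: water raises the ambient
    (untouched) capacitance of the electrodes well above its value in air."""
    return "water" if ambient_capacitance > 900.0 else "air"


def is_touch(delta_counts: float, ambient_capacitance: float) -> bool:
    """Decide whether a reading counts as user input, given the medium."""
    medium = classify_medium(ambient_capacitance)
    return delta_counts > THRESHOLDS[medium]


if __name__ == "__main__":
    print(is_touch(delta_counts=150.0, ambient_capacitance=600.0))   # True in air
    print(is_touch(delta_counts=150.0, ambient_capacitance=1200.0))  # False under water
```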
  • the device 400 may be herein used so that the user input via first input area 402a, such as a continuous gesture, touch, swipe and/or double tap, enables engendering user input via the second input area 402b.
  • the device and method may be utilized in or integrated to clothing such as footwear, strap, headwear, protective gear, vest, coat, jacket, shirt, eyewear, goggles, trousers or a combination of them, and to various accessories and even implants in some cases.
  • Figure 5 shows a flow diagram of one feasible embodiment of a method in accordance with the present invention.
  • user input via the second input area and optionally features of the electronic device are preferably disabled.
  • the disabled features and/or user input may be chosen according to user preferences prior to the phase 502; i.e., essentially before carrying out the method.
  • Some examples of such disablements comprise disabling (not registering, ignoring, etc.) all the user input via the second input area such that a user is not able to use the second input area and/or a touchscreen of a device.
  • disablements include disabling partly a device's user interface, for example, so that the user can for example view (data on) a touch screen but may not be able to interact and/or engender input via the said touch screen. Further on, the disablements may be limited (targeted) to not being able to initiate features, such as tasks and/or applications by user input via a device's touch surface.
  • Some other examples of the actions according to phase 502 include shutting down features, killing tasks and keeping tasks and/or functions disabled. Practical examples of such actions include shutting down for example WiFi and/or Global Positioning System (GPS) tasks and/or sensors, and/or idle applications of a device.
  • GPS Global Positioning System
  • user input engendered by a user via the first input area is detected.
  • Such user input may comprise static touch, movement and/or gestures engendered via the first input area, optionally to a graphical user interface area.
  • the means by which the said user input may be engendered may comprise one or more fingers, other similarly suitable anatomical parts, worn input devices such as fingertip, hand worn or wrist worn devices and/or e.g. styluses.
  • the user input may comprise one or more input means being provided simultaneously.
  • the user input detected via first input area at 504 is revised and monitored. The purpose of this phase may be to determine whether the input of phase 504 was intentional and whether the same intention to give the said input remains.
  • If the said user input of phase 504 is determined to be intentional and still ongoing, phase 508 is taken. If the said user input of the phase 504 has changed essentially so that it may be interpreted that the user input of phase 504 is not meant to be given anymore, i.e., input of phase 504 is not continuous and/or ongoing, as would be the case for example if the said input was unintentional or a user would change their mind and not want to engender the input anymore, the initial phase 502 is taken.
  • the user input via first user area is used to determine whether the device utilizing the method is in use.
  • the input via first input area may be used to invoke features and/or start the device or its features, such as GPS, Bluetooth, IR and/or wide area networking (WAN) features and/or applications.
  • the absence or discontinuity in the detection of user input via first user interface may be used to shut down the device utilizing the method or deactivate a number of the device's features such as idle applications and/or applications that aren't needed when the device isn't being used. Said embodiments are especially beneficial from the perspective of power management.
  • the controller entity monitoring the first input area enables providing (meaningful, response-triggering) input via the second input area.
  • features such as a number of different applications, tasks and/or functions are initiated based on the said input via first input area.
  • the controller entity invokes a number of graphical user interface features such as a menu, icons, lists or graphs on the second input area according to the user input detected via first user interface.
  • the user input detected via first input area may be used to disable lock screen or (re-)activate currently inactive screen, such as that of the screen used together with or integrated with the second input area.
  • the first input area may be also used to restore a device from a sleep mode.
  • user input engendered by a user via the second input area is detected.
  • Such user input may comprise static touch, movement and/or gestures engendered via the second input area, optionally to a graphical user interface area.
  • the means by which the said user input may be engendered may comprise one or more fingers, other similarly suitable anatomical parts, worn input devices such as a fingertip, hand worn or wrist worn devices and/or by styluses.
  • the user input may comprise one or more input means being provided simultaneously.
  • the user input via the first input area may be engendered by a thumb and the user input via the second input area by the same hand's other fingers, which is beneficially convenient in some device embodiments.
  • the detected user input via the second input area is translated into a functional command.
  • Such functional command may optionally comprise invoking an interface, a feature or a plurality of features, which are optionally graphical, such as a menu or one or more icons via said second input area and/or the device's touchscreen, and/or giving a number of commands via an interface such as choosing an icon or a menu function.
  • one functional command is given before taking the next phase but in some embodiments the user may be able to give a plurality of commands such as navigating through a plurality of icons as long as an application or a similar feature is invoked. This may be in many cases beneficial as it often takes more than one command or "click", i.e., choosing icons and/or other graphical interface features, to start an application.
  • After phase 512, a phase essentially equal to phase 506 is taken.
  • the possible next phases are either 508 (the process continues) or 502 (the process ends or starts from the beginning).
  • the phase 512 may be used to end the whole process. This means that instead of determining whether user input via first user input area is still ongoing the process returns to the first phase of the process 502. According to this embodiment engendering new input via first input area is required to start the process again. Said new input may be engendered by input means described hereinbefore in this document.
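Read as a whole, the flow of Figure 5 (phases 502-512) behaves like a small state machine. The sketch below mirrors that flow; the phase numbers follow the figure, while the event representation, gesture names and print-out are assumptions added for the example.

```python
# Hedged sketch of the Figure 5 flow (phases 502-512). Phase numbers follow the
# figure; state transitions, inputs and the command dispatch are illustrative only.

from typing import Iterable, Optional, Tuple

# Each input event is (first_area_active, second_area_gesture_or_None).
Event = Tuple[bool, Optional[str]]


def run_flow(events: Iterable[Event]) -> None:
    phase = 502                      # 502: second area (and idle features) disabled
    for first_active, gesture in events:
        if phase == 502:
            if first_active:         # 504: input detected via the first area
                phase = 506
        elif phase == 506:           # 506: revise/monitor the first-area input
            if not first_active:
                phase = 502          # input ended or was unintentional
            else:
                phase = 508          # 508: enable input via the second area
        elif phase == 508:
            if not first_active:
                phase = 502
            elif gesture is not None:
                phase = 510          # 510: user input detected via second area
        if phase == 510:
            print("512: translate to functional command:", gesture)  # 512
            phase = 506              # re-check that first-area input is ongoing


if __name__ == "__main__":
    run_flow([
        (False, "tap"),      # ignored: second area is disabled
        (True, None),        # 504 -> 506
        (True, None),        # 506 -> 508 (second area now enabled)
        (True, "swipe"),     # 508 -> 510 -> 512: command dispatched
        (False, None),       # back to 502
    ])
```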

Abstract

Electronic device comprising a first number of sensors configured to define a first input area upon a first substrate, arranged to detect essentially continuous user input provided via said first input area, a second number of sensors configured to define a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, arranged to detect user input provided via said second input area, wherein a controller entity is arranged, at least in a predefined functional state, and based on monitoring user input via said first and second input areas, to disable user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command. Related methods are also presented.

Description

UI CONTROL REDUNDANT TOUCH
FIELD OF THE INVENTION
Generally the present invention concerns providing user input via an electronic user interface (UI). Particularly, however not exclusively, the invention pertains to a method of providing user input via multiple predefined input features of the UI, and to a device arranged to receive, monitor and detect input correspondingly.
BACKGROUND
Input and particularly e.g. touch surface technology have improved tremendously since the dawn of various touch surface technologies. Nowadays, touch surfaces are commonly used together with digital displays in a wide array of mobile and desktop devices ranging from small handheld devices to large interactive touch surface tables and "walls".
More recently, the emergence of wearable technology has broadened the way of how consumers and developers understand mobile devices and mobile and ubiquitous computing. Wearable technology offers a myriad of new ways to utilize sensors and applications compared to the more traditional mobile applications, but in turn, poses new challenges as well. Due to the increase in interest towards wearable technology applications, the problem of avoiding erroneous user input has become somewhat pressing. With mobile devices the problem has been widely recognized and discussed but the existing solutions rely almost exclusively on the use of "screen lock" and "inactivity state" type functions. These functions not only raise the complexity by adding to the ever increasing number of different functions utilized in mobile and wearable devices but their use is also tied to the complementary technological aspect of having a screen together with a touch surface. However, e.g. for many applications in the context of wearable technology, bundling the solution of getting rid of erroneous touches with a display is problematic, impractical or substantially impossible. Even further, many wearable technology and mobile device applications comprise arrangements and use contexts wherein dealing with the recognition between intentional and unintentional (i.e. erroneous, accidental) input by utilizing known device arrangements and methods is still generally rudimentary and insufficient.
SUMMARY OF THE INVENTION
The objective of the embodiments of the present invention is to at least alleviate one or more of the aforesaid drawbacks evident in the prior art arrangements particularly in the context of electronic input area arrangements and input methods that eliminate unintentional user input via a primary device input area. The objective is generally achieved with a device and an input method in accordance with the present invention by having user input disabled (neglected, not registered, etc.) via a primary device input area unless an ongoing user input is detected via another determined user input area.
One of the many advantageous features of the present invention is that unintentional input may be avoided by having essentially at least functionally separate input areas that are arranged so that one is used to enable user input via the other. For example, one area may be used as a primary input area of the device optionally together with a display and another area may be used to enable user input (to be engendered) via said primary input area. Conversely, the device may be further configured to disable some of its features or functionalities when ongoing user input via determined input area is absent.
One other of the many advantageous features of the present invention is that the suggested solution doesn't require or rely on multi-touch technology, whereupon the input areas used in the present invention may be simple in structure.
Another one of the many advantageous features of the present invention is that it doesn't require or rely on touch screen technology, i.e. including an electronic display together with an input surface, which anticipates that various embodiments of present device and method may be used in a wide range of applications that may omit displays or at least touch displays, such as in some wearable technology devices.
Further one of the many advantageous features of the present invention is that the input areas of the suggested device may be physically or spatially different so that the areas and according surfaces may overlap either partially or completely or they may not overlap at all allowing for having the surfaces in different locations, which is beneficial as it allows more freedom of design.
In congruence with one aspect of the present invention an electronic device comprises:
- a first number of sensors configured to define a first input area upon a first substrate, arranged to detect essentially continuous user input provided via said first input area,
- a second number of sensors configured to define a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, arranged to detect user input provided via said second input area, wherein a controller entity is arranged, at least in a predefined functional state, and based on monitoring user input via said first and second input areas, to disable user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command.
According to an exemplary embodiment of the invention user input provided via said input areas may comprise means known from the state of the art such as static touch or (continuous) movement, optionally in contact with a surface. Means of engendering user input, such as static touch and/or movement, may involve one or more fingers, other similarly suitable anatomical part(s) and/or stylus or other input elements/features. Further on, the user input may comprise one or more input means being provided simultaneously. According to a merely exemplary embodiment of the invention the sensor(s) configured to define either of the input areas may be sensors capable of detecting input such as touch and/or continuous movement essentially on a surface. According to one other merely exemplary embodiment of the invention the sensor(s) configured to define either of the input areas may be sensors capable of detecting three-dimensional input such as movement or presence inside a predetermined space optionally above and/or in reference to a surface. According to a further exemplary embodiment of the invention the sensors configured to define either of the input areas may comprise a combination of sensors capable of detecting input on a surface and capable of detecting three-dimensional input.
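As a hedged illustration of the "combination of sensors" case in the preceding paragraph, the sketch below places a surface-touch sensor and a three-dimensional presence sensor behind one common interface so that an input area counts as active when any of its sensors reports input; all class and method names, thresholds and ranges are assumptions.

```python
# Hedged sketch: mixing surface-touch and three-dimensional sensors behind one
# interface so that either kind (or a combination) can define an input area.
# All class and method names, thresholds and ranges are illustrative assumptions.

from abc import ABC, abstractmethod
from typing import Sequence


class InputSensor(ABC):
    """Anything that can report whether user input is currently present."""

    @abstractmethod
    def input_present(self) -> bool: ...


class SurfaceTouchSensor(InputSensor):
    """E.g. a capacitive electrode detecting touch essentially on a surface."""

    def __init__(self, delta_counts: float, threshold: float = 120.0):
        self.delta_counts = delta_counts
        self.threshold = threshold

    def input_present(self) -> bool:
        return self.delta_counts > self.threshold


class ProximitySensor(InputSensor):
    """E.g. an IR or 3D sensor detecting presence inside a predetermined space."""

    def __init__(self, distance_mm: float, max_range_mm: float = 80.0):
        self.distance_mm = distance_mm
        self.max_range_mm = max_range_mm

    def input_present(self) -> bool:
        return self.distance_mm < self.max_range_mm


def area_active(sensors: Sequence[InputSensor]) -> bool:
    """An input area defined by several sensors counts as active when any of
    its sensors detects input (surface touch or three-dimensional presence)."""
    return any(sensor.input_present() for sensor in sensors)


if __name__ == "__main__":
    first_area = [SurfaceTouchSensor(delta_counts=30.0),
                  ProximitySensor(distance_mm=50.0)]
    print(area_active(first_area))  # True: the 3D sensor sees a hand nearby
```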
According to an embodiment of the invention either of the input areas may define or be essentially a surface such that the area may optionally be but need not entirely be bound with the physical dimensions of the surface. According to an embodiment of the invention either of the input areas may be essentially a three-dimensional space that is substantially predetermined in reference to physical boundaries such as a touch surface or a display of a device, but is not necessarily tied and/or limited by any physical boundaries and/or dimensions. According to another embodiment of the invention either of the input areas may be essentially a three-dimensional space or two-dimensional surface/projection of that space, which is predetermined in reference to physical boundaries such as a touch surface or a display of a device and is essentially limited by physical boundaries such as physical dimensions of the device. According to an embodiment of the invention either of the input areas may essentially include any combination of said surfaces and three-dimensional spaces.
According to an embodiment of the present invention the input areas may be defined by a number of different sensors so that each input area has at least some dedicated sensors that only detect input via their respective input areas. According to another embodiment of the present invention a number of shared sensors may be used to define and serve both of the input areas. In either embodiment the sensors may comprise a number of technology-wise essentially similar or different sensors.
The expression "technology-wise" in this document is used to refer to components, elements and entities, particularly in the context of different user in- terface sensors, and it is meant to distinguish the components, elements and entities within a particular field of technology from the components, elements and entities belonging to other fields of technology. For example, components such as sensors that are used to produce capacitive user inter- faces are technology-wise different from the sensors used to produce infrared (IR)-based user interfaces. The "technology-wise" is used so as to distinguish different technologies from each other although the software and hardware aspects may further on differ within, and in the range of, said technologies.
According to an embodiment of the present invention the substrate may comprise any kind of material suitable to be used together with the aforementioned input area sensors, such as a flexible plastic film. According to an embodiment of the present invention a number of the sensors may be laid essentially on the surface of the substrate or essentially embedded in the substrate.
According to an embodiment of the present invention the sensors defining the first input area and the sensors defining the second input area may be arranged essentially on the same substrate. According to an embodiment of the present invention the sensors defining the first input area and the sensors defining the second input area may be arranged essentially on different substrates, which substrates may be essentially physically connected, optionally with any other material such as a non-input surface and/or coating residing in between and/or around them, by being essentially adjacent, overlapping, piled and/or enclosed within either, or said substrates may be physically disconnected, i.e., separate. For example, the first and second areas may be essentially separated by a functionally inactive area such as a non-touch surface area.
According to an embodiment of the present invention the input areas may be essentially adjacent, optionally with (empty) space between them.
According to an embodiment of the present invention the controller entity is arranged to monitor user input via first and second input areas. According to an embodiment of the invention the controller entity is arranged to disable user input via the second input area in the absence of user input detection via the first input area. According to an embodiment of the invention the controller entity is arranged to enable user input via the second input area when an ongoing user input via the first input area is detected.
According to an embodiment of the invention the second input may optionally not only be enabled by but also be dependent on the first user input, such that for example the first user input determines, and/or anticipates by suggesting, what kind of user input may be given via the second input area.
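As a purely illustrative sketch of this dependency, the ongoing first-area input could select which second-area gestures are accepted; the gesture names and mapping below are assumptions, not part of the disclosure:

```python
# Hypothetical mapping: the kind of input being held on the first area
# decides which second-area gestures are currently accepted.
ALLOWED_SECOND_INPUT = {
    "press_and_hold":  {"tap", "swipe_up", "swipe_down"},
    "double_tap_hold": {"tap"},   # e.g. only selection is allowed
}

def accept_second_input(first_input, gesture):
    """Accept a second-area gesture only if ongoing first-area input allows it."""
    if first_input is None:          # no ongoing input on the first area
        return False
    return gesture in ALLOWED_SECOND_INPUT.get(first_input, set())

print(accept_second_input("press_and_hold", "swipe_up"))  # True
print(accept_second_input(None, "swipe_up"))              # False
```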
According to another embodiment of the present invention the electronic device structure may be used together with or included in for example a variety of electronic apparatuses incorporating different user interfaces (UIs) such as wearable technology and wearable computing devices and/or terminal devices including mobile, desktop, laptop, palmtop, phablet and/or tablet/pad devices. The electronic device may be configured with a display to implement a touch screen and/or a graphical user interface (GUI).
In accordance with one aspect of the present invention a method for manufacturing an embodiment of the electronic device comprises:
- attaching a first number of sensors on a first substrate,
- attaching a second number of sensors on said first or a second substrate, and
- providing a controller entity to at least functionally connect to the first number and second number of sensors.
According to embodiments of the present invention the first and second number of sensors may comprise the same amount (count) or a different amount of sensors.
According to an embodiment of the present invention the substrates may be made flexible and comprise plastic, silicon, rubber, or a mixture of these. According to an exemplary embodiment of the present invention the first number of sensors and second number of sensors may be attached on the same substrate. According to an exemplary embodiment the sensors may be manufactured, optionally directly on the substrates, by screen printing or by any other printing technique such as rotary screen printing, gravure printing, flexography, jet printing, tampo printing, etching, transferlaminating or thin-film deposition utilizing conductive inks.
According to an exemplary embodiment of the present invention any of the substrates may be used as an insert in injection molding to mold substantially on said substrates, wherein a preferred layer of material is attached on the surface of the film, optionally to create a housing for the sensors and substrates.
In accordance with one aspect of the present invention a method for obtaining user input through an electronic device comprises:
- detecting essentially continuous user input provided via a first input area upon a first substrate defined using a first number of sensors,
- detecting user input provided via a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, and
- disabling user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command.
According to an embodiment of the invention the ongoing user input detected via the first input area may be used to enable input via the second input area. According to an embodiment of the invention the ongoing user input detected via the first input area may also be configured to invoke an interface, a feature or a plurality of features, such as a menu or one or more icons, via a graphical user interface, such as a display, of the second input area. According to an embodiment, the ongoing user input detected via the first input area may be used to disable a screen lock function on the second input area. According to an embodiment, the ongoing user input detected via the first input area may be used to activate the second input area by e.g. turning said input area from an inactivity state to an active state.
According to an embodiment, the user input detected via the first input area may be used to restore the second input area and/or the electronic device from a sleep mode.
According to an embodiment of the present invention the first input area is arranged to determine when the device is in use, i.e., for example whether a mobile device is being held in hand, optionally in a particular predetermined manner, or whether a wearable technology apparatus is being worn. This may be attained by having the input area in a location in relation to the device where it interacts, when in use, with a specific user input means, such as for example the wrist in case of a wearable wrist apparatus or preferred fingers and/or palm in case of a handheld mobile device. The first input area may so be in a bezel, encapsulation, cover and/or in any other part and location of the device which interacts with the preferred user input means as the device is held or worn. According to an embodiment of the present invention detection of absence or discontinuation of user input via the first input area may for example be used to deactivate one or more features, such as (computer) applications, functions, tasks, and/or a combination of them. This may advantageously be used to conserve energy and deactivate idle or otherwise unneeded tasks and functions, such as when the device is not in use.
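By way of a hedged illustration of this in-use detection and power management, a minimal sketch could look as follows; the feature names merely echo the examples above and the decision function is an assumption:

```python
IDLE_FEATURES = {"gps", "wifi_scan", "background_sync"}   # illustrative names only

def features_to_keep_running(device_in_use):
    """Decide which optional features stay active, based on whether the first
    input area currently detects the wrist/palm (i.e. the device is in use)."""
    return set(IDLE_FEATURES) if device_in_use else set()

# Example: first-area contact lost -> everything in IDLE_FEATURES is shut down.
print(features_to_keep_running(False))  # set()
print(features_to_keep_running(True))   # {'gps', 'wifi_scan', 'background_sync'}
```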
According to an embodiment of the present invention the functional command translated from the user input detected via the second input area may comprise invoking an interface, a feature or a plurality of features, which is optionally graphical, such as a menu or one or more icons via said second input area. Even further, the functional command translated from the user input detected via the second input area may comprise giving commands via an interface such as choosing an icon or a menu function.

In accordance with one aspect of the present invention a computer program product embodied in a non-transitory computer readable carrier medium comprises computer code for causing the computer to execute:
-detecting essentially continuous user input provided via a first input area upon a first substrate defined using a first number of sensors,
-detecting user input provided via a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, and disabling user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command.
The previously presented considerations concerning the various embodiments of the electronic device may be flexibly applied to the embodiments of the method mutatis mutandis and vice versa, as being appreciated by a skilled person. Similarly, the electronic structure obtained by the method and corresponding arrangement is scalable in the limitations of the entities according to the arrangement. As briefly reviewed hereinbefore, the utility of the different aspects of the present invention arises from a plurality of issues depending on each particular embodiment.
The expression "a number of may herein refer to any positive integer start- ing from one (1 ). The expression "a plurality of may refer to any positive integer starting from two (2), respectively.
The numerals "first" and "second" are herein used to distinguish various instances of mutually similar or different elements from each other. They do not indicate any particular priority, order or quantity of the elements unless otherwise explicitly specified. The expression "engender", which is mainly used together with giving user input, is herein used to refer to user action of giving input via any user interface, such as touch-based or three-dimensional user interface, which may be based on at least partially contactless user input technology.
The term "exemplary" refers herein to an example or example-like feature, not the sole or only preferable option.
Different embodiments of the present invention are also disclosed in the attached dependent claims.
BRIEF DESCRIPTION OF THE RELATED DRAWINGS
Next, the embodiments of the present invention are more closely reviewed with reference to the attached drawings, wherein
Fig. 1 is a block diagram of one embodiment of an electronic device comprising entities in accordance with the present invention.
Fig. 2 illustrates exemplary configurations for input areas of an embodiment of an electronic device in accordance with the present invention.
Fig. 3 illustrates an embodiment of an electronic device in accordance with the present invention.
Fig. 4 illustrates another embodiment of an electronic device in accordance with the present invention.
Fig. 5 is a flow diagram disclosing one embodiment of a method in accordance with the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS

Figure 1 shows a block diagram of one feasible embodiment of an electronic device 100 in accordance with the present invention.
The electronic device 100 essentially comprises a first input area 102a, a second input area 102b, a first number of sensors 104a, a second number of sensors 104b, substrates 106a, 106b and a controller entity 108. Optionally only one substrate 106a or 106b may be comprised in the device. Additional elements and means known to a person skilled in the art may be incorporated appropriately according to various embodiments. The input areas 102a, 102b defined by the sensors 104a, 104b are essentially touch-based, touchless and/or three-dimensional input areas via which a user may give input to the device 100.
The input areas 102a, 102b may be (at least functionally) defined by a number of different sensors 104a, 104b so that each input area 102a, 102b has at least some dedicated sensors 104a, 104b that only detect input via their respective input areas 102a, 102b. Optionally a number of shared sensors 104a, 104b may be used to define both of the input areas 102a, 102b. In either of the embodiments the sensors 104a, 104b may comprise a number of (mutually) technology-wise essentially similar or (mutually) technology-wise essentially different sensors 104a, 104b. The sensors 104a, 104b are preferably chosen from sensors suitable for detecting user input such as continuous touches and/or gestures. Such electronic and/or electromechanical components comprise camera-based technology, capacitive, frustrated total internal reflection ((F)TIR), IR (infrared), optical, resistive, strain gauge and surface acoustic wave sensors. Also conductive electrodes for facilitating capacitive detection may be utilized as sensors 104a, 104b. In accordance with the sensors 104a, 104b, additional components and/or elements necessary for the electronic device 100 construction may be used. An embodiment of the device may incorporate technology-wise different sensors 104a, 104b for different input areas 102a, 102b, enabling input areas 102a, 102b that differ technology-wise and optionally in terms of functionalities.
The sensors 104a, 104b are preferably capable of detecting input such as touches and/or continuous movement, or swipes, essentially upon and/or on a surface. Optionally the sensors 104a, 104b may be capable of detecting three-dimensional input such as movement inside a predetermined space optionally above and/or in reference to a surface such as the substrates 106a, 106b or a plane, such as one formed by the sensors' 104a, 104b locations relative to each other. Optionally the sensors 104a, 104b may comprise a combination of sensors capable of detecting input essentially on and/or upon a surface and said sensors capable of detecting three-dimensional input. The sensors 104a, 104b are preferably manufactured by screen printing or by any other printing technique preferably belonging to printed electronics technology, such as rotary screen printing, gravure printing, flexography, jet printing, tampo printing, etching, transferlaminating or thin-film deposition utilizing conductive inks. The printing may be done directly on a substrate or on a different film which is then attached to a substrate.
Optionally the sensors 104a, 104b may be surface-mount technology (SMT), through-hole, flip-chip or printed entities. SMT, through-hole, flip-chip and printed entities may be attached using optionally substantially flexible means by anchoring, gluing or by other adhesive, such as an epoxy adhesive. Both conductive (for enabling electrical contact) and non-conductive (for mere fixing) adhesives may be utilized. Said entities may be selected by their technology and functions as well as so as to withstand the pressure and temperature of the utilized manufacturing methods and of the housing establishing process, such as an injection molding process.
As an example, the additional (complementary) elements may be electronic, electro-optic, electroacoustic, piezoelectric, electric, and/or electromechanical by nature, or at least comprise such components. Further on, such components may comprise tactile components and/or vibration elements such as piezoelectric actuators or vibration motors, light-emitting components such as (organic) light-emitting diodes ((O)LEDs), light blocking elements or structures, sound-emitting and/or sound-receiving components such as microphones and speakers, cameras, conductors, wires, fastening means and encasing(s). As will be appreciated by skilled readers, the configuration of the disclosed components may differ from the explicitly depicted one depending on the requirements of each intended use scenario and the selected user interface technologies, wherein the present invention may be capitalized on.
The substrates 106a, 106b are preferably chosen according to the sensors 104a, 104b and feasible manufacturing methods, and the sensors 104a, 104b and the substrates 106a, 106b are used so that the intended input area technology is attained. Such substrates 106a, 106b may further on be chosen, i.a., according to material properties such as flexibility, thickness, adhesion properties, optical properties, conductivity and malleability. The substrates 106a, 106b may comprise different structures such as single sheet, laminated and/or otherwise combined, merged, melded, joined and/or integrated structures. The substrates 106a, 106b may contain a number of recesses, cavities, or holes for accommodating electronics such as electronic circuits, conductors, or component leads and/or sockets, etc. The substrates 106a, 106b may also contain overlays. The substrates 106a, 106b may also comprise decorations and/or graphics produced for example by printing, in-mould labeling (IML), or in-mould decorating (IMD).
The substrates 106a, 106b may constitute a single (aggregate or composite) film for example such that the substrates are integrated or attached to each other. Optionally only one of the substrates 106a, 106b is used in the device.
Examples of suitable substrate 106a, 106b materials comprise preferably polycarbonate (PC), polyethylene terephthalate (PET), polyethylene naphthalate (PEN), PMMA (polymethyl methacrylate), polyimide (PI), liquid crystal polymer (LCP), polyethylene (PE), polypropylene (PP), and/or a mixture of these. Other suitable materials comprise other plastics, silicon, rubber, or a mixture of these. Further on, the substrate material is preferably chosen so that the substrates 106a, 106b may be made flexible.
Examples of preferable overlay materials comprise PC (polycarbonate), PMMA (polymethyl methacrylate), PA (polyamide, nylon), COC (cyclo olefin copolymer), COP (cyclo olefin polymer), and/or a mixture of these. However, also other materials such as other plastics, glass and/or a mixture of these may be used.
Additionally the substrates 106a, 106b, optionally with elements and components such as the sensors 104a, 104b, may be embedded and/or integrated in other materials such as rubber, fabric, synthetic fiber, polymer, composite and/or any other feasible material, such as any material from the vast amount of materials used in wearable technology devices. Optionally a printed circuit board (PCB), or printed wiring board (PWB), may be used as either of, or essentially partly together with, the substrates 106a, 106b.

The controller entity 108 comprises a computing entity, which monitors and controls (by processing data from various sources such as the sensors and memory) the user input areas and their functioning relative to each other. The controller entity 108 comprises, e.g., at least one processing/controlling unit such as a microprocessor, a digital signal processor (DSP), a digital signal controller (DSC), a micro-controller or programmable logic chip(s), optionally comprising a plurality of co-operating or parallel (sub-)units.
The controller entity 108 is further on connected to or integrated with a memory entity, which may be divided between one or more physical memory chips and/or cards. The memory entity may comprise necessary code, e.g. in a form of a computer program/application, for enabling the control and operation of the device 100, and provision of the related control data. The memory may comprise e.g. ROM (read only memory) or RAM-type (random access memory) implementations such as disk storage or flash storage. The memory may further comprise an advantageously detachable memory card/stick, a floppy disc, an optical disc, such as a CD-ROM, or a fixed/removable hard drive. Also the controller entity 108, the memory entity and the other additional elements are preferably surface-mount technology (SMT), through-hole, flip-chip or printed entities. Optionally, at least part of the said entities and elements may be printed. SMT, through-hole, flip-chip and printed entities may be attached using optionally substantially flexible means by anchoring, laminating, molding, mechanically (screws, bolts, fingers, etc.), gluing or by other adhesive, such as an epoxy adhesive. Both conductive (for enabling electrical contact) and non-conductive (for mere fixing) adhesives may be utilized.
The electronic device 100 may also comprise a display panel such as an LCD (liquid crystal display), LED (light-emitting diode), organic light-emitting diode (OLED) or plasma display, for instance. So-called flat display technologies such as the aforementioned LCD, LED or OLED are in typical applications preferred, but in principle other technologies such as CRT (cathode ray tube) are feasible in the context of the present invention as well. The display panel may also comprise a touch screen. The device 100 may be used together with, include or constitute for example a variety of electronic devices incorporating various user interfaces such as wearable technology devices and/or terminal devices including mobile, smartphone, desktop, laptop, palmtop, personal digital assistant (PDA), phablet and/or tablet/pad devices. The device 100 may also include or constitute a control device for industrial or other applications, a specific- or multi-purpose computer (desktop/laptop/palmtop), etc. As is clear to a skilled person, various elements of the device may be directly integrated in the same housing or provided at least with functional connectivity, e.g. wired or wireless connectivity, with each other.
One applicable method for manufacturing the electronic device preferably comprises attaching a first number of sensors 104a on a first substrate 106a, attaching a second number of sensors 104b on said first or a second substrate 106b, and providing a controller entity 108 to at least functionally connect to the first number and second number of sensors 104a, 104b.
The substrates may be used as an insert in injection molding to mold substantially on said substrates, wherein a preferred layer of material is attached on the surface of the film, optionally to create a housing for the sensors and substrates. The injection molding may be chosen from at least one of the following: co-injection molding, fusible core injection molding, gas-assisted injection molding, injection compression molding, insert molding, outsert molding, in-mold labeling, in-mold decoration, lamellar injection molding, low-pressure injection molding, microinjection molding, microcellular molding, multicomponent injection molding (overmolding), multiple live-feed injection molding, push-pull injection molding, powder injection molding, reaction injection molding, resin transfer molding, rheomolding, structural foam injection molding, thin-wall molding, vibration gas injection molding or water assisted injection molding.
Generally in the embodiments of the present invention, the thickness of the established electronic device 100 structure and housing as well as the installation depth of the elements, entities and sensors 104a, 104b in said structure may be varied according to the embodiment, use case and application so that they may form a part of the surface (inner or outer surface of the overall electronic device) thereof or be completely embedded, or 'hidden', inside the housing. This enables customization of the toughness, elasticity, transparency, etc., of the constructed structure as a whole as well as customization of the maintenance capabilities and protection of the elements, entities and/or sensors 104a, 104b. Embedding the elements, entities and/or sensors 104a, 104b completely inside the housing typically provides better protection. Optionally leaving the elements, entities and/or sensors 104a, 104b at the surface provides less protection but enables easier maintenance or replacement. Depending on the application and the sensor technology used, certain elements, entities and/or sensors 104a, 104b may be embedded entirely, while other elements, entities and/or sensors 104a, 104b may be only partially embedded.
Optionally the electronic device 100 may be realized by integrating the first input area 102a together with a device comprising a touch screen, wherein optionally said touch screen may be used as the second input area 102b.
Figure 2 illustrates various embodiments of the electronic device's 200 configurations. The illustration depicts the electronic device 200, which comprises various exemplary first input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, and the second input area 202b housed together. For the sake of clarity and understandability the various different elements, entities and sensors are not depicted although they are in fact part of the structure. Further on, as mentioned hereinbefore the second input area 202b may comprise or be integrated with a display or a touch screen. Accordingly, the illustrated embodiment of the electronic device 200 may so comprise or constitute a mobile device, such as a tablet, phablet or a (smart) phone, or a wearable technology (computer) device, such as a wrist worn device, incorporating e.g. a number of the input area 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b configurations. The depicted exemplary input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af may be located on the predefined "front side" 202aa, 202ac, 202ae, 202af, i.e., on the side which may be understood as the main operating side, on the backside 202ab, i.e., on the side opposite/contraposed to the "front side", and/or on either of the sides or "edges" 202ad of the device 200 and/or upon the said locations. As depicted in figure 2 the first input area 202aa, 202ab, 202ac, 202ad, 202ae, 202af and the second input area 202b may be essentially adjacent, optionally with a space between them. Any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may be essentially a surface such that the area may optionally be, but need not entirely be, bound by the physical dimensions of the surface.
Optionally the depicted exemplary locations may comprise reference planes for three-dimensional input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b. Although not clearly illustrated, any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may essentially be (realized as) a three-dimensional input area upon said reference planes. Any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may be a three-dimensional space that is essentially predetermined in reference to physical boundaries such as a touch surface or a display of a device, but is not necessarily tied and/or limited by any physical boundaries and/or dimensions. Optionally any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may be essentially a three-dimensional space or a two-dimensional surface/projection of that space, which is predetermined in reference to physical boundaries such as a touch surface or a display of a device and is essentially limited by physical boundaries such as a device's physical dimensions. Optionally a number of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may essentially include any combination of said input surfaces and three-dimensional input spaces.
According to an exemplary embodiment of the invention user input provided via said input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b may comprise techniques or gestures known from the state of the art such as static touch or continuous movement, optionally in contact with a surface. Means of engendering user input, such as static touch and/or movement, may comprise one or more fingers, another similarly suitable anatomical part and/or a stylus/other dedicated or separate input element. Further on, the user input may comprise one or more input means being provided simultaneously on any of the input areas 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b. Any of the input area locations 202aa, 202ab, 202ac, 202ad, 202ae, 202af, 202b, but in particular the first input area locations 202ab, 202ac, 202ad, 202ae, 202af, may be chosen in accordance with the embodiment. For example, if the first input area 202ab, 202ac, 202ad, 202ae, 202af is arranged to determine when the device 200 is in use, i.e., for example if a mobile device is being held in hand in a preferred manner or if a wearable technology apparatus is being worn, the first input area 202ab, 202ac, 202ad, 202ae, 202af is provided in a location where it interacts, when in use, with a specific user input means, such as for example the wrist in case of a wearable wrist apparatus or preferred fingers and/or palm in case of a handheld mobile device. The first input area may so be in the bezel, encapsulation, cover and/or in any other part and location of the electronic device which interacts with the preferred user input means as the device is held or worn.

Figure 3 illustrates an embodiment of an electronic device 300 in accordance with the present invention.
The electronic device 300 may comprise one or more optionally integrated devices or elements, such as e.g. wearable wrist or "wristop" devices. The device 300 further comprises a first input area 302a and a second input area 302b, which may be (either or both) optionally implemented in a strap (shown) and/or the main housing of the device 300. Herein, the first input area 302a is configured to allow user input via the second input area 302b to control the device 300. The device 300 preferably comprises a display screen as illustrated. Optionally the first input area 302a is used to enable input via the display screen, which is optionally a touchscreen.
In the configuration illustrated, the first input area 302a may be used to enable user input via the second input area 302b. The input areas 302a, 302b may be configured and shaped in any feasible shape, so as to support ease-of-use and intuitive user input and/or user input gestures. For example, the first input area 302a may be configured to allow for and/or detect push user input in a continuous manner to enable swipe, circular, arched and/or other shaped movement user input (also presented hereinbefore) via the second input area 302b, the user input via the second input area 302b being preferably used to control the host device functionalities and/or features. As illustrated, the device 300 allows the input areas 302a, 302b to be essentially remote from an integrated host device, and optionally essentially remote from the controller entity, which may be comprised in the host device. This is beneficial because the input surfaces 302a, 302b may so be configured and built into existing devices and straps and/or any clothing incorporating and/or carrying a wearable technology device and/or terminal device, optionally such that the controller entity is comprised in the host device, e.g. the controller entity being the host device's computing/controller entity.

Figure 4 illustrates an embodiment of an electronic device 400 in accordance with the present invention.
The illustration of figure 4 depicts the electronic device 400, herein in the form of a wearable wrist or "wristop" device, which comprises a first input area 402a and a second input area 402b. Further on, as mentioned hereinbefore the second input area 402b may comprise or be integrated with a display or a touch screen.
The electronic device 400 components, elements and/or entities may be essentially hermetically housed in the device 400 so that the device 400 may be waterproof. Also, the input area sensors used herein may be such that they enable user input via the first input area 402a and via the second input area 402b under water. Accordingly, the sensors may be configured so that they detect the presence of water and/or air, or other fluid, around the device 400, such as essentially upon either of the input areas 402a, 402b, and calibrate the detection of user input via the input areas 402a, 402b accordingly. For calibration or adaptation, a basic level (no user input) may be determined from the sensed data. The device 400 may be herein used so that the user input via the first input area 402a, such as a continuous gesture, touch, swipe and/or double tap, enables engendering user input via the second input area 402b. The features presented in this paragraph may also be applicable and utilized in the embodiment of figure 3.
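A hedged sketch of such baseline calibration is given below; the exponential-averaging approach, smoothing factor and threshold are illustrative assumptions, not the disclosed implementation:

```python
class BaselineCalibrator:
    """Track a slowly adapting 'no input' baseline so that touch detection can
    compensate for the medium (air vs. water) around the sensor.
    Values and smoothing factor are illustrative only."""

    def __init__(self, alpha=0.01, threshold=5.0):
        self.alpha = alpha          # slow adaptation rate for the baseline
        self.threshold = threshold  # raw-signal delta treated as real input
        self.baseline = None

    def update(self, raw):
        """Return True if 'raw' should be interpreted as user input."""
        if self.baseline is None:
            self.baseline = raw
            return False
        touched = (raw - self.baseline) > self.threshold
        if not touched:
            # Only adapt the baseline while no input is present, so a slow
            # water/air level shift is absorbed instead of reported as touch.
            self.baseline += self.alpha * (raw - self.baseline)
        return touched
```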
The wearable technology, wearable computing and ubiquitous computing applications are however plenty, of which two embodiments are depicted herein in more detail to illustrate some of the functionalities, benefits and possibilities of the invention. For example, the device and method may be utilized in or integrated to clothing such as footwear, strap, headwear, protective gear, vest, coat, jacket, shirt, eyewear, goggles, trousers or a combination of them, and to various accessories and even implants in some cases.
Figure 5 shows a flow diagram of one feasible embodiment of a method in accordance with the present invention. At 502, referring to an initial state of the method according to the present invention, user input via the second input area and optionally features of the electronic device are preferably disabled. The disabled features and/or user input may be chosen according to user preferences prior to the phase 502, i.e., essentially before carrying out the method. Some examples of such disablements comprise disabling (not registering, ignoring, etc.) all the user input via the second input area such that a user is not able to use the second input area and/or a touchscreen of a device. Other examples of disablements include disabling a device's user interface partly, for example so that the user can view (data on) a touch screen but may not be able to interact and/or engender input via the said touch screen. Further on, the disablements may be limited (targeted) to not being able to initiate features, such as tasks and/or applications, by user input via a device's touch surface. Some other examples of the actions according to phase 502 include shutting down features, killing tasks and keeping tasks and/or functions disabled. Practical examples of such actions include shutting down for example WiFi and/or Global Positioning System (GPS) tasks and/or sensors, and/or idle applications of a device.
At 504, user input engendered by a user via the first input area is detected. Such user input may comprise static touch, movement and/or gestures engendered via the first input area, optionally to a graphical user interface area. The means by which the said user input may be engendered may comprise one or more fingers, other similarly suitable anatomical parts, worn input devices such as fingertip, hand-worn or wrist-worn devices, and/or e.g. styluses. Further on, the user input may comprise one or more input means being provided simultaneously. At 506, the user input detected via the first input area at 504 is reviewed and monitored. The purpose of this phase may be to determine whether the input of phase 504 was intentional and whether the same intention to give the said input remains.
If the said user input of phase 504 is still detected as being essentially the same, i.e., the input refers to the same action and so has the same intention as in the beginning, and is hence continuous, and ongoing, the phase 508 is taken. If the said user input of the phase 504 has changed essentially so that it may be interpreted that the user input of phase 504 is not meant to be given anymore, i.e., input of phase 504 is not continuous and/or ongoing, as would be the case for example if the said input was unintentional or a user would change their mind and not want to engender the input anymore, the initial phase 502 is taken.
Optionally, according to some embodiments of the method and to some use contexts of the device utilizing said method, the user input via the first input area is used to determine whether the device utilizing the method is in use. In such a case the input via the first input area may be used to invoke features and/or start the device or its features, such as GPS, Bluetooth, IR and/or wide area networking (WAN) features and/or applications. Conversely, according to the previous and some other embodiments, herein referring also to the phase 502, the absence or discontinuity in the detection of user input via the first input area may be used to shut down the device utilizing the method or deactivate a number of the device's features such as idle applications and/or applications that are not needed when the device is not being used. Said embodiments are especially beneficial from the perspective of power management.
At 508, the controller entity monitoring the first input area enables providing (meaningful, response-triggering) input via the second input area. Optionally, additionally, features such as a number of different applications, tasks and/or functions are initiated based on the said input via the first input area. Optionally, additionally, the controller entity invokes a number of graphical user interface features such as a menu, icons, lists or graphs on the second input area according to the user input detected via the first input area. Optionally, additionally, the user input detected via the first input area may be used to disable a lock screen or (re-)activate a currently inactive screen, such as the screen used together with or integrated with the second input area. Additionally the first input area may also be used to restore a device from a sleep mode.
At 510, user input engendered by a user via the second input area is detected. Such user input may comprise static touch, movement and/or gestures engendered via the second input area, optionally to a graphical user interface area. The means by which the said user input may be engendered may comprise one or more fingers, other similarly suitable anatomical parts, worn input devices such as fingertip, hand-worn or wrist-worn devices, and/or styluses. Further on, the user input may comprise one or more input means being provided simultaneously. For the sake of example, in some embodiments the user input via the first input area may be engendered by a thumb and the user input via the second input area by the same hand's other fingers, which is beneficially convenient in some device embodiments.
At 512, the detected user input via the second input area is translated into a functional command. Such a functional command may optionally comprise invoking an interface, a feature or a plurality of features, which are optionally graphical, such as a menu or one or more icons via said second input area and/or the device's touchscreen, and/or giving a number of commands via an interface such as choosing an icon or a menu function. Preferably one functional command is given before taking the next phase, but in some embodiments the user may be able to give a plurality of commands, such as navigating through a plurality of icons, as long as an application or a similar feature is invoked. This may in many cases be beneficial as it often takes more than one command or "click", i.e., choosing icons and/or other graphical interface features, to start an application.
At 514, a phase essentially equal to phase 506 is taken. As in phase 506, the possible next phases are either 508 (the process continues) or 502 (the process ends or starts from the beginning).
Optionally, conversely, according to another embodiment of the method in accordance with the present invention the phase 512 may be used to end the whole process. This means that instead of determining whether user input via the first input area is still ongoing, the process returns to the first phase of the process, 502. According to this embodiment engendering new input via the first input area is required to start the process again. Said new input may be engendered by the input means described hereinbefore in this document.
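Only as an illustrative, hedged summary of the flow described above (phases 502-514), the method could be sketched as a small state machine; the phase handling details are assumptions made for readability, not the disclosed implementation:

```python
from enum import Enum, auto

class Phase(Enum):
    DISABLED = auto()   # 502: second input area disabled
    ENABLED = auto()    # 508: second input area enabled

def run_step(phase, first_input_ongoing, second_gesture=None):
    """One iteration of a Fig. 5 style flow (illustrative sketch only).
    Returns (next_phase, functional_command_or_None)."""
    if phase is Phase.DISABLED:
        # 504/506: wait for intentional, continuous first-area input.
        return (Phase.ENABLED, None) if first_input_ongoing else (Phase.DISABLED, None)
    # 506/514: fall back to the initial phase when first-area input stops.
    if not first_input_ongoing:
        return (Phase.DISABLED, None)
    # 510/512: translate simultaneous second-area input into a command.
    command = f"COMMAND({second_gesture})" if second_gesture else None
    return (Phase.ENABLED, command)

# Example: gesture on the second area only yields a command while the first
# area is continuously engaged.
print(run_step(Phase.DISABLED, True))                 # (Phase.ENABLED, None)
print(run_step(Phase.ENABLED, True, "tap"))           # (Phase.ENABLED, 'COMMAND(tap)')
print(run_step(Phase.ENABLED, False, "tap"))          # (Phase.DISABLED, None)
```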
The scope of the invention is determined by the attached claims together with the equivalents thereof. Skilled persons will again appreciate the fact that the disclosed embodiments were constructed for illustrative purposes only, and that the innovative fulcrum reviewed herein will cover further embodiments, embodiment combinations, variations and equivalents that better suit each particular use case of the invention.

Claims
1. An electronic device comprising:
- a first number of sensors configured to define a first input area upon a first substrate, arranged to detect essentially continuous user input provided via said first input area,
- a second number of sensors configured to define a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, arranged to detect user input provided via said second input area, wherein a controller entity is arranged, at least in a predefined functional state, and based on monitoring user input via said first and second input areas, to disable user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command.
2. The device according to any preceding claim, wherein the device comprises a display, which is optionally a touch screen, the display being optionally configured together with the second input area.
3. The device according to any preceding claim, wherein any of the first input or second input areas comprise conductive electrodes for facilitating capacitive detection.
4. The device according to any preceding claim, wherein the first input area essentially comprises at least a part of a bezel of the electronic device.
5. The device according to any preceding claim, wherein the first and second areas are essentially separated by a functionally inactive area such as a non-touch surface area.
6. The device according to any preceding claim, wherein a number of optionally mutually technology-wise different sensors are used to define both first and second input areas.
7. A wearable technology device comprising the electronic device of claim 1.
8. The wearable technology device of claim 7, wherein said wearable technology device comprises or constitutes a wrist device or a wristop device.
9. A piece of clothing, comprising the wearable technology device of claim 7, wherein said wearable technology device is integrated with the clothing such as footwear, strap, headwear, protective gear, vest, coat, jacket, shirt, eyewear, goggles, trousers or a combination of them.
10. A mobile terminal comprising the device of any of claims 1-8.
11. A desktop or laptop computer comprising the device of any of claims 1-6.
12. A tablet or a phablet computer comprising the device of any of claims 1-6.
13. A method for manufacturing an electronic device, said device comprising a first number of sensors configured to define a first input area upon a first substrate, arranged to detect continuous user input provided via said first input area, a second number of sensors configured to define a second input area upon said first or second substrate, spatially and/or technology-wise separate from the first input area, arranged to detect user input provided via said second input area, wherein a controller entity is arranged, at least in a predefined functional state, and based on monitoring user input via said first and second input areas, to disable user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into a functional command, said method comprising:
- attaching said first number of sensors on a first substrate,
- attaching said second number of sensors on said first or on a second substrate, and
- providing said controller entity to at least functionally connect to said first number and second number of sensors.
14. The method according to claim 13, wherein any of the substrates is flexible and comprises polymer, silicon, rubber, or a mixture of these.
15. The method according to claims 13-14, wherein any of the sensors is manufactured, optionally directly on a substrate, by screen printing or by any other printing technique such as rotary screen printing, gravure printing, flexography, jet printing, tampo printing, etching, transferlaminating or thin-film deposition utilizing conductive inks.
16. The method according to claims 13-15, wherein any of the substrates is used as an insert in injection molding to mold substantially on said substrates, wherein a preferred layer of material is attached on the surface of the film, optionally to create a housing for the sensors and substrates.
17. A method for obtaining user input through an electronic device, comprising:
- detecting essentially continuous user input provided via a first input area upon a first substrate defined using a first number of sensors,
- detecting user input provided via a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, and
- disabling user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command.
18. The method according to claim 17, wherein the first input area is arranged to determine when the device is in use, i.e., for example if a mobile device is being held in hand or a wearable technology apparatus is being worn.
19. The method according to claims 17-18, wherein the user input detected via the first input area is used to activate one or more features, such as applications, functions, tasks, and/or a combination of them.
20. The method according to claims 17-19, wherein the absence or discontinuation of user input via the first input area is used to deactivate one or more features, such as applications, functions, tasks, and/or a combination of them.
21. The method according to claims 17-20, wherein the user input detected via the first input area is used to activate, unlock and/or restore the second input area from a sleep mode.
22. A computer program product embodied in a non-transitory computer readable medium, comprising computer code for causing the computer to execute:
-detecting essentially continuous user input provided via a first input area upon a first substrate defined using a first number of sensors,
-detecting user input provided via a second input area upon said first or a second substrate, spatially and/or technology-wise separate from the first input area, and disabling user input via the second input area unless substantially continuous, ongoing user input is detected via said first input area, in which case simultaneous input via the second input area is translated into at least one functional command.
PCT/FI2015/050759 2014-11-04 2015-11-04 Ui control redundant touch WO2016071569A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462074748P 2014-11-04 2014-11-04
US62/074,748 2014-11-04

Publications (1)

Publication Number Publication Date
WO2016071569A1 true WO2016071569A1 (en) 2016-05-12

Family

ID=55908643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2015/050759 WO2016071569A1 (en) 2014-11-04 2015-11-04 Ui control redundant touch

Country Status (2)

Country Link
TW (1) TW201633075A (en)
WO (1) WO2016071569A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090303174A1 (en) * 2008-06-06 2009-12-10 Oqo, Inc. Control of dual function input area
US20110267371A1 (en) * 2010-04-28 2011-11-03 Hon Hai Precision Industry Co., Ltd. System and method for controlling touchpad of electronic device
US20130271350A1 (en) * 2012-04-13 2013-10-17 Nokia Corporation Multi-segment wearable accessory
US20140078086A1 (en) * 2012-09-20 2014-03-20 Marvell World Trade Ltd. Augmented touch control for hand-held devices


Also Published As

Publication number Publication date
TW201633075A (en) 2016-09-16

Similar Documents

Publication Publication Date Title
US11720176B2 (en) Device having integrated interface system
US20210353226A1 (en) Wearable electronic device with glass shell
US8310351B2 (en) Apparatuses, methods, and systems for an electronic device with a detachable user input attachment
US10042480B2 (en) Apparatuses, methods, and systems for an electronic device with a detachable user input attachment
JP5677423B2 (en) Detection of contact on a curved surface
JP5778130B2 (en) Detection of contact on a curved surface
KR101440708B1 (en) Apparatuses, methods, and systems for an electronic device with a detachable user input attachment
CN103052286B (en) Hand-hold electronic device
US10261616B2 (en) Pressure sensor with waterproof structure and electronic device including the same
CN108780362B (en) Key set fingerprint sensor with backlight
JP2015512106A (en) Detection of user input at the edge of the display area
CN107111378B (en) Fabric to device bonding
US20150145804A1 (en) Touch apparatus
US20160282977A1 (en) Capacitive sensing assembly including a thin film plastic
US20150227170A1 (en) Touch sensor and method for manufacturing the same
WO2014041245A1 (en) Electronic device with housing-integrated functionalities and method therefor
US20190004662A1 (en) Touch-sensitive electronic device chasses
EP3204844B1 (en) Device operated through opaque cover and system
TWI459079B (en) Touch panel structure
TW201335662A (en) Method of manufacturing touch panel
JP5970365B2 (en) Control device and electronic device
TW201335819A (en) Touch device
WO2016071569A1 (en) Ui control redundant touch
CN110290653A (en) Housing unit and its processing method, electronic equipment
CN104615377A (en) Information processing method and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15857166

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15857166

Country of ref document: EP

Kind code of ref document: A1