GB2547504A - A method of touch and a touch device - Google Patents

A method of touch and a touch device

Info

Publication number
GB2547504A
GB2547504A (application GB1620562.7A; also published as GB201620562D0)
Authority
GB
United Kingdom
Prior art keywords
touch
screen
user
perform
digit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1620562.7A
Other versions
GB201620562D0 (en)
Inventor
Stewart Irvine Nes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=58159720&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=GB2547504(A) ("Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.)
Priority claimed from PCT/GB2015/053690 external-priority patent/WO2016087855A2/en
Priority claimed from GBGB1604767.2A external-priority patent/GB201604767D0/en
Priority claimed from GBGB1609970.7A external-priority patent/GB201609970D0/en
Priority claimed from GBGB1609963.2A external-priority patent/GB201609963D0/en
Application filed by Individual
Publication of GB201620562D0
Publication of GB2547504A
Legal status: Withdrawn


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/325Power saving in peripheral device
    • G06F1/3265Power saving in display device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3206Monitoring of events, devices or parameters that trigger a change in power modality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The status of a touch screen device may be changed from a first state, in which it is powered but asleep or in hibernation mode, to a second state, in which it is awake and thus ready for use, by the sole action of a user contacting the screen of the device with a digit or with a stylus. The device does not require any button or switch on the device to be pressed or activated before or while contact with the screen is made. The device may be any form of device, including a tablet, PC or smartphone. The objective is to allow a rapid change of state via a single swipe, single tap or single touch on the screen.

Description

A method of touch, and a touch device
Priority documents
From 2nd Dec 2014 there have been numerous priority documents that support the claims of this invention. This document therefore provides a very brief description of a few representative embodiments, to illustrate the scope of the attached claims and why the invention cannot be anticipated by the prior art.
The priority documents show, in serial photographs in the drawing pages of GB 1520360.7 and GB 1520667.5, how the swipe 700 is performed by a digit (pages 181 to 285) and how the touch operation 142 differs from the prior art method (pages 27 to 159, illustrating the normal iPhone unlock of the Fig 13 touch GUI (graphical user interface)). Indeed, to illustrate Fig 5G, the serial photographs of swipe 700 in pages 181 to 285 show that swipe 700 would have required less programming skill than the swipe 7 of Fig 5G illustrated in pages 369 to 429. In GB 1520667.5 the above is illustrated by serial photographs on the identical drawing page numbers, except that the drawings are marked by figure numbers, e.g. swipe 700 is shown by Fig 181 to Fig 285, etc. That document also illustrates swipe 2 by serial photographs from Figs 427 to 451. Then the global swipe 3, which turns off the display component DC of the touch-sensitive display screen TDS on every GUI 134 screen from the same position (overriding any prior art programming) and locks the device until swipe 2 is repeated, is shown by Figs 453 to 496. The conventional manner of entering a password, Fig 5A, is shown by serial photographs from Figs 499 to 567. Fig 5C, entering a password by independent touch (IT) on a blank number pad, is shown by serial photographs from Figs 570 to 628. Fig 5D, performing a touch of a right-angled swipe to enter the password 2580, is shown from Figs 631 to 658, and a number pad using three regions with 4 swipes per region (Fig 5H) is shown by serial photographs from Fig 659 to Fig 757. This shows how easily and reliably a number pad could enter numbers on a blank screen, faster than the prior art, without any button press, or the TDS turned on, or a GUI to touch, all of which were essential in the prior art. Indeed, the serial photographs of Figs 761 to 866 show how someone could text on an invisible keyboard to enter any command into a command-line prompt, to perform any operation of the device now or in the future by entering a text command, or could text without requiring any visual feedback on the screen, which may become the latest craze in 20 years as a sign of intelligence or employability, as it requires a person to visualise the text without seeing it. Thus, by viewing these serial photographs, the method of operation of Fig 1A to Fig 5H should be self-evident.
Prior Art
This invention is a new touch interface in which the user can simply touch the touch-sensitive screen and perform a touch operation, whether the display screen is turned on or off, at any time while the device is powered. This was impossible for the command-line interface (CLI), the GUI, or the touch GUI of the US 8,549,443 ('443) patent, which is the foundational patent method for all modern touch devices like the iPhone, because all of them required, as essential, at least the display to be on: in the CLI, for the user to see what they were typing; in the GUI on a resistive touch screen, to show a GDE (graphical display element), since that interface reserved touch for positioning a digit, and a press for clicking or performing an operation, so it was designed never to perform an operation by touch alone; and in the touch GUI of the '443 patent, which let a GDE be touched instead of pressed on a mobile device touch screen, performing the operation required, as essential, a GDE (e.g. control area 1) to be displayed on a GUI capacitive touch LCD screen which the user could see to touch.
Thus, although the '443 patent is the nearest prior art, all of the above interfaces required at least something to be displayed on a screen in order to perform an operation. There is therefore no operable prior art on the blank, turned-off appearance of a TDS in sleep mode, which was the universal GUI blank-screen appearance indicating that nothing could happen by touch and that the device was safe to put in a pocket. Touch on a TDS, independent of a blank screen, of a button press, and of a GUI displayed on the screen to determine the touch operation, was therefore unknown.
Invention
This invention is a completely new touch interface. Its scope is so broad that it needs numerous method claims to capture the unifying inventive concept of independent touch, as shown in the flow diagram of Fig 14. However, the invention is very simple, as illustrated by one of the independent method claims.
Claim 24. A method of performing an operation of a device by a path of one or more locations touched by a movement of a digit on a touch-sensitive display screen as the only input method on the surface of the device.
This invention is independent touch: performing a touch operation 142 on a TDS independently of whether it is turned on or off, of any external button press, and of the duration of the digit movement, as the only essential input method needed to perform an operation on the surface of the device while the device is powered. IT is completely unknown, and all of its superior, unexpected properties over the GUI or touch GUI are unknown, because everyone believed that the steps of a GUI in Fig 13 were essential to input touch into a GUI. Independent touch needs only the location information of one or more locations touched to perform an operation, as a turned-off display screen provides all the visual feedback necessary to perform operations, without requiring the limiting steps 131 to 135 of the touch GUI. The only reason the invention of IT and all its superior properties is unknown to the skilled person (SP) is that it requires a flash of inspiration to do something completely different: it requires much less programming skill to perform a touch operation 142 using the independent touch of Fig 14, with only the touch component TC on 141, than the conventional touch GUI of the '443 patent, which required both the TC and the DC to be on 133. Indeed, the invention could have been implemented by any averagely skilled person who could program in iOS, Android or Windows Phone, or any other touch GUI of Fig 13, within a day of having thought of the invention and IT's unknown superior properties over the prior art touch GUI. A minimal sketch of this independent touch flow is given below.
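As an illustration only (not the author's implementation), the Fig 14 flow can be sketched in Python, assuming a hypothetical read_touch_path() function supplied by an always-on touch controller; the gesture shapes and thresholds are illustrative assumptions:

    from typing import Callable, Dict, List, Tuple

    Path = List[Tuple[float, float]]   # locations touched, normalised to 0..1

    def match_gesture(path: Path) -> str:
        # Crude matcher for two of the gestures named in the text.
        if len(path) < 2:
            return "tap"
        (x0, y0), (x1, y1) = path[0], path[-1]
        if abs(x1 - x0) > 0.5 and abs(y1 - y0) < 0.2:
            return "swipe_2"               # long horizontal swipe: wake and unlock
        if (y1 - y0) > 0.5:
            return "swipe_3"               # long downward swipe: lock and DC off
        return "unknown"

    def run_independent_touch(read_touch_path: Callable[[], Path],
                              operations: Dict[str, Callable[[], None]]) -> None:
        # The TC is on (141) whether or not the DC is on, so this loop runs for
        # as long as the device is powered; no button press or GUI is required.
        while True:
            path = read_touch_path()       # blocks until a digit completes a path
            op = operations.get(match_gesture(path))
            if op:
                op()                       # perform the touch operation (142)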
Indeed, a skilled person seeing Fig 1A, Fig 5G and Figs 2BA-2BC would immediately appreciate the following superior properties of independent touch over the touch GUI 134 of Figs 2AA-2AD:
1. Always on (the user can always touch the surface and instantly operate the swipe 700, at any time when the TDS would be turned off in the prior art).
2. Instant (there is no delay or fumbling to find a button; the user just touches the screen, and no time is wasted pressing the button, waiting for the screen to be displayed, seeing the slider and then touching the slider).
3. Simpler (a swipe 700 is simpler than a button press and a swipe 7).
4. Faster (a two-step process of a button press and an essentially identical swipe 7 is always going to be slower than just performing swipe 700 of the invention).
5. Flexible (the user could change only one operation to the independent touch operation 142 of claim 1, e.g. swipe 2, in addition to all the prior art touch operations 136, or change as many of the prior art touch operations 136 as desired to be operated differently by the touch operation 142 of the invention. This gives the maximum flexibility of a new interface: anything from altering one operation to altering all operations from the prior art operation 136 to the independent touch operation 142, also written independent touch 142 or IT 142).
6. Familiar (swipe 700 is almost identical to swipe 7, but on a blank screen).
7. Easy to learn (swipe 2 or swipe 700 is similar to swipe 7).
8. Independent touch has the capacity to create dependent touch GUI operations that override the existing GUI operation, e.g. swipe 3.
9. Accessibility (the user can access all operations from a blank screen using a command-line prompt, as shown in Fig 5E).
10. Fastest password access (there is no quicker or easier way of entering a password reliably, with better safety than a button press 1 and a swipe 7, and with the power conservation of not having the DC turned on, as shown in Fig 5D. Indeed, this is faster and more instant than fingerprint recognition, which is not safe, as a mugger may simply press the user's finger on the button; that is easier than finding a 1-in-10000 password, which, if tried a couple of times with error, can be arranged to require the password to be repeated 2x or 3x in order to unlock the device).
11. More reliable (the prior art method requires three components to work: a button, a display component DC and a touch component TC; thus there are three parts to go wrong, compared with just the TC of the invention. Furthermore, because the user does not have to waste time pressing a button and moving a digit from button 1 to the slider 7a in Fig 2AB, the swipe 700 may be made slightly longer, which means the probability of the device being accidentally triggered, especially in the hands of a child, will be statistically less, as there are no visual clues for the child).
12. Cheaper (even though buttons are cheap, the circuitry and provision of a button are an extra complication and expense compared with producing a device without any external buttons; buttons are obsolete in the method of Fig 14 but essential in the method of Fig 13).
13. Less effort (it is less effort not to press a button 1 and not to have to move to a slider, just performing swipe 700, which is almost identical to swipe 7, on the turned-off display screen in the Fig 14 method).
14. Less digit movement, therefore more efficient (the digit movement to the button 1 and then to the slider 7a is additional to the swipe 7 or swipe 700).
15. Designed to be good for one-digit touch, e.g. a thumb alone on a right hand (the user can perform the swipe 2 or swipe 3 in Fig 1A, Fig 1C or Fig 2BB much more easily than a button 1 press and then a swipe 7).
16. Better power conservation while performing the operation, because the DC of the TDS is turned off during the swipe.
17. Prolonged instant usage throughout the whole battery life, as the TC is always on.
18. Increased capacity, as the device can perform all touch operations of the prior art device with the display on, plus touches with the display off.
19. Better or different aesthetic appearance of the device if it contains one operation performed by IT.
20. Fastest user interface to perform an operation, e.g. swipe 700 rather than button press 1 and swipe 7.
21. Fastest user interface to perform a task. A task is a sequence of operations that need to be completed, and the operation of claim 1 can be a task, e.g. performing the task in Figs 6AA and 6AB by a single swipe.
22. No contamination of the device through cracks in the surface, and no need for the sealed plastic bags used to cover old iPads to prevent cross-contamination.
23. The invention as a whole is vastly superior, as all the diagrams explain, especially comparing Fig 13 to Fig 14.
24. Able to improve any prior art operation performed by touch.
25. Backward compatible, able to perform all operations of the prior art (e.g. swipe 2 is an independent touch, but it accesses all the prior art operations of the touch GUI).
Brief description of the Diagrams
Figs 1A-C and Figs 2AA-BC show how the need for an on/off button on the surface of the device may be removed, thereby giving the user a touch device which responds to touch all the time to perform operations, instead of any operations being performed by any other input such as a button on the surface of the device. And since touch can control devices wirelessly, and the battery can be charged by induction, the outside of the new touch device can now be totally smooth and sealed, with the potential of a much better aesthetic appearance, which was an impossibility for any device which had a sleep mode.
Furthermore, Figs 3A-D and Figs 4A-G show how the user may perform their own touches and select one or more operations of the device to be performed by the touch, and Figs 4A-G show how another program could enable a user to add one or more further operations, to be performed at one or more locations on the path of a touch of a digit, by one or more further touches.
Fig 5A shows the prior art password screen. Fig 5H shows how the user could enter a number by a series of swipes, and that number could perform a unique operation of the device; by this method the operation may be any operation of the device, performable by a series of swipes (Fig 5H) or taps (Figs 5B-C) on an invisible keypad, or on an invisible keyboard (Fig 5E), in the same way that the command-line interface can operate all operations by typing in lines of code. Thus the touch of claim 1 can be one or more touches of one or more digits on a screen, and the operation can be one or more operations of the device for the one or more locations touched of a touch (e.g. a swipe), all entered by touch or a series of touches on an invisible number pad or keyboard.
Furthermore, Figs 6A-G show how this touch could perform the operation of a task, e.g. a single swipe could operate a sequence of operations needed to complete a task, and Figs 6AA-6AB show one way a task could be completed without error in a single swipe.
Fig 7 shows how a stylus attached to a digit may never be lost, and how it makes writing and prior art pointing-device use easy by touch.
Fig 8 then explains how the power drain of having the TC continually on is minimised, or could be reduced, and even made better than a mechanical button in sleep mode, by having solar power cells on the surface of the TC to provide more energy than is needed by the small area 802 which has to be continually powered. Realising that the TC of the TDS could perform all the operations of an external button on a device also makes it obvious that an internal button is less likely to be damaged in a car crash, when the TC of a TDS is most likely to be damaged, and that having an internal button or switch by the battery would be useful to reset the device, completely power off the device, or send a GPS coordinate to an emergency service, and thereby may save a life. Furthermore, other methods, like using solar power cells as a backup method to detect touch by light, and other special touches, could be designed on the TC of the TDS, so that even if the screen was damaged and could not display, the user would still have operations performable by invisible touch which could reset the device, if the accident had caused the software to freeze, via a circuit of touch or solar power independent of the software driving the rest of the computer operations. One possible scanning strategy for minimising standby drain is sketched below.
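One way the standby drain of Fig 8 might be kept small can be sketched as duty-cycled partial scanning; the controller methods (scan_region, set_scan_rate) are hypothetical names for illustration, not a real driver API:

    import time

    def standby_touch_scan(controller, wake_area, slow_hz: float = 10, fast_hz: float = 120) -> None:
        # Poll only the small always-powered area (802) at a low rate; switch the
        # whole panel to full-rate scanning once contact is detected there.
        while not controller.scan_region(wake_area):   # cheap partial scan
            time.sleep(1.0 / slow_hz)                  # low duty cycle saves power
        controller.set_scan_rate(fast_hz)              # hand over to the gesture loop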
Furthermore, the possibilities of touch to perform an operation are virtually unlimited, and Fig 9 shows another set of touches and taps at the circles of a touch-sensitive screen, as well as swipes or slides between these circles, all as other ways of executing operations on a blank screen, even the miniaturised blank screen of an iWatch or equivalent. Furthermore, the ability of touch to be performed on crystal glass (like the iWatch) means that the crystal of an analog Swiss watch, as in the example of Fig 10, may detect an input of invisible touch to perform an operation without any visual feedback, or it could be connected to a transparent LCD screen which lets the user see the analog watch but can show text downloaded from a phone when the display screen is touched in a specified manner to perform that operation.
Indeed, all of Figs 1-10 are a very limited selection of possible embodiments of the invention, limited by the claim language of this invention. Fig 11A shows how silent mode can be instantly engaged by any touch on the touch-sensitive screen when the phone is ringing; the user can then take the phone out at their leisure, perform a swipe 11 to see the notification (Fig 11B) that had caused the phone not to be silent, and terminate it in the normal way. Fig 12 shows how a user may design their own operations, all dependent on a swipe 11, making accessible a range of different invisible touch operations instead of the device requiring any external buttons. Fig 13 shows in a flow diagram why the nearest prior art touch GUI cannot anticipate the Fig 14 independent touch flow diagram. Fig 15 shows that the prior art device has a TDS with an external button 1 and requires the screen to be turned on to display a GUI in order for touch to work, whereas Fig 14 performs an operation by touch on the TC of the TDS as the only input method.
Detailed Description of the diagrams.
Fig 1A
This shows one embodiment of the invention. It shows a thumb performing a swipe 2 on a prior art touch device which has been modified according to the invention by having the TC turned on 141, so that it performs a touch operation 142 of turning on the display screen and showing the last screen accessed, with all the benefits described in the abstract over the prior art device's button press 1 and swipe 7.
Fig 1B
This illustrates the prior art touch device structure in sleep mode, which ensured that the TDS had both a turned-off touch component TC 01 and a turned-off display component DC 02, as shown by display screen 9. Thus this prior art touch device configuration, used by all prior art touch software, made it impossible for the TDS to detect touch or to use power, as the TDS was turned off unless a button 1 was pressed.
Fig 1C
This illustrates the touch device configuration of the invention, which ensures that the TDS has a turned-on 12 touch component TC 01 and a turned-off display component DC 02, as shown by display screen 12. The importance is that, from a visual perspective, screen 12 of the invention, shown in Fig 2BC, looks identical to screen 9 in Fig 2AD, which had the GUI appearance of a switched-off TDS in sleep mode, and which for 20 years symbolised that the display screen was safe to touch and incapable of performing an operation by touch. However, this touch device configuration enables the user to perform an operation at any time while the device is powered, without any of the limitations 131-135 of the prior art in Fig 13, which made it impossible for the prior art touch device to perform a touch operation 142 in the prior art GUI or touch GUI 134. A sketch of the two configurations follows.
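The difference between the Fig 1B and Fig 1C configurations can be captured in a small sketch, using illustrative flags rather than any real driver interface:

    from dataclasses import dataclass

    @dataclass
    class TDSState:
        touch_component_on: bool     # TC 01
        display_component_on: bool   # DC 02

    # Fig 1B: prior art sleep, screen appearance 9 -- nothing can respond to touch.
    PRIOR_ART_SLEEP = TDSState(touch_component_on=False, display_component_on=False)

    # Fig 1C: the invention, screen appearance 12 -- visually identical, but live.
    INVENTION_SLEEP = TDSState(touch_component_on=True, display_component_on=False)

    def can_perform_touch_operation(state: TDSState) -> bool:
        # Only the TC needs power for a touch operation 142; both blank screens
        # look the same, but only the Fig 1C configuration stays responsive.
        return state.touch_component_on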
Fig 2AA
This emphasises that the prior art touch device screen in sleep mode could not perform any operation on the turned-off TDS 131, illustrated by the visual feedback of the TDS with screen appearance 9 and a screen completely inoperable to touch. In order to power both the TC and DC components of the TDS so the user could see a GUI on the display screen 9, the user had to press button 1 132 to turn on the TDS 133, including both the TC and the DC.
Fig 2AB.
This shows the TDS turned on 133, showing a GUI 134 screen with a GDE 135 slider 7a and another GDE 135 slider control which has the boundary 7b. If the user touches the GDE 135 slider 7a and performs the swipe 7, moving the vertical right slider edge 7a to the edge 7c and removing the digit, this unlocks the phone to allow access to the rest of the touch operations 136 of the phone. This is equivalent to the start sequence of the nearest prior art, the '443 patent, and operates according to the independent claims of the '443 patent, which was the first to describe the GUI touch interface. The important aspect of this prior art touch device operation is that it was not any touch that could perform any operation, which is the scope of the touch operation 142 of the invention. This touch operation 136 was completely dependent on steps 131-135, without which it was inoperative.
Furthermore, the user had no choice to perform any touch they liked to operate any operation on the GUI screen, nor to modify the unlock touch operation to operate according to the user's touch or the user's choice of the operation the touch would perform. The touch GUI always operated by touch according to how the programmer had programmed the GUI. Indeed, without steps 131-135 the touch was inoperative.
Fig 2AC
This shows the last screen seen by the user of the prior art device, which was the desktop 8. After a period of screen inactivity the TDS turns off 131 both the TC and DC, so the screen is in sleep mode, incapable of performing an operation by touch, and conserving battery power. The GUI 134 desktop 8 is also configured to detect button input. The home button could change the last screen to the desktop 8, if that was the home screen, and the on/off button 1 could also turn off the TDS 131 to sleep mode. Thus this illustrates that the device does not operate anything by touch as the only method: it is not a touch device operating by the only method of touch on a TDS 142. The principal method of control is the GUI, or "what you see is what you get". Thus if the user sees the desktop, they know they could turn the appearance of the GUI off by the on/off button, and make the display screen turn on only by pressing the on/off button 1 132 or the home button 132. Thus this representative prior art device could not perform one operation by touch on the touch-sensitive screen, but was reliant on all the other input methods and the configuration of 131 to 135.
Fig 2AD
This shows the blank TDS screen 9 of a turned-off TDS 131, which was known and designed so that it was impossible to perform any operation by touch, and which was completely safe to touch. This was the standard appearance of a blank turned-off screen since 1992.
Fig 2BA
This shows the swipe 2 performing an operation to replace the button 1, or to provide an alternative method, turning on the display screen by touch alone on a TDS with the DC turned off but the TC turned on 141, which has a screen appearance 12 identical to the turned-off TDS 131 screen appearance 9 in Fig 2AA and Fig 2AD. Furthermore, the skilled person (SP) would note that swipe 2 is longer than swipe 7, meaning that swipe 2 is safer than the unlock swipe 7 because it requires a longer distance of locations touched to be performed; thus it is safer at unlocking the device. Furthermore, it will be noted that the starting position of swipe 2 is conveniently located for an easier right-thumb swipe than the more awkward swipe 7 of the prior art. It is also safer and uses less power, because the DC is turned off all the way through swipe 2 rather than having to be on for swipe 7, and swipe 2 can be done more quickly and easily than performing the operation by pressing a button 1 and performing swipe 7 as in the prior art. It is a touch operation 142 of the invention of claim 1, in which a user can touch the TC of a TDS 141 and perform a touch operation 142, that is, a touch of a predetermined movement of one or more digits on the TC of a TDS to perform the operation. It has all the superior properties of the independent touch of Fig 14 over the prior art of Fig 13, including at least the improved performance described in claim 11.
Fig 2BB
This shows the last screen used by the user. It will be noted that this is the last screen of the prior art touch device, and all the operations unlocked in the desktop 8 shown can be operated in the normal touch GUI manner of the prior art device software and the prior art touch device. Thus the invention of independent touch may change only one operation, to be performed in the new manner of swipe 2, and all the rest of the prior art touch device can operate exactly as before to unlock the device and have all the normal behaviour of every unlocked operation of the device. However, to replace the on/off button 1 completely, Fig 2BB shows a swipe 3 which will replace, or provide an alternative method from every screen, to turn off the DC of the TDS and lock the screen until swipe 2 is performed again. The swipe 3 is a good example of how, when the display is on, the touch operation 142 is independent of how the prior art software was originally programmed, and may OVERRIDE or replace the prior art touch response of the GUI screen. Thus, no matter what programming was on the GUI screen, the swipe 3 on every displayed screen will perform the operation to turn off the DC and lock the device until swipe 2 is performed. A minimal sketch of such a global override is given below.
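A minimal sketch of the global swipe 3 override, assuming the recogniser runs ahead of the application's own touch dispatch (the function and method names are illustrative):

    def dispatch_touch(path, device, gui_handler) -> None:
        # Checked first on every GUI 134 screen, so it overrides whatever touch
        # handling the foreground application originally programmed.
        if is_swipe_3(path):
            device.display_off()      # turn off the DC
            device.lock()             # locked until swipe 2 is performed again
            return
        gui_handler(path)             # otherwise, prior art behaviour 136

    def is_swipe_3(path) -> bool:
        # e.g. a long downward swipe near the right edge; the exact shape of
        # swipe 3 is a design choice, not specified by this sketch.
        (x0, y0), (x1, y1) = path[0], path[-1]
        return x0 > 0.8 and x1 > 0.8 and (y1 - y0) > 0.6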
Fig 2BC
This shows a screen 12 with touch permanently turned on but with the display component DC turned off to save power. Thus the user can always operate the device by touch, as long as the memory of the device is powered. It provides touch that is always active or on to the user, but touch which is safe, in that it requires swipe 2 before it can unlock the device or waste power turning the display screen on. Thus the touch is instant (always on), invisible (it does not need a display screen on, only one or more locations touched), and independent (it can operate independently of any GUI appearance of the screen or previous GUI programming of touch) to perform the operation. In every aspect it is superior to the prior art dependent touch described in Fig 13.
Fig 3A
This shows a general settings menu which has a menu item 100 to allow a user to record an invisible touch or independent touch. This is one embodiment of a settings menu option to record a touch operation 142 of the invention.
Fig 3B
This shows an aspect of the invention where the user can record a touch operation 142. It shows the user touching the screen with a swipe 2, and the locations touched by swipe 2 are saved to the memory of the device as the touch component of the touch operation 142.
Fig 3C
Fig 3C shows a swipe 2 having been completed as the touch component of the touch operation 142, represented graphically 200 on the screen. If happy with the swipe 200 as graphically represented, the user has the option to tap a button 201 3x to add an operation to the graphical representation of the touch. Alternatively, the user can tap 3x on the cancel button to return to the Fig 3A menu. Thus the button 201 provides one embodiment of how the user can determine the operation component of the touch operation 142.
Fig 3D
Having tapped button 201 3x, Fig 3D appears and allows the user to add an operation as the operation component of the touch operation 142 of the touch swipe 2. The user can touch a location on the graphical swipe 200, e.g. location 46 at the tip of the graphical swipe 200, representing the location where the digit is removed from the screen. The user is then presented with one or more operations in a scrollable menu, which could be all the operations of the device arranged in a single menu format or in a hierarchical format; alternatively there may be a magnifier icon which allows the user to search for operations with a QWERTY keyboard. Using the single menu, it can be appreciated that from a long scrollable menu the user could select one or more operations out of all the operations of the device, e.g. the operation to turn on the display screen 206 and to unlock the last screen 207, out of a scrollable menu of 206, 207, 208, 209. The user could then either save this 205, cancel 204, or add another operation 203. Adding operations could be used to add the additional locations 41 for the camera application, 43 for the music application, and 44 for the notification application, each of which is operated by the user performing the swipe 2: when the digit arrives at location 41, 43 or 44 shown in Fig 4A, the camera application, music application or notification application respectively opens and is displayed for the location of the circle, then disappears, until the end of the swipe 2, where at 46, when the digit is removed, the display is turned on and the last screen is displayed. Thus, by this method of adding one or more operations 203 to the touch operation 142, this description explains how a task of operations can be performed by a single swipe 2 in Fig 4A, or the touch operation 142 can be just the operation at 46 of swipe 2, as shown in Fig 3D. A sketch of one possible data structure for such a recorded touch operation follows.
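One plausible data structure for the recorded touch operation 142 of Figs 3B-3D; the field names and the proximity-matching rule are assumptions for illustration, not the author's implementation:

    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class AttachedOperation:
        location: Point                  # a point on the recorded path, e.g. 41, 43, 44 or 46
        operation: Callable[[], None]    # e.g. open the camera application
        radius: float = 0.05             # how close the live digit must pass

    @dataclass
    class TouchOperation:
        recorded_path: List[Point]                        # swipe 2, saved in Fig 3B
        attached: List[AttachedOperation] = field(default_factory=list)

        def replay(self, live_path: List[Point]) -> None:
            # Fire each attached operation once, as the digit reaches its location;
            # the lift-off location (46) typically turns on the display screen.
            fired = set()
            for x, y in live_path:
                for i, a in enumerate(self.attached):
                    if i in fired:
                        continue
                    if ((x - a.location[0]) ** 2 + (y - a.location[1]) ** 2) ** 0.5 < a.radius:
                        a.operation()
                        fired.add(i)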
Fig 4A
This shows the swipe 2 with the added operations 41, 43 and 44 described in Fig 3D. It is easy to see from the explanation of Fig 3D how numerous additional operations could be added, by the example embodiment of Fig 3D or another embodiment, to record a touch operation 142 where the operation 142 can be a single operation (Fig 3D, 46) or a task of 41, 43, 44 and 46, i.e. a swipe which can operate a task of a sequence of operations by a single swipe, as shown in Fig 4A and described as creatable via Fig 3D. In addition, an editor menu item 101 in Fig 3A could also provide a means of adding locations to add operations and/or touches.
Fig 4B
As discussed, if the user performed the swipe 2 and arrived at 41, this may open and display a prior art camera application. If the user lifted off at 41 while this application was displayed, they could access and use the camera application in the prior art manner. The user could then perform swipe 3 (not shown) to exit from this application. Because this application is accessed before the phone is unlocked, no other application will be accessible, and it is made accessible with such a short swipe, where the digit lifts off or is removed at 41, to access the camera in this simple embodiment.
Thus, if this is done, the camera screen shown in Fig 4B becomes permanently operable, and it could operate in the normal manner, but no other operation could be accessed from the locked phone. This would rapidly allow a user to take a picture or record a video using the conventional GDE programming of the prior art, with the only exception that, when the user had finished taking the picture, the DC of the TDS would turn off after a period of screen inactivity, or the user could switch the camera application off by a swipe 3 (not shown but available in Fig 4B).
Fig 4C
Likewise, if the user starts the swipe 2 and continues the swipe to the location 43, the music player appears. If the user removes their finger at this point, the music player can perform operations as in the prior art. The music player of Fig 4C is shown larger in Fig 4A to illustrate that the user can design additional touch operations 142, which can change the normal GUI operation of the music player from how it was originally programmed, by independent touch, because user-defined touches can override the previously programmed touch operation 136. Thus the menu may be designed, by settings menu options and editor programs easy to enable in the prior art, so that the user can access the music player screen shown in Fig 4A and Fig 4C and change the menu operations: a swipe 47 movement from 43 on the left side could play song 3, while on the right side of the menu (like the black region demonstrating area 65 in Fig 6C) the swipe could cause a reverse scroll, whereby the user swipes 48 downwards or slides downwards and the remaining menu items not displayed, e.g. songs 8-14, move upwards and are scrolled into the visible song menu area. This independent touch is deliberately chosen because it is counter-intuitive, to illustrate that it is not WYSIWYG but rather the touch of WYTIWYG controlling a prior art touch GUI application; it is also more ergonomic, letting the user see a few more remaining song tracks than conventional scrolling. The purpose of this additional description of 47 and 48 is to show how these independent touches of Fig 14 can also change the touch behaviour of an open application of a prior art music player programmed with different touches. Thus this illustrates how new touches with new operations can modify the original prior art music application.
Fig 4D.
Furthermore, if song 3 was selected by swipe 47 and the user removed the digit, the display screen may be programmed to turn off and show the blank screen of Fig 4D with the following operations. Swipes 25, 26, 210, 211, 212 and 213 respectively invisibly increase the volume, decrease the volume, move to the preceding track just played, move to the next track to be played, scroll a previous playlist (keeping the display screen on only during the slide, to show the playlists, with the selection made by the removal of the digit to select the highlighted playlist), or scroll to a next playlist (again keeping the display screen on only during the slide, with the selection made by the removal). In addition, at the MUE the user could touch to pause and play the song. Thus this brief description of Fig 4D shows how numerous operations dependent on the music player being open can all be performed on a turned-off screen, even though the original application was never designed to have these independent touches. Thus not only can independent touches 142 modify the behaviour and appearance of a GUI 134 screen of prior art applications, or a GDE 135 touch operation, for only the scope of a single application or single GDE, but touches on a turned-off screen can also be made dependent on these prior art GUI touch applications. A sketch of such a context-dependent gesture table follows.
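A sketch of a context-dependent gesture table, active only while the music player is open with the display off; the gesture names follow the figure, but the handler names are illustrative assumptions:

    MUSIC_GESTURES = {
        "swipe_25":  lambda player: player.volume_up(),
        "swipe_26":  lambda player: player.volume_down(),
        "swipe_210": lambda player: player.previous_track(),
        "swipe_211": lambda player: player.next_track(),
        # Swipes 212/213 would additionally light the DC only while a playlist
        # is being scrolled, selecting on removal of the digit.
    }

    def handle_dark_screen_touch(player, gesture_name: str) -> None:
        action = MUSIC_GESTURES.get(gesture_name)
        if action:
            action(player)           # performed with the DC off throughout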
Fig 4E
This shows another very useful operation: seeing the latest notifications the phone has received. The user performs the swipe 2 until arriving at location 44, at which point the notifications screen appears. The user could then just look at the notification and swipe upwards past the 42 boundary to deactivate swipe 2, or carry on with swipe 2, which will turn off the notification screen, so it appears only for the briefest time for the user to see the information and exit from the notification screen. Alternatively, the user could perform swipe 214 to select the picture notification to see the picture, or could reverse-scroll the rest of the additional notifications not shown on the screen by swipe 215. Thus, again, these touch operations 142 may replace the prior art touch operations 136.
Fig 4F
This shows a small selection of specific options that a user may use when creating a touch operation 142 at a specific location, as illustrated in Fig 3D and Figs 4A-E (e.g. 46, or 41).
Fig 4G
This shows a small selection of general options that a user may use when creating a touch operation 142.
Figs 5A-E
Fig 5A shows the prior art. Figs 5B-5D show how a sequence of touches or swipes can input numbers into the touch device, and Fig 5E shows how a sequence of swipes can enter character input into a touch device.
Fig 5A
This shows the prior art method. The user presses button 1 to turn on the unlock screen, then swipes 7, and then enters four digits, e.g. 2580, to enter a password on the password screen shown in Fig 5A.
Fig 5B
This shows the new independent touch method. The user performs swipe 11 by starting the swipe at the URC and lifting off the digit at the MUE within region 10. The user has divided the display screen 12 into nine invisible regions 1-9: region 1, an upper left region ULR or area 14; an upper middle region UMR or area 50; an upper right region URR or area 51; a middle left region MLR or area 502; a middle middle region MMR or area 501; a middle right region MRR or area 500; a lower left region LLR or area 505; a lower middle region LMR or area 504; and a lower right region LRR or area 503, as shown in Fig 5F but represented as regions 1 to 9 respectively in Fig 5B. It also shows another region, region 0, represented by the rectangle enclosing the 0 over the middle lower edge MLE. In short, these regions exist in the user's imagination, representing areas of a blank turned-off display screen, and the areas 14, 50, 51, 502, 501, 500, 505, 504 and 503 are the invisible screen areas shown in Fig 5F corresponding to the regions 1, 2, 3, 4, 5, 6, 7, 8, 9 in Fig 5B.
In order to perform the equivalent password entry of the prior art, the user taps within the blank imaginary regions 2, 5, 8 and 0; this inputs the identical password shown in Fig 5A and then performs the operation of turning on the DC of the TDS, showing the desktop 8.
It will be appreciated that even this method requires far less digit movement on the screen than the prior art method of Fig 5A. A sketch of the region mapping follows.
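A sketch of the region mapping, assuming normalised screen coordinates with y growing downward; the exact band used for region 0 is an assumption for illustration:

    from typing import Iterable, Tuple

    def region_of(x: float, y: float) -> int:
        # Regions 1..9 laid out as in Figs 5B/5F (1 = upper left .. 9 = lower right);
        # a band over the middle of the lower edge acts as region 0.
        if y > 0.92 and 0.33 < x < 0.67:
            return 0
        col = min(int(x * 3), 2)
        row = min(int(y * 3), 2)
        return row * 3 + col + 1

    def passcode_from_taps(taps: Iterable[Tuple[float, float]]) -> str:
        return "".join(str(region_of(x, y)) for x, y in taps)

    # Taps near the centres of regions 2, 5 and 8, then region 0, enter "2580":
    assert passcode_from_taps([(0.5, 0.15), (0.5, 0.5), (0.5, 0.8), (0.5, 0.95)]) == "2580"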
Fig 5C
This shows the identical imaginary blank regions 1-9 and 0, except that regions 1-9 fit into the upper half of the display screen 12. The user inputs the password to turn on the DC and show the last screen (e.g. desktop 8) by the identical method of Fig 5B. The only difference is that the user has made the imaginary regions occupy only half the screen. Again, in a settings menu the user could adjust the number or size of the invisible regions on the blank screen, and exactly where each region is placed as an area of the screen, so that the user finds an ideal region size for each number of this invisible region set 1-9 and 0 acting as an invisible number pad on a blank screen. Thus, although Fig 5B has a larger number pad with larger regions, a skilled user would find that they could accurately input data using this invisible number pad occupying half the screen, in a more convenient and efficient manner.
Fig 5D
This shows how, using the identical size of number pad as Fig 5C, the user could change the behaviour from tapping within a series of regions 2, 5, 8, 0 to input the number 2580, as in Fig 5C, to performing the whole sequence of number entry by a single right-angled swipe 516, as shown in Fig 5D. The SP would appreciate that this is a far faster way to implement the data entry of the four different operations 2, 5, 8, 0; indeed, it would be the fastest and easiest way a user could perform a task of several different operations, with entering each digit being a different operation. It could be made faster still by requiring only a downward swipe within regions 2, 5, 8 and removal of the digit within 0, but this would have only approximately the same safety as performing a swipe 7 in the prior art. However, the swipe is made a right-angled swipe: the user performs the horizontal movement from the URC to the MUE, then the digit continues moving in continuous contact downward through each of the numbers, and as the TDS detects the digit moving within, or entering then leaving, each region 2, then 5, then 8, the TDS inputs the operation of entering the three different numbers 258; when the user removes the digit within the region 0, that digit is entered as the last number.
It would be appreciated by the skilled person that the right-angled touch swipe 516 requires the user to make a right-angled change in direction, and this has less probability of being accidentally triggered than swipe 7. Therefore swipe 516 is much safer than swipe 7, and it would be almost impossible to turn on the display and unlock the device to the last screen by accidentally performing this operation, especially if the screen, after the initial detection of the swipe at the URC, immediately deactivates when a wrong region (e.g. 3) is touched in the wrong sequence, thereby undoing any one or more operations performed by the swipe. The SP would appreciate that a child would have much less than a 1 in 100,000 chance of accidentally performing this swipe, because it requires 4 different numbers to be entered in the correct sequence, and also an initial horizontal movement 11a within area 10. This is less probable of being accidentally triggered by a child than the PIN numbers used with credit cards. Furthermore, if the wrong sequence of four digits is entered more than twice, the device could make the user repeat the swipe 516 2x or 3x, making the probability respectively less than 1 in 100,000,000 or 1 in 1,000,000,000,000. Thus the skilled person would appreciate that requiring the user to enter at least a 4-digit password, with deactivation of the number pad by a wrong sequence of regions touched (where any region not within the 10-digit region would also be classed as a wrong region), would be safer and quicker than the prior art password entry of Fig 5A.
Thus Figs 5B-5D show how a user could enter a sequence of digits, either by a sequence of touches, e.g. taps in Fig 5B or Fig 5C, or by a single swipe in Fig 5D, in the most efficient manner possible, without a button press, at any time, with the minimum of digit movement over the screen. A sketch of the right-angled swipe as a region-sequence matcher follows.
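A sketch of the Fig 5D swipe as a region-sequence matcher, reusing the region_of() mapping sketched above for Fig 5B; the initial horizontal arming stroke in area 10 is omitted for brevity, and the deactivation-on-wrong-region behaviour follows the description above:

    from typing import List, Optional, Tuple

    EXPECTED = [2, 5, 8, 0]   # the regions the digit must pass through, in order

    def digits_from_swipe(path: List[Tuple[float, float]]) -> Optional[str]:
        entered: List[int] = []
        last = None
        for x, y in path:
            r = region_of(x, y)            # region_of() as sketched for Fig 5B
            if r == last:
                continue                   # still moving within the same region
            last = r
            if len(entered) < len(EXPECTED) and r == EXPECTED[len(entered)]:
                entered.append(r)          # correct next region: record the digit
            else:
                return None                # wrong region: deactivate, undoing all input
        return "".join(map(str, entered)) if entered == EXPECTED else None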
Indeed, the SP would observe that this ability to enter a sequence of operations, e.g. each digit representing a different operation, could allow the user to enter a number corresponding to one operation of the device; thereby all operations of the device could be entered by a sequence of digits, provided the range of numbers expressible by the sequence was larger than the total number of operations of the device. Thus the invisible number pad described here provides the user, at any time, with a method to perform any operation of the device by entering a digit sequence into the number pad, and by this method all operations of the device could be operated more safely than on any prior art device or software.
Fig 5E shows one embodiment of how a user could enter any text into the device. It shows how the original nine regions 1-9 of Fig 5B could each have four different swipes. Each of these swipes requires the user to place the initial digit contact (e.g. represented by the four tails of each swipe, e.g. in region 1 or area 14 in Fig 5H) in a region. Thus, as long as the initial digit contact 4 is within the region 1 shown in Fig 5B, or area 14 shown in Fig 5F, a swipe in a down, left, up or right direction with the tail 4 of the swipe within the region would respectively perform the Cap operation, or input the letter a, b or c. In the same way, each of the other regions also enables the user to perform four different swipe actions, as shown in Fig 5E, so that the blank display screen becomes an invisible keyboard. (Likewise, Fig 5H shows swipes entering a number as a passcode, as shown in the serial photographs of Fig 659 to Fig 757 of GB 1520667.5.)
Also, at the bottom of the display screen, the lower edge is divided into three additional areas or regions which, if tapped within, could each perform a different operation, e.g. Send, View or Cancel.
Thus, by this means, as shown by the sequential photographs, a user could enter "Hello World" into the device. A sketch of the region-and-direction mapping follows.
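A sketch of how a (region, swipe-direction) pair could select a key, again reusing the region_of() mapping sketched for Fig 5B; only regions 1 and 2 are populated here, and region 2's entries are illustrative assumptions, since the text fixes only region 1's four swipes:

    DOWN, LEFT, UP, RIGHT = 0, 1, 2, 3

    LAYOUT = {
        1: ["CAP", "a", "b", "c"],    # region 1: down -> Cap, left -> a, up -> b, right -> c
        2: ["d", "e", "f", "g"],      # illustrative; the full Fig 5E layout is not reproduced
    }

    def key_for_swipe(path):
        (x0, y0), (x1, y1) = path[0], path[-1]   # the tail (initial contact 4) fixes the region
        dx, dy = x1 - x0, y1 - y0
        if abs(dy) >= abs(dx):
            direction = DOWN if dy > 0 else UP   # y grows downward in this sketch
        else:
            direction = LEFT if dx < 0 else RIGHT
        group = LAYOUT.get(region_of(x0, y0))
        return group[direction] if group else None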
Thus this method could allow a user to develop a new skill of invisible texting: being able to text without any feedback and still know exactly what was written in the text. This may be a common feature indicating user intelligence in 20 years, and it has the advantage of improving the recall and decisiveness of the user by practising the ability to picture the text message without seeing it written down. The user could at any time see what they had written by the sequence of swipes by touching the View area on the middle lower edge; when the digit is removed, the visible text box reminding the user of what was written disappears.
However, it would be appreciated that an invisible keyboard now gives any SP the ability of a command-line operating system, with which a user can perform any operation using a command line or a list of command lines. Thus all operations of the device, and all configurations of the device, could be entered or modified using an invisible keyboard. The skilled person would realise this through Fig 5E or a similar invisible keyboard (i.e. the SP could rearrange each of the 4 swipes to perform different operations or enter different keys, giving a different invisible keyboard layout) designed for an SP who programmed with a command-line user interface. By this method the SP would realise that this new invisible keyboard could have the entire functionality of a command-line operating system, meaning that by a sequence of touches (in this example a sequence of swipes, but it could be any sequence of touches) the user could operate all operations of the device using the full capacity of a command-line operating system, which can perform all operations of the GUI in a list format. And since language is not bound by previous prior art languages, in that users or SPs can develop new programming functions and procedures to be performed by each command line, all operations of the prior art touch interface and prior art GUI can be performed by touch at least by this method; in addition, all new modifications to the prior art touch software can be programmed by this method, and all new modified code could be programmed by this method. Thus the ability of touch to input a language reliably and repeatedly on an invisible keyboard, in this example embodiment, makes this a new touch interface which is not dependent on any visual appearance of the screen, on a display screen being turned on, or on any of the other dependencies of the prior art described in Fig 13. It would therefore be obvious to the skilled person that this new touch interface has unlimited capacity to perform all the previous touch operations, to invent all the invisible touch operations and modifications to the existing prior art GDEs, and to improve the efficiency of operation of the modified GDEs of the prior art, all through this new interface.
Thus this new touch interface is revealed as having at least the following characteristics beyond the prior art. It is a true touch interface in that it can perform an operation at any time when the memory of the device is powered, e.g. to remember at least the last screen accessed. It is a touch interface because it requires only touch on the TDS to perform one or more operations of the device, and does not require any external button or any of the dependencies listed in Fig 13.
It is at least the simplest, easiest, fastest, most efficient and least power-consuming method of performing an operation or a sequence of operations (i.e. a task), or of inputting data reliably and safely on a computer, even when used by children. It has the capacity to be completely backward compatible with all prior art input methods; however, it differs from the prior art in that, in its essential form, it can fully operate the device by the processor detecting the touch of one or more digits on the TC of the TDS at all times while the device is powered, without any other input needed on the surface of the device, a scope no prior art touch device or touch software could claim.
Indeed, the invention was seeing that the simplicity of touch (devoid of any need to be subservient to a button press, to a display screen being on, or even to the teaching of a GUI, which required as essential that graphical display elements (GDEs) exist in order to determine what operations the touch would perform) has the capacity to perform everything the prior art could perform, but by a better, more user-friendly gliding touch interface, with an unlimited capacity to perform all the operations of the command-line interface by invisible touch operations, and to perform operations both with the display screen on and with it off (impossible for the prior art). In short, it was seeing that this new interface could modify every existing prior art software or device to perform at least one operation more efficiently, with fewer steps and/or less digit movement on the surface of the device, than any other prior art input method. Indeed, an SP would recognise that all programming will now be improved by the touch operation of invisible, instant, independent touch being able to improve the performance of any prior art input method.
Indeed, as discussed in detail in the priority documents, in every aspect it is superior to the prior art touch devices or software.
Fig 5F.
This shows the display screen 12 divided into 9 regions.
As noted for Fig 5B, these regions exist in the user's imagination, representing areas of a blank turned-off display screen, and the areas 14, 50, 51, 502, 501, 500, 505, 504 and 503 are the invisible screen areas shown in Fig 5F corresponding to the regions 1 to 9 in Fig 5B.
The purpose of Fig 5F is to show the range of touches that are performable by a user. As discussed, it would be obvious to the SP that the user can accurately identify a single location at each of the corners of the display screen, that is the left upper corner LUC or upper left corner ULC, the right upper corner RUC or upper right corner URC, the left bottom corner LBC or bottom left corner BLC, and the right bottom corner RBC or bottom right corner BRC. Thus one location touched or tapped can occur at these locations. Furthermore, in the imagination of the user, the user can identify at least four further locations: the middle upper edge MUE or UME, the middle right edge MRE, the middle bottom edge MBE, and the middle left edge MLE. Thus there are at least 8 areas on the display screen that a user can reliably and repeatedly touch without error on a turned-off display screen.
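As a minimal sketch (assuming a conventional coordinate system with the origin at the upper left, and with the eight landmark names taken from the description above; the function names are illustrative), a touch on a blank screen could be snapped to the nearest corner or mid-edge landmark as follows:

    # Hypothetical sketch: snap a touch on a blank (turned-off) display to the
    # nearest of the 8 landmarks a user can find by feel: 4 corners + 4 mid-edges.
    import math

    def landmarks(width, height):
        return {
            "ULC": (0, 0),           "URC": (width, 0),
            "BLC": (0, height),      "BRC": (width, height),
            "MUE": (width / 2, 0),   "MBE": (width / 2, height),
            "MLE": (0, height / 2),  "MRE": (width, height / 2),
        }

    def nearest_landmark(x, y, width, height, tolerance=0.25):
        """Return the landmark name for a touch, or None if it is too far
        from every landmark (tolerance is a fraction of the screen diagonal)."""
        diag = math.hypot(width, height)
        name, (lx, ly) = min(
            landmarks(width, height).items(),
            key=lambda kv: math.hypot(kv[1][0] - x, kv[1][1] - y))
        return name if math.hypot(lx - x, ly - y) <= tolerance * diag else None

    # e.g. nearest_landmark(5, 8, 320, 568) -> "ULC"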
In reality, with practice the division into regions and locations can be much finer than this (e.g. Fig 5C rather than Fig 5B), because the user becomes very adept at dividing the screen into different regions of locations in the user's imagination. Therefore all operations shown in Fig 9 could easily, reliably and repeatedly be performed by detecting touches (i.e. contact) or taps at the circled areas of a small screen, like the watch screen in Fig 10, and directional slides or swipes between these circles as shown in Fig 9.
However, with just 4 swipes as the only touch shown in each of the nine regions of Fig 5E, there is already the capacity for 36 different operations. It would further be appreciated that the user could also perform an operation by: a contact, represented by the square 510; a tap, represented by the triangle 511 (or the arrow-head tip shown in Fig 5B, e.g. within region 2); a slide of continuous locations touched, represented by a line 512, which as described in claim 2 can be a slide in a certain direction; a slide in two or more directions 513, symbolised by two lines and an angle; or a swipe 514, symbolised by an arrow, with the tail being the initial contact of the path of the digit moving on the screen, the digit following the body of the arrow in the direction of the arrow along a plurality of locations on the screen, until the digit is removed at the tip of the arrow, as shown by all the swipes (e.g. swipe 2 and swipe 3) in the rest of the diagrams.
Thus the touch of claim 1 includes each digit performing any of the touches described for Fig 5F, and these can be performed simultaneously or in series by one or more digits. Furthermore, Figs 5B and 5C show that a series of taps can cause the input of a number, and that number can perform the operation of claim 1. Thus the touch of claim 1 can be a series of touches, e.g. a tap in Figs 5B-C. Fig 5D shows how a single swipe can perform the task of inputting a number, and thus the touch could be a single swipe performing any operation of the device. However, it would be understood from Fig 5F that the touch could equally be any simultaneous touch of two or more digits, e.g. two digits making contact at two locations (the right index finger at the MUE and the right middle finger at the URC simultaneously) to perform the operation, or two different swipes simultaneously, or two slides simultaneously. Indeed, any touch of two or more digits in sequence to perform an operation could also be the predetermined movement of the touch of claim 1, i.e. the right index finger touching the MUE before the right middle finger touches the URC could be the touch of claim 1.
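Purely as an illustrative sketch (the event model, the timing window and all names are assumptions, not the specification's API), simultaneous and sequential two-digit touches could be distinguished by comparing contact timestamps:

    # Hypothetical sketch: classify a two-digit touch as a simultaneous chord
    # or an ordered sequence, then look up the operation it is predetermined
    # to perform.

    SIMULTANEOUS_WINDOW = 0.05   # seconds; closer contacts count as a chord

    CHORD_OPS = {frozenset({"MUE", "URC"}): "operation_A"}
    SEQUENCE_OPS = {("MUE", "URC"): "operation_B"}

    def classify(contacts):
        """contacts: list of (timestamp, landmark) pairs, one per digit."""
        (t1, a), (t2, b) = sorted(contacts)
        if abs(t2 - t1) <= SIMULTANEOUS_WINDOW:
            return CHORD_OPS.get(frozenset({a, b}))
        return SEQUENCE_OPS.get((a, b))

    # classify([(0.000, "MUE"), (0.020, "URC")]) -> "operation_A"  (chord)
    # classify([(0.000, "MUE"), (0.400, "URC")]) -> "operation_B"  (sequence)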
Fig 5G shows a swipe 700 which starts at the LBC, moves along the bottom edge, and is removed at the RBC to perform the operation of turning on the display screen and showing the last screen. It illustrates that swipe 700 is a longer swipe than swipe 7 shown on a turned-on display screen of the prior art; thus the skilled person would realise that swipe 700 would be less likely to be accidentally triggered than swipe 7. Furthermore, the sequential photographs show that this movement is an available movement on the existing iPhone to perform the operation of unlock. Thus this photograph shows how easy it would have been for a SP to have enabled this invention on the identical prior art software. Indeed, the SP only needs to turn on the TC of the TDS as shown in Fig 1C, from the prior art configuration where the TC was turned off in Fig 1B. This then makes the prior art software screen permanently sensitive to touch even when the display screen is turned off. Thus it would be appreciated that if Fig 5G had the DC turned off but the TC turned on 141, this would show an invisible screen; and if the user then performed swipe 700 using the existing program, modified only to turn on the DC after the completion of swipe 700, this would be an equivalent swipe to swipe 2 in Fig 1A. The only difference is that the swipe 2 position is more efficient and ergonomic for a right thumb. The skilled person would realise that, with just a few lines of code, it would have been that simple to convert the prior art touch device method of Fig 5G to the invisible touch of Fig 14 during sleep mode of the independent touch of the invention. The purpose of Fig 5G was to show how easy the enablement of the invention was, and also to make obvious the inefficiency of the prior art input method. The prior art input method requires the user to press button 1 (or home) and then move to perform swipe 7. The new touch requires the user only to perform swipe 700, which is faster, easier and safer, because it is specifically a longer and more precise touch than swipe 7 (it deactivates or is undone if it is not done precisely), it does not require any pressing, and it uses less power in performing the operation because the display screen is turned off while the user performs swipe 700.
Thus it is obvious to any skilled person that this design for turning on the display screen and unlocking is superior in every aspect to the prior art.
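The "few lines of code" point can be made concrete with a minimal sketch, assuming screen coordinates with y increasing downward and a hypothetical display controller (the thresholds and names are illustrative only):

    # Hypothetical sketch: recognise swipe 700 - a full-width swipe along the
    # bottom edge of a dark screen - and only then power on the display.

    EDGE_BAND = 0.10      # bottom 10% of the screen height counts as the edge
    MIN_SPAN = 0.90       # swipe must cover at least 90% of the screen width

    def is_swipe_700(path, width, height):
        """path: list of (x, y) samples from touch-down to lift-off."""
        if not path:
            return False
        xs = [x for x, _ in path]
        ys = [y for _, y in path]
        stays_on_edge = min(ys) >= height * (1 - EDGE_BAND)
        spans_width = (max(xs) - min(xs)) >= width * MIN_SPAN
        left_to_right = xs[0] < xs[-1]
        return stays_on_edge and spans_width and left_to_right

    def on_lift_off(path, width, height, display):
        if is_swipe_700(path, width, height):
            display.power_on()          # hypothetical display-controller call
            display.show_last_screen()
        # any other path is simply ignored: the swipe "undoes" itself

Because the swipe must span nearly the whole bottom edge before the display is powered, an imprecise or accidental contact performs nothing, matching the safety argument above.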
Fig 6A
This shows how the operations of a task can be performed by a single swipe. A task is the performance of a sequence of operations. The user starts a slide motion 11a in Fig 6A from the URC to the MUE. Then the downward slide 60 can activate turning on the display screen to show a graphical appearance to assist the touch in completing the task. [This is different from the prior art, which turns on a GDE in order for the user to activate the GDE by the touch.] Thus the touch in independent touch can operate independently of visual feedback, but if visual feedback is used it may be responsive only to the input of the touch. In addition, the initial downward slide can turn on WIFI or radio signals (thus limiting the power loss of these high-draining power operations only to when they are needed), so that the user can search a connected database to download a record from a connectively coupled computer (e.g. the internet or a local LAN), which requires the user to input data to solve a task. The downloaded data is sent in a list format which can be displayed as menu items on the touch device. The searching of the record and the download of the data are not shown, but they could involve numerous different embodiments to search the connectively coupled computer, from the simplicity of a single field where the user could type the first letters of important words (e.g. the first letters of a surname, a space, the first letters of a first name, a space, a date of birth, a space, then a condition, e.g. chest pain). The record that would be perfect for the user to solve or perform a task (e.g. asking all the relevant questions regarding chest pain for that patient) would then be provided as a list of data, downloaded from the connectively coupled computer, in order to perform that task of several operations. And in order to perform that task perfectly, the user needs to input a correct response to the following downloaded data items in sequence.
Figs 6AA and 6AB
These show two sequential shots of the same screen to illustrate that the record of downloaded data comprises data elements 1 to N, where N could be any number, not just the 8th data element on the second page (i.e. it could be the 12th data element on the third page, etc). The important aspect of the Nth data element is that, for the user to perform the task completely, the sequence of selecting a response option for each of elements 1 to N is necessary in order that the task is completed. The purpose of Figs 6A, 6AA and 6AB is to show that this task can be completed by a single swipe using the embodiment shown in Figs 6AA and 6AB.
What Figs 6AA and 6AB show is that the user can first start with slide 11a, then make an initial downward movement in downward slide 60, which may allow the user to search and download data from a connectively coupled computer in order to perform a task which could have N elements to complete. This list of N elements is then received by the touch device as shown initially in Fig 6AA, where the user then selects Yes for data element 1 610, No for data element 2 620, Uncertain for data element 3 630, and No for data element 4 640, and then moves over an area at the bottom of the menu which makes the four elements 640, 630, 620, 610 fill up respectively with the next data elements (5th, 6th, 7th and Nth). The user then continues the long swipe by sliding back up the screen and respectively performing the operation of recording No, Uncertain, No and Yes for these respective elements. The Nth element 610 is the last element of the task that needs a selection of Yes, No or Uncertain to complete the task, so this data could be uploaded the moment the user lifts off the Nth element 610 in Fig 6AB. Usually there may be a separate element in the list above this which the user could move over to save the data and upload it to the connectively coupled computer (e.g. similar to menu item 64 in Fig 6B). The important aspect of this is that it shows one way the elements or operations needed to complete a task of 1 to N elements can be performed in a single swipe; indeed N could be a very large number, with the user moving down and up as shown in Figs 6AA and 6AB numerous times, so that a very large task of numerous operations can be performed by a single swipe. Thus Figs 6AA and 6AB clarify that the operation of claim 1 may be a task of a sequence of operations performed by one or more digits.
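As a sketch of the paging behaviour just described (assuming a simple row/column grid for the option zones; the zone geometry and names are illustrative, not the layout of Figs 6AA-6AB):

    # Hypothetical sketch: record Yes/No/Uncertain responses for N data
    # elements in one continuous swipe. Each visible row has three option
    # zones; entering a zone selects it for that row, and a "more" zone
    # below the rows pages the list so the same rows refill with the next
    # data elements.

    OPTIONS = ("Yes", "No", "Uncertain")
    ROWS = 4                                 # visible rows, as in Fig 6AA

    def run_swipe(samples, n_elements, row_height, col_width):
        """samples: (x, y) points of one swipe; returns {element: option}."""
        responses = {}
        page = 0
        in_pager = False
        for x, y in samples:
            row = int(y // row_height)
            if row >= ROWS:                  # the paging zone below the rows
                if not in_pager:             # page only once per entry
                    page += 1
                    in_pager = True
                continue
            in_pager = False
            element = page * ROWS + row
            if element < n_elements:
                col = min(int(x // col_width), len(OPTIONS) - 1)
                responses[element] = OPTIONS[col]   # last zone entered wins
        return responses                     # upload once all N are present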
Thus, after considering all the possibilities of any touch being the touch of claim 1, or any operation being the operation of claim 1, it becomes obvious that this method can include a touch which is a single swipe performing the numerous operations of a task, and completing that task, without missing one important operation, in a single swipe.
Now the skilled person would appreciate that businesses could dramatically improve their efficiency by writing lists of tasks to be performed by an individual, and then allowing that individual to download the list of operations needed to complete the task; when the individual had performed the task, they could record the results of completing it in the time-saving and easy manner of a single swipe. The SP would realise there is no simpler or quicker way of using touch to ensure a task is completed, and because it is sequential nothing is ever missed.
Thus the importance of this is that reliable data recording forces the user to specify exactly what information was gathered or done in performing the task.
Fig 6B
This shows a simpler task that could be completed on a single page of the menu. It shows how a user could perform an initial touch, e.g. 11a, make a downward movement 60 (which may download data as described above), and then, in a single swipe, enter Yes, No and Uncertain for the elements 61, 62 and 64 respectively, and then save this recorded information of the completed task of three operations by sliding within each label area, entering and exiting only one choice per menu item. 65a shows that the display screen could be turned off immediately after the information was saved (and/or uploaded to a connectively coupled computer), saving maximum power by turning off the DC of the TDS on removal of the digit at 65a.
Fig 6C
As described for Fig 6A, a user can search, using a string of data in a field, for a patient record and a condition from a coupled computer, in order to receive from the coupled computer downloaded data in the format of a list of data regarding the patient, which can be imported in the form of a list of menu items. The data can comprise background information regarding the patient, including demographic data, and then the patient's history, examination, investigations and management stored on the NHS spine. In addition, the user can receive further data that needs to be inputted in order to complete a task of data input for one or more given presenting conditions, e.g. chest pain, which the user would already have supplied in the search of the coupled computer. Thus the connectively coupled computer can provide both the patient data and the data required to complete the task of complete data entry for a presenting complaint.
Fig 6C shows a device which has received both the patient data and the list of data elements required to be completed in order to complete the task of data entry for a presenting complaint.
It is well known in the prior art of medical computing all the possible ways a computer may currently output medical data; e.g. EMIS Web provides a means, in a list format, for every possible type of medical report, and, depending on the user, the patient data on the NHS spine could be listed with all relevant patient data for the purpose of recording the specific patient complaint.
Thus this can be downloaded to a touch device as shown in Fig 6C. Because this medical example is an ambulance situation, the 111 operator has already taken the patient details, the presenting complaint of the patient needing an ambulance, and the address that the ambulance is going to. Thus when the ambulance arrives at the location and the paramedic or doctor starts the initial slide 11a and makes the initial downward movement 60, at this point the device can receive WIFI or telephone signals (e.g. 3G or 4G) to upload the GPS coordinates of the ambulance and receive the relevant next patient's details that the operator has already entered. Thus the information about the patient stored on the NHS database could be supplied in a known and agreed list format for the ambulance service, so the doctor could scroll through the data elements in the conventional manner in region 65. Region 65 is a special modification of the conventional scroll operation (e.g. the list of contacts or messages in the prior art touch software). Region 65, which is a region on the right side of the menu items, is an area which cannot enter data, unlike the conventional operation. This is a design feature that makes region 65 for navigation purposes only, not for performing operations. It has the very useful function of providing a scroll area 65 (which could be varied to the size the user finds best) in which patient data cannot be altered; thus the user can quickly scroll up and down on this side of the screen, or rest a digit on it, with no fear that it will ever enter or alter data for the patient. Thus Fig 6C shows that the user, having turned on the device in the ambulance, is provided with the next patient's data (saving all the unnecessary paper recording of information the NHS already has), and the paramedic can read the patient's medical data stored on the NHS spine according to an agreed ambulance format. Since the demonstration is on a tiny phone, this would require the user to scroll through the patient's past data until the user reached the 1st data element with a Yes, No or Uncertain option.
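A minimal sketch of the region 65 design choice (the strip width, the list_view object and its method names are assumptions for illustration):

    # Hypothetical sketch: a navigation-only strip (region 65) on the right
    # side of the list. Touches inside the strip can only scroll; they can
    # never select an option or alter patient data.

    SCROLL_STRIP_FRACTION = 0.2     # user-adjustable width of region 65

    def route_touch(x, y, dy, width, list_view):
        """dy: vertical movement since the last sample of this touch."""
        in_scroll_strip = x >= width * (1 - SCROLL_STRIP_FRACTION)
        if in_scroll_strip:
            list_view.scroll_by(dy)           # navigation only, no data entry
        else:
            list_view.select_option_at(x, y)  # left side: options may record data

Routing by x-coordinate before any option hit-testing is what guarantees a resting or scrolling digit can never alter the record.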
In reality this section would be titled something completely different, e.g. "Information needed to be captured for the correct assessment of the presenting complaint of chest pain". Thus when the doctor or paramedic sees this, they know that the listed data in this section requires at least the paramedic or doctor to complete the input of all the data elements to perform a complete task of input regarding chest pain. Thus the 1st data element to the Nth data element would be all the questions, examination findings and investigation results needed to be entered and uploaded to the NHS spine or other connectively coupled computer in order to properly diagnose and treat the chest pain according to the latest best guidelines.
Fig 6D
Thus Fig 6D shows the beginning of the list of data elements that need to be inputted for the correctly completed task. Indeed, some items may require the user to access one or more additional screens using a touch, e.g. a left reverse slide touch 72 (where the user touches the area to the left of the scroll area 65, slides a digit in a left direction, and then reverses to a right direction, to show another diagram, Fig 6E).
Fig 6E.
Fig 6E shows examples of the essential vital readings that the paramedic takes, like systolic BP 73, diastolic BP 74, pulse 75, temperature 79, etc. (76, 77, 78), in diagrammatic format (so it is very easy to enter the readings by a touch). When that section is completed, the user performs another left reverse slide touch 80 to go back to the list of data elements that need to be completed. (This is only one possible embodiment of how a user may add data to the menu items; a SP could devise others.)
Fig 6F
Fig 6F then shows the user making a single swipe 71 to select the various data options as confirmed, not present, or uncertain on the first screen; on the lifting off of the digit on the "Automatic page down" menu item, this may automatically show a second or remaining screens for the user to swipe, to capture all the necessary data input for every element that would comprise state-of-the-art data capture for that presenting complaint, with the user shown finally having swiped 81 the last menu item page in Fig 6G. On receiving all the data input entered for the first data element to the Nth data element, providing the NHS spine or connectively coupled computer with this state-of-the-art data capture, the NHS spine may then send further management steps for the doctor and the paramedic to take for perfect treatment of that presenting complaint.
When the paramedic has then followed those steps, he can perform a swipe 82 which enters all the suggested management steps as completed. Indeed, one of the management steps could be "any other user-selected management steps", which allows the user to add any steps in addition to the suggested one(s). On completion of this swipe the computer can upload the data.
Now this system shows how fast this swipe system could be, with automated steps requiring the minimum of swipes to perform the operation, and using automation as much as possible; i.e. when the user selects all the data elements which require input for the task, this is automatically sent to the spine, to minimise the time of the paramedic on the touch device (in reality e.g. an iPad instead of the iPhone touch device, unless the paramedic is on a motorcycle).
Indeed, although not shown, in the same way that all the data elements could have been entered by a single swipe in Figs 6AA-6AB, this may be an alternative method of selecting one option for a task of several data elements.
It would be appreciated that this schematic method of entering data has numerous benefits for the complicated task of data input for every presenting complaint for every patient in the NHS.
The data received from the connectively coupled computer is from three sources: 1. Patient data as recorded on the NHS spine. 2. A set of input tasks for every presenting complaint for perfect data capture of that complaint. 3. A set of management tasks for data input responses for every presenting complaint.
In addition, the computer has an algorithm which allows further questions to be asked, based on the patient's data with reference to the presenting complaint, to provide further data input to be captured if necessary.
Likewise, when the data input is completed and received, another algorithm will produce the essential management steps to be performed for that patient with that presenting complaint and the captured data.
Thus the set of input tasks for every presenting complaint could be continually updated centrally, which would mean that the user always got the latest, most perfect known data input needed and management steps for every presenting complaint; likewise all the captured data would be stored centrally, so that no information was ever missed being captured for a patient.
Thus, by having the three components of patient data, a list of input tasks for every known presenting complaint, and management steps for every known presenting complaint according to the data input for the presenting complaint, all managed on a single NHS spine, this will allow exponentially more accurate data input for patients, and provide a minimum standard of the highest medical management for every presenting complaint according to the data input response. This will avoid all duplication of patient data, and enable a level of uniform care throughout the NHS.
Thus this will lead to a central research tool which could make new discoveries by its accurate data input to a single central source. Furthermore, it could allow any touch device with WIFI to be used by doctors. Indeed, a perfect data input system is a goal that health computer specialists have been seeking for years, and now, with the inherent superior properties of independent touch, it has been simplified to the above example. In the same way that health staff like paramedics could access this central data and the central database of input tasks and management steps for the input tasks, so could hospital doctors doing a ward round. The staff could carry an iPad which would be low-powered, without WIFI until the doctor needed it, so it could be operated nearly all day. When users entered a ward they could invisibly touch the TDS, and the GPS would identify the ward, identify all the relevant patients in the relevant beds on the ward, and might display a ward layout with the relevant patients highlighted. The doctor would then take a picture of the armband barcode to confirm the patient (indeed, a similar arrangement could be used in primary care). This would be a double safety check confirming the patient details, as the doctor can also confirm the patient's identity verbally. Thus the combination of using GPS and independent touch, connectively coupled to an NHS spine, would make the most efficient method of recording a medical task and providing multiuser input to an NHS spine where no entry for any patient is ever lost, and can be used to improve patient care to a unified high standard across the country, while saving millions of pounds of staff time, because it eliminates any unnecessary duplication of medical recording for each patient. The uploaded data will be time-stamped. Thus the hospital may have its own computer storing bed locations and other information for administrative purposes for the patient; however, this computer can also hold an exact mirror copy of the patient data on the NHS spine, removing any lag in retrieving data regarding a patient admitted to the hospital. When the doctor then modifies the patient data record, as the input by the doctor will be date-stamped, it can be added to the NHS spine in a mirroring process at a different time, so the doctor does not experience any lag or slowness in the downloading of data or in the input method. This mirroring can apply to primary care or any other medical user of the NHS data, so that although all this data is stored centrally, this and other methods may prevent any lag in accessing any NHS spine data.
Fig 7 shows a right forefinger with an attached stylus. The stylus can be a miniature stylus, detectable hovering over the screen, attached to a metal clasp which nearly encircles the digit distal to the DIP joint. It may or may not be pressure sensitive. The stylus does not need to be touching the touch-sensitive screen to be detected. The stylus only needs to be smaller in size than the distance from the DIP joint to the tip of the digit; it may be pressure sensitive, but pressure sensitivity is not essential, as the purpose of the stylus is to identify one digit as the dominant digit. Its main purpose is to effortlessly enhance writing on a touch surface, without the problem of losing the stylus: the metal clasp attached to the stylus may be made of malleable but firm metal, designed to cover about 75% of the circumference of the digit tip, so that the finger clasp attachment can be firmly attached to the digit tip with the stylus ideally placed for writing. When the thumb is detected by the TDS touching the stylus, the stylus automatically allows the user to write with it like a pen with a soft plastic tip. The clasp allows the mini stylus to be positioned further up the digit when the user is typing, so as not to interfere with typing, yet still allowing the attached stylus to be detected as the dominant digit. As the dominant digit, this can be used as the pointing digit of a pointing device in a traditional GUI pointer-based operating system like Microsoft Windows. Thus this pointing digit only moves a pointer over the screen. If the device detects other digits like the thumb or middle finger touching the screen (i.e. digits to the left or right of the forefinger), then these digits can be secondary digits performing all the standard left and right clicks for the graphical element that the pointing digit is over, e.g. an icon on a Windows desktop. Thus, if the pointing digit is over a Microsoft icon in Windows 7, a single touch of the thumb would be the equivalent of the left mouse button down, the removal of the thumb would be the left mouse button up, a tap of the thumb a left click, a double tap of the thumb a left double click, and a treble tap of the thumb a treble click. Thus all the left mouse button functions could be performed by a right thumb. In the same way, all the right mouse down, up, click, double click and treble click operations could be similarly performed by the right middle digit. The wheel up could be done by the right thumb, forefinger and middle finger simultaneously moving up the screen, and the wheel down by the same three digits simultaneously moving downwards. Thus in this way a pointing device could easily be replaced by using three digits of the hand, with the forefinger identified at all times, so that if the user rests his right hand on the screen with five fingers touching, only the pointer moves over the screen. By a simple modification of the touch-sensitive screens of laptop tablets (or a desktop TDS like the Cintiq PL 550, except that the plastic could be a flat TC like the iPad's), making the Windows desktop display screen smaller than the total touch-sensitive component of the touch-sensitive screen can mean that, even if the pointer is positioned over the edge of the display component, at least the right thumb, forefinger and middle finger will be detectable at all times, no matter which edge of the display screen the digit is touching.
This would complete the seamless link to the old pointing-based graphical interface (which the true touch interface can now seamlessly imitate): a user just needs a desktop, laptop, tablet or iPad screen with a smaller display area, so that the 5 fingers of a hand can be detected, and suddenly we never need pointing devices. Indeed, it could be appreciated that, even for just the forefinger as a pointing device, if the user rests his five digits on the Cintiq-like TDS, the predetermined movement of the digits of the (e.g. right) hand can be a replacement for the pointing device (mouse). When the five fingers are all resting on the screen, this moves a pointer with the coordinate location of the index digit. If this points to a touch GDE 135 of the prior art, then if the user lifts up the thumb and touches the screen, this can replace the left mouse button; and if the user lifts up the middle finger and places it down, this could be the right button. If the user lifts the hand off the screen, this deactivates any click process. Thus this shows, using just a hand with no stylus, how the predetermined movement of claim 1 can be fully compatible as a pointing device for a prior art GUI. However, a stylus would be better for smaller screens. Again, the purpose of this description was to show how this new touch interface, i.e. the touch of a predetermined movement of one or more digits (independent of button presses, independent of the display screen being on, and independent of a graphical element), can be made 100% backward compatible with a pointing device, because it is always a characteristic of a simpler interface that it can be arranged to emulate a prior art method (in a more ergonomic manner).
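A minimal sketch of this digit-to-mouse emulation, assuming per-frame digit tracking and a hypothetical event injector (the mouse object, its methods and the digit labels are illustrative assumptions):

    # Hypothetical sketch: emulate a mouse with a resting hand. The
    # forefinger (dominant digit, e.g. identified by the attached stylus)
    # drives the pointer; thumb and middle finger act as the left and right
    # buttons; all three moving together scrolls the wheel.

    def interpret_frame(prev, curr, mouse):
        """prev/curr: dicts of digit name -> (x, y) for digits on the screen;
        mouse: hypothetical injector with move/press/release/scroll methods."""
        if "forefinger" in curr:
            mouse.move(*curr["forefinger"])        # pointing digit -> pointer

        for digit, button in (("thumb", "left"), ("middle", "right")):
            if digit in curr and digit not in prev:
                mouse.press(button)                # digit placed -> button down
            elif digit in prev and digit not in curr:
                mouse.release(button)              # digit lifted -> button up

        trio = ("thumb", "forefinger", "middle")
        if all(d in prev and d in curr for d in trio):
            dys = [curr[d][1] - prev[d][1] for d in trio]
            if all(dy < 0 for dy in dys) or all(dy > 0 for dy in dys):
                mouse.scroll(sum(dys) / len(dys))  # three digits together -> wheel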
The real advantage of the stylus attachable to the digit is that, while it is attached to the digit, it does not get lost. And with it a child could write all their notes on an iPad with writing as good as real handwriting, and all that writing could then be converted into searchable text, or searchable text in a PDF format which will locate the graphically written word. Thus this is one advantage of an attached stylus. It is hoped that if this becomes popular, shops may sell several of these mini digit-attachable styli.
Fig 8
This shows the touch component TC of the TDS, as shown in Fig 1C. It will be noticed that the TC has a larger surface area than the DC screen area, which is represented by the clear transparent rectangular area of the screen that allows the DC of the TDS shown in Fig 1C to be seen as screen 12.
As the invention is now able to allow touch to be detected at any time on the touch-sensitive screen, one or more methods may be used by the skilled person to decrease the power consumption of the TC of the TDS being on all the time. One method is to reduce the power consumption by manufacturing a new TC which can power smaller areas of the TC, e.g. the screen area 802, which is approximately the size of the path area 10. If only this area is continually powered, then when the user makes an initial left-direction slide from the RUC, this then powers the remaining part of the TC. Thus the power drainage of this new TC would be considerably less than powering the whole screen.
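A sketch of such a two-stage power policy (the controller object, its method names and the timeout are assumptions for illustration):

    # Hypothetical sketch: a two-stage power policy for the touch component.
    # Only a small strip (area 802) is powered while idle; a recognised
    # initial slide starting there wakes the full touch component.

    import time

    class TouchPower:
        IDLE, FULL = "strip-only", "full-tc"
        FULL_TIMEOUT = 5.0        # seconds of inactivity before powering down

        def __init__(self, controller):
            self.state = self.IDLE
            self.last_event = 0.0
            self.ctl = controller     # hypothetical hardware controller

        def on_touch(self, x, y, in_strip_802, is_initial_slide):
            self.last_event = time.monotonic()
            if self.state == self.IDLE and in_strip_802 and is_initial_slide:
                self.ctl.power_full_tc()      # wake the rest of the TC
                self.state = self.FULL

        def tick(self):
            if (self.state == self.FULL and
                    time.monotonic() - self.last_event > self.FULL_TIMEOUT):
                self.ctl.power_strip_only()   # fall back to the low-power strip
                self.state = self.IDLE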
Another alternative method is to have an array of solar cells 801 which could charge a capacitor to continually provide the minimal power needed for the 802 area to detect touch. In this way, if the array of solar cells were sufficiently large, e.g. over the black area of the TC, this method could charge the battery or capacitor during the daytime at least to power the circuit for the 802 screen area to be always on. In this way, even though the TC is continually powered, only a small area of the TC is powered initially, and a specified movement causes the remainder of the TC to be powered, thus minimising power loss until the TDS requires full power to detect all movement on the screen, provided the user always touches the screen with an initial touch, e.g. 11 or 11a.
Also, by using solar power cells the TC could be powered, and if there were enough solar power cells, the TC could be continually powered as always on by the solar power cells. Thus by these two or other methods, it is easy to see how new phones using invisible touch could make the TC more efficient than a button press turning off the TC completely in the prior art sleep mode. However, it would be appreciated that the prior art touch software would always use more power in performing the operation 136, because it always required the button press 1 to turn on the display screen and the touch component in order to perform the operation, and that would always be more power than performing the identical operation 142 without requiring the DC of the TDS to be on 141. A reset button could additionally be provided by using a solar power cell array 801 as a separate backup switch on the TC of the TDS. It would be appreciated that an array of solar power cells 801 could be providing power, but also that, if the user touches over a solar power cell, a decrease in power can be detected compared to the other cells; by this means a touch could be detected over the solar power cells, and if it is as specified (e.g. a sequence of taps in one or more locations, or a swipe over the array), this can be used as a backup electronic switch on the TC of the TDS, able independently to perform an operation (e.g. send a GPS coordinate if the TC or DC of the TDS was damaged). The array of solar power cells could also be positioned attractively outside the display area.
Thus a reset, and even a complete power-off, could be performed by touching a specific area, e.g. 802: e.g. holding the screen for more than 5 seconds, then tapping three times, then holding the screen for more than 5 seconds, or whatever pattern the user wants to activate the reset. For the power-off button, at least area 802 of the TC of the TDS would normally always be turned on 141 by a separate circuit, so it would not be affected if the TDS froze.
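The example pattern just described (hold over 5 seconds, three taps, hold over 5 seconds) can be recognised from touch-down/up timestamps alone; a minimal sketch, with the thresholds as illustrative assumptions:

    # Hypothetical sketch: recognise the reset pattern described above from
    # touch events in the always-powered area 802.

    HOLD = 5.0          # minimum seconds for a "hold"
    TAP_MAX = 0.3       # maximum seconds for a "tap"

    def events_to_pattern(events):
        """events: list of (down_time, up_time) pairs for touches in 802."""
        out = []
        for down, up in events:
            dur = up - down
            out.append("hold" if dur > HOLD else "tap" if dur < TAP_MAX else "?")
        return out

    def is_reset(events):
        return events_to_pattern(events) == ["hold", "tap", "tap", "tap", "hold"]

    # e.g. is_reset([(0, 6), (7, 7.1), (7.5, 7.6), (8, 8.1), (9, 15)]) -> True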
Alternatively, because the TC of the TDS can now continuously detect touch, there is no need for an external button on any touch device: with induction charging, Bluetooth headphones, wireless connectivity for being connectively coupled to another computer, and all operations of any external button performed by the TC of the TDS, there is no need for an external button, because the TC of the TDS can be accessed faster to perform an operation. Thus new phones could have internal buttons or switches where the battery is stored, to additionally reset the device or completely power it off (i.e. no power at all) if the battery needed complete conservation, e.g. for a trip into the jungle where the phone was going to be used only for contact in case of emergency. In this description the TC can be larger than, or extend beyond, the DC, which is already known; however, TCs may in future devices comprise more complicated areas with separate circuits in case the main screen froze, and these areas may be on different surfaces of the device in addition to that shown in Fig 8.
Fig 9. Fig 9 shows another set of touches and taps at the circles of a touch-sensitive screen, and also swipes or slides between these circles, all as other ways of executing operations on a blank screen, even the miniaturised blank screen of an iWatch or equivalent. Furthermore, touch can be performed on crystal glass (like the iWatch's), as in Fig 10, which shows an analog watch (e.g. like a Swiss watch). It has a crystal or glass watch face 814, a TC 810 and a DC 811. The TC can be constantly on, and/or with only an area of the screen powered, powered if necessary by an array of solar cells on the face of the watch 812 and/or by battery power. Thus by touches the screen 814 can perform an operation, including sending an instruction to another mobile device, e.g. to download emails or texts. The important aspect of this design is that operations may be performed with the DC 811 being a transparent LCD screen, which allows the user to see the beauty of the mechanical face of the watch while also having control of one or more operations, operated by touch, on the face of an analog watch. The same technology could be applied to any jewellery or other portable items.
Fig 11A and Fig 11B
This shows how silent mode could be more conveniently performed by a user. The user could at any time put the device into silent mode or alarm mode by a variable swipe 100, as one embodiment. The user starts at the URC and moves downward on the right edge; as the user moves downward and passes location 111, this is silent mode, and the display screen provides feedback when the user is over silent mode by showing the text "silent mode" on the screen in a low-power mode. The user could remove the digit while this mode is shown, which would put the phone into silent mode, and the display screen would immediately turn off on the lifting of the digit. The user could put the phone into vibrate mode by ignoring the text for silent mode and continuing the swipe 100 until the text "vibrate mode" is shown on the display screen at location 112; again the user could select this mode by lifting off while "vibrate mode" is shown at location 112. And lastly, if ring mode is needed, the user ignores the vibrate mode display and continues to move the digit in contact with the display screen to location 113, where the display states "ring mode"; removing the digit at this location 113 sets the phone to ring mode. Moreover, if the user forgets to turn on silent mode in a meeting and the phone goes off, the device can be made silent by the user touching the TDS. This will immediately stop the ringing, and the user can then pull out the phone, which would be blank as shown in Fig 11A, and perform a slide to the location of the arrow head 11 (MUE); this would turn on the display screen to show the notification, e.g. the alarm screen of Fig 11B. The user could then slide over the screen and lift off to keep the screen on and answer the notification, or lift off at the MUE to give a "not available" message to any text or phone call, turn off the notification, and do nothing else. Thus this method would be appreciated as much faster and easier than any silent mode with buttons.
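A sketch of the variable swipe 100, assuming the lift-off position selects the mode (the zone fractions, the display and device objects and their method names are illustrative assumptions):

    # Hypothetical sketch: the variable swipe 100 along the right edge.
    # Moving further down passes silent -> vibrate -> ring zones, with
    # low-power text feedback; the lift-off position chooses the mode.

    MODES = [("silent mode", 0.25), ("vibrate mode", 0.50), ("ring mode", 0.75)]
    # each zone begins at the given fraction of the screen height

    def mode_at(y, height):
        current = None
        for name, start in MODES:
            if y >= height * start:
                current = name
        return current

    def on_move(y, height, display):
        mode = mode_at(y, height)
        if mode:
            display.show_low_power_text(mode)   # feedback responsive only to touch

    def on_lift_off(y, height, device, display):
        mode = mode_at(y, height)
        if mode:
            device.set_ring_mode(mode)
        display.power_off()                     # screen goes dark immediately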
Fig 12.
This shows one embodiment in which a user could design their own independent touch interface, containing the steps of the embodiment to record a touch and select one or more operations for a touch at one or more locations on the swipe, as shown in Figs 3A-D or 4A-G.
One of the advantages of a WYTIWYG interface is that it is simple for a user to record a touch and select one or more operations to perform for that touch, or to modify a touch to perform one or more further operations at one or more further locations touched along the path of a digit moving along the screen, e.g. the swipe 2 as shown in Fig 4 being modified by an edit program.
Thus the user could perform an initial touch 11 swipe. This then activates: the swipe 15, to get the camera application; the swipe 16, which accesses the video application; the swipe 17, which allows for the prior art voice recorder application; the variable swipe 18, which allows a user to scroll through the latest notifications for SMS; the variable swipe 19, for scrolling through the latest notifications for missed calls; the swipe 20, for invisible dialling on a blank screen (the user just dials the number as in Fig 5B or Fig 5C using an invisible keypad, and then touches or taps a send button, not shown, but which may be positioned to the left of the region for the 0, in the same relative position as the send in Fig 5E); the swipe 21, for invisible texting on a blank screen (e.g. the user swipes as described in Fig 5E and then touches the send button); and the variable scroll swipe 22, for the music player, which can scroll through albums showing the first songs of the album, or playlists showing the first songs of the playlist as shown in Fig 4A, and when the user has scrolled up or down to find the right album, lifting the digit off starts playing that playlist.
The user can turn off any display screen or any selected application by performing the swipe 3; and if the user has performed an initial swipe 11 and changes their mind about accessing one of the 15 to 22 swipes, the swipe 3 will deactivate that initial touch 11. Furthermore, all these prior art applications, displayed with the conventional GUI appearance of the prior art, do not require the user to fully unlock the device. All of these can be accessed quickly without unlocking the remainder of the functions of the phone, so the user could answer or perform any of these operations without the phone being unlocked, the user being restricted to just those applications on the device. This method could also be good for a user with children, by allowing selected applications to be accessible by the child without fully unlocking the device. However, if the user wishes to unlock the device, the user would just perform the swipe 2, which would turn the display on and unlock to the last screen in the normal manner.
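This dispatch of swipes to a restricted whitelist can be sketched minimally (the device object, its methods and the swipe labels are assumptions for illustration):

    # Hypothetical sketch: after the initial touch 11, each follow-on swipe
    # maps to one whitelisted application that runs without fully unlocking
    # the device; swipe 3 cancels, and only swipe 2 fully unlocks.

    LOCKED_WHITELIST = {
        "swipe15": "camera", "swipe16": "video", "swipe17": "voice_recorder",
        "swipe18": "sms_notifications", "swipe19": "missed_calls",
        "swipe20": "invisible_dial", "swipe21": "invisible_text",
        "swipe22": "music_player",
    }

    def dispatch(swipe, device):
        if swipe == "swipe3":
            device.cancel_initial_touch()       # user changed their mind
        elif swipe == "swipe2":
            device.unlock_to_last_screen()      # full unlock, normal manner
        elif swipe in LOCKED_WHITELIST:
            device.launch_restricted(LOCKED_WHITELIST[swipe])   # still locked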
Fig 13
This shows a flow diagram of the prior art, which shows that without steps 131 to 135 it was impossible for the prior art device to perform an operation by touch, or a touch operation 136, on any device with a TDS (touch device), especially all the modern prior art touch devices operating by iOS, Android, Windows Phone or any other equivalent software. The prior art touch device could be any device with a TDS, e.g. iTouch, iPod Touch, Nintendo, Sat Nav, iPhone, iPad, iWatch, or Windows Surface, or any equivalent of any of these devices, all of which had a TDS and an external button and displayed a graphical image on the screen (graphical user interface, GUI) which had one or more graphical elements displayed (graphical display element, e.g. desktop, window, icon, menu, or any other graphical control) on the turned-on TDS, operated by touching one of the GDEs (e.g. the slider 7a in Fig 2AB). Indeed, since the iOS devices (iTouch, iPad, iWatch and iPhone) or any equivalent device are the preferred devices in the prior art to illustrate the difference of the invention, all the drawings have used an iOS device as a representative prior art touch device, which has at least the essential components of a TDS and an external button 1 or switch on the side of the device, and performs at least the steps 131 to 136 of the flow diagram of Fig 13 to perform a touch operation. However, it would be understood that any iOS device could be substituted by any Android or Windows Phone or other equivalent device with a TDS and an external button, operated according to the flow diagram of Fig 13. The SP should assume that when the representative iPhone or iTouch is described, it could refer to any device manufactured by any company with at least a TDS and an external button 1 (or switch or any mechanical equivalent) on the surface of the device.
Thus if we use Fig 2AB we can see all the steps 131 to 136 of Fig 13 needed in order to perform the touch operation to unlock the device. Fig 2AB shows that the device has a turned-off TDS 131. In the prior art this meant, as quoted by the latest iOS manual, that the screen "can do nothing" and it "saves battery power". Thus all the devices within the field of this invention have a state where the device has a turned-off touch-sensitive display screen TDS 131, where both the touch component TC (01 shown in Fig 1B) is turned off, so that it is impossible for the TDS to detect touch when it is not powered, let alone perform an operation, and the display component DC (02 shown in Fig 1B) is turned off to save battery power. This has been the state of the art for all devices since 1992, as illustrated in the priority documents' diagrams showing that the IBM Simon in 1992, and the beta Apple Notepad in 1993, had this configuration. Indeed, the state of a TDS turned off 131 was essential ever since the IBM Simon, as turning off the TDS increased the time the phone processor and memory could be powered from 1 hour to 8 hours, improving the power management of the device. And when the iPhone was manufactured, one of its major problems was battery life, so it used this conventional sleep mode, whereby a user could press an external button 1 on the surface of the device to turn the TDS on and off to conserve power.
Thus, while the device is powered, there is always a period of time in the prior art device where the TDS is turned off 131 and not powered. Since it is impossible for any touch to be detected with the TDS turned off 131, the user has to use another input method, a button 1 press 132 or equivalent, to turn on the TDS, as shown on the iPhone in Fig 2AA. Fig 2AB shows the display screen turned on 133 by the button 1 press 132, and the display screen shows a GUI 134 appearance of an unlock screen, with a GDE 135 slider 7a within the GDE 135 slider control 7b boundary. Thus this GUI 134 (graphical user interface) means that, in order to perform an operation, the user must see a displayed image, e.g. the unlock screen, and then perform an input, e.g. a button 1 press to turn off the displayed image, or a touch operation 136, swipe 7, to perform the unlock. However, the GUI 134 requires as essential the step that a GUI image be displayed on the screen to let the user know what input is needed; thus, without the image of the unlock screen in Fig 2AB, the user would know that it was impossible on the iPhone to perform the unlock touch operation 136. Thus Fig 2AB shows that the display screen showing a GUI 134 is an essential step in order to perform the operation. Without the button press 132, the display screen being turned on (step 133), or the GUI unlock image (step 134) being displayed on the TDS, it would not be possible to perform the touch operation 136. This was because the GUI 134 was a "What you see is what you get" WYSIWYG interface; that is, the screen reminds the user by its appearance of what input operations are possible on that screen. So if the user sees the unlock screen in Fig 2AB, the user knows that the GUI 134 image is programmed to detect the input of the touch operation 136 of swipe 7, and to be responsive to a button 1 press to turn that image off. However, if the user sees a GUI 134 blank screen, or the GUI 134 of a turned-off screen, the user knows that with a GUI 134 blank screen no touch can perform any touch operation 136, as the blank screen appearance 131 (or 9) of the GUI 134 was designed in the prior art to perform no operations by touch. Thus the GUI 134 appearing on the screen and showing an unlock screen is an essential step to perform a touch operation 136 on that device, as a turned-off blank GUI 134 screen appearance was designated by this appearance to never perform a touch operation.
Furthermore, in addition to showing not a GUI 134 blank screen of the sleep mode but an unlock screen of the GUI 134 to enable the user to perform touch, the user requires additional GDEs, the GDE 135 slider 7a and the GDE 135 slider control within its boundary 7c, in order to perform the touch operation 136 of the unlock. If the GDE 135 slider 7a were not present, then the touch operation 136 of the unlock could not be performed.
Thus the requirement for the GDE 135 slider 7a to be present to perform the touch operation means that it is impossible for the prior art to claim that it was only the one or more locations touched, apart from the visual feedback of the GDE 135 slider 7a, that performed the touch. This becomes obvious if we consider how the slider 7a is additionally required as essential to perform the swipe 7: the identical locations touched of the swipe 7 on a GUI 134 blank screen in sleep mode would not perform the unlock operation. I.e. without the two essential steps of a GUI 134 screen image of the unlock screen and the GDE of the slider 7a being displayed on the turned-on TDS, with the TC turned on 133 to detect the touch, it would have been impossible for the prior art touch device or prior art touch software to perform the unlock touch operation 136.
Thus a SP carefully considering just the swipe-to-unlock operation of the iPhone as a representative touch device would understand that it was impossible for the prior art WYSIWYG GUI 134 touch device ever to perform a touch operation 136 independently, as the touch operation, without being dependent on steps 131, 132, 133, 134 and 135, would be inoperable and impossible to perform on the prior art touch device or with the prior art touch software (e.g. iOS, Android and Windows Phone). And this would be obvious to an averagely skilled person SP, because any user would realise it remains impossible today, on the 28th Nov 2015, because all the steps 131-136 are still essential for all devices with a TDS turned off during a period when the device is powered 131 and an external button 132, on all the latest devices operated by the just-released iOS 9, or Android Marshmallow, or Windows Phone or Windows 10 devices, or any other equivalent software.
Fig 14
The comparison of Fig 13 with Fig 14 shows why the WYTIWYG (What You Touch Is What You Get) is superior to the WYSIWYG (What You See Is What You Get). The first obvious reason a SP would recognise is that Fig 13 takes at least 6 steps to perform a touch operation, and Fig 14 one step, making the WYSIWYG incredibly inefficient compared with the WYTIWYG. The second reason is that at all times the user can perform a touch at 141, whereas it is impossible to perform a touch at 131. The third reason is that a button press 1 132 requires the effort of finding the button and pressing it; the display screen has to be turned on 133; the GUI 134 screen determines and limits the touch operation by its appearance; the GUI 134, which is programmed for several inputs, can stop touch, e.g. by the pressing of button 1; the GDE 135 is required to be touched to perform only the predetermined operation of the touch, e.g. the slider performs the unlock; and all this needs to be performed within a time limit of screen inactivity. By contrast, the user just needs to touch the screen to perform the touch operation 142 in Fig 14, with none of these limits, having all the benefits of claim 11 over the touch GUI. The user also has to waste a digit movement from the button press 1 132 to the screen, which is a wasted and unnecessary movement compared with just touching the screen in 142. Thus in every way the WYSIWYG is inferior to the independent touch interface.
Fig 15
There is no near prior art for this invention, as it is a new interface operating by the completely different process of Fig 14, compared with the nearest prior art touch GUI interface of the '443 patent in Fig 13.
This is because this new independent touch interface of Fig 14 is completely different from the prior art interface, in that it does not require a display screen to be turned on.
The command-line interface (CLI) required a display screen to be turned on so that the user could see the one or more lines being typed on the screen to operate the device. The GUI required a display screen to show graphical display elements (GDE 135), a desktop blank screen, windows, icons and menus, to be located by a pointing device and clicked to execute a command of the GDE.
The '443 patent is the nearest prior art patent, which programmed the mobile phone screen to perform all operations by contact and not pressing (without having to click) the screen. The '443 patent explained 4 steps to build a touch phone, operating by contact and not pressing of the screen, from the Apple Notepad (beta version, later named the Newton MessagePad), as described in the '443 Zeroclick device: 701. Get the Notepad (column 79, lines 10-11). 702. Remove or deactivate the resistive touch screen, and replace it with a transparent touchpad programmed to perform an operation by touch, instead of being used for resting the finger to point as in the resistive touch screen GUI (column 78, lines 6-12). 703. Enable the touchpad (the original name for capacitive touch in May 2001, when filed) to be transparent, to show the buttons on the LCD screen, e.g. control area 1 as shown in Fig 67 (column 78, lines 36-42). 704. Make the screen size of the Notepad the size of the Fig 67 screen, to make a touch-sensitive screen phone (column 78, lines 32-37). However, the LCD screen needed to be turned on to show the control area 1 or GDE so the user could touch it in Fig 67.
The '443 described an unlock screen, Fig 67 (called a start sequence), by which the touch could be arranged so that the screen would not be activated or unlocked unless a specific touch, including a swipe as described in claim 1 or 6 of the '443, was done to unlock the screen. However, it could never claim to be a touch interface of Fig 14, because it required the user to touch a displayed GUI 134 screen in Fig 67 with a displayed GDE 135, e.g. control area 1.
Thus the nearest that it, together with all the latest prior art devices or touch software, may be described as, is a touch GUI of Fig 13: it required a user to touch a GDE 135 on the screen in order to perform an operation, not touch without even the display screen being turned on, which is the invention of Fig 14.
Additional Description to the published patent PCT/GB2015/053690 P1
This patent application has fully incorporated, and claims priority from, the published patent PCT/GB2015/053690 P1 or N1, with corrections and with the new claims under Article 19. It claims priority to GB1604767.2 P2, filed on 21st March 2016 with DAS code 8DC8. It also claims priority to N5 (GB1609963.2) filed on 7th June 2016, N7 (GB1609962.4) filed on 7th June 2016, and N8 (GB1609970.7) filed on 7th June 2016 with DAS code 1AD3.
It is understood that the modified claims of this invention include at least the scope of P1 as originally filed, or the new claims of P1 filed under Article 19. Furthermore, all the information submitted to the EPO as to why the invention is not anticipated, not obvious, and definite, and why the priority documents cited by the EPO or WIPO search cannot anticipate these claims, is assumed incorporated into this patent.
The purpose of this further description is to clarify why the device, method, and memory of the claims of P1 are definite, inventive, cannot be anticipated, and are not obvious from the prior art. Furthermore, further figures have been added to prove the superior properties and increased capacity of the touch device over the touch display device.
How the invention came about. I had already filed the '443 patent on 3rd May 2001, in which I notified the head software developer of Apple of the concept of a revolutionary new idea of a mobile phone operated by just touch, and had shown them source code (user interface code) of Locatorfrm from a Nov 2000 priority document, showing how the pointing and clicks (which would have been done by pressing the screen in the Apple Notepad or MessagePad, e.g.
column 78-79, specifically column 79 lines 10-12, of the '443) could now be done by contact, swipes, slides, and taps, by just touch or contact on a mobile phone capacitive touch screen, rather than by the conventional programming of the pointing (with no other operation being done by contact on any mobile phone) and the clicks being done by pressing on the resistive screen. The Locatorfrm program was written in Visual Basic code (though it could be written in the user interface code of any computer programming language). The Locatorfrm program, when compiled, showed that pointing could be done by a different specified swipe, also by contact, on a capacitive touch screen (touchpad); on the Notepad this was done by contact on the resistive touch screen, which guaranteed to the user that no other operation would be performed by touch. The Locatorfrm program also showed how, by contact on a touchpad (a transparent capacitive touch screen overlaid on an LCD, TFT or other screen), the click (which was done by pressing the screen in all mobile phones) could be performed by touch or contact, using contacts, swipes, and slides to perform these clicks instead of pressing the screen. Thus this working model of Locatorfrm, demonstrable by compiling the user interface code into the object code executable, or by running the Visual Basic code in a scripting language, proved that for every pixel of the screen there was a different method available, by contact alone, of performing both clicks (shown in Locatorfrm by a red dot, though the programmer would understand that this could be any operation of the device available in a GUI by a click) and pointing. It was this working model of the Locatorfrm source code, or user interface code, from the 1 Nov 2000 priority document which, when compiled, showed that the prior art's two levels of pressure [e.g. touch only able to rest the finger on the screen and position a finger on a virtual image of the phone, designed to reassure the person that, while touching the latest mobile phone touch screens, no operation of the device could be operated by touch; and then all the rest of the operations of the phone, e.g. on the Ericsson R380 (a latest mobile phone at the time), performed by clicking or pressing the screen to perform all the functions or operations of the device] could both be done by just touch or contact on the mobile phone touch screen, or on a Notepad screen, using a touch-sensitive capacitive screen (touchpad) programmed only to detect contact of a finger on a screen overlaid over a display screen (e.g. LCD). This was the invention of the pressureless touch-sensitive display screen mobile phone or device, which could perform all operations (pointing and clicking) by just contact of one or more virtual buttons or control areas, by contact, swipes and slides [as demonstrated by the source code of Locatorfrm, which today, unmodified and compiled, still demonstrates on the Surface Pro 4 swipes, contacts and slides all performing clicking (the red dot in the locator form, which could be any operation of the mobile device previously done by pressing the screen on the R380, the latest mobile phone at the time) and the pointing by contact demonstrated by a different type of swipe or slide]. Thus this Locatorfrm program provided the source code (user interface code, i.e. code to build a user interface when compiled) showing how the technical problem of not using pressure (i.e.
a ZeroClick, or Zero Pressing, or touch or contact interface) was solved on a mobile phone using a transparent capacitive screen overlaid on an LCD screen. This ZeroClick or touch interface performing pointing and clicking was demonstrated and proved by the working model of Locatorfrm, which any skilled person could understand by reading the code, and which any non-skilled person could understand by seeing the working model of the code demonstrating this touch interface, an interface operating by location information and not pressure information.
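To make the Locatorfrm idea concrete for the modern reader, the following is a minimal sketch, in Python rather than the Visual Basic of the priority document, of how a single contact trace on a capacitive screen can be classified as either a click (the red dot) or pointing, by contact alone; the event format, thresholds and function names are my illustrative assumptions, not the Locatorfrm code itself.

    # Classify one contact trace (finger-down to lift-off) on a capacitive
    # screen as a click or as pointing, using only location and time, never
    # pressure. Thresholds are illustrative assumptions.
    TAP_RADIUS = 8     # max movement in pixels for a contact to count as a tap
    TAP_TIME = 0.25    # max duration in seconds for a tap

    def classify_contact(trace):
        """trace: list of (t, x, y) samples from finger-down to lift-off."""
        (t0, x0, y0), (t1, x1, y1) = trace[0], trace[-1]
        moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if moved <= TAP_RADIUS and (t1 - t0) <= TAP_TIME:
            return ("click", x1, y1)   # the red dot: any GUI click operation
        return ("point", x1, y1)       # a different swipe or slide: pointing

Because such a classifier runs on every contact independently, both pointing and clicking are available at every pixel of the screen, which is what the working model demonstrated.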
Thus, since I was a skilled person already able to understand the iOS, Android or equivalent Windows touch software, having notified these companies of the idea, I discovered on my desktop computer that the display lead had come loose, so the touch-screen monitor had its display component turned off, and yet by the blank screen it was able to fully operate the task menu program positioned at the right edge, specifically performing operation 167, the equivalent of performing the slide operation on the identical monitor with the display lead plugged in. When I filed the ‘443 in May 2001 and notified Apple in July 2002 by fax, I had realised that pressure screens (click screens operating all functions apart from pointing by pressing the screen) were dead and that the new mobile phone screen would be a touch screen (ZeroClick could operate both pointing and clicking). At this moment I realised, in the same way, that the conventional programming of the touch-sensitive display screen, using a sleep mode to prolong battery life and prevent accidental triggering by turning off both the touch and display components of the touch-sensitive display screen, was dead, and that the new true touch device could perform every operation of the iOS, Android or Windows touch software entirely by touch on a touch-sensitive display screen, with the display not being required to be turned on.
It all happened by an accident, a lead not attached, and because I had already conceived of the touch interface for the pressureless touch screen phone, I realised instantly the increase in capacity of this touch interface, and that it conserved power in a different way from the prior art, which used a mechanical button to conserve power by switching off power to both the display and touch components of the touch-sensitive display screen as shown in Fig 1B, following the prior art prejudices of the unnecessary steps 131-135 of Fig 13, which were all essential in the prior art if the state of the prior art device was a turned-off touch-sensitive display screen with both the TC and the DC components turned off as described in Fig 1B. I realised that by configuring a screen according to Fig 1C and Fig 14, with the touch component of the screen live (e.g. the desktop Windows monitor had a USB connection keeping the touch component powered) but the display component off (e.g. the display lead disconnected on the Windows monitor), I could fully control a Windows device through a display screen whose display was turned off (DC off) but whose touch component, e.g. the USB lead to the monitor, was turned on. If all the other input devices were removed, the desktop computer could only operate by touch, with the display screen turned off. However, I realised that instead of manually turning off the display screen (i.e. removing the lead to the monitor), the turning off and on of the display screen could be done electronically, by instructions in source code executed by the processor, and that this could be applied to existing iOS, Android and Windows mobile devices like iPads and iPhones. I then realised that the whole operation of any of these devices could be completely controlled by entering a number on an invisible, imaginary touchpad on the device. It was then that the invisible touch, or the independent touch (not needing a display element), or the instant touch (the TC always being on, while the display component, like the disconnected lead, need never be used) was conceived: fully controlling a device by touch, without any visual feedback from the screen, using just the turned-off appearance of the display to fully control the operations of the device.
Furthermore, I realised that although this Windows desktop might be configured to be operable only by a touch-sensitive display attached to a computer (e.g. the normal desktop or mobile hardware able to run an operating system like Windows, or Apple software including iOS, or Linux etc., or Android) with the display lead disconnected, so that the user has to perform all the operations by touch alone on a turned-off display, electronically the touch on the turned-off display could operate or turn on the display, and the computer could be modified by adding other input devices like a keyboard, a mouse, a pressure screen or any prior art input device that may be configured to work on Windows or any compatible software, thus making this touch interface on a blank touch screen fully compatible with every prior art input method of operating a computer. Thus a new interface was born: a true touch interface on a touch-sensitive display screen, which had the property of fully operating the device by touch with the display turned off, and which could then fully operate the iOS, Android or Windows software in the conventional way. This obviously had far greater capacity than the original iOS or Android software. Indeed, because the developer was able to use only touch, it made obvious all the ergonomic inefficiencies of the iOS or Android software, such as the unnecessary presses of button 1 on the device to turn on the touch display screen from being turned off to save power and prevent accidental triggering. The TC being always on, Fig 14, 141, made the touch instant and always accessible, and since all operations of the device could be performed by touch on a turned-off display, the device had more instant and faster input by touch than any prior art software or hardware that required a touch-sensitive display screen to be turned on to unlock and then perform an operation of the unlocked device.
Indeed it was a eureka moment the instant I saw the display lead disconnected from the monitor: the future of programming was touch on a touch device. To help clarify this, a few definitions need to be understood.
Example illustrating the prior art method.
This is obvious if we consider an iTouch shown in Fig 1A of P1.
Fig 1A shows an iTouch (but this could be any equivalent Android, Microsoft or other device). The iTouch is just a representative touch display device of the prior art.
The iTouch is in a state where the touch-sensitive display is turned off, or not receiving power, in the prior art 131. The iTouch has a button 1, and the button 1 needs to be pressed 132 in order to provide power to the touch-sensitive display so that both the touch component TC and the display component DC are powered on 133. Before this step is done, no matter what touch is performed on the touch-sensitive display screen 131, it is impossible to operate a single operation from this touch-sensitive display screen, because it is not powered. Indeed it would be obvious to any skilled person that without an input step by the user 132 (e.g. the user performing some external input to the device, like an external button press, or moving the device, or bringing a finger or hand into close proximity to the device, or speaking, or interacting in some other way externally with the device) it would be impossible to turn the touch-sensitive display from off 131 to on 133, and if the state of the touch-sensitive display device, or touch display device, was 131, it would be permanently inoperative to touch. Furthermore, even when the device is turned on, the device shows an unlock screen 134. This is a displayed screen which the user has to touch in a predetermined way to access further operations of the device. E.g. the iTouch may require both a swipe-to-unlock predetermined movement followed by entering a passcode as in Fig 5A; the Kindle or other ereader devices may only require entering a passcode equivalent or similar to Fig 5A of the iTouch. Either way, it is not sufficient for just the touch-sensitive display to be turned on to unlock these devices and perform a subsequent operation of the device, if the user wants to keep their personal information locked and secure. The only way the user can perform the unlock is by performing a touch on the unlock screen and/or passcode screen 134, as this is an essential step for the user to get access to their data on the device. Indeed, on the unlock screen 134 the user has GDEs 135 to unlock the device: to the phone by swiping the phone icon 135, or the text icon 135, or the taking-a-photo icon 135, or any other equivalent GDE 135, e.g. performing other operations like adjusting the settings by a downward swipe over the GUI screen 134 in Android. Thus unless these predetermined touches related to the relevant GDEs 135, or related to the screen appearance 134, are performed on the iTouch or Android, no subsequent operation can occur: every subsequent operation depends on an unlock screen being displayed and the correct touch being performed, at the correct location and with the predetermined movement required by the appearance of the screen, for the correct touch operation 136 to be performed. Thus this description shows how the representative iTouch or equivalent Android unlock requires both the screen appearance to be correct, e.g. the unlock screen 134 shown in Fig 2AB which the iTouch must display, and the user to touch the slider button 7a (the GDEs 135 of 7a, 7b and 7c) and perform a slide touch, with the screen producing visual feedback within the path boundary 7b, to move the slider to 7c. Beyond this unlock step, to keep information secure, a further step of a passcode, Fig 5A, is required to allow access to the further GDEs displayed in Fig 2AC as icons on the desktop. Thus any of these touch operations 136 of the device (i.e. subsequent operations after the device is unlocked) requires a further touch on a GDE 135, e.g.
a finger making contact with the alarm icon and lifting off within the boundary of the alarm icon to perform the touch operation 136 of opening the application screen of the alarm. Thus if we take the touch operation 136 of opening the application page of the alarm or clock application on the iTouch, it will be obvious even to a user, let alone a skilled person, that steps 131-135 are all essential steps if the touch-sensitive display is in the state of not being powered 131, a state designed so that the screen can operate no operations by touch and displays nothing, and this remained so until the invention of the touch device of P1. It would be appreciated by all skilled persons that if any of the steps 131-135 were not performed, then the touch operation 136 of opening the alarm application would be impossible from the state of the touch-sensitive display screen being turned off, which is the default state of all equivalent devices, used to prevent unintentional touch being detected by the touch-sensitive display and to minimise power wastage of the touch-sensitive display screen.
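The prior art sequence just described can be summarised as a simple state machine; the sketch below, in Python, is illustrative only (the class and method names are my assumptions), but the step numbers follow Fig 13.

    # Prior art touch display device: steps 131-136 of Fig 13.
    class PriorArtTouchDisplayDevice:
        def __init__(self):
            self.display_powered = False   # state 131: TC and DC both off
            self.unlocked = False

        def press_button_1(self):          # step 132: external input required
            self.display_powered = True    # step 133: TC and DC powered on
            # step 134: unlock screen displayed with GDEs 135 (7a, 7b, 7c)

        def touch(self, gesture):
            if not self.display_powered:
                return None                # in state 131 touch can do nothing
            if not self.unlocked:
                if gesture == "slide 7a to 7c":  # predetermined touch on GDE 135
                    self.unlocked = True
                return None
            return "touch operation 136"   # e.g. open the alarm application

As the sketch makes plain, touch() can perform nothing at all until press_button_1() has run, which is exactly the dependence on steps 131-135 described above.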
So how does the invention of P1 work? Fig 14 explains the three essential steps.
The first step 141 is to have the touch-sensitive display 141 able to detect touch at all times while the device is powered, even when the display component of the touch-sensitive display is turned off; the power this takes can be minimised by continually powering only a small area, called the start area, of the touch component of the touch screen. This method is thus a radically different way of conserving power compared to the prior art step 131. Indeed, a skilled person would have considered it as going in the reverse direction to the prior art, because this step uses electrical power to keep the touch-sensitive screen continually turned on, and thereby drains the electrical power supply of the device when the device is not being used, whereas a mechanical button turning off the touch-sensitive display 131 means the touch-sensitive display drains no power in those circumstances. Thus a skilled person would never have considered solving a power conservation problem of the touch-sensitive screen by continually keeping a touch component of the touch-sensitive display on while the device was not in use, when the prior art method wasted no power on the touch-sensitive display at all. It would shorten the battery life of a mobile device more than the mechanical means, and would be a backward step whenever the device was not in use.
The second step 142 of the invention in Fig 14 is even more radical. It does away with needing a combination of an input (e.g. a button press 132) to first turn on the touch-sensitive display screen: a touch operation is possible on a touch-sensitive display that is turned off.
This would have been considered as going in the opposite direction to the art, which was to turn on both the touch and display components of the touch-sensitive screen exactly at the time the user needed to use the device. The skilled person in the art believed that turning on the touch and display components 133 by the button 132 provided maximum visual effectiveness 134 exactly when the user needed to use the device, with no possibility of unintentional error by touching the screen and no possibility of power wastage when the user was not intending to use the screen by pressing the button. The skilled person without imagination believed there was no better way of getting no power wastage and no risk of unintentional operations by touch when the device was not in use, and therefore this method has been used as the only method for every device in the field where a device provides a sequence of touch or touches of the touch-sensitive display screen to unlock the data in the device. Indeed, the need for the touch and display screen to be on meant that the display screen could provide a unique appearance of the screen 134, so that the user knows that performing a predetermined touch on the unlock screen can unlock or give access to subsequent touch operations related to graphical display elements 135 (e.g. buttons, icons, the background desktop, or any distinct area of the screen that may be associated with a touch operation). Again, the skilled person believed that providing this visual appearance from a touch-sensitive display screen was essential to help the user see the graphical elements 135 (e.g. the slider control) available to unlock the device and thereby remember the touch operation 136 (swipe to unlock), and that the visual feedback (e.g. movement of the slider button) and/or auditory feedback (e.g. a click noise when lifting a finger off) and/or vibration feedback responsive to touch on the screen or GDE 135 informed the user that the correct predetermined movement was being reliably performed, making it as easy as possible for the user to perform the operation, as the visual appearance of the screen would all remind the user of the touch required to perform the operation.
Thus this mode of operation was believed to be the safest and most user-friendly way of operating a touch display device. Indeed no device operating otherwise was ever manufactured prior to the touch device application of P1, as the prejudice of the skilled person's imagination could not see how the touch operation, e.g. setting the alarm, could be done in a better way without all the steps 131-135 being necessary, for all the above reasons, BEFORE the touch operation, e.g. opening the alarm application in Fig 2AB.
Indeed, such is the belief in a touch-sensitive display screen being fully on 133, to display a screen giving the user visual feedback and/or auditory feedback and/or tactile feedback including vibration and/or pressure, that the latest prior art devices, e.g. the latest Apple Watch 2, show the skilled person still believes that external mechanical buttons and a pressure screen are the way forward to control these devices, in combination with a touch-sensitive display screen which is not powered unless there is a button press 132 or other input 132 (like gyroscopic input, or a digit being detected in close proximity to the screen).
All the prior art suggested that the more visual feedback, auditory feedback and even tactile feedback a user received when operating the device by touch, the better and more reliably the user would perform the operation, as the screen appearance and the visual feedback of the GDE, e.g. the changing appearance of slide-to-unlock on the iTouch screen, would be reminding the user what to do next. Hence the use of GDEs (e.g. all the different controls known in Windows or any other graphical interface displayable on a screen) which had an appearance to remind the user of the function of the graphical image and/or the touch necessary to activate one or more operations of the device.
Step 143. This third step is even more radical. It says that touch on a touch-sensitive screen with the display turned off can control every other input and output of the computer: in short, it can control everything. Thus a contact on a blank, turned-off touch-sensitive display screen can be the only input needed to fully control everything on the touch device, as long as the device is powered. This goes against the entire art, which states that touch on a turned-off display screen with no lights, markings or indicators should be turned off and not detected, to prevent accidental or unintentional touch performing an operation and to minimise power loss by turning off all power to the touch-sensitive display screen. It goes completely opposite to every teaching of the command line interface (CLI, e.g. DOS), of the visual interfaces like Visual Basic using a display screen showing graphical display elements like the GUI, and of the touch display devices which require the display to be turned on to perform the unlock and access a touch operation after the device is unlocked.
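The three steps of Fig 14 can be put in the same sketch form for contrast; again the class and method names are my illustrative assumptions, while the step numbers and the start area idea are from the description.

    # Touch device: steps 141-143 of Fig 14.
    class TouchDevice:
        def __init__(self):
            self.touch_powered = True      # step 141: the TC (or at least a
                                           # small start area) is always powered
            self.display_powered = False   # the DC may stay off indefinitely

        def touch(self, gesture):
            # step 142: a touch operation is possible with the display off;
            # no button press 132 and no unlock screen 134 are needed first
            return self.perform(gesture)

        def perform(self, gesture):
            # step 143: touch alone can control every operation of the device,
            # including, if wanted, turning the display component on or off
            if gesture == "turn display on":
                self.display_powered = True
            return "touch operation 142"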
Definitions
The following words or phrases in bold are to be understood in this application as follows. A touch device is a device powered to perform an operation by touch on a touch-sensitive display without the display on, with less power consumption than the prior art device, the touch display device, which performs the operation with the display turned on.
It solves accidental triggering by touch, and decreases power consumption, in a different way than by turning off or not powering the touch-sensitive display, which is the method used by all touch display devices in the prior art field of the invention.
Instead, the invention of the touch device is to keep the touch-sensitive display continuously capable of performing an operation by touch, and to decrease the power consumption of the touch device by a different method than turning off the power to the touch-sensitive display as in the prior art touch display device: namely, by performing many navigation and common operations of the device with the display of the touch-sensitive screen turned off. A touch display device is a device of the prior art that includes at least an external button or switch on the device to turn on the touch-sensitive display; if the touch-sensitive display is turned off to prevent accidental triggering and to conserve power, the device is inoperative by touch on the touch-sensitive display. The touch display device is the field of the invention of the touch device. It comprises the devices whose manuals required the touch-sensitive display to be turned off to prevent any touch being detected and causing accidental triggering of the device by touch, e.g. while in a user's pocket, or being carried, or the display being accidentally touched, and which provided power conservation by this method, with neither the touch component nor the display component receiving any power while not being used.
Touch is contact of one or more fingers on a touch-sensitive display according to a predetermined movement to perform an operation of the device.
User interface code means lines of instruction, usually in text form in a file (e.g. the Visual Basic code in the priority documents of the ‘443), which when compiled, or run in a scripting language, build an application or operating system of a device that can execute those lines of instruction into an interactive graphical element on the display responding to an input and/or the processor performing one or more operations of the device. User interface code is a broader term than Visual Basic code, as it can be lines of instruction written for any particular compiler (e.g. the Visual Basic compiler, or the emulator program of a scripting language). Visual Basic code would always be written in the style and syntax of a Visual Basic compiler, whereas user interface code informs the skilled person that this code can create applications not only for the Windows environment but for any environment, like the Apple operating system, or Linux, or a new environment like iOS or Android which had not yet been created. Thus user interface code could be Visual Basic code or assembler code or C/C++ code or future software code: in other words, code written in any computer language that could create a user interface, e.g. a window in a Windows operating system, which is how the Visual Basic code works in the priority document. User interface code makes it clear to any skilled person that the code can be written in any computer language that can create an interface for a user on the computer. It is a very broad term, understood to give maximum scope to the different languages of code that may be used to create the user interface on the screen; its meaning to any skilled person would be lines of instruction that can create a computer interface for a user. The purpose of including this in the claim language was to clarify to the skilled person that any touch example in any of the Visual Basic code written in the priority document (e.g. a swipe, slide, or contact with a touch-sensitive screen, as demonstrated in the Locatorfrm program when the code was compiled: one touch operated the selected operation of a click, while a different swipe deactivated the selected operation of the click) was protected as an idea written in code, while the non-skilled person could simply see how the program operated by contact of a finger on a capacitive touch screen. A touch operation is an operation performed by contact of one or more fingers on a touch-sensitive display. A state is a configuration of the touch display device in which the device is inoperative by touch on the touch-sensitive display unless another external input of the user is used to turn on the touch-sensitive display.
Unlock, in the touch display device (a representative device of the prior art described in claim 1 of article 19 of P1, or claim 1 as filed in this application), means performing on a turned-on display screen a touch, e.g. swipe 7 of Fig 2AB, which then gives access to operations, e.g. the alarm icon on the desktop in Fig 2AC or any other icon, not available to the user unless the unlock touch is done, e.g. swipe 7, and/or a passcode is entered by touch, Fig 5G; access to these further operations is not allowed if the swipe and the passcode are not entered on a turned-on display screen. Indeed it would be appreciated that it would be impossible to accurately enter the passcode in Fig 5G if the display were not turned on. Thus unlock is gaining access to one or more further operations which cannot be performed on a device with a passcode state without having the screen turned on. Whereas with the touch device, any operation may be accessed by performing a touch on the touch device without the device first being unlocked; the thing that prevents these being done unintentionally is making the touch impossible to perform unintentionally. This was described in P1 and claimed in 5 c).
Unlock Touch Operation on the touch device. It would be possible to type a numeric code on an invisible keyboard and operate any operation safely, e.g. claim 5 c). For example, a user could use the invisible number pad shown in Fig 5B, designed to enter numbers into the device by taps over the relevant region. Thus the user might perform a swipe 11, then tap the 3 a certain number of times, e.g. five times, and then enter a further number by tapping the invisible keyboard to perform a certain operation, like phoning a friend. It would be appreciated that if the number sequence was chosen to make it impossible to enter unintentionally, e.g. swipe 11, then 3 tapped five times, then 0 entered three times, then 1, to dial the friend's number, this operation would never happen by accident. Obviously there is a balance between preventing unintentional triggering (for something like the above sequence of numbers performing an operation like firing a missile at another country, you would want the risk of accidental triggering to be very small indeed) and keeping the operation of the touch device fast. So the conventional approach of having an Unlock Touch Operation is a good, quick way of having a specific touch operation that gives access to all further operations of the device, e.g. swipe 2 in Fig 1A. It differs from the conventional unlock only in that it does not require the steps 131-135 of Fig 13, because the touch component is permanently on 141 and can instantly perform the touch operation 142 to control any operation of the device 143, and it conserves power in a different way, e.g. a small start area powered on the screen and common operations performed without turning on the screen, with no wastage of the power and effort of having to press a button to turn on the display and touch components and then perform the swipe 7 in Fig 2AB to show further icons for further operations of the device. The further operations of Fig 2AC can all be done by touch on a touch-sensitive display screen without the display being turned on.
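A sketch of how such an unintentionally unenterable sequence might be checked is given below; encoding the swipe and the taps as a list of tokens is my assumption, but the example sequence is the one just described (swipe 11, then 3 tapped five times, then 0 three times, then 1).

    # Check a gesture stream from the turned-off screen against an unlock
    # sequence chosen to be practically impossible to perform by accident.
    UNLOCK_SEQUENCE = ["swipe 11", "3", "3", "3", "3", "3", "0", "0", "0", "1"]

    def matches_unlock(events, required=UNLOCK_SEQUENCE):
        """events: decoded gesture tokens, oldest first."""
        return list(events[-len(required):]) == required

A pocketed phone brushing against fabric has effectively no chance of producing this exact run of gestures, which is the balance between safety and speed the paragraph above describes.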
Having said that, the touch operation 142 can be performed in steps similar to the prior art, to provide backward compatibility.
Thus the touch operation of the touch device can be divided into three main operations.

1. The Unlock Touch Operation. This is a specific touch operation of the invention of the touch device: the user performing an initial touch of claim 1, e.g. swipe 2, to give initial access to the further touch operations which, in the prior art device, were only available if the device was unlocked. The only difference is that in the touch device, once the Unlock Touch Operation is performed, there is no need to power the display to show Fig 2BB. Fig 2BB merely shows one option that a beginner user of the touch device may prefer, to get the user used to unlocking by an unlock touch operation, e.g. a swipe 2 on the blank screen in Fig 2BA. However, as the method of Fig 14 makes clear, there is no requirement for any visual feedback of Fig 2BB: the screen could keep the turned-off display appearance after the swipe 2, as described, e.g. in the description of Fig 16. That description shows how an unlock touch operation, e.g. swipe 166, is performed to give access to further operations of a Windows device, e.g. the PHQ-9 menu 167, to perform the vertical slide from 161 to 162, time the operation of the vertical swipe, and enter the data selected, e.g. all the entries indicating the person is maximally depressed, thereby giving a score of 28.

2. Further Touch Operations. This is any operation that previously could not be accessed without an unlock in the prior art touch display device, e.g. iOS or Android or equivalent software, including the device in the prior art of claim 1 having the state that allows the user to access the data of the further touch operations only after a password or passcode has been entered on a turned-on or powered touch-sensitive display screen. As described before, touch operations could be accessed in the touch device by any touch sufficiently secure not to be accessed unintentionally; a system with an unlock touch operation, which gains access to further touch operations, e.g. the vertical movement 167 described in Fig 16, shows how this works while the display is still turned off.

3. The Lock Touch Operation. This is the equivalent of swipe 3 in Fig 1C on an iTouch, or as demonstrated by swipe 3 in Fig 2BB, which locks all the further operations made accessible by the unlock (including passcode entry if necessary, e.g. Fig 5G, and swipe 7 in the prior art iTouch). Performing this swipe 3 has the same effect as pressing the power button in that configuration. However, the Lock Touch Operation is different in that there is no requirement for the display screen to be turned on, as illustrated by Fig 16, where the user performs swipe 168 on a touch display screen 164 whose display has been turned off throughout: performing the unlock, performing the further operation and now performing the lock, after which the unlock touch operation described above must be performed again in order to get access to, or perform, a further touch operation.

As can be seen, the essential difference between the performance of these touch operations 142 and the equivalent touch operations 136 in the prior art touch display device is that in the prior art the display is turned on to perform these touches, whereas in the new touch of the touch device there is no requirement for the display component to be turned on, e.g. 141 of Fig 14, but only the touch component, as also described in Fig 1C.
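The lifecycle of these three operations, with the display never required, can be sketched as follows; the gesture names follow Fig 16, while the dispatch itself is an illustrative assumption of mine.

    # Unlock 166 -> further touch operations 167 -> lock 168, display off.
    class TouchDeviceSession:
        def __init__(self):
            self.unlocked = False

        def on_gesture(self, gesture):
            if not self.unlocked:
                if gesture == "swipe 166":       # Unlock Touch Operation
                    self.unlocked = True
                return None                      # other touches are ignored
            if gesture == "swipe 168":           # Lock Touch Operation
                self.unlocked = False
                return None
            # Further Touch Operation, e.g. the vertical slide 167 entering
            # the PHQ-9 answers, all with the display component still off
            return "perform " + gesture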
Backward compatibility. One of the most important aspects of designing a new interface, e.g. operating a touch device by touch on a touch-sensitive display screen with the display component not powered, performing an unlock touch operation, e.g. 166, a further touch operation, e.g. 167, and a lock operation 168, all with just the touch component of the touch-sensitive display turned on, is backward compatibility, and Fig 16 shows how backward compatible the new system is. Claim 1 describes the minimum requirement of the method: the method requires as essential the ability of the touch component to detect touch on the touch-sensitive display. Thus at all times while the device is powered, touch on the touch-sensitive display is the only essential input required 141 to perform an operation 142, called a touch operation because the operation is performed by a touch, or contact, of a predetermined movement of one or more fingers on the touch-sensitive display screen. As already discussed regarding Fig 14, 141, there is no requirement for a display to be turned on to perform any touch operation operating any operation of the device. Thus the touch device is truly a device which can be operated by just touch on a touch-sensitive display with the display turned off. This is shown by the complete cycle of an unlock 166, a further operation 167 and a lock 168 all being performed by touch without the display of the touch-sensitive display device being turned on. Thus the description of Fig 16 proves that a Windows operating system can be fully operated by contact of one or more digits according to a predetermined movement to perform an operation.
More detailed explanation of why, when I saw the display lead disconnected, I realised the invention of touch on a touch device as defined above. 1. I realised the inefficiency of the prior art programming method of iOS, Android and Windows, with all the unnecessary yet essential steps 131-135 needing to be performed in order to allow a touch operation 136 to occur; if any of the steps 131-135 was not working on an iOS or Android device, the device would be inoperative to touch. 2. I realised that the capacity of the touch of the Fig 14 invention was far greater than the prior art iOS, Android or Windows method, and superior in every aspect, yet it required only one element to operate the device: just touch, without requiring the display of the touch-sensitive display screen. This could increase the battery life of the device, so that it was not necessary to turn off the touch-sensitive display screen, increasing the working life of the device when it was not in use; and the problem of accidental triggering, given that touch could always be detected on an always-on TC device, e.g. 141, could be solved by making sure that the device, even when not in use, could not be unintentionally triggered by a person playing on the blank, turned-off touch-sensitive display screen, by ensuring that only an intentional operation of the device would operate.
Thus the invention as described in Fig 1A became obvious when the superior properties of the method of Fig 14 were compared with the prior art, as listed in claim 11: "A method of claim 1, whereby a performance of the operation, including the operation being a task of a sequence of operations, is improved compared to the operation in the prior art in one or more of the following aspects:"
a. more instant. Fig 1A shows an iTouch. Having made the decision to have the TC of the touch-sensitive display screen continuously powered (with the power consumption decreased by only requiring a small start area of the touch component, e.g. the RUC, to be powered), the device can be simplified to need none of the external buttons available on the iTouch. Furthermore, the decision not to require the display to be turned on to perform the touch operation 142, and to be able to operate all operations 143, has simplified the power requirements of the device to continuously powering only a small start area, e.g. the RUC, when the device is in a locked touch operation state. Thus although the power requirement was actually more when the device was not in use, and theoretically any operation could be performed 142, by carefully designing the touch that performs an operation on the touch component to be unlikely to be unintentional, operations could be performed at less power, because the display component of the touch-sensitive display was not needed. This increased the capacity by making the touch on the touch-sensitive screen more instant from the state in which the device was not being used and the user would previously have chosen to conserve power by turning off both the touch component and the display component of the touch-sensitive screen. Furthermore, as described in the application, using solar power cells, sunlight may produce more power than the continually powered start area consumes. Thus the first step of the invention was to realise there was benefit in continually powering only the touch component of the touch-sensitive display screen and not the display screen. It produces the benefit of not only conserving power in a different way, but the small sacrifice of having the touch component, or a small area of the touch component, continually powered means that the touch device is always responsive to touch: the user can touch the RUC and perform a swipe 2 or swipe 11, and that may unlock the device, e.g. show the desktop and make accessible the conventional icons in the conventional way as shown in Fig 2BA; alternatively, the swipe 2 may just unlock the device and leave the display unpowered, with further operations made available.
This means that throughout the whole battery life of the device it is instantly responsive to touch, and the power drainage by the touch component would be so low, especially if new screens were developed requiring only a start area of the screen to be permanently on, that the device might have a battery life of over a week, as no other external input, e.g. voice (which heavily drains the device), or other input sensors on the surface of the device, would need to be turned on, since the most common task could be performed by a vertical swipe.
b. more accessible. Fig 1A makes obvious that touching the RUC of the screen is faster and more accessible than pressing a button 1 and then swiping 7. The user just needs to perform swipe 2 or 11 to unlock the device, and it uses no display power to unlock the device. Furthermore, if the phone is ringing in a pocket, a touch anywhere on the surface of the touch-sensitive display screen could silence the phone while the user took out the phone and then performed a swipe 11 to answer it. As the user is sliding a digit to the left performing the swipe 11, the screen could show who is ringing, giving the user the choice to continue the swipe 11 to answer the phone, or to perform another predetermined movement to deactivate or not answer the phone, e.g. moving vertically over the screen. Thus this example shows how the display screen wastes no power even when someone is ringing, and answering or silencing the phone is quicker and easier than with any other method.
c. quicker. Pressing a button and then swiping 7 will always take more time than the equivalent operation of swipe 2 or swipe 11.
d. easier. The natural position of a right hand holding a phone has the thumb resting over the RUC of the device, the start area; this is the most comfortable position of the thumb. The user just has to slide the digit to the left along the upper edge for a distance which could be determined by experiment to be sufficient that it could not be performed unintentionally, e.g. slide 11 on touch-display screens of a certain size.
e. less power consumption. Instead of using the universal method of conserving power and preventing accidental triggering by switching off the touch component and the display component, the new inventive method is to keep the touch component powered (all of it could be powered, but preferably only a small start area of the screen).
f. more reliable. A device which has no moving parts is usually more reliable than a device that has moving parts, and the history of failing mechanical parts like the on/off button is considerable. The fact that this new method relies on fewer components to perform the same operation, e.g. touching the RUC and performing swipe 2 or swipe 11 to perform the unlock in Fig 1A, and can operate even if the display screen becomes damaged, means that this device is more reliable, as it has fewer elements to go wrong.
g. increased capacity. As claim 2 states, the touch can operate all operations that previously required the display screen to be turned on in order to unlock and perform an operation, an operation that was dependent on the unlock or passcode screen being displayed and then dependent on the subsequent desktop or other screen available after unlock being displayed. This shows that this new touch device has the option BOTH to operate the full capacity of the old system, which performed operations by touch on a turned-on display screen, AND to use the increased capacity of a new type of operation, whereby all operations, e.g. 5 C, can be operated by touch on a touch-sensitive screen with the display turned off; the new, simpler interface relying on touch on a touch device provides the user with both capacities.
h. less effort. It requires less effort for a user to unlock the device with a right hand gripping the iTouch, by touching the RUC and moving the digit in a left direction to perform the swipe 2 or swipe 11, or another distance of swipe, using just the feedback of the turned-off touch-sensitive display screen appearance.
i. simpler. It is simpler to perform all operations by just one input on the external surface of the device: just touching the RUC and swiping to the left a certain distance, as shown in Fig 1A.
j. safer in an accident. In an accident the display component can be damaged while the touch component remains intact; a GPS coordinate can then still be sent to an emergency service by performing a touch on the touch-sensitive display screen without the display being turned on. In addition, the pattern can be such that a child would not know how to set the signal off accidentally until old enough to deal responsibly with this information, which would also improve safety by reducing unnecessary calls.
k. more ergonomic. The position of apposition of the digit holding the iTouch in Fig 1A is a very relaxed and comfortable position, and the gliding or sliding movement over the screen along part of the upper display edge is very relaxed; thus the unlock and other operations, e.g. the task of swiping shown in Fig 16, 167, can be performed by this relaxed form of touch, with movements radiating from the most comfortable position of the thumb, making this a one-digit control device operated from the thumb's most comfortable position.
l. simpler for a user and skilled person to design their own touch operation or operations. Fig 3a-d and Fig 4a-G give one embodiment of how easy it would be to capture any touch of one or more digits according to a predetermined movement on a touch screen (i.e. claim 3) and then design a user-defined operation on the device. Thus the user can start designing their own touches to operate the device. An advanced user may design the device to have a standard swipe 11 performing the unlock touch operation and a swipe 3 performing the lock touch operation, and then design one operation for a particular user, e.g. a vertical swipe, a reverse swipe or any other easy movement performing one operation, e.g. phoning home. Thus if this was given to a very young child, the child could be taught how to use this operation, as the device could be arranged to perform only this one operation, and the only battery life used would be the small amount of power needed to power the start area. Indeed, if the phone had solar power, this could continually power the start area without using any battery and keep the phone charged or topped up, see claim 13. This would maximise battery life and, if properly designed, may mean that the device could operate for long periods of time in an expedition environment, e.g. going across the Sahara desert with the battery lasting the whole trip, yet at any critical moment the user could instantly phone in an emergency.
m. less likely to lose a stylus. This is described in claim 10: the user may wear a stylus like a ring all day so they do not lose the stylus, and the stylus could identify a pointing digit, so that no click operation could happen other than by the pointing digit touching and pointing to a location and then another digit tapping to activate a click or a selected operation at the location the user was pointing to. This means that the user could rest a hand safely on the touch screen and nothing would happen if the pointing stylus was not touching the screen. Another version of pointing and clicking using digits without styli is described in claim 12. This enables a configuration of touch to emulate the operation of a pointing device.
n. more aesthetic device surface appearance. This is described in claim 15 of article 19 of P1. The appearance of the device surface can now be smooth, because the only requirement is a touch-sensitive display component on the external surface of the device.
o. uses less digit movement or effort to perform the operation, or a task of more than one operation, than any other input method in any software in the prior art. This is obvious if Fig 1A and Fig 16 are considered: if the user has first to perform digit movement to press a button 1 and then to touch the screen at the RUC, that is more digit movement than just touching the RUC.
p. improves user intelligence by performing operations without visual feedback. The conventional wisdom was that a device with multiple different images, operating in a consistent GUI manner, responding to taps, contacts, slides or swipes with visual or auditory feedback, is the best method of operating a device by touch. However, while this may be true where a person has to read information, simple commands, simple navigation and simple tasks can easily be performed by touch on a touch-sensitive display without the display being turned on. Indeed, most of the designs of current iOS, Android or Windows require the user to perform many unnecessary mechanical button presses and to waste unnecessary power turning on the display in order to perform operations. Indeed this has had the unexpected effect of making users of these devices rely on an over-dependence on visual signals to prompt them to act. The effect of requiring a user to see only a turned-off display component of the touch-sensitive display screen, making them think without any visual cues and operate the device, actually means the user can perform tasks quicker and in a more tactile and subconscious manner. Like riding a bike or driving a car, just performing touch on a touch device is more natural, and it leads to people having clearer thought processes, as they are not forever dependent on looking at the visual feedback of the screen. The method also removes the unnecessary steps of having to look at an unlock screen and press a button: the user can go directly by touch to the operation of the device they desire. Furthermore, the ability of the user to organise how the device operates, by user-defined touch operations which the user can develop using a developer's environment like that described in Fig 16, with one screen being a blank touch screen 164 and the other a normal screen 163, is going to allow rapid development of this new interface. As most new interfaces will be niche products to begin with, this new programming will be chosen by those who have a better memory and who like to design their own interfaces. The new style of a completely new way of operating by touch, without the restraint of having a certain visual picture on the screen, is going to lead to a vast variety of different programming styles of touch on a touch device. However, patterns will emerge, and the simpler, faster yet safe ways of navigating to operations and executing them by touch, dependent only on the turned-off appearance of the touch device's touch-sensitive screen, are going to lead to more ergonomic programming.
q. improves user recall by performing operations without visual feedback. It is self-evident that this type of touch device would improve the user's memory, because the user has to remember all the touch operations performed while seeing only the turned-off touch-sensitive display screen of the touch device.
r. improves the user's action by performing operations without visual feedback.
The touch device allows the user, by designing their own user-defined touch operations, to slowly build up more and more operations performed by touch. The swipe 11 in combination with entering a passcode, by either taps or swipes, on the blank unpowered display of the touch device is one embodiment of how all actions could be rapid yet as safe against accidental triggering as entering the PIN of a Visa card, making accidental triggering very unlikely, especially if a user has to re-enter the PIN several times when it appears that another user has entered it a set number of times.
s. improves the security of the information on the device. It would be appreciated that this method makes the information on the personal device very safe, but also very fast to access, and indeed, since the user can tailor the predetermined movements to their own liking of how to unlock the device (with a wrong predetermined movement preventing or deactivating the unlock), touch on these devices will be much safer, and the information much more secure, than on any current iOS or Android device, where multi-input methods like voice and SMS text open vulnerabilities for software hackers to access the device. With this device, software hackers will never be able to access the device unless the correct predetermined movement of the unlock touch operation is performed, and a user could define their own algorithm of numbers that has to be entered if anyone has tried to hack into their phone by touch after a certain amount of time. Even an arrangement could be used in which the software manufacturer and the government each hold an algorithm, both of which have to be applied to an individual phone serial number, with a mechanical device entering hundreds of touches precisely for each device using the algorithms, to hack a terrorist's phone. Thus touch would not only be the safest way of keeping personal information safe on a device; it would also enable law enforcement, with legitimate court orders, to use part of a passcode (a sequence of several hundred numbers), with the law enforcement department holding the remaining sequence of several hundred numbers to be entered, to open the encryption on a device using a mechanical robot touching the screen with pens acting like fingers. It would be appreciated that a device which operated by claim 1 as the only way to unlock the device and allow access to encrypted data could be designed to be safer than any current iOS, Android or other equivalent software method, and could also be used to access devices forming part of a crime scene under the appropriate judicial orders.
t. is fully backward compatible, able to perform the operation by any other input, including a pointing device, a keyboard, a gyroscope, a light sensor, a proximity sensor, a GPS, touch on a touch-sensitive display device as in iOS, Android etc., pressure on a touch screen performing an operation, or a pen with a pressure tip. Indeed, any known prior art input method may be initiated and made available to a user by performing the operation of claim 1. This is what 142 in Fig 14 meant: the touch device may be backward compatible with any prior art input method. This is obvious if we consider Fig 16.
Claim 16 is trying to capture the scope of the touch operation 142 enabling the user to use any other input method of the prior art, in addition to touch on the touch device, to perform all the operations available to the computer. Fig 16 makes this scope perfectly clear: using the left-hand touch-sensitive display screen with the display turned off 164, the device operates according to claim 1, and if this is the only input attached to the computer, then the device is operated by claim 1 and may be fully operated by this input alone. But if the device is modified according to claim 16, then, because the wording of claim 1 is that the display being turned on is not required, a mechanical button is not required, and another external input is not required, claim 1 can be further extended to use other input methods in addition to being able, in its simplest configuration, to fully control the device by just touch.
u. improves the performance of any prior art input method, including mouse, other pointing device, keyboard, contact input on a graphical display, pressure input on a graphical display, gyroscopic input, light sensor input and proximity sensor input.
This points out that touch on a touch device, requiring no delay to turn on a display screen and no delay of any other input method, means that at any time touch on a touch-sensitive screen instantly has the greatest capacity to perform an operation quicker and easier than any other method. Because of this, it exposes exactly how slow conventional data input methods and tasks performed on prior art computer devices are. Indeed, touch performing an operation or task is the fastest possible means of performing a task, if the user considers how to minimise digit movement to maximise efficiency.
To maximise efficiency there should be no delay when a user wants to perform an operation: the device has to be capable of operating the operation instantly. Furthermore, it has to be able to perform the operation or task as fast as possible, including the time it takes to unlock the device and to lock the device after finishing the task. Fig 18 shows how fast data entry or tasks can be performed on a device which is configured to maximise efficiency of performing the task. In essence, Fig 18 shows how a menu based on the description of Figs 6A, 6AA, 6AB, 6B, 6C, 6D, 6E, 6F and 6G has been designed to perform the task of the PHQ-9, and shows how that task can be completed by touch on a touch device in a fraction of a second. Indeed GB1604767.2, P2, filed on 21st March 2016 with DAS code 8DC8, demonstrates by sequential photographs how the menu item works. The sequential photographs showed that completing the PHQ-9 on the conventional NHS website took over 20 seconds. Using programming based on touch of a touch device, with the turned-off display as the only visual feedback to the user, the touches required were designed as horizontal and vertical movements relative to the border of the turned-off display, as these were the fastest, easiest and most reliable movements with which to accurately perform the task or operation. Furthermore, the turned-off touch screen could be divided reliably into columns and rows, e.g.
Fig 5B, which the user could visualise in their mind, meaning that a user could reliably make contact, lift off, or perform a swipe or a slide of a digit within the different imaginary regions, the rectangles/squares created by the imaginary grid on a blank screen, to perform an operation. The slide on the screen could use right-angled movements or reverse movements within the imaginary regions, or between the regions and the edges of the display. Initially I divided the screen into 9 main regions, e.g. 1-9, with an imaginary region 0 on the edge, in Fig 5B, principally because I realised that the ability for a user to enter numbers on an invisible number pad would enable the computer to perform every operation available to the device by the entry of a number. E.g. if the number entered was a 10-digit number, e.g. 1,000,000,000, then the first four digits, e.g. 1,000, could be entered so that the touch was intentional, with the same low risk of accidental triggering, while still allowing a user to perform 1 million operations of the device using just 10 taps on the screen, with the screen turned off. This was the eureka moment: I realised that by using the blank screen there was infinite capacity to perform all the operations available on the device just by tapping imaginary regions. E.g. if a user needed 10,000,000 operations, they could just increase the number to an 11-digit number and number the 10 million operations respectively from 10,000,000,000 to 10,009,999,999. So with just 11 taps on a blank screen, 10 million operations could be performed completely safely, just using a mental grid on a touch screen. And this is without counting the vast further capacity of different contacts with the 6 easily identified regions on the edge of the screen, the corners and the mid points of the edges of the display screen, e.g. 0 in Fig 5B being an imaginary region on the mid bottom edge (MBE) of the display screen.

Thus I realised that I was looking at the future of the computer, in the same way that I had realised it with the Locatorfrm program, which solved how, by contact on a touch screen, I could perform a point and a click for each pixel, when the prior art required a mouse movement and a button press (or the equivalent on a pointing device such as a touchpad or touch screen, e.g. using a pressure-activated tip on a pen, so that light pressure pointed and heavy pressure clicked, or tapping the screen, pressing the resistive screen, or pressing and releasing the pressure-tip stylus). I had solved how to emulate the point and click interface for each pixel by just the contact of a finger on a screen. I realised it was a eureka moment, and on the 26th July 2002 I sent a fax to Apple informing them that they would blaze a trail if they built a mobile phone which operated by contact or touch. In the same way, I now realised I had another eureka moment, going in the total opposite direction to the art: a device with a touch-sensitive display that could be fully operated by touch using only the appearance of the turned-off touch-sensitive display. I realised at that point that for intelligent people with the imagination to picture an imaginary grid on a screen, the only input these users would need would be touch on a touch-sensitive display. It was then that the touch device was born: the concept that people would, in the future, operate a device by touch with the display unpowered for most of the time. I could picture people answering phones and making phone calls on mobile devices without the touch-sensitive display ever being turned on.
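The imaginary grid and its arithmetic can be sketched directly; the region numbering (1-9 in a 3x3 grid, 0 on the mid bottom edge) follows Fig 5B as described, while the screen dimensions, the edge band and the fixed prefix 1,000 are my illustrative assumptions.

    # Decode taps on a blank, turned-off screen into digits and operations.
    WIDTH, HEIGHT = 1080, 1920   # assumed portrait screen, in pixels
    EDGE = 60                    # assumed depth of the bottom edge band

    def tap_to_digit(x, y):
        """Map one tap to a digit 0-9 using the imaginary grid of Fig 5B."""
        if y > HEIGHT - EDGE and abs(x - WIDTH / 2) < WIDTH / 6:
            return 0                         # mid bottom edge (MBE) region
        col = min(int(3 * x / WIDTH), 2)     # three imaginary columns
        row = min(int(3 * y / HEIGHT), 2)    # three imaginary rows
        return 3 * row + col + 1             # regions 1..9

    def taps_to_operation(taps):
        """Ten taps form a 10-digit number: a fixed 4-digit prefix (e.g.
        1,000) proves the entry is intentional, leaving 10**6 = 1,000,000
        selectable operations; an 11th digit would give 10,000,000."""
        number = int("".join(str(tap_to_digit(x, y)) for x, y in taps))
        prefix, operation = divmod(number, 10 ** 6)
        return operation if prefix == 1000 else None   # reject accidents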
I could picture that for many common operations the display of the touch-sensitive screen would not need to be on. I then realised that not only would this device have increased capacity, it would also offer a different way of conserving the power of a device with a touch-sensitive display screen.
It was the display screen being on that used most of the power of the device, and now there was a method by which millions of operations could be performed with 11 taps, with the same safety against accidental triggering as a PIN number. v. Most of the time most users regularly use only around 100 operations on their phone, so the majority of these operations could be performed without the display screen being on, which would also increase the battery life of the mobile phone. As touch on the blank screen already has greater capacity to perform operations safely than every possible operation available to the operating system of any prior art GUI or touch GUI device, only the applications that actually need the screen, such as reading text, would need the display turned on. E.g. if a user only made phone calls to 10 friends each day with his mobile phone and used it for nothing else, all those operations could be performed by touch on a turned off touch-sensitive display screen; in that case the phone would have a much longer battery life, even lasting up to a week, if the user never turned on the display screen for any of those phone calls. The power saved by not using the display for any of those operations could then be used to keep the touch component of the phone on all the time while the display remained off. This would mean that no button or other input was needed on the surface of the device, because touch on the touch-sensitive display already had greater capacity than any other method to perform all the operations of the device. The device would therefore be more reliable (mechanical parts like buttons or switches break down more often than parts that do not move) and could have a smooth aesthetic surface (a long felt need of designers, but not possible until the realisation that touch on a touch-sensitive display screen had more capacity and was the simplest input of a computer device, making external buttons or any other input unnecessary, while offering better power conservation and the PIN-like safety against accidental triggering). Thus if the touch component of the touch-sensitive display screen could be always on, the user could instantly use the phone more safely, more quickly and with less power, performing the operation by touch on a turned off touch-sensitive display screen. Indeed, if the phone was used only for phoning friends, the display screen would never need to be turned on, and if the user phoned their friends often during the day, the phone could be reprogrammed never to turn on the display screen for any of these phone calls. I then thought of the application for which most parents buy phones for their children: they are going away on a trip and need a phone to call the parents back in emergencies.
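The battery arithmetic behind this claim can be sketched with assumed, illustrative figures; none of the values below come from the specification or from measurements.

    # Back-of-envelope only; all values are assumptions for illustration.
    battery_wh = 10.0        # e.g. roughly 2700 mAh at 3.7 V
    display_on_w = 1.0       # assumed average draw with the display lit
    touch_only_w = 0.005     # assumed always-on touch controller draw

    print(battery_wh / display_on_w)        # ~10 hours with the display on
    print(battery_wh / touch_only_w / 24)   # ~80 days of touch-only standby

Even allowing for the radio and processor (ignored here), the display dominates, which is why a phone used only for touch-dialled calls could plausibly last the week described above.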
Thus the phone needs to conserve maximum power. You do not want a child wasting battery power, which they can do by pressing the button of any touch-sensitive display device; indeed a young child with ADHD could drain a battery in a few hours by constantly turning on the display of the screen. w. I thus became convinced of the superior properties of the new method of operating devices by touch, without having to turn on the display to unlock the device and perform its operations.
Further experimentation showed that the simplicity needed to operate the device, e.g. horizontal and vertical movements over a grid structure, was the fastest way to input information, and fully compatible with any grid structure displayed on a screen.
Thus touch on a touch device became the quicker way of performing operations. 3. Having realised the capacity and the power conservation of the touch interface, it followed that the touch device may have a longer battery life if configured for certain tasks according to the flow diagram of Fig 14, compared to the essential programming of all iOS, Android or Windows devices (or any equivalent software of the devices of the field), which contain instructions within their code that operate according to the flow diagram of Fig 13. I discovered all the other increases of capacity by using a single element, which would make the iOS, Android or Windows devices inoperative by touch when there was no other external input on the surface of the device and their touch-sensitive display screen was in the state 131, or indeed under any other of the unnecessary prejudices of 132 (which could be any other input, like a proximity sensor, voice activation, touching a fingerprint button, or a gyroscopic input in the Android or iOS watches, instead of the button press to turn on the touch component TC and the display component of the touch-sensitive display screen in 133). Indeed the screen being of a certain appearance 134, or having a certain graphical display element 135, are realised to be completely unnecessary programming steps if the display lead is disconnected from the screen, as shown in Fig 1C or as described with the desktop monitor whose lead was disconnected. It was the realisation that all these steps were completely unnecessary for performing the touch operation 136, and indeed had prevented more ergonomic ways of performing touch. Because a user of a turned off display screen has only the turned off display screen for visual feedback, all the touches by definition have to be vertical or horizontal movements, or touches in regions like the corners of the display screen or its mid edges, or in regions like those demonstrated in Fig 5B or Fig 5C. Furthermore, with no distraction of having to conform to visual images that perform actions according to conventional button presses or menu scrolling, new ways of operating buttons and menus can be devised which are more efficient, quicker and easier at performing operations. Indeed it was this which enabled the very efficient task menu invention to be developed. 4. The Task Menu allows a visual menu to be displayed on a computer and operated rapidly by pointers (e.g. a pointing device (mouse) pointer, a finger pointer or a stylus pointer) faster than any other task software performing the same task (see the description of Fig 18): 40x to 80x faster than performing the task on the conventional NHS website of the PHQ-9 at the time of the invention of the Task Menu. The invention of the Task Menu came out of trying to work out how a user could perform a task in the most efficient manner using the invention of claim 1, e.g. selecting one option for each of a list of menu items to complete the task. Tasks, as illustrated by the PHQ-9, are the backbone of any business or medical recording task. Thus by solving how to perform a task in the most efficient manner you provide every business with a method of making itself more productive and successful, because in short a task allows a user to ask for all the important information needed to achieve a certain business task completely, to the highest known standard, and allows all that information to be recorded in the quickest way.
Thus, by organising a business around all its important tasks, the business which can perform those tasks in the most efficient manner is naturally going to succeed over other businesses. As doctors we have 100,000 tasks or more if every possible task for every possible condition in medicine is to be performed to the highest standard. Therefore, as already described, starting to write protocols and import them into a task menu for each of these tasks is a huge job. The principal requirement of the job would be for doctors to start defining each of the tasks as a task menu, and that is why a simple scripting language of CSV (character separated values rather than the normal comma separated values, as explained later) is needed: it enables doctors to focus on the difficult part of building the content of all these task menus (both the list of items and the options for each item). Programmers can then import these relatively easily into the task menu on a medical computer on the web, which could centrally supply all the up to date tasks to all the doctors. This provides instant standardisation across the NHS and means all NHS staff are always using the latest tasks. Thus, with a task menu for each possible task a doctor could be required to perform for a patient, and since these tasks, as illustrated by Fig 18, can be completed 40x-80x faster than on the conventional NHS website, this could double the effective workforce of doctors and nurses by getting them to record the same data in the different manner of the task menu.
Brief Description of the Additional Drawings.
Fig 16 shows how the invention was discovered, and how all its superior properties over the prior art were discovered by comparing the touch method of screen 164 with the other input methods using screen 163 with the display screen turned on.
Fig 18 shows the Task Menu, the quickest way of performing an operation of a task in a business or medical recording setting.
Further Detailed Description of the Drawings, in addition to the description of the drawings filed in P1.
Fig 16.
Fig 16 shows how the superior properties of the method of claim 1, or Fig 14 of P1, could be discovered. Two monitors could be connected to the desktop. One monitor, 164, is a touch-sensitive display screen with the display not required, so the user can explore the properties of touch to perform an operation on the touch-sensitive display screen. The other, 163, shows the visual image of the Windows GUI. Using software programming tools like Adobe Flash, the touches on the left screen with the display turned off can be developed, while the user uses the right monitor to see and use other inputs to rapidly develop programs like the Task Menu. Thus new touches on a blank screen can be developed and then incorporated into less efficient prior art touch software like iOS, Android or Windows, improving the programming of the prior art software in one of the properties described in claim 11.
This shows how the invention was first conceived. It shows a desktop touch-screen monitor 164 attached to a Windows desktop as the only input. The display lead was disconnected from the monitor so that the touch-sensitive display screen could only input touch information; there was no output from the computer to the display, and the display was turned off mechanically. Fig 16 shows two displays attached to a single computer: a left screen 164 and a right screen 163. The right screen is an ordinary display screen which is not touch sensitive and is connected to the display port of the computer. The left screen has only the USB lead connected to the computer, with the display lead not connected. Thus the left screen 164 is a blank screen providing only touch input, i.e. the detection of one or more fingers touching the blank screen provides coordinate input for each finger in contact or moving in contact on the screen, including detecting the initial coordinate of contact of each finger and the last coordinate of contact at the moment the finger is removed or lifted off the screen. This configuration allows two fingers to be detected (however, this touch screen could be substituted for manufactured screens which can detect at least 20 digits all moving independently on the screen). Thus the technology to control a computer entirely by touch on a blank left screen, where the display is not powered because the display lead is removed from the computer, has been available from at least the 3rd May 2001, when the ‘443 patent was filed.
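On a modern Linux desktop the left screen of Fig 16 would simply appear as a coordinate input device. The following hedged sketch, assuming the python-evdev package and a hypothetical event node path (which varies per machine), shows how such a digitizer reports contact and removal coordinates even with its display lead disconnected:

    # Assumes Linux with python-evdev; /dev/input/event5 is hypothetical.
    from evdev import InputDevice, ecodes

    dev = InputDevice("/dev/input/event5")   # the USB touch digitizer

    x = y = None
    for event in dev.read_loop():
        if event.type == ecodes.EV_ABS:
            if event.code == ecodes.ABS_X:
                x = event.value
            elif event.code == ecodes.ABS_Y:
                y = event.value
        elif event.type == ecodes.EV_KEY and event.code == ecodes.BTN_TOUCH:
            # value 1 = initial contact, value 0 = finger lifted
            print("contact" if event.value else "removal", (x, y))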
However, it had never been done or described, and no device had ever been made which controlled the operations of a mobile device by a blank screen, until P1. (The right screen 163 is provided only so that a skilled person can see the GUI of Windows, i.e. to see what happens on 163 as touches are made by the user on the left screen 164.)
This was because a skilled person without imagination would have considered it a backward step, completely against the prior art, to have a screen with the capacity to have its display turned on and provide a visual display to a user, and yet not use that display to provide visual feedback reminding the user how to unlock the device (and/or enter a password) and then perform a touch operation, e.g. opening the alarm icon on the desktop 8 in Fig 2AB. Indeed the skilled person would say it was impossible for a user of an iTouch to unlock it without the unlock screen of Fig 2AA and then select the alarm icon on Fig 2AB to open it. The skilled person would argue that without visual feedback the user would press other icons on the desktop instead, or might not unlock the device at all. Indeed the skilled person, without P1, would have thought in Dec 2014, and even still today, that it was a crazy idea to fully control a touch display device by just touch as defined in the touch device. It denies the user visual input and feedback to help perform a touch operation, at a time when mobile touch display devices have been designed with higher and higher resolution screens to provide the user with better and better visual feedback, making the user interface more effortless and easier with a bright screen. The skilled person would have asked what possible good it would be to have a blank screen controlling a mobile phone, where the only operational input and output to control the device may be just touch on the screen: a user could be performing all sorts of different operations and never know they were performing them. It would be much safer, especially for a child, to have a device which the child knew was turned off to touch when the screen was turned off and completely safe, than a device deliberately configured so that, although password protected, it could be unlocked and private data accessed by touch on a blank screen.
However, when using a configuration like that shown in Fig 16 over two years ago, the inventor discovered the benefits of configuring a touch device to be continually powered to perform operations, including unlocking, entering a password, and performing an operation of the prior art touch display device, using only touch on a touch display screen with the power to the display component turned off but the touch component still powered. In short, this revealed the superior properties of the configuration of Fig 14 over the Fig 13 programming within iOS, Android or Windows touch software or any equivalent.
Although the application in P1 is for a touch device, the inventor made the original discovery about the superior properties of this interface on a desktop with two screens configured as shown in Fig 16. This allowed the inventor to realise what could be done by using the right upper corner (RUC) of a touch-sensitive screen with the display turned off 164: a user gripping the corner with his hand, a digit making contact, the power to the display turned off as in Fig 1C, but the USB touch lead of the monitor connected to the computer. This arrangement let me discover the following unknown properties of the invention of a touch device: a device which can operate by touch on a touch-sensitive display screen with the display turned off.
It became obvious that if the screen 164 was continually powered according to Fig 1C, then if the right screen 163 was disconnected from the computer and the Flash program of a touch menu could detect the swipe 2, this would be a very safe and instant way to unlock a computer. The specified movement of the swipe 2 on the screen would be very fast to perform (indeed, as already shown by 11 in Fig 5C, the movement along the top edge could be shorter and quicker, e.g. swipe 11 rather than swipe 2 in Fig 1A, and it could be even shorter, especially if the user has to change direction, e.g. 60 in Fig 6B). The length of the leftward swipe from the RUC of the screen 164, or from the RUC of the display screen in Fig 1A over which the thumb nail is placed, can be varied depending on the developer's need; it may be only the minimum movement towards the left along the upper edge that can reliably and consistently be determined as the user intentionally moving left along that edge. It was the realisation that the power requirement of the USB lead was considerably less than that needed to power the display component of the monitor. With a small start area in the RUC of the display screen 164, the power requirements could be made negligible with the touch continually on, yet this would be an instant way to wake a desktop computer from hibernation or sleep mode, or to unlock it. Furthermore, by positioning a Flash program so that an operation of the device could be performed by a vertical movement downward, an operation, or a task of several operations, could be performed by touch perfectly reliably, so that the user could unlock, perform an operation and lock a touch device completely reliably on a display screen. Thus, by using the edge of a blank screen 164, it became obvious that a device could be continually on and yet not be accidentally triggered, e.g. if the swipe 2 was a double reverse movement: a detectable intentional movement towards the left along the upper display edge of the screen 164 (or on the iTouch in Fig 1A), then another detectable intentional movement to the right towards the RUC along the upper display edge, and then that reverse movement repeated, as the unlock movement. This would never happen accidentally, especially if the starting position had to be the RUC and any touch not matching this predetermined movement disabled the unlock. So by just this simple arrangement on a desktop computer 164 the invention of the touch device was born. The power conserving sequence would be: unlocking, e.g. swipe 2 on screen 164, i.e. swipe 166;
then the operation of performing a swipe on the Task Menu as illustrated by Fig 18, by the swipe or slide 184, which is the swipe 167 illustrated in Fig 16. The user can then perform the swipe 168 to lock the device, which is the equivalent of swipe 3 in Fig 1C or Fig 2BB. Thus this simple GUI programming method showed how a Windows interface could be locked, i.e. unable to access any further operations of the Windows interface unless the swipe 166 (the equivalent of swipe 2 in Fig 1A) was first done. This then made accessible one or more operations of the Windows interface, in this case performing a task 184 of the PHQ-9, defining a person who was maximally depressed, by sliding (or swiping) a digit from 181 to 182 on Fig 18. Then, if the user performed a swipe 3 of Fig 1C, or swipe 168, to lock the device, all touch input on the touch screen is prevented from activating any operations in the Windows interface until a swipe 166 is performed.
Thus Fig 16 shows how the Windows interface could be totally unresponsive to touch, unable to perform any operation, unless a swipe 166 is done to unlock the interface on a turned off display. A task is then performed by the vertical swipe 167, and then, by performing the lock swipe 168 on the left screen, the user cannot operate any operations of the device according to claim 1 until a swipe 166 is performed again.
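A minimal sketch of a detector for the double reverse unlock movement follows; the corner size, edge band and minimum leg length are assumed thresholds, not values taken from the drawings.

    # Sketch only: detect the unlock gesture of Fig 16 / Fig 1A - start
    # in the right upper corner (RUC), then move left, right, left,
    # right along the top edge. All thresholds below are assumptions.
    SCREEN_W = 1080    # assumed panel width in pixels
    CORNER = 100       # assumed RUC size
    TOP_EDGE = 120     # assumed top edge band
    MIN_RUN = 80       # assumed minimum pixels per intentional leg

    def direction_runs(xs):
        """Collapse an x-coordinate trace into signed runs of movement."""
        runs = []
        for a, b in zip(xs, xs[1:]):
            dx = b - a
            if dx == 0:
                continue
            if runs and (dx > 0) == (runs[-1] > 0):
                runs[-1] += dx
            else:
                runs.append(dx)
        return [r for r in runs if abs(r) >= MIN_RUN]

    def is_unlock(path):
        """path: (x, y) samples of one continuous contact on screen 164."""
        if not path:
            return False
        x0, y0 = path[0]
        if not (x0 > SCREEN_W - CORNER and y0 < CORNER):
            return False                       # must start in the RUC
        if any(y > TOP_EDGE for _, y in path):
            return False                       # must stay on the top edge
        legs = direction_runs([x for x, _ in path])
        return (len(legs) >= 4 and legs[0] < 0 and legs[1] > 0
                and legs[2] < 0 and legs[3] > 0)

Any contact that fails these checks simply leaves the device locked, which is why the gesture could never be triggered accidentally in a pocket.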
It would be a very simple matter for a software engineer with access to all the programming of a device to turn the display on and off while the touch component remained on, and also to allow touch on the touch-sensitive device to perform an operation only if a predetermined movement is first completed.
The novel aspect of this invention was the inventive step of removing the prior art essential steps 131-136 and operating the touch of the touch device, as illustrated by the left screen 164, by touch on a turned off screen.
Fig 17
This is self explanatory. It shows how, by using a hierarchical grid structure HCG (e.g. see ‘443), all the important subjects within medicine could be selected and drilled down to from a simple menu at the top-most level of the Read code, and within 6-7 levels all codes (representing over 200,000 codes) could be located. Each one of these could be selected and have a task menu, or several task menus, associated with each Read code (the structure does not need to use the Read code but can use any classification, and this may vary from country to country depending on how each wants to organise medical data and the important tasks). Since the Read code had over 200,000 codes even when first conceived, a comprehensive medical database can have a task menu for each important code. The first flow diagram box 171 shows how the first box of this central database could be organised for the NHS Spine in England, or for equivalent central medical databases in other countries. Boxes 172-175 describe the advantage of having this central arrangement, which is easily updatable by ordinary users, e.g. doctors or nurses (172), and the advantages of doing so (173-175).
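The claim that 6-7 levels suffice is simple arithmetic: if each level of the hierarchy offers nine choices (a branching factor assumed here purely for illustration, e.g. one choice per imaginary grid region), the number of reachable codes grows as 9 to the power of the depth.

    # Illustrative only: depth needed to reach 200,000+ codes with an
    # assumed branching factor of 9 choices per level.
    import math

    codes, branching = 200_000, 9
    print(math.ceil(math.log(codes, branching)))   # -> 6 levels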
It would be understood that the central database could be a hierarchical database of every task menu for any particular business, e.g. lawyers or civil servants; indeed every company could have a central database with a collection of task menus to improve the efficiency of the business and its methods for recording accurate and complete data regarding client business needs in a rapid manner, to make that business profitable.
The financial advantage of 175 is very compelling as using a task menu is the most efficient way of performing a task.
Fig 18
Fig 18 shows a graphical menu displayed on a touch-sensitive display screen.
By scrolling through Fig 21-Fig 345 of P2 in Adobe using a mouse wheel, the serial photographs show a movie demonstrating the following. Fig 21-Fig 208 show, by sequential photographs, a conventional medical task: a user on the NHS website, using a mouse as fast as possible, selects the option “Nearly Every Day” for 9 items to give a score of 27/27 (maximally depressed) on a PHQ-9 depression questionnaire, in 22.21 seconds. In contrast, using the identical pointing device (mouse), the serial photographs Fig 209-Fig 217 show the same task performed in 0.17 seconds on the task menu. Fig 218-Fig 236 demonstrate how the same task can be completed by a stylus in 0.3 seconds. Fig 237-Fig 250 show how the same task can be performed on a desktop touchscreen by a finger making contact with the desktop touch-sensitive display screen, and lastly Fig 251-Fig 271 show how an iPad-like Android touch-screen device could load the same Task Menu (as could an iPad or iPhone) and perform the same task in 0.33 seconds. Thus all these movies by serial photographs illustrate how fast this new touch menu is compared to performing the same task by mouse on the conventional NHS website.
Fig 272-Fig 345 illustrate the elements of why this new task menu is so much faster than performing the same task on the NHS website.
First, the four common options to be selected are represented by the four option columns in each item area of the list. E.g. the item area of the first item of the list, “Little interest or pleasure in doing things”, is bounded by the upper line 185 above the quoted words and the line 186. Within the item area there are four option rectangles 180, 181, 182, 183. The task of this menu was to time how a user could select option 183 for all 9 item areas (arrow 184), starting at area 187, moving the finger, stylus or pointer through all 9 options and sliding into area 188. As the above description shows, a mouse, a stylus, a finger on a desktop monitor and a finger on an iPad-like touch pad could each take less than 0.4 of a second, making this task menu 80x faster than the same website using a mouse with clicks. Furthermore, a picture shows the identical iPad-like device with the touch-sensitive display turned off: it would be appreciated that this same task could be performed in under 0.4 seconds on a touch-sensitive display screen with the screen turned off. Thus even by touch on a touch device, without requiring the display screen to be turned on, this task menu can allow a user to perform a task 80x faster than on the conventional NHS website.
Most medical checklists comprise a sequence of questions with one or more options to be answered for each question, and this matches the format of the task menu, which can present one or more options (e.g. 180, 181, 182, 183) for each item. The user starts at the start region 187, with the pointer (mouse pointer, finger, or stylus) moving in a downwards direction towards the end region 188. While in each item area the user can move between the options, selecting one whenever the pointer moves within it. E.g. Fig 279 of P2 shows the finger starting at the start region and moving into the first item area and the option area 180, with only option area 180 showing a line indicating it has been selected. Fig 280 of P2 shows that the line has moved into option area 180; as this option area has a value of 0, the score is 0/27. Fig 281 of P2 shows that the line has moved into option area 181 of Fig 18; as this option area has a value of 1, the score is 1/27 (the visual feedback line represents the path of the finger coordinates detected by the touch-sensitive screen). Fig 282 of P2 shows the line moved into option area 182, which has a value of 2, so the score is 2/27. Fig 283 of P2 shows the line moved into option area 183, which has a value of 3, so the score is 3/27. Fig 289 of P2 shows the line moved back into option area 180, value 0, score 0/27: a subsequent movement into any option area undoes the previous option selection. A further photograph of P2 shows the digit moving into the next item of the list of 9 items; this saves the last option selected in the preceding item area (e.g. 180, with a value of 0). As the finger acting as a pointer moves into option 181 of the second menu item, the score changes from 0 (180) to 1 (181), shown by an ambiguous 0 with a 1 in it /27 in the total. Fig 316 of P2 shows the digit has selected option areas 180, each with a value of 0. Moving up to a preceding item area deselects the options of the item areas below it, so by Fig 290 of P2 all the last 8 item values have been deselected and the user has the chance to reselect the option for the first item area. Fig 333 of P2 shows the user starting all over again, and this time the subsequent photographs show the user selecting option 183 for each menu item until the option area 183 for the last item area is selected at Fig 339. When the user moves into the end region 188, the selected data (e.g. option area 183 for each of the 9 menu items) is used and the final operation of the task is performed: calculating the time it took to complete the task (and it could save the data to a patient’s notes). It would be understood that there is no quicker way for a user to enter or undo the data of a task than by this method.
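The selection and undo behaviour just described can be summarised in a short sketch. It is an illustration of the rules of Fig 18 under stated assumptions (nine items, four options valued 0-3), not the actual implementation used for the timings.

    # Sketch of the Fig 18 rules: entering an option area selects it,
    # entering a sibling option undoes the previous choice, and moving
    # back up to an earlier item clears every selection below it.
    N_ITEMS = 9
    selections = [None] * N_ITEMS      # chosen option value per item

    def on_enter(item, option):
        """Pointer slid into option column `option` (0-3) of row `item`."""
        for later in range(item + 1, N_ITEMS):
            selections[later] = None   # moving up deselects items below
        selections[item] = option      # overwriting undoes a prior choice

    def score():
        """PHQ-9 style running total out of 27."""
        return sum(v for v in selections if v is not None)

    # One straight downward slide through column 183 (value 3):
    for item in range(N_ITEMS):
        on_enter(item, 3)
    assert score() == 27               # maximally depressed, 27/27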
As already shown by Fig 6AA and Fig 6AB, when the user got the pointer (finger, stylus or pointing device (mouse) pointer) to 188 of Fig 18, then instead of the operation being calculating the time, the operation performed could be repopulating the menu items on the screen with the subsequent items, as shown in Fig 6AB. The user could then move upwards and select the option areas for these menu items displayed on the screen, and when the user then gets back to 187 of Fig 18 this could trigger the last remaining menu items, which the user could select by moving downwards until 188. This completes the list of items spanning 3 pages and then records that data, against a specific patient record, to a local computer or to an NHS computer accessible by the web or WiFi. The NHS computer could then send further items for the user to complete, to make sure the recorded data was at the highest level of competency or completeness.
Indeed it is easy to see how such a description of Fig 18 would make it the most efficient way to record medical data regarding a patient. Indeed, this method could be enhanced with claim 1. With claim 1 the task menu is invisible, because the display component of the touch-sensitive display is turned off. The user needs to perform a slide 11a to unlock the device, making the task menu available; the user then moves downwards, like 60 in Fig 6A. This could then take the GPS coordinate location of the patient in hospital and identify a maximally depressed patient by the GPS coordinate location of the bed in the ward; it may then display the task menu, and the user then moves from 187 following the arrow to record the 9 items and complete the task of the PHQ-9. After the doctor has reached 188, the data is recorded to the patient’s notes on the local device and/or to an NHS computer communicatively coupled by WiFi or by phone signals.
Thus it would be appreciated that this whole process from start to finish is the most efficient method available for recording data by touch on a prior art touch-sensitive display device.
It would be appreciated that this method could be applied to any business task which requires a user to ask and/or record data concerning an option selected out of multiple options for each item area of a list of item areas. The task menu described above, using the superior properties of touch on a touch device, will naturally lead to the task menu being completed quicker by a pointer method (e.g. finger, stylus, or mouse) than by any prior art method.
Thus the proof of P2, showing that the pointer method using a finger, stylus and mouse with the task menu (e.g. Fig 209-Fig 345 of P2) can be significantly faster at completing a task than any other prior art method like that shown in Fig 21-Fig 208, means that any organisation that wants a 10-20x increase in data recording speed will convert all medical recording into similar recording methods for all medical tasks. Since the computer is used by nurses and doctors for over 50% of their working day, this could in effect nearly double the effective staff workforce by designing Task Menus for every common medical task. The fact that the NHS computer could then download further menu items to even more precisely clarify the questions needed to solve a medical task (this could be applied to any business task) means that the first-time diagnostic rate of doctors could radically improve, and the time to correct diagnosis and treatment of each condition could radically improve.
Indeed, having all the templates of NHS England medical tasks and other business tasks tailored to the task menu will mean even greater speed of operation, as having used one task menu all the others will have the same style. The data recording of the whole NHS then becomes uniform for every given medical task, so in effect the NHS becomes a research tool of the whole British population. This will lead to exponential improvement in diagnosis and treatment of all conditions, all by making the task menu the common recording method for all conditions. Furthermore, once one staff member has recorded a task menu, it will be available to other staff members, preventing the repetition of questions (sometimes asked 6-7 times of the patient in the same day by different members of the hospital or primary care team). Instead the user will be prompted to ask for ever more detailed and useful information, building on the information already recorded about a patient.
This could also be applied to any business where, instead of a medical task, the same method could apply to ever more detailed questions regarding the client’s needs and how to meet them, which could only lead to improved business service to the clients of the business.
The only problem is how to get widespread new protocols, e.g. questions and options to be selected, for every medical condition or medical management or medical assessment task. The most rapid way of doing this is by doctors designing templates, and for this the Task Menu needs to be rapidly programmable by doctors who are not programmers. P2 discussed several embodiments of how this could be done. The most sensible way of dividing the workload would be to get doctors to design the content of each medical task, and to get programmers to then integrate this data into the NHS Spine. This could be done by a hierarchical menu system like the Read code, listing all important tasks relevant to every doctor in every speciality, organised with the most common tasks being the most accessible. Certain doctors will become leads on certain conditions and will be responsible for continually updating one or more of the task menus (it may even be a requirement for appraisals that every doctor in the NHS must be part of one task menu's development). The doctor will be able to populate a task menu template using CSV (which stands for comma-separated values, though here it would better stand for character-separated values). Using this method, in text format by emails between doctors, new templates for each menu item and the options for each menu item could be rapidly designed; when the doctors are satisfied with the content of a Task Menu (e.g. the PHQ-9), they will send the finished task menu, explaining what information they would like recorded and how this information should be stored and used in the NHS Spine, and the programmers will integrate it, converting this rapid prototyping of the task menu (i.e. just focusing on the medical content) into a task menu integrated into the NHS Spine with all the relevant codes. This would be a very simple task even for new programmers, and later it might be done almost completely automatically once the doctors have produced the content of the items and the options.
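A hypothetical fragment of such a character-separated template, and a parser for it, might look as follows; the '|' delimiter and the item-then-options layout are illustrative assumptions, since the patent does not fix a format.

    # Hypothetical template format: one line per item, fields separated
    # by '|' (hence character- rather than comma-separated values).
    import csv, io

    template = (
        "Little interest or pleasure in doing things|Not at all|"
        "Several days|More than half the days|Nearly every day\n"
        "Feeling down, depressed, or hopeless|Not at all|"
        "Several days|More than half the days|Nearly every day\n"
    )

    def load_task_menu(text):
        """Return a list of (item, [options...]) pairs from a template."""
        reader = csv.reader(io.StringIO(text), delimiter="|")
        return [(row[0], row[1:]) for row in reader if row]

    for item, options in load_task_menu(template):
        print(item, "->", options)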
Likewise, a similar pattern can be used by any programmer in any business. This will lead to the NHS having a leading medical system which not only allows every doctor to perform to the highest known standard of medical care, but also has a form of artificial intelligence, because all relevant information about conditions would be recorded in a consistent manner by the templates. Furthermore, if the templates were all upgraded centrally by the Spine, everyone would constantly be using the latest templates. Moreover, it would be very easy to see which patients benefitted most from which treatment from these templates, and this could lead to exponential improvement in the diagnosis and management of conditions. The NHS would be in a unique position of having technology which could pinpoint the key areas of new medical research which would help people.
This obviously could be used by other countries to improve their medical computer recording and diagnosis and management of patients.
Furthermore, fuzzy logic can be used to determine the intended movement of a user. E.g. the user will mainly be selecting one out of four options, e.g. 180, 181, 182 and 183, and the normal pattern would be to select all the items (e.g. all 9) in just one of those four columns. Suppose the user is using a turned off display of a touch-sensitive display screen to select one of the above options for all 9 items, and is performing the operation fast. If the user intended to select option 183 in Fig 18 for all 9 items but happened to drift diagonally with his pointer into 182, fuzzy logic would recognise that, since the direction was vertically downward for most of the column, the user's intention was to select 183 for all 9 items, rather than precisely 7 being 183 and the last two being 182. Indeed the user can be taught, when wanting to select different options for a menu item, not to do it by a diagonal movement but by definite horizontal movements between options, as shown in Fig 280 to Fig 283 of P2 as an example. Again, this programming, to interpret which column a user was selecting, or which regions or areas on the display edge, would be easy to do. Thus with this type of fuzzy logic programming, which uses the major direction vector as the major intentional movement direction and the initial starting point to determine which column is selected, it is easy to see how distinguishing even 9 different item areas and 5 different column areas would be easy for a user using only the visual cues of a blank screen. Indeed practice may make it possible for some individuals to be precise about much smaller divisions. Indeed, as the method comes to rely increasingly on the tactile feel of the touch device, the process of performing operations on a blank screen could become easier and easier for a person who has practised the skill. Indeed, intelligent users may be able to remember vast amounts of information related to movement on a blank screen, and these individuals will obviously be the leaders of the future.
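The major-direction-vector rule can be sketched as follows; the four equal columns and the 3:1 dominance ratio are assumed parameters for illustration only.

    # Sketch of the fuzzy intent rule: if the stroke is predominantly
    # vertical, the starting column is taken as the intended column, so
    # a late diagonal drift from 183 into 182 still reads as 183.
    COL_W = 1080 / 4                     # assumed four equal option columns

    def intended_column(path):
        """path: (x, y) samples of one stroke, roughly top to bottom."""
        (x0, y0), (x1, y1) = path[0], path[-1]
        dx, dy = x1 - x0, y1 - y0
        if abs(dy) > 3 * abs(dx):        # dominant vector is vertical
            return min(int(x0 // COL_W), 3)   # intent = starting column
        return min(int(x1 // COL_W), 3)  # deliberate horizontal move wins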
However, it is predicted that when these initial bright pioneers start using this interface and appreciate the simplicity and beauty of an interface which relies only on touch input on a touch device, with the touch-sensitive display screen not required to have the display turned on, new simple programming methods will be developed using vertical and horizontal movements on a blank screen.
Conclusion: as Fig 17 shows in box 175, a national organisation may dramatically free worker time from data entry by using a task menu as defined in claim 31 and described in the specification.

Claims (34)

LISTING OF THE CLAIMS:
1. A method for a device, in which the device in a prior art had a state, wherein a touch-sensitive display screen does not receive electrical power, and requires another external input by a user to turn on a touch-sensitive display screen to perform an operation by a touch on the touch-sensitive display screen, the method is characterized by not requiring the state by the steps of (i) the touch-sensitive display screen detects a digit of the user in contact on the touch-sensitive display screen without requiring the display component to be powered, (ii) a processor is communicatively coupled to the touch component and a display component of the touch-sensitive display screen, (iii) the processor performs the operation by the touch component detecting the touch of a predetermined movement of one or more digits on the touch-sensitive display screen, and (iv) performs the operation by the touch without requiring a mechanical external button input or other external input by the user.
2. A method of claim 1, whereby the touch to perform the operation is performable by a user either on a touch-sensitive display screen without visual feedback of a turned off display component of the screen and/or with visual feedback of a turned on display component of the screen.
3. A method of claim 1, whereby a touch of a movement by one or more digits of the user on the screen is detected and captured by the touch component of the screen as the predetermined movement to perform a user-defined operation on the device.
4. A method of claim 1, whereby the touch is a path of a digit on the screen performing the operation of one or more operations of the device at one or more locations touched on the screen, including a) a starting location of contact of the digit on the touch component of the screen as it moves along the path, b) a location of the plurality of locations touched by the digit moving along the path in continuous contact on the touch component of the display screen, and c) a removal location where the digit leaves contact with the touch-sensitive display screen.
5. A method for the device of claim 1, whereby the operation is performed by a sequence of touches on the screen including one or more swipes and/or one or more taps on the touch-sensitive display screen, and the operation is one or more of the following: a) the operation is a number input, b) the operation is a character input, and c) the operation is any operation of the device.
6. A method of claim 1, whereby the operation of a task of one or more operations is performed by the touch and includes the task being performed by a swipe with one or more of the following steps: a) the swipe reduces power consumption of the device by only being connectively coupled to another computer or device during the task, b) turning on the display component of the screen to show a list of operations of the task, c) each operation of the task is represented as an item of a list or an item of a menu, d) each operation represented as an item has one or more options represented as areas within the item and one option is selected if the digit slides within that area on the screen, e) each operation selected is undone by moving within the area of another option within the item, f) a specialized slide operation navigates to and/or selects one or more additional data elements for an item, g) all operations of the task may be performed by the swipe, including if a list extends over several sequential graphical appearances of the touch-sensitive display screen, by the swipe moving downwards and upwards over the screen to access all items of the list, and/or h) the task represented as a list over multiple screen appearances may be completed by a series of swipes and/or taps.
7. A method of claim 1, whereby the operation includes sending information between another communicatively connected computer either wirelessly or wired to download or upload data from the other computer, and/or the download data is provided as a list of one or more listed items, and/or the user selects one option out of multiple options for one or more multiple listed items to record data by a single swipe, and/or multiple swipes and/or taps, and/or the listed items are displayed on one or more multiple screen appearances, and/or save the recorded data to non-transitory memory on the other computer and/or the device, and/or integrate the saved recorded data with existing data on the other computer, and/or the device, and/or the other computer sends further listed items deduced from the saved recorded data for further data to be recorded by the user.
8. A method of claim 7, wherein the data is medical data, and/or the download data is listed medical record data from a patient’s record, and/or listed items to record for one or more presenting complaints of the patient, and/or the other computer could be a primary care computer, or a secondary care computer, or a regional or national computer population database, or the NHS spine, or an organization patient database including a medical insurance patient database, and/or the saved recorded data provides further management steps for the user to perform for the patient.
9. A method of claim 7, wherein the data related to a GPS coordinate location is uploaded and downloaded, including one or more of the following: a) a courier service data, b) an ambulance service data, c) a location in a building, d) the downloaded data includes the list of addresses, e) the downloaded data includes data relating to a hospital or primary care location in a building, f) the downloaded and uploaded data includes data derived from the touch on the screen, g) the user is the courier driver or paramedic or another user performing a single swipe, or tap, or a sequence of touches, on the touch-sensitive screen of the device at the GPS coordinate location, h) displays a signature box for the correct address for the occupant to sign by touch, i) displays another screen appearance responsive to touch, j) when signed automatically saves data to another computer and/or on the device, k) resets the Satellite Navigation of the device for the next address, and l) the device could be sealed repeatedly, including a plastic bag or sterile covering as the seal, to reduce contamination of the device and/or users.
10. A method of claim 1, whereby the touch-sensitive screen detects a stylus as the digit, and/or the stylus is attached to a digit, and/or the detection of the attached stylus on a digit by the touch-sensitive screen switches off the detection of the other digits which do not have an attached stylus, and/or the attached stylus is a ring, and/or the attached stylus is attached to a ring or attachment around the circumference or partly around the circumference of a digit, and/or the attached stylus does not need to touch the screen or obstruct the digit tip from typing on a mechanical keyboard, and/or a digit of the attached stylus is identified as a pointing digit pointing to a location on the screen, and/or a digit of a second attached stylus is identified as a clicking digit which by a touch performs the operation at the location touched by the pointing digit, and/or the stylus tip is responsive to pressure to perform the operation.
11. A method of claim 1, whereby a performance of the operation, including the operation being a task of a sequence of operations, is improved compared to the operation in the prior art in one or more of the following aspects: a. more instant, b. more accessible, c. quicker, d. easier, e. less power consumption, f. more reliable, g. increased capacity, h. less effort, i. simpler, j. safer in an accident, k. more ergonomic, l. simpler for a user and skilled person to design their own touch operation or operations, m. less likely to lose a stylus, n. more aesthetic device surface appearance, o. uses less digit movement or effort to perform the operation or a task of more than one operation than any other input method in any software in the prior art, p. improves user intelligence by performing operations without visual feedback, q. improves user recall of the user by performing operations without visual feedback, r. improves decision making of the user by performing operations without visual feedback, s. is fully backward compatible to perform the operation by any other input including a pointing device, a keyboard, a gyroscope, a light sensor, a proximity sensor, a GPS, t. can improve the performance of any prior art input method.
12. A method of claim 1, whereby a dominant digit is a pointing digit, and a secondary digit is a clicking digit, and the dominant digit performs the operation equivalent to pointing of a pointing device by touching on the screen at a location on a graphical user interface, and/or the clicking digit performs the operation equivalent of the one or more clicks of the pointing device and/or performing a click operation at the location of the dominant digit on the screen, and/or other operations by touching on the screen according to the predetermined movement, and/or the pointing digit only touching the screen can never trigger the operation of a click, and/or the dominant digit only touching the screen as the pointing digit cannot perform the operation apart from pointing.
13. A method of claim 1, whereby power consumption of the touch component performing the operation is decreased compared to the power consumption of the touch-sensitive display screen in the prior art performing the operation, including one of the following: a) the touch does not require the display component to be turned on and thereby decreases the power consumption of the touch-sensitive display screen performing the operation compared to the device in the prior art, b) the touch component is divided into two or more areas, and in a lower powered mode only a smaller area than the whole touch component of the touch-sensitive display screen is powered, c) a solar power cell or a series of solar power cells positioned within the area of the touch component of the screen powers the touch component of the screen, and d) the smaller area and/or solar power cell by a predetermined touch on the screen turns on one or more further areas of the touch component to be powered to detect the touch to perform the operation.
14. A method of claim 1, wherein the device is a watch, and/or the watch face is the touch component of the screen, and/or the touch is performed on the watch face with the display component turned off and/or on, and/or the display component is a transparent LCD display screen under the watch face, and/or the watch is an analog watch, and/or the watch is a Swiss watch.
15. A method of claim 1, whereby an appearance of the device compared to the device in the prior art is different, and/or the device performs the operation without requiring one or more of the following prior art dependencies: an external button press, the display component to be turned on, a graphical appearance displayed on the display component, a graphical element displayed on the display component, and a time dependency to turn off the display component if the touch-sensitive display screen is not touched, and/or one or more external buttons or inputs in the device in the prior art are not required on the surface of the device, including one or more of the following: a) power on or off button, b) a home button, c) a volume up and down button, d) a headphone socket, e) a computer lead socket, and f) a power socket.
16. A method of claim 1, whereby the processor further detects and is communicatively coupled to an input in the prior art, including one of the following: a pointing device, or a keyboard, or a force applied by the digit to the screen, and/or to perform an operation by the input according to the method in the prior art.
17. A method of claim 1, whereby the operation is a GPS coordinate sent by a signal from the device, including to an emergency service, and/or a text message, and/or dialling a predetermined number, by one of the following: a) an internal button or switch to perform the operation, and/or to reset the device, and/or to disconnect the power of the battery from the device, and b) using a touch at one or more locations on the touch-sensitive display screen to perform the operation by a separate circuit to the touch component and/or a circuit of solar power cells within the touch or display component of the screen area.
18. A method of claim 1, whereby the touch performs the operation including any operation of the device in the prior art, including unlocking, or any other operation on the device which becomes available after being unlocked.
19. A method of claim 1, whereby the operation turns the device to silent when vibrating or emitting a sound by the touch on the touch-sensitive display screen.
20. A method of claim 1, whereby the device performs the operation by only the touch on the touch-sensitive display screen at any time while the device is powered, whereas this is impossible for the device in the prior art in the state.
21. A method of claim 1, whereby battery power is conserved better by having an always on touch component to perform the operation by the touch than in the device in the prior art which required a turned on display component to perform the operation.
22. A method of claim 1, whereby the touch to perform the operation to turn on the display component is less likely to be accidentally triggered by the user than pressing an external mechanical button on the device in the prior art.
23. A method of claim 1, whereby the touch performing the operation is a safer and/or more reliable method than in the prior art of keeping information secure within the device.
24. A method of claim 1, whereby the touch component being always on enables the user to perform the operation by the touch on the touch component instantly and by a faster method than the device in the prior art performing the operation from the state.
25. A method of claim 1, whereby the touch performing the operation is an easier method than the device in the prior art in the state performing the operation.
26. A method for claim 1, whereby an inoperative touch component is reset by an internal button or switch, or by a separate additional electrical circuit to the touch component, or by one or more light sensors within the touch-sensitive display screen.
27. A method of claim 1, whereby the user's memory is improved by performing the operation as the user performs the operation by the touch on a screen with no visual feedback of a turned on display component.
28. A method of claim 1, whereby the touch performs the operation of undoing the operation.
29. A method for claim 1, whereby the touch performs the operation in fewer steps than the device in the prior art in the state.
30. A method for claim 1, whereby the digit movements of the touch of the one or more digits on the touch component of the screen are fewer than the digit movements on the device in the prior art in the state to perform the operation, including the operation being a task of one or more operations.
31. A method whereby a user performs a task by the steps of a) selection of one or more virtual options for each item of a list of virtual items on a touch-sensitive display screen, and b) the selection of the one or more options can be undone, and c) when the selection of options for the list of items is completed by a pointer input, a processor connectively coupled to the touch-sensitive display screen performs an operation of the task, and d) the pointer input includes a finger movement, a stylus movement, a pointing device pointer movement, and a mouse pointer movement on the touch-sensitive display screen, and e) the pointer input may be only a finger movement input on a touch-sensitive display screen to perform the task.
32. A method of claim 31, further comprising one of the following: a) the operation of the task is to record medical data to a patient’s notes, b) the operation allows navigation to more items on a further screen of items, c) the operation provides further items and options for each further item to be added to the list of items, d) the operation could be a business operation derived from the selection, e) the operation is faster than any other input method to perform the task, f) a central computer with multiple task menus can reduce users’ time in data entry compared to prior art methods in the NHS, and g) the operation uploads and downloads information to the NHS spine or other central computer.
33. A device incorporating the method of any of the preceding claims, and/or the device is a mobile device.
34. A non-transitory computer readable medium for a device, the computer readable medium storing computer executable instructions that, when executed by a processor, cause the processor to perform the method of any of the preceding claims.
GB1620562.7A 2015-12-02 2016-12-02 A method of touch and a touch device Withdrawn GB2547504A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
PCT/GB2015/053690 WO2016087855A2 (en) 2014-12-02 2015-12-02 Independent touch it
GBGB1604767.2A GB201604767D0 (en) 2016-03-21 2016-03-21 Itdti or text it
GBGB1609970.7A GB201609970D0 (en) 2016-06-07 2016-06-07 A method for a device and a device operated by touch
GBGB1609963.2A GB201609963D0 (en) 2016-06-07 2016-06-07 A method for a device and a device operating by touch

Publications (2)

Publication Number Publication Date
GB201620562D0 GB201620562D0 (en) 2017-01-18
GB2547504A true GB2547504A (en) 2017-08-23

Family

ID=58159720

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1620562.7A Withdrawn GB2547504A (en) 2015-12-02 2016-12-02 A method of touch and a touch device

Country Status (1)

Country Link
GB (1) GB2547504A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109582369A (en) * 2017-09-25 2019-04-05 鹤壁天海电子信息系统有限公司 A kind of equipment starting method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050078093A1 (en) * 2003-10-10 2005-04-14 Peterson Richard A. Wake-on-touch for vibration sensing touch input devices
WO2010043277A1 (en) * 2008-10-17 2010-04-22 Sony Ericsson Mobile Communications Ab Method of unlocking a mobile electronic device
US20100171753A1 (en) * 2009-01-05 2010-07-08 Samsung Electronics Co. Ltd. Apparatus and method for controlling display of an electronic device
US20120212435A1 (en) * 2011-02-18 2012-08-23 Samsung Electronics Co. Ltd. Apparatus and method for operating touch pad in portable device
EP2977884A1 (en) * 2014-07-22 2016-01-27 LG Electronics Inc. Mobile terminal and method for controlling the same


Also Published As

Publication number Publication date
GB201620562D0 (en) 2017-01-18

Similar Documents

Publication Publication Date Title
US20200409548A1 (en) Independent Touch
US11301130B2 (en) Restricted operation of an electronic device
US11594330B2 (en) User interfaces for health applications
JP7451639B2 (en) Context-specific user interface
KR102663883B1 (en) Clock faces for an electronic device
US20210349619A1 (en) System, Method and User Interface for Supporting Scheduled Mode Changes on Electronic Devices
US20200356224A1 (en) Using an illustration to show the passing of time
US20220083183A1 (en) Device management user interface
Yoon et al. Lightful user interaction on smart wearables
US20170115782A1 (en) Combined grip and mobility sensing
US20220392588A1 (en) User interfaces for shared health-related data
CN109388302B (en) Schedule display method and terminal equipment
US10281986B2 (en) Methods, controllers and computer program products for accessibility to computing devices
GB2547504A (en) A method of touch and a touch device
Godinho et al. Improving accessibility of mobile devices with EasyWrite
EP3227770A2 (en) Touch display control method
KR102614341B1 (en) User interfaces for health applications
US20230389861A1 (en) Systems and methods for sleep tracking
US20240146350A1 (en) Smart ring
US20240161888A1 (en) User interfaces for shared health-related data
WO2019180465A2 (en) Safe touch
WO2023235608A1 (en) Systems and methods for sleep tracking
Lee Slide operation method for a touch screen–the concept of connecting audio component

Legal Events

Date Code Title Description
R108 Alteration of time limits (patents rules 1995)

Free format text: EXTENSION ALLOWED

Effective date: 20201208

Free format text: EXTENSION APPLICATION

Effective date: 20201113

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)