WO2022019416A1 - Method and electronic device for enabling virtual input on electronic device - Google Patents

Method and electronic device for enabling virtual input on electronic device Download PDF

Info

Publication number
WO2022019416A1
Authority
WO
WIPO (PCT)
Prior art keywords
surface area
electronic device
user
gesture
keyboard
Prior art date
Application number
PCT/KR2021/000213
Other languages
French (fr)
Inventor
Sandeep Singh SPALL
Rachit Jain
Harsh SINHA
Mohit Kumar
Anubhav AGRAWAL
Gaurav Kumar BHARDWAJ
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2022019416A1 publication Critical patent/WO2022019416A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1662Details related to the integrated keyboard
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G06F3/0426Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected tracking fingers with respect to a virtual keyboard projected or printed on the surface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • the present disclosure relates to a virtual input on an electronic device.
  • various mechanisms are used for typing an input on a keyboard.
  • the user of an electronic device faces many challenges while using virtual and hard keyboards.
  • the challenges include, for example, but are not limited to, slow typing, ambiguity in user intention while typing, small keys, non-customizable keyboards, and external hardware requirements.
  • Many conventional methods and electronic devices have been proposed for typing an input on a keyboard, but these conventional methods and electronic devices may have disadvantages in terms of robustness, reliability, operation dependency, time, cost, complexity, design, hardware component usage, size and so on.
  • the principal object of the embodiments herein is to provide a method and an electronic device for enabling virtual input so as to provide an enhanced user experience for typing, so that a user of the electronic device can type smartly and quickly without looking at the keys, in a cost-effective manner and without using additional hardware.
  • Another object of the embodiments herein is to enable the virtual typing by finger tapping/swiping/handwriting on any free surface.
  • the electronic device will detect finger movements and notify the detected finger movements to a virtual input controller, and the virtual input controller will map finger tips to the keyboard map for hassle-free typing. This results in enhancing a keyboard typing experience as per user requirements.
  • Another object of the embodiments herein is to provide the virtual input on one or more applications executing on the electronic device to control various functions (e.g., scrolling the application, pausing a video in the electronic device, or the like) on the one or more applications by using finger tapping/swiping/handwriting on any free surface.
  • the electronic device will detect finger movements and notify the detected finger movements to the virtual input controller, and the virtual input controller will map finger tips to the keyboard map for hassle-free virtual input on the one or more applications. This results in enhancing the application control as per user requirements without using any external hardware in a cost-effective manner.
  • embodiments herein disclose a method for enabling virtual input by an electronic device.
  • the method includes measuring, by the electronic device, sensor data corresponding to a gesture performed by a user on a surface area. Further, the method includes detecting, by the electronic device, a virtual input event by a finger of the user based on the sensor data. Further, the method includes determining, by the electronic device, a portion at which the gesture is performed by the user on the surface area based on the virtual input event. Further, the method includes detecting, by the electronic device, a key corresponding to the portion at which the gesture is performed. Further, the method includes performing, by the electronic device, an action based on the detected key.
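  • For illustration only, the five operations above can be read as a simple processing pipeline; the function and object names in the following Python sketch are hypothetical and not part of the disclosure:

```python
def enable_virtual_input(device):
    """Hypothetical end-to-end flow mirroring the claimed method steps."""
    sample = device.measure_sensor_data()                # 1. measure sensor data for the gesture
    event = device.detect_virtual_input_event(sample)    # 2. detect a virtual input event by a finger
    if event is None:
        return                                           # no typing/control event in this sample
    portion = device.locate_gesture_portion(event)       # 3. portion of the surface area touched
    key = device.resolve_key(portion)                    # 4. key corresponding to that portion
    device.perform_action(key)                           # 5. e.g., type the character or control an app
```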
  • the surface area is one of a normal surface area and an artificial surface area.
  • the normal surface area is one of a glass surface, a rubber surface, a plastic surface, a wooden surface, and a mattress surface.
  • the artificial surface area is one of a printed keyboard and a handwritten keyboard on the normal surface area.
  • the sensor data includes accelerometer sensor data, prism data, and ultrasonic data.
  • the sensor data is obtained from at least one of a camera, a prism, and Human body communication (HBC).
  • the typing event includes a tap of the finger, a swipe of the finger, and a handwriting of the user.
  • measuring, by the electronic device, the sensor data corresponding to the gesture performed by the user on the surface area includes automatically activating, by the electronic device, a camera of the electronic device, detecting, by the electronic device, whether the surface area corresponds to a normal surface area or an artificial surface area displayed in a field of view of the camera, and performing, by the electronic device, one of: capturing an image of the normal surface area displayed within the field of view of the camera in response to determining that the surface area corresponds to the normal surface, generating a virtual area by mapping a keyboard layout of the electronic device to the image of the normal surface area, detecting the gesture performed by the user in the virtual area, and measuring the sensor data when the gesture is performed in the virtual area, and capturing an image of the artificial surface area displayed within the field of view of the camera in response to determining that the surface area corresponds to the artificial surface, generating a keyboard layout provided in the artificial surface by processing the image of the artificial surface area, and measuring the sensor data when the gesture is performed on the normal surface area.
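  • As a hedged sketch of the surface-type branching described above (the helper names are assumptions, not the disclosed implementation):

```python
def measure_for_gesture(device, camera):
    camera.activate()                                      # camera is activated automatically
    frame = camera.capture()
    if device.is_artificial_surface(frame):                # printed or handwritten keyboard in view
        layout = device.extract_keyboard_layout(frame)     # layout is read from the captured image
    else:                                                  # normal surface (glass, wood, plastic, ...)
        layout = device.map_device_keyboard_layout(frame)  # device keyboard layout mapped onto the image
        device.generate_virtual_area(layout)
    gesture = device.wait_for_gesture(layout)              # tap, swipe, or handwriting on the surface
    return device.read_sensor_data(gesture), layout
```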
  • mapping the keyboard layout of the electronic device to the image of the normal surface area includes detecting, by the electronic device, one of a left hand of the user, a right hand of the user and both left and right hands of the user, and performing, by the electronic device, one of mapping a left hand keyboard layout to the image of the normal surface area in response to detecting the left hand of the user, mapping a right hand keyboard layout to the image of the normal surface area in response to detecting the right hand of the user, and mapping a split hand keyboard layout to the image of the normal surface area in response to detecting both the left hand and the right hand of the user.
  • detecting, by the electronic device, the typing event by the finger of the user based on the sensor data includes applying, by the electronic device, a machine learning model to detect the finger of the user based on the gesture performed by the user on the surface area, applying, by the electronic device, the machine learning model to detect a tip of the finger of the user based on the gesture performed by the user on the surface area, and detecting, by the electronic device, the typing event performed by the tip of the finger of the user based on the sensor data.
  • detecting, by the electronic device, the key corresponding to the portion includes detecting, by the electronic device, whether the surface area corresponds to a normal surface area or an artificial keyboard, and performing, by the electronic device, one of: mapping the portion at which the gesture is performed with the key of a keyboard of the electronic device in response to determining that the surface area corresponds to the normal surface area, and detecting the key corresponding to the portion based on the mapping, and mapping the portion at which the gesture is performed to the key of a key map provided in the artificial surface area in response to determining that the surface area corresponds to the artificial surface area, and detecting the key corresponding to the portion based on the mapping.
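  • For illustration, once the gesture portion and the key map are expressed in the same (rectified) coordinate frame, detecting the key can be a simple bounding-box lookup; the data layout below is an assumption:

```python
def resolve_key(portion_xy, key_map):
    """key_map: list of (key_label, (x0, y0, x1, y1)) boxes in rectified keyboard coordinates."""
    x, y = portion_xy
    for label, (x0, y0, x1, y1) in key_map:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None  # the gesture fell outside every key region

# toy two-key map covering a 200x100 px strip
demo_map = [("A", (0, 0, 100, 100)), ("B", (100, 0, 200, 100))]
assert resolve_key((150, 40), demo_map) == "B"
```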
  • an electronic device for enabling virtual input.
  • the electronic device includes a sensor coupled to a memory and a processor.
  • the sensor is configured to measure sensor data corresponding to a gesture performed by a user on a surface area.
  • a virtual input controller is coupled to the memory and the processor.
  • the virtual input controller is configured to detect a virtual input event by a finger of the user based on the sensor data. Further, the virtual input controller is configured to determine a portion at which the gesture is performed by the user on the surface area based on the virtual input event. Further, the virtual input controller is configured to detect a key corresponding to the portion at which the gesture is performed. Further, the virtual input controller is configured to perform an action based on the detected key.
  • FIG. 1 is an example scenario in which an electronic device enables a virtual input, according to an embodiment as disclosed herein;
  • FIG. 2 shows various hardware components of the electronic device, according to an embodiment as disclosed herein;
  • FIGS. 3A to 3C are a flow chart illustrating a method for enabling the virtual input on the electronic device, according to an embodiment as disclosed herein;
  • FIGS. 4A and 4B are an example scenario in which the electronic device displays corresponding letters on a display by detecting a location of a finger position on a virtual keyboard and mapping it to a keyboard key-map to realize corresponding key presses on the virtual keyboard, according to an embodiment as disclosed herein;
  • FIG. 5 is an example scenario in which raw image transformation data, contour detection and OCR data, and finger tracking data are depicted, according to an embodiment as disclosed herein;
  • FIG. 6 is another example scenario in which the electronic device displays corresponding letters on a display by detecting a location of a finger position on a paper keyboard and mapping to a keyboard key-map to realize the corresponding key press on the paper keyboard, according to an embodiment as disclosed herein;
  • FIG. 7 is another example scenario in which the electronic device displays corresponding letters on the display based on a swipe gesture on the keyboard of the electronic device, according to an embodiment as disclosed herein;
  • FIG. 8 is another example scenario in which the electronic device displays corresponding letters on the display by typing using a printed keyboard on a surface, according to an embodiment as disclosed herein;
  • FIG. 9 is another example scenario in which the electronic device displays corresponding letters on the display by typing using finger tracking, according to an embodiment as disclosed herein;
  • FIG. 10 is another example scenario in which the electronic device displays corresponding letters on the display by swiping on the printed keyboard to type, according to an embodiment as disclosed herein;
  • FIG. 11 is another example scenario in which the electronic device displays corresponding letters on the display by swiping on any surface to type, according to an embodiment as disclosed herein;
  • FIG. 12 is another example scenario in which the electronic device displays corresponding letters on the display by recognizing handwriting using a pen, according to an embodiment as disclosed herein;
  • FIG. 13 is another example scenario in which the electronic device displays corresponding letters on the display by typing using finger tracking of two hands, according to an embodiment as disclosed herein;
  • FIG. 14 is another example scenario in which the electronic device displays corresponding letters on the display by typing on a custom keyboard, according to an embodiment as disclosed herein;
  • FIG. 15 is an example scenario in which the electronic device maps a left hand keyboard layout to an image of a normal surface area, according to an embodiment as disclosed herein;
  • FIG. 16 is an example scenario in which the electronic device maps a right hand keyboard layout to an image of the normal surface area, according to an embodiment as disclosed herein;
  • FIG. 17 is an example scenario in which the electronic device maps a normal keyboard layout to the image of the normal surface area, according to an embodiment as disclosed herein;
  • FIG. 18 is another example scenario in which the electronic device maps the split hand keyboard layout to the image of the normal surface area, according to an embodiment as disclosed herein;
  • FIG. 19 is an example scenario in which the electronic device identifies the finger events on the virtual keyboard using Human Body Communication (HBC), according to an embodiment as disclosed herein;
  • FIG. 20 is another example scenario in which the electronic device displays corresponding letters on the display by HBC, according to an embodiment as disclosed herein;
  • FIG. 21 is an example scenario in which the electronic device identifies the finger events on the virtual keyboard using a prism, according to an embodiment as disclosed herein;
  • FIG. 22 is another example scenario in which the electronic device displays corresponding letters on the display using a prediction technique, according to an embodiment as disclosed herein;
  • FIG. 23 is an example scenario in which the electronic device controls a gallery and a video application in a multi window scenario by detecting a key corresponding to the portion at which the tap is performed on a surface area, according to an embodiment as disclosed herein;
  • FIG. 24 is an example scenario in which the electronic device controls a scrolling operation on a music application by detecting the key corresponding to the portion at which the tap is performed on the surface area, according to an embodiment as disclosed herein;
  • FIG. 25 is an example scenario in which the electronic device controls a touch pad operation on a music application by detecting a key corresponding to the portion at which the tap is performed on the surface area, according to an embodiment as disclosed herein;
  • FIG. 26 is an example scenario in which the electronic device edits links on a webpage by using a virtual input, according to an embodiment as disclosed herein.
  • circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
  • circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
  • Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the invention.
  • the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the invention
  • the embodiments herein achieve a method for enabling virtual input by an electronic device.
  • the method includes measuring, by the electronic device, sensor data corresponding to a gesture performed by a user on a surface area. Further, the method includes detecting, by the electronic device, the virtual input by a finger of the user based on the sensor data. Further, the method includes determining, by the electronic device, a portion at which the gesture is performed by the user on the surface area based on the virtual input. Further, the method includes detecting, by the electronic device, a key corresponding to the portion at which the gesture is performed. Further, the method includes performing, by the electronic device, an action based on the detected key.
  • the virtual input corresponds to a virtual typing event and virtually performing an action on an application (e.g., scrolling the application, pausing a video in the electronic device, or the like).
  • the method can be used to enable the virtual typing so as to provide an enhanced user experience for typing, so that a user of the electronic device can type smartly and quickly without looking at the keys, in a cost-effective manner and without using additional hardware.
  • the method can be used to virtually perform the action on the application (e.g., scrolling the application, pausing a video or adjusting a volume of the application, or the like) in an easy manner without using additional hardware.
  • the proposed method can be used to enable the virtual typing by finger tapping/swiping on any free surface.
  • the electronic device will detect the finger movements and notify the detected finger movements to a virtual input controller, and the virtual input controller will map finger tips to the keyboard map for hassle-free typing. This results in enhancing a keyboard typing experience as per user requirements.
  • the proposed method can be used to enable the virtual typing by finger tapping/swiping on any free surface in a cost-effective manner and without additional hardware.
  • in FIGS. 1 through 26, there are shown preferred embodiments.
  • FIG. 1 is an example scenario in which an electronic device (100) enables a virtual input, according to an embodiment as disclosed herein.
  • the electronic device (100) can be, for example, but not limited to, a cellular phone, a smart phone, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, an Internet of Things (IoT) device, a virtual reality device, a flexible electronic device, a curved electronic device, and a foldable electronic device.
  • a user of the electronic device (100) wants to type on the electronic device (100) using a virtual keyboard.
  • the electronic device (100) automatically activates a camera (102).
  • the electronic device (100) detects whether a surface area (400) corresponds to a normal surface area or an artificial surface area displayed in a field of view of the camera (102).
  • the normal surface area can be, for example, but not limited to a glass surface, a rubber surface, a plastic surface, a metal surface, a wooden surface, and a mattress surface.
  • the artificial surface area can be, for example, but not limited to a printed keyboard and a handwritten keyboard on the normal surface area.
  • the proposed method is applicable to all the surfaces which can detect vibrations.
  • the camera (102) captures an image of the artificial surface area displayed within a field of view of the camera (102). Further, the electronic device (100) maps a keyboard layout provided in the artificial surface by processing the image of the artificial surface area. Further, the electronic device (100) measures sensor data when a gesture is performed on the artificial surface area.
  • the sensor data can be, for example but not limited to, an accelerometer sensor data, a gyroscope sensor data and an ultrasonic data. Further, the sensor data is obtained from at least one of a camera, a prism, and Human body communication (HBC).
  • the camera (102) captures the image of the normal surface area displayed within the field of view of the camera (102). Further, the electronic device (100) detects a left hand of the user, a right hand of the user or both left and right hands of the user. If the left hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then the electronic device (100) maps the left hand keyboard layout to the image of the normal surface area. The operations and functions of the left hand keyboard layout are explained in FIG. 15.
  • the electronic device (100) maps the right hand keyboard layout to the image of the normal surface area.
  • the operations and functions of the right hand keyboard layout are explained in FIG. 16.
  • the electronic device (100) maps the split hand keyboard layout or the normal keyboard layout to the image of the normal surface area.
  • the operations and functions of the split hand keyboard layout are explained in FIG. 18.
  • the split hand keyboard layout or the normal keyboard layout is determined by identifying a distance between one hand and the other hand of the user.
  • if the distance between the one hand and the other hand of the user exceeds a predefined threshold, then the electronic device (100) activates the split hand keyboard layout. If the distance between the one hand and the other hand of the user does not exceed the predefined threshold, then the electronic device (100) activates the normal keyboard layout.
  • the predefined threshold is set by the user of the electronic device (100) or an Original Equipment Manufacturer (OEM).
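  • A minimal sketch of this layout-selection logic, assuming detected hand centers in image coordinates; the threshold value and names are illustrative, not taken from the disclosure:

```python
SPLIT_DISTANCE_THRESHOLD_PX = 250  # predefined threshold (set by the user or the OEM)

def choose_keyboard_layout(left_hand, right_hand):
    """left_hand / right_hand: (x, y) centers of the detected hands, or None if not detected."""
    if left_hand and not right_hand:
        return "left_hand_layout"
    if right_hand and not left_hand:
        return "right_hand_layout"
    if left_hand and right_hand:
        dx, dy = right_hand[0] - left_hand[0], right_hand[1] - left_hand[1]
        distance = (dx * dx + dy * dy) ** 0.5
        return "split_hand_layout" if distance > SPLIT_DISTANCE_THRESHOLD_PX else "normal_layout"
    return None  # no hands detected yet
```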
  • the electronic device (100) generates the virtual area. After generating the virtual area, the electronic device (100) detects the gesture performed by the user in the virtual area. Further, the electronic device (100) measures the sensor data when the gesture is performed in the virtual area.
  • the electronic device (100) applies the machine learning model to detect one or more fingers (300) of the user based on the gesture performed by the user on the surface area (400). Further, the electronic device (100) applies the machine learning model to detect a tip of the finger (300) of the user based on the gesture or the event performed by the user on the surface area (400).
  • the gesture or the event can be, for example, but not limited to a swipe gesture and a hover gesture.
  • the electronic device (100) detects a typing event performed by the tip of the finger (300) of the user based on the sensor data.
  • the typing event can be, for example, but not limited to a tap of the finger (300), a swipe of the finger (300), and a handwriting of the user. Further, the electronic device (100) determines the portion at which the gesture is performed by the user on the surface area (400).
  • After determining the portion at which the gesture is performed by the user on the surface area (400), the electronic device (100) detects whether the surface area corresponds to the normal surface area or the artificial surface area. If the surface area corresponds to the normal surface area, then the electronic device (100) maps the portion at which the gesture is performed with the key (200) of a keyboard of the electronic device (100). Further, the electronic device (100) detects the key (200) corresponding to the portion based on the mapping.
  • the electronic device (100) maps the portion at which the gesture is performed to the key (200) of a key map provided in the artificial surface area. Further, the electronic device (100) detects the key corresponding to the portion based on the mapping. Further, the electronic device (100) performs the action based on the detected key (200).
  • FIG. 2 shows various hardware components of the electronic device (100), according to an embodiment as disclosed herein.
  • the electronic device (100) includes the camera (102), a communicator (104), a memory (106), a processor (108), a display (110), a virtual input controller (112), a sensor (114), a machine learning controller (116) and a prism controller (118).
  • the sensor (114) can be, for example, but not limited to an accelerometer sensor (114a) and an ultrasonic sensor (114b).
  • the processor (108) is coupled with the camera (102), the communicator (104), the memory (106), the display (110), the virtual input controller (112), the sensor (114), the machine learning controller (116) and the prism controller (118).
  • the prism controller (118) is used to obtain the sensor data.
  • the virtual input controller (112) is configured to automatically activate the camera (102). After activating the camera (102), the virtual input controller (112) is configured to detect whether the surface area (400) corresponds to the normal surface area or the artificial surface area displayed in the field of view of the camera (102). If the surface area (400) corresponds to the artificial surface area displayed in the field of view of the camera (102) then, the camera (102) captures the image of the artificial surface area displayed within the field of view of the camera (102). Further, the virtual input controller (112) is configured to map the keyboard layout provided in the artificial surface by processing the image of the artificial surface area. Further, the virtual input controller (112) is configured to measure the sensor data, using the sensor (114), when the gesture is performed on the artificial surface area.
  • the camera (102) captures the image of the normal surface area displayed within the field of view of the camera (102).
  • the virtual input controller (112) is configured to detect the left hand of the user, the right hand of the user or both left and right hands of the user. If the left hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then the virtual input controller (112) is configured to map the left hand keyboard layout to the image of the normal surface area. If the right hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then the virtual input controller (112) is configured to map the right hand keyboard layout to the image of the normal surface area. If the left hand and the right hand of the user are detected on the normal surface area displayed within the field of view of the camera (102), then the virtual input controller (112) is configured to map the split hand keyboard layout to the image of the normal surface area.
  • the virtual input controller (112) is configured to generate the virtual area and detect the gesture performed by the user in the virtual area.
  • the virtual input controller (112) is configured to measure the sensor data when the gesture is performed in the virtual area. Further, the virtual input controller (112) is configured to apply the machine learning model to detect the finger (300) of the user based on the gesture performed by the user on the surface area (400) using the machine learning controller (116). The machine learning model identifies the gesture that is being performed in a touch with the surface in close proximity to the electronic device (100). Further, the machine learning controller (116) is configured to apply the machine learning model to detect a tip of the finger (300) of the user when the tip of the finger is not detected based on the gesture performed by the user on the surface area (400).
  • the virtual input controller (112) is configured to detect the typing event performed by the tip of the finger (300) of the user based on the sensor data. Further, the virtual input controller (112) is configured to determine the portion at which the gesture is performed by the user on the surface area (400).
  • the virtual input controller (112) is configured to detect whether the surface area corresponds to the normal surface area or the artificial surface area. If the surface area corresponds to the normal surface area then, the virtual input controller (112) is configured to map the portion at which the gesture is performed with the key (200) of the keyboard of the electronic device (100). Further, the virtual input controller (112) is configured to detect the key (200) corresponding to the portion based on the mapping.
  • the virtual input controller (112) is configured to map the portion at which the gesture is performed to the key (200) of the key map provided in the artificial surface area. Further, the virtual input controller (112) is configured to detect the key corresponding to the portion based on the mapping. Further, the virtual input controller (112) is configured to perform the action based on the detected key (200).
  • the virtual input controller (112) also includes a hand detector model (not shown), a hand landmark model (not shown), and a gesture recognizer model (not shown).
  • the hand detector model operates on the full image and returns an oriented hand bounding box.
  • the hand landmark model operates on the cropped image region defined by the hand detector and returns high-fidelity 3D hand key points (i.e., fingertip locations and the distances between the fingers).
  • the gesture recognizer model classifies the previously computed key points into a discrete set of gestures.
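  • The detector/landmark/recognizer split above resembles publicly available hand-tracking pipelines. As one hedged illustration (using MediaPipe Hands as a stand-in, which is not named in the disclosure), fingertip pixel locations can be obtained from the landmark model as follows:

```python
import cv2
import mediapipe as mp

INDEX_FINGER_TIP = 8  # landmark index of the index fingertip in MediaPipe Hands

def index_fingertips(bgr_frame):
    """Return (x, y) pixel coordinates of each detected index fingertip."""
    hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2)
    result = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    tips = []
    if result.multi_hand_landmarks:
        h, w = bgr_frame.shape[:2]
        for hand in result.multi_hand_landmarks:
            tip = hand.landmark[INDEX_FINGER_TIP]
            tips.append((int(tip.x * w), int(tip.y * h)))  # normalized -> pixel coordinates
    hands.close()
    return tips
```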
  • the accelerometer sensor (114a) is used to measure acceleration forces.
  • the forces may be static, like the continuous force of gravity, or, as is the case with many electronic devices (100), dynamic, to sense movement or vibrations.
  • the sensor (114) provides the three coordinate axes x, y and z of the vibration.
  • the finger tap detection on the surface is detected using accelerometer vibrations.
  • the machine learning model is used to calibrate and prune for false tap vibrations using the machine learning controller (116).
  • the ML model includes a calibration model, a tap pruning model, and a tap recognizer model.
  • the calibration model operates on all vibrations and calibrates the electronic device (100) for false vibrations based on the environment.
  • the tap pruning model operates on the finger taps defined by the calibration model.
  • the tap pruning model learns the user key vibration mapping. In an example, the user key press intensity is different for each key.
  • the tap pruning model will learn the user key press intensity to identify the desired key press.
  • the tap recognizer model classifies the previously computed taps configuration into a discrete set of keys.
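  • As a greatly simplified, threshold-based sketch of the calibrate/prune/recognize chain above (the disclosed models are learned; the constants here are assumptions):

```python
import numpy as np

def calibrate_noise_floor(idle_samples):
    """idle_samples: Nx3 accelerometer readings captured while the user is not tapping."""
    magnitudes = np.linalg.norm(idle_samples, axis=1)
    return magnitudes.mean() + 3 * magnitudes.std()      # ambient vibration level for this environment

def detect_taps(samples, noise_floor, min_gap=5):
    """Return indices of samples whose vibration magnitude stands out from the noise floor."""
    magnitudes = np.linalg.norm(samples, axis=1)
    taps, last = [], -min_gap
    for i, m in enumerate(magnitudes):
        if m > noise_floor and i - last >= min_gap:      # prune bursts belonging to one physical tap
            taps.append(i)
            last = i
    return taps
```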
  • the electronic device (100) also identifies the finger events on the virtual keyboard using HBC and prism.
  • the HBC related operation is explained in the FIG. 19.
  • the prism related operation is explained in the FIG. 21.
  • the electronic device (100) also identifies the finger events on the virtual keyboard using an ultrasonic event transfer method.
  • the ultrasonic event transfer method transmits ultrasonic waves and receives echoes reflected from the palms and fingers.
  • a pulsed Doppler radar signal processing technique is used to obtain time-sequential range-Doppler features from the ultrasonic waves and to measure objects' precise distances and velocities simultaneously through a single channel.
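  • A compact sketch of generic pulsed-Doppler processing, in which a fast-time FFT yields range bins and a slow-time FFT across pulses yields Doppler (velocity) bins; the array shapes are illustrative assumptions:

```python
import numpy as np

def range_doppler_map(echoes):
    """echoes: 2D array [n_pulses, n_samples_per_pulse] of received ultrasonic echo samples."""
    range_profile = np.fft.fft(echoes, axis=1)                             # fast time -> range
    doppler = np.fft.fftshift(np.fft.fft(range_profile, axis=0), axes=0)   # slow time -> Doppler
    return np.abs(doppler)

# a moving fingertip appears as a peak at a particular (Doppler, range) cell
rd = range_doppler_map(np.random.randn(64, 256))
velocity_bin, range_bin = np.unravel_index(np.argmax(rd), rd.shape)
```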
  • the electronic device (100) also identifies the finger events on the virtual keyboard using a State-transition based hidden Markov model (HMM) method for micro dynamic gesture recognition.
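  • As one hedged illustration of HMM-based gesture classification (the third-party hmmlearn package is used here only as an example and is not named in the disclosure), a per-gesture HMM can score a feature sequence and the most likely gesture is selected:

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_hmms(training_data, n_states=4):
    """training_data: {gesture_name: list of [T_i, n_features] observation sequences}."""
    models = {}
    for name, sequences in training_data.items():
        X = np.vstack(sequences)                   # concatenated observations
        lengths = [len(s) for s in sequences]      # per-sequence lengths for hmmlearn
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify_gesture(models, sequence):
    """Pick the gesture whose HMM assigns the highest log-likelihood to the sequence."""
    return max(models, key=lambda name: models[name].score(sequence))
```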
  • the processor (108) is configured to execute instructions stored in the memory (106) and to perform various processes.
  • the communicator (104) is configured for communicating internally between internal hardware components and with external devices via one or more networks.
  • the memory (106) also stores instructions to be executed by the processor (108).
  • the memory (106) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • the memory (106) may, in some examples, be considered a non-transitory storage medium.
  • the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that the memory (106) is non-movable.
  • a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
  • the processor (108) may include one or a plurality of processors.
  • one or a plurality of processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
  • the one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory.
  • the predefined operating rule or artificial intelligence model is provided through training or learning.
  • learning means that, by applying a learning algorithm to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made.
  • the learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
  • the AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values, and performs a layer operation through calculation of the output of a previous layer and an operation on a plurality of weights.
  • Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
  • the learning algorithm is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction.
  • Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
  • FIG. 2 shows various hardware components of the electronic device (100), but it is to be understood that other embodiments are not limited thereto.
  • the electronic device (100) may include a smaller or larger number of components.
  • the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention.
  • One or more components can be combined together to perform same or substantially similar function to enable the virtual input on the electronic device (100).
  • FIGS. 3A to 3C are a flow chart (S300) illustrating a method for enabling virtual input on the electronic device (100), according to an embodiment as disclosed herein.
  • the operations (S302-S340) are performed by the virtual input controller (112).
  • the method includes automatically activating the camera (102) of the electronic device (100).
  • the method includes detecting whether the surface area corresponds to the normal surface area or the artificial surface area displayed in a field of view of the camera (102). If the surface area (400) corresponds to the artificial surface area displayed in the field of view of the camera (102) then, at S306, the method includes capturing the image of the artificial surface area displayed within the field of view of the camera (102).
  • the method includes mapping the keyboard layout provided in the artificial surface by processing the image of the artificial surface area.
  • the method includes capturing the image of the normal surface area displayed within the field of view of the camera (102).
  • the method includes mapping the keyboard layout provided in the normal surface.
  • the method includes detecting one of a left hand of the user, a right hand of the user and both left and right hands of the user. If the left hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then, at S316, the method includes mapping the left hand keyboard layout to the image of the normal surface area. If the right hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then, at S318, the method includes mapping the right hand keyboard layout to the image of the normal surface area. If the left hand and the right hand of the user are detected on the normal surface area displayed within the field of view of the camera (102), then, at S320, the method includes mapping the split hand keyboard layout to the image of the normal surface area.
  • the method includes generating the virtual area.
  • the method includes detecting the gesture performed by the user in the virtual area.
  • the method includes measuring the sensor data when the gesture is performed in the virtual area.
  • the method includes applying the machine learning model to detect the finger of the user based on the gesture performed by the user on the surface area (400).
  • the method includes applying the machine learning model to detect a tip of the finger (300) of the user based on the gesture performed by the user on the surface area (400).
  • the method includes detecting the typing event performed by the tip of the finger (300) of the user based on the sensor data.
  • the method includes determining the portion at which the gesture is performed by the user on the surface area (400).
  • the method includes mapping the portion at which the gesture is performed with the key (200) of the keyboard of the electronic device (100).
  • the method includes detecting the key (200) corresponding to the portion based on the mapping.
  • the method includes performing the action based on the detected key (200).
  • the action can be a virtual typing, the application control or the like.
  • the user of the electronic device (100) can make a custom layout and calibrate his/her own keyboard in a custom typing mode.
  • the custom typing mode can include custom keys (e.g., predictions, play, pause, etc.) and multilingual keyboards (English keys, Hindi keys, Korean keys, etc.).
  • the method is extendable to any type of typing over any tangible surface; the user simply slides their fingers around the tangible surface.
  • FIGS. 4A and 4B are an example scenario in which the electronic device (100) displays corresponding letters on a display (110) by detecting a location of a finger position on the virtual keyboard and mapping it to the keyboard key-map to realize the corresponding key press on the virtual keyboard, according to an embodiment as disclosed herein.
  • the electronic device (100) automatically identifies the layout (i.e., creates the virtual keyboard). Based on the proposed method, a front camera of the electronic device identifies a keyboard design drawn by the user and understands the layout of the keyboard design and the input keys on the keyboard. At S402, the virtual input controller (112) performs the image perspective transform after identifying the keyboard design drawn by the user. After performing the image perspective transform, at S404, the virtual input controller (112) generates a key-map using an image processing technique and an Optical Character Recognition (OCR) technique.
  • the virtual input controller (112) detects the tips of the finger (300) using the ML model.
  • the virtual input controller (112) detects the finger tap using the accelerometer sensor (114a). Based on the finger tap, the virtual input controller (112) generates the desired key press using the finger tracking data, such that a character (L) is identified and displayed on the electronic device (100).
  • the ML model predicts the key touch based on the current accelerometer X, Y and Z values and median values.
  • the notation "a" of FIG. 5 illustrates raw image transformation data.
  • the notation "b" of FIG. 5 illustrates contour detection and OCR data.
  • the notation "c" of FIG. 5 illustrates finger tracking data. Each letter has a different contour.
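  • A hedged sketch of this key-map generation step (perspective rectification, contour detection, OCR), using OpenCV and pytesseract as stand-ins for the image-processing and OCR techniques mentioned above; thresholds and sizes are assumptions:

```python
import cv2
import numpy as np
import pytesseract

def build_key_map(frame, corners, out_size=(800, 300)):
    """corners: four (x, y) corners of the drawn keyboard in the camera frame, clockwise from top-left."""
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)   # raw image -> rectified keyboard
    rectified = cv2.warpPerspective(frame, M, out_size)

    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    key_map = []
    for c in contours:                                          # each drawn key has its own contour
        x, y, kw, kh = cv2.boundingRect(c)
        if kw * kh < 200:                                       # skip small noise contours
            continue
        label = pytesseract.image_to_string(gray[y:y + kh, x:x + kw],
                                            config="--psm 10").strip()  # single-character OCR
        if label:
            key_map.append((label, (x, y, x + kw, y + kh)))
    return key_map, M
```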
  • FIG. 6 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by detecting the location of the finger position on the paper keyboard (400a) and mapping to the keyboard key-map to realize the corresponding key press on the virtual keyboard, according to an embodiment as disclosed herein.
  • the electronic device (100) automatically identifies the layout (i.e., creates the virtual keyboard).
  • the method can be combined with finger identification on the virtual keyboard using the camera (102) and the accelerometer sensor (114a).
  • the virtual input controller (112) detects the location of the finger position and then maps it to the keyboard key-map to realize the corresponding key press on the keyboard. Based on the mapping, the virtual input controller (112) displays the corresponding letter on the display (110).
  • the electronic device (100) displays corresponding letters on the display (110) based on a swipe gesture on the keys (200) of the keyboard of the electronic device as shown in the FIG. 7. In another example, the electronic device (100) displays corresponding letters on the display (110) by typing using the printed keyboard paper as shown in the FIG. 8.
  • FIG. 9 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by typing using finger tracking, according to an embodiment as disclosed herein.
  • the finger tips will be shown on the electronic device (100), while typing, so that the user of the electronic device (100) can easily track the finger (300) on the keyboard while typing.
  • the electronic device (100) displays corresponding letters on the display (110) by swiping on the custom keyboard to type as shown in the FIG. 10. In another example, the electronic device (100) displays corresponding letters on the display (110) by swiping on any surface to type as shown in the FIG. 11.
  • FIG. 12 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by recognizing handwriting on a plastic keyboard (400c) using a pen, according to an embodiment as disclosed herein.
  • the electronic device (100) displays corresponding letters on the display (110) by recognizing handwriting on the paper keyboard without using the pen. Similarly, the electronic device (100) displays corresponding letters on the display (110) by recognizing handwriting using the user's finger (300) on any surface to type.
  • the electronic device (100) displays corresponding letters on the display (110) by typing using finger tracking of two hands as shown in the FIG. 13. In another example, the electronic device (100) displays corresponding letters on the display (110) by typing on a custom keyboard (400d) as shown in the FIG. 14.
  • FIG. 15 is an example scenario in which the electronic device (100) maps the left hand keyboard layout (1500) to the image of the normal surface area, according to an embodiment as disclosed herein. As shown in the FIG. 15, the electronic device (100) detects the left hand of the user. After detecting the left hand of the user, the electronic device (100) maps the left hand keyboard layout (1500) to the image of the normal surface area.
  • FIG. 16 is an example scenario in which the electronic device (100) maps the right hand keyboard layout (1600) to the image of the normal surface area, according to an embodiment as disclosed herein. As shown in the FIG. 16, the electronic device (100) detects the right hand of the user. After detecting the right hand of the user, the electronic device (100) maps the right hand keyboard layout to the image of the normal surface area.
  • FIG. 17 is an example scenario in which the electronic device (100) maps a full hand keyboard layout (1700) to the image of the normal surface area, according to an embodiment as disclosed herein. As shown in the FIG. 17, the electronic device (100) detects both left and right hands of the user. After detecting both left and right hands of the user, the electronic device (100) maps the full hand keyboard layout (1700) to the image of the normal surface area, according to an embodiment as disclosed herein.
  • FIG. 18 is another example scenario in which the electronic device (100) maps the split hand keyboard layout (1802 and 1804) to the image of the normal surface area, according to an embodiment as disclosed herein.
  • the electronic device (100) detects both left and right hands of the user. After detecting both left and right hands of the user, the electronic device (100) maps the split hand keyboard layout (1802 and 1804) to the image of the normal surface area, according to an embodiment as disclosed herein.
  • FIG. 19 is an example scenario in which the electronic device (100) identifies the finger events on the virtual keyboard using HBC, according to an embodiment as disclosed herein.
  • the HBC uses body tissue as a transmission medium (1900) to transmit information.
  • an ECG signal from the chest is modulated into a micro-ampere electric current and coupled into the body by electrodes.
  • the ECG signal is detected by receiving electrodes on a wrist.
  • the transmitting and receiving electrodes result in galvanic coupling signal transmission, and the signal propagation is based on ionic current.
  • a small current-induced electric potential can be detected at a receiver by a high-gain differential amplifier.
  • FIG. 20 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by the HBC, according to an embodiment as disclosed herein.
  • the user holds the electronic device (100) in one hand and types on the surface (2002) using the second hand.
  • the electronic device (100) will translate finger events on the display (110) and corresponding letters are displayed on the display (110). This technique is also applicable to the ultrasonic method.
  • the user will slide fingers to type on the electronic device (100) while watching the display (110).
  • FIG. 21 is an example scenario in which the electronic device (100) identifies the finger events on the virtual keyboard using the prism (2102), according to an embodiment as disclosed herein.
  • the prism (2102), together with the camera (102), is used to simulate conditions like two cameras to determine the depth map using stereo analysis.
  • the purpose of using the prism (2102) is to capture a different angle of perspective.
  • the prism (2102) identifies the finger tap events on the virtual keyboard more accurately.
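  • As an illustration only, assuming the prism presents the two perspectives side by side in a single camera frame (an assumption about the optical setup, not a detail from the disclosure), a disparity map can be computed with standard stereo block matching:

```python
import cv2

def disparity_from_prism_frame(frame):
    """Split the frame into left/right views and compute a disparity (depth) map."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    half = w // 2
    left, right = gray[:, :half], gray[:, half:2 * half]        # two equal-width perspectives
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return stereo.compute(left, right)                          # larger disparity = closer fingertip
```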
  • FIG. 22 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) using a prediction technique, according to an embodiment as disclosed herein.
  • the prediction technique can be an existing prediction technique.
  • FIG. 23 is an example scenario in which the electronic device controls a gallery and a video application in a multi-window scenario by detecting a key (2302 and 2304) corresponding to the portion at which the tap is performed on the surface area, according to an embodiment as disclosed herein.
  • the electronic device (100) detects both hands. After detecting both hands, the finger positions are detected and calibrated over the functionalities corresponding to each application. Based on the user performing an action using the fingers (300) of either hand, the electronic device controls a gallery and a video application in a multi-window scenario.
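  • For illustration, the calibrated mapping from tapped zones to per-application functions can be a simple dispatch table; the zone names and actions below are hypothetical:

```python
def scroll_gallery(app):
    app.scroll(direction="down")                       # e.g., scroll the gallery window

def toggle_video(app):
    app.pause() if app.is_playing() else app.play()    # e.g., pause/resume the video window

# (window, zone) -> action; built during the calibration step described above
ACTION_MAP = {
    ("gallery", "left_zone"): scroll_gallery,
    ("video", "right_zone"): toggle_video,
}

def dispatch_tap(window, zone, apps):
    action = ACTION_MAP.get((window, zone))
    if action:
        action(apps[window])                           # apps: {"gallery": gallery_app, "video": video_app}
```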
  • the electronic device (100) controls the scrolling operation on a music application by detecting the key (2402) corresponding to the portion at which the tap is performed on the surface area, as shown in FIG. 24.
  • the electronic device (100) controls the touch pad operation on a music application by detecting a key (2502) corresponding to the portion at which the tap is performed on the surface area as shown in the FIG. 25.
  • the user of the electronic device (100) controls the earbuds' volume using a finger slide while listening to music.
  • the electronic device (100) edits links on a webpage by using the virtual input (2602) over the surface.
  • the embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments herein disclose a method for enabling virtual input by an electronic device (100). The method includes measuring, by the electronic device (100), sensor data corresponding to a gesture performed by a user on a surface area (400). Further, the method includes detecting, by the electronic device (100), a virtual input event by a finger (300) of the user based on the sensor data. Further, the method includes determining, by the electronic device (100), a portion at which the gesture is performed by the user on the surface area (400) based on the virtual input event. Further, the method includes detecting, by the electronic device (100), a key (200) corresponding to the portion at which the gesture is performed. Further, the method includes performing, by the electronic device (100), an action based on the detected key (200). The virtual input event corresponds to a typing event or an application control event.

Description

METHOD AND ELECTRONIC DEVICE FOR ENABLING VIRTUAL INPUT ON ELECTRONIC DEVICE
The present disclosure relates to a virtual input on an electronic device.
In general, various mechanisms are used for typing an input on a keyboard. The user of an electronic device faces many challenges while using virtual and hard keyboards. The challenges can be, for example, but not limited to, slow typing, ambiguity about the user's intention while typing, small keys, non-customizable keyboards, and the need for external hardware. Many conventional methods and electronic devices have been proposed for typing an input on a keyboard, but these conventional methods and electronic devices may have disadvantages in terms of robustness, reliability, operational dependency, time, cost, complexity, design, hardware component usage, size, and so on.
Thus, it is desired to address the above-mentioned disadvantages or other shortcomings, or at least to provide a useful alternative.
The principal object of the embodiments herein is to provide a method and an electronic device for enabling virtual input so as to provide an enhanced user experience for typing, so that a user of the electronic device can type smartly and quickly without looking at the keys, in a cost-effective manner and without using additional hardware.
Another object of the embodiments herein is to enable virtual typing by finger tapping/swiping/handwriting on any free surface. The electronic device will detect finger movements and report the detected finger movements to a virtual input controller, and the virtual input controller will map fingertips to the keyboard map for hassle-free typing. This results in enhancing the keyboard typing experience as per user requirements.
Another object of the embodiments herein is to provide the virtual input on one or more applications executing on the electronic device to control various functions (e.g., scrolling the application, pausing a video on the electronic device, or the like) on the one or more applications by using finger tapping/swiping/handwriting on any free surface. The electronic device will detect finger movements and report the detected finger movements to the virtual input controller, and the virtual input controller will map fingertips to the keyboard map for hassle-free virtual input on the one or more applications. This results in enhancing the application control as per user requirements, in a cost-effective manner and without using any external hardware.
Accordingly, embodiments herein disclose a method for enabling virtual input by an electronic device. The method includes measuring, by the electronic device, sensor data corresponding to a gesture performed by a user on a surface area. Further, the method includes detecting, by the electronic device, a virtual input event by a finger of the user based on the sensor data. Further, the method includes determining, by the electronic device, a portion at which the gesture is performed by the user on the surface area based on the virtual input event. Further, the method includes detecting, by the electronic device, a key corresponding to the portion at which the gesture is performed. Further, the method includes performing, by the electronic device, an action based on the detected key.
In an embodiment, the surface area is one of a normal surface area and an artificial surface area.
In an embodiment, the normal surface area is one of a glass surface, a rubber surface, a plastic surface, a wooden surface, and a mattress surface.
In an embodiment, the artificial surface area is one of a printed keyboard and a handwritten keyboard on the normal surface area.
In an embodiment, the sensor data includes an accelerometer sensor data, a prism data and an ultrasonic data.
In an embodiment, the sensor data is obtained from at least one of a camera, a prism, and Human body communication (HBC).
In an embodiment, the typing event includes a tap of the finger, a swipe of the finger, and a handwriting of the user.
In an embodiment, measuring, by the electronic device, the sensor data corresponding to the gesture performed by the user on the surface area includes automatically activating, by the electronic device, a camera of the electronic device, detecting, by the electronic device, whether the surface area corresponds to a normal surface area or an artificial surface area displayed in a field of view of the camera, and performing, by the electronic device, one of: capturing an image of the normal surface area displayed within the field of view of the camera in response to determining that the surface area corresponds to the normal surface, generating a virtual area by mapping a keyboard layout of the electronic device to the image of the normal surface area, detecting the gesture performed by the user in the virtual area, and measuring the sensor data when the gesture is performed in the virtual area, and capturing an image of the artificial surface area displayed within the field of view of the camera in response to determining that the surface area corresponds to the artificial surface, generating a keyboard layout provided in the artificial surface by processing the image of the artificial surface area, and measuring the sensor data when the gesture is performed on the normal surface area.
In an embodiment, mapping the keyboard layout of the electronic device to the image of the normal surface area includes detecting, by the electronic device, one of a left hand of the user, a right hand of the user and both left and right hands of the user, and performing, by the electronic device, one of mapping a left hand keyboard layout to the image of the normal surface area in response to detecting the left hand of the user, mapping a right hand keyboard layout to the image of the normal surface area in response to detecting the right hand of the user, and mapping a split hand keyboard layout to the image of the normal surface area in response to detecting both the left hand and the right hand of the user.
In an embodiment, detecting, by the electronic device, the typing event by the finger of the user based on the sensor data includes applying, by the electronic device, a machine learning model to detect the finger of the user based on the gesture performed by the user on the surface area, applying, by the electronic device, the machine learning model to detect a tip of the finger of the user based on the gesture performed by the user on the surface area, and detecting, by the electronic device, the typing event performed by the tip of the finger of the user based on the sensor data.
In an embodiment, detecting, by the electronic device, the key corresponding to the portion includes detecting, by the electronic device, whether the surface area corresponds to a normal surface area or an artificial keyboard, and performing, by the electronic device, one of: mapping the portion at which the gesture is performed with the key of a keyboard of the electronic device in response to determining that the surface area corresponds to the normal surface area, and detecting the key corresponding to the portion based on the mapping, and mapping the portion at which the gesture is performed to the key of a key map provided in the artificial surface area in response to determining that the surface area corresponds to the artificial surface area, and detecting the key corresponding to the portion based on the mapping.
Accordingly embodiments herein disclose an electronic device for enabling virtual input. The electronic device includes a sensor coupled to a memory and a processor. The sensor is configured to measure sensor data corresponding to a gesture performed by a user on a surface area. A virtual input controller is coupled to the memory and the processor. The virtual input controller is configured to detect a virtual input event by a finger of the user based on the sensor data. Further, the virtual input controller is configured to determine a portion at which the gesture is performed by the user on the surface area based on the virtual input event. Further, the virtual input controller is configured to detect a key corresponding to the portion at which the gesture is performed. Further, the virtual input controller is configured to perform an action based on the detected key.
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
-
This method is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
FIG. 1 is an example scenario in which an electronic device enables a virtual input, according to an embodiment as disclosed herein;
FIG. 2 shows various hardware components of the electronic device, according to an embodiment as disclosed herein;
FIGS. 3A to 3C are a flow chart illustrating a method for enabling the virtual input on the electronic device, according to an embodiment as disclosed herein;
FIGS. 4A and 4B are an example scenario in which the electronic device displays corresponding letters on a display by detecting a location of a finger position on a virtual keyboard and mapping it to a keyboard key-map to realize corresponding key presses on the virtual keyboard, according to an embodiment as disclosed herein;
FIG. 5 is an example scenario in which a raw image transformation data, a contour detection and OCR data and a finger tracking data are depicted, according to an embodiment as disclosed herein;
FIG. 6 is another example scenario in which the electronic device displays corresponding letters on a display by detecting a location of a finger position on a paper keyboard and mapping to a keyboard key-map to realize the corresponding key press on the paper keyboard, according to an embodiment as disclosed herein;
FIG. 7 is another example scenario in which the electronic device displays corresponding letters on the display based on a swipe gesture on the keyboard of the electronic device, according to an embodiment as disclosed herein;
FIG. 8 is another example scenario in which the electronic device displays corresponding letters on the display by typing using a printed keyboard on a surface, according to an embodiment as disclosed herein;
FIG. 9 is another example scenario in which the electronic device displays corresponding letters on the display by typing using finger tracking, according to an embodiment as disclosed herein;
FIG. 10 is another example scenario in which the electronic device displays corresponding letters on the display by swiping on the printed keyboard to type, according to an embodiment as disclosed herein;
FIG. 11 is another example scenario in which the electronic device displays corresponding letters on the display by swiping on any surface to type, according to an embodiment as disclosed herein;
FIG. 12 is another example scenario in which the electronic device displays corresponding letters on the display by recognizing handwriting using a pen, according to an embodiment as disclosed herein;
FIG. 13 is another example scenario in which the electronic device displays corresponding letters on the display by typing using finger tracking of two hands, according to an embodiment as disclosed herein;
FIG. 14 is another example scenario in which the electronic device displays corresponding letters on the display by typing on a custom keyboard, according to an embodiment as disclosed herein;
FIG. 15 is an example scenario in which the electronic device maps a left hand keyboard layout to an image of a normal surface area, according to an embodiment as disclosed herein;
FIG. 16 is an example scenario in which the electronic device maps a right hand keyboard layout to an image of the normal surface area, according to an embodiment as disclosed herein;
FIG. 17 is an example scenario in which the electronic device maps a normal keyboard layout to the image of the normal surface area, according to an embodiment as disclosed herein;
FIG. 18 is another example scenario in which the electronic device maps the split hand keyboard layout to the image of the normal surface area, according to an embodiment as disclosed herein;
FIG. 19 is an example scenario in which the electronic device identifies the finger events on the virtual keyboard using Human Body Communication (HBC), according to an embodiment as disclosed herein;
FIG. 20 is another example scenario in which the electronic device displays corresponding letters on the display by HBC, according to an embodiment as disclosed herein;
FIG. 21 is an example scenario in which the electronic device identifies the finger events on the virtual keyboard using a prism, according to an embodiment as disclosed herein;
FIG. 22 is another example scenario in which the electronic device displays corresponding letters on the display using a prediction technique, according to an embodiment as disclosed herein;
FIG. 23 is an example scenario in which the electronic device controls a gallery and a video application in a multi window scenario by detecting a key corresponding to the portion at which the tap is performed on a surface area, according to an embodiment as disclosed herein;
FIG. 24 is an example scenario in which the electronic device controls a scrolling operation on a music application by detecting the key corresponding to the portion at which the tap is performed on the surface area, according to an embodiment as disclosed herein;
FIG. 25 is an example scenario in which the electronic device controls a touch pad operation on a music application by detecting a key corresponding to the portion at which the tap is performed on the surface area, according to an embodiment as disclosed herein; and
FIG. 26 is an example scenario in which the electronic device edits links on a webpage by using a virtual input, according to an embodiment as disclosed herein.
-
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term "or" as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
As is traditional in the field, embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as units or modules or the like, are physically implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware and software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the invention. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the invention.
The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
Accordingly, embodiments herein achieve a method for enabling virtual input by an electronic device. The method includes measuring, by the electronic device, sensor data corresponding to a gesture performed by a user on a surface area. Further, the method includes detecting, by the electronic device, the virtual input by a finger of the user based on the sensor data. Further, the method includes determining, by the electronic device, a portion at which the gesture is performed by the user on the surface area based on the virtual input. Further, the method includes detecting, by the electronic device, a key corresponding to the portion at which the gesture is performed. Further, the method includes performing, by the electronic device, an action based on the detected key. The virtual input corresponds to a virtual typing event and virtually performing an action on an application (e.g., scrolling the application, pausing a video on the electronic device, or the like).
Unlike conventional methods and systems, the method can be used to enable the virtual typing so as to provide an enhanced user experience for typing, so that a user of the electronic device can type smartly and quickly without looking at the keys, in a cost-effective manner and without using additional hardware. The method can be used to virtually perform the action on the application (e.g., scrolling the application, pausing or adjusting a volume of the application, or the like) in an easy manner without using additional hardware.
The proposed method can be used to enable the virtual typing by finger tapping/swiping on any free surface. The electronic device will detect the finger movements and report the detected finger movements to a virtual input controller, and the virtual input controller will map fingertips to the keyboard map for hassle-free typing. This results in enhancing the keyboard typing experience as per user requirements. The proposed method can be used to enable the virtual typing by finger tapping/swiping on any free surface in a cost-effective manner and without additional hardware.
Referring now to the drawings, and more particularly to FIGS. 1 through 26, there are shown preferred embodiments.
FIG. 1 is an example scenario in which an electronic device (100) enables a virtual input, according to an embodiment as disclosed herein. The electronic device (100) can be, for example, but not limited to, a cellular phone, a smart phone, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, an Internet of Things (IoT) device, a virtual reality device, a flexible electronic device, a curved electronic device, and a foldable electronic device.
Consider a scenario in which a user of the electronic device (100) wants to type on the electronic device (100) using a virtual keyboard. Based on the proposed method, the electronic device (100) automatically activates a camera (102). After activating the camera (102), the electronic device (100) detects whether a surface area (400) corresponds to a normal surface area or an artificial surface area displayed in a field of view of the camera (102). The normal surface area can be, for example, but not limited to, a glass surface, a rubber surface, a plastic surface, a metal surface, a wooden surface, and a mattress surface. The artificial surface area can be, for example, but not limited to, a printed keyboard and a handwritten keyboard on the normal surface area. The proposed method is applicable to any surface on which vibrations can be detected.
If the surface area (400) corresponds to the artificial surface area displayed in the field of view of the camera (102), then the camera (102) captures an image of the artificial surface area displayed within the field of view of the camera (102). Further, the electronic device (100) maps a keyboard layout provided in the artificial surface area by processing the image of the artificial surface area. Further, the electronic device (100) measures sensor data when a gesture is performed on the artificial surface area. The sensor data can be, for example, but not limited to, accelerometer sensor data, gyroscope sensor data, and ultrasonic data. Further, the sensor data is obtained from at least one of a camera, a prism, and Human body communication (HBC).
If the surface area (400) corresponds to the normal surface area displayed in the field of view of the camera (102), then the camera (102) captures the image of the normal surface area displayed within the field of view of the camera (102). Further, the electronic device (100) detects a left hand of the user, a right hand of the user, or both left and right hands of the user. If the left hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then the electronic device (100) maps the left hand keyboard layout to the image of the normal surface area. The operations and functions of the left hand keyboard layout are explained with reference to FIG. 15.
If the right hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then the electronic device (100) maps the right hand keyboard layout to the image of the normal surface area. The operations and functions of the right hand keyboard layout are explained with reference to FIG. 16. If both the left hand and the right hand of the user are detected on the normal surface area displayed within the field of view of the camera (102), then the electronic device (100) maps the split hand keyboard layout or the normal keyboard layout to the image of the normal surface area. The operations and functions of the split hand keyboard layout are explained with reference to FIG. 18. The split hand keyboard layout or the normal keyboard layout is determined by identifying a distance between one hand and the other hand of the user. If the distance between the one hand and the other hand of the user exceeds a predefined threshold, then the electronic device (100) activates the split hand keyboard layout. If the distance does not exceed the predefined threshold, then the electronic device (100) activates the normal keyboard layout. The predefined threshold is set by the user of the electronic device (100) or an Original Equipment Manufacturer (OEM), as sketched below.
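A minimal sketch of this layout decision is given below, assuming the hand detector reports one normalized horizontal position per detected hand; the threshold value and the layout names are illustrative placeholders, not the values used by the electronic device (100).

```python
# A minimal sketch of the keyboard-layout selection described above.
# Assumptions: hand_centers_x holds one normalized x-coordinate (0..1) per
# detected hand; split_threshold stands in for the user/OEM-defined threshold.
def choose_keyboard_layout(hand_centers_x, split_threshold=0.45):
    if len(hand_centers_x) == 1:
        # One detected hand: pick the one-handed layout for that side.
        return "left_hand_layout" if hand_centers_x[0] < 0.5 else "right_hand_layout"
    distance = abs(hand_centers_x[0] - hand_centers_x[1])
    # Both hands detected: far apart -> split layout, close together -> normal layout.
    return "split_hand_layout" if distance > split_threshold else "normal_layout"
```

For example, two hands whose centers are 0.6 apart in normalized image coordinates would select the split hand keyboard layout under these assumed values.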
Further, the electronic device (100) generates the virtual area. After generating the virtual area, the electronic device (100) detects the gesture performed by the user in the virtual area. Further, the electronic device (100) measures the sensor data when the gesture is performed in the virtual area.
Further, the electronic device (100) applies the machine learning model to detect one or more fingers (300) of the user based on the gesture performed by the user on the surface area (400). Further, the electronic device (100) applies the machine learning model to detect a tip of the finger (300) of the user based on the gesture or the event performed by the user on the surface area (400). The gesture or the event can be, for example, but not limited to, a swipe gesture and a hover gesture. Further, the electronic device (100) detects a typing event performed by the tip of the finger (300) of the user based on the sensor data. The typing event can be, for example, but not limited to, a tap of the finger (300), a swipe of the finger (300), and a handwriting of the user. Further, the electronic device (100) determines the portion at which the gesture is performed by the user on the surface area (400).
After determining the portion at which the gesture is performed by the user on the surface area (400), the electronic device (100) detects whether the surface area corresponds to the normal surface area or the artificial surface area. If the surface area corresponds to the normal surface area then, the electronic device (100) maps the portion at which the gesture is performed with the key (200) of a keyboard of the electronic device (100). Further, the electronic device (100) detects the key (200) corresponding to the portion based on the mapping.
If the surface area corresponds to the artificial surface area then, the electronic device (100) maps the portion at which the gesture is performed to the key (200) of a key map provided in the artificial surface area. Further, the electronic device (100) detects the key corresponding to the portion based on the mapping. Further, the electronic device (100) performs the action based on the detected key (200).
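A minimal sketch of this portion-to-key mapping is shown below, assuming the determined portion is reported as normalized (x, y) coordinates within the mapped keyboard region; the 3x10 character grid is an illustrative stand-in for either the device's keyboard layout or a key map read from an artificial surface.

```python
# A minimal sketch of mapping the tapped portion to a key.
# Assumption: the portion is given as normalized (x, y) coordinates inside the
# mapped keyboard region; KEY_ROWS stands in for the actual key map.
KEY_ROWS = ["qwertyuiop", "asdfghjkl;", "zxcvbnm,./"]

def key_for_portion(x_norm, y_norm, rows=KEY_ROWS):
    row = min(int(y_norm * len(rows)), len(rows) - 1)            # which keyboard row
    col = min(int(x_norm * len(rows[row])), len(rows[row]) - 1)  # which key in the row
    return rows[row][col]

# Example: a tap near the top-left corner of the mapped area resolves to 'q'.
assert key_for_portion(0.03, 0.10) == "q"
```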
FIG. 2 shows various hardware components of the electronic device (100), according to an embodiment as disclosed herein. The electronic device (100) includes the camera (102), a communicator (104), a memory (106), a processor (108), a display (110), a virtual input controller (112), a sensor (114), a machine learning controller (116) and a prism controller (118). The sensor (114) can be, for example, but not limited to an accelerometer sensor (114a) and an ultrasonic sensor (114b). The processor (108) is coupled with the camera (102), the communicator (104), the memory (106), the display (110), the virtual input controller (112), the sensor (114), the machine learning controller (116) and the prism controller (118). The prism controller (118) is used to obtain the sensor data.
The virtual input controller (112) is configured to automatically activate the camera (102). After activating the camera (102), the virtual input controller (112) is configured to detect whether the surface area (400) corresponds to the normal surface area or the artificial surface area displayed in the field of view of the camera (102). If the surface area (400) corresponds to the artificial surface area displayed in the field of view of the camera (102) then, the camera (102) captures the image of the artificial surface area displayed within the field of view of the camera (102). Further, the virtual input controller (112) is configured to map the keyboard layout provided in the artificial surface by processing the image of the artificial surface area. Further, the virtual input controller (112) is configured to measure the sensor data, using the sensor (114), when the gesture is performed on the artificial surface area.
If the surface area (400) corresponds to the normal surface area displayed in the field of view of the camera (102), then the camera (102) captures the image of the normal surface area displayed within the field of view of the camera (102). Further, the virtual input controller (112) is configured to detect the left hand of the user, the right hand of the user, or both left and right hands of the user. If the left hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then the virtual input controller (112) is configured to map the left hand keyboard layout to the image of the normal surface area. If the right hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then the virtual input controller (112) is configured to map the right hand keyboard layout to the image of the normal surface area. If both the left hand and the right hand of the user are detected on the normal surface area displayed within the field of view of the camera (102), then the virtual input controller (112) is configured to map the split hand keyboard layout to the image of the normal surface area.
Further, the virtual input controller (112) is configured to generate the virtual area and detect the gesture performed by the user in the virtual area.
Further, the virtual input controller (112) is configured to measure the sensor data when the gesture is performed in the virtual area. Further, the virtual input controller (112) is configured to apply the machine learning model to detect the finger (300) of the user based on the gesture performed by the user on the surface area (400) using the machine learning controller (116). The machine learning model identifies the gesture that is being performed in contact with a surface in close proximity to the electronic device (100). Further, the machine learning controller (116) is configured to apply the machine learning model to detect a tip of the finger (300) of the user when the tip of the finger is not detected based on the gesture performed by the user on the surface area (400). Further, the virtual input controller (112) is configured to detect the typing event performed by the tip of the finger (300) of the user based on the sensor data. Further, the virtual input controller (112) is configured to determine the portion at which the gesture is performed by the user on the surface area (400).
Further, the virtual input controller (112) is configured to detect whether the surface area corresponds to the normal surface area or the artificial surface area. If the surface area corresponds to the normal surface area then, the virtual input controller (112) is configured to map the portion at which the gesture is performed with the key (200) of the keyboard of the electronic device (100). Further, the virtual input controller (112) is configured to detect the key (200) corresponding to the portion based on the mapping.
If the surface area corresponds to the artificial surface area then, the virtual input controller (112) is configured to map the portion at which the gesture is performed to the key (200) of the key map provided in the artificial surface area. Further, the virtual input controller (112) is configured to detect the key corresponding to the portion based on the mapping. Further, the virtual input controller (112) is configured to perform the action based on the detected key (200).
The virtual input controller (112) also includes a hand detector model (not shown), a hand landmark model (not shown), and a gesture recognizer model (not shown). The hand detector model operates on the full image and returns an oriented hand bounding box. The hand landmark model operates on the cropped image region defined by the hand detector and returns high-fidelity 3D hand key points (i.e., fingertip locations and distances between the fingers). The gesture recognizer model classifies the previously computed key points into a discrete set of gestures.
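A minimal sketch of such a detector/landmark chain is shown below, using the open-source MediaPipe Hands pipeline as a stand-in for the hand detector and hand landmark models described above; the choice of library, the frame-by-frame processing style, and the use of the index fingertip landmark are assumptions for illustration only.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def index_fingertips(frame_bgr, hands):
    """Return normalized (x, y, z) index-fingertip positions for each detected hand."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    tips = []
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
            tips.append((tip.x, tip.y, tip.z))
    return tips

# Usage: create the pipeline once and feed it camera frames.
# with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
#     tips = index_fingertips(frame_bgr, hands)
```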
Further, the accelerometer sensor (114a) is used to measure acceleration forces. The forces may be static, like the continuous force of gravity, or, as is the case with many electronic devices (100), dynamic, to sense movement or vibrations. The sensor (114) provides the vibration along the three coordinate axes x, y, and z. A finger tap on the surface is detected using accelerometer vibrations. The machine learning model is used, via the machine learning controller (116), to calibrate and prune false tap vibrations.
Further, the ML model includes a calibration model, a tap pruning model, and a tap recognizer model. The calibration model operates on all vibrations and calibrates the electronic device (100) for false vibrations based on the environment. The tap pruning model operates on the finger taps defined by the calibration model. The tap pruning model learns the user key vibration mapping. In an example, the user's key press intensity is different for each key. The tap pruning model will learn the user's key press intensity to identify the desired key press. The tap recognizer model classifies the previously computed tap configurations into a discrete set of keys.
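A minimal sketch of the calibration / tap-pruning / tap-recognition idea is shown below, assuming a stream of (x, y, z) accelerometer samples; the magnitude threshold and the nearest-centroid recognizer are illustrative placeholders rather than the trained models described above.

```python
import numpy as np

class TapDetector:
    """Toy calibration + pruning + recognition over accelerometer samples."""

    def __init__(self, calibration_xyz, sensitivity=4.0):
        mags = np.linalg.norm(np.asarray(calibration_xyz, dtype=float), axis=1)
        # Calibration: learn the idle vibration level of this surface/environment.
        self.threshold = mags.mean() + sensitivity * mags.std()
        self.key_centroids = {}  # key label -> mean tap vector learned for this user

    def learn_key(self, key, tap_samples_xyz):
        # Tap pruning/learning: remember how this user's taps look for this key.
        self.key_centroids[key] = np.mean(np.asarray(tap_samples_xyz, dtype=float), axis=0)

    def classify(self, sample_xyz):
        sample = np.asarray(sample_xyz, dtype=float)
        if np.linalg.norm(sample) < self.threshold or not self.key_centroids:
            return None  # pruned as a false or idle vibration
        # Tap recognition: nearest learned key-vibration centroid wins.
        return min(self.key_centroids,
                   key=lambda k: np.linalg.norm(sample - self.key_centroids[k]))
```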
The electronic device (100) also identifies the finger events on the virtual keyboard using HBC and a prism. The HBC-related operation is explained in FIG. 19. The prism-related operation is explained in FIG. 21.
Further, the electronic device (100) also identifies the finger events on the virtual keyboard using an ultrasonic event transfer method. The ultrasonic event transfer method transmits ultrasonic waves and receives echoes reflected from the palms and fingers. In the ultrasonic event transfer method, a pulsed Doppler radar signal processing technique is used to obtain time-sequential range-Doppler features from the ultrasonic waves, and to measure objects' precise distances and velocities simultaneously through a single channel.
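The sketch below illustrates the pulsed-Doppler processing idea under assumed parameters (sampling rate, pulse repetition frequency, ultrasonic carrier frequency); it is not the device's actual ultrasonic pipeline.

```python
import numpy as np

def range_doppler_map(echoes, fs=192_000, prf=200, fc=40_000, c=343.0):
    """echoes: (num_pulses, samples_per_pulse) array, one row per transmitted pulse."""
    num_pulses, samples_per_pulse = echoes.shape
    ranges = np.arange(samples_per_pulse) * c / (2 * fs)              # metres (round trip)
    spectrum = np.fft.fftshift(np.fft.fft(echoes, axis=0), axes=0)    # slow-time (Doppler) FFT
    doppler = np.fft.fftshift(np.fft.fftfreq(num_pulses, d=1.0 / prf))
    velocities = doppler * c / (2 * fc)                               # m/s toward the device
    return ranges, velocities, np.abs(spectrum)                       # range-Doppler magnitudes
```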
Further, the electronic device (100) also identifies the finger events on the virtual keyboard using a state-transition-based hidden Markov model (HMM) method for micro dynamic gesture recognition.
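As a hedged illustration of state-transition-based gesture recognition, the sketch below trains one Gaussian HMM per micro-gesture on sequences of motion features and classifies a new sequence by maximum likelihood; the hmmlearn library and the feature layout are assumptions standing in for the device's HMM implementation.

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_models(sequences_by_gesture, n_states=4):
    """sequences_by_gesture: gesture label -> list of (T_i, n_features) arrays."""
    models = {}
    for gesture, sequences in sequences_by_gesture.items():
        X = np.vstack(sequences)                   # stack all observations
        lengths = [len(seq) for seq in sequences]  # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[gesture] = model
    return models

def recognize_gesture(models, sequence):
    # The gesture whose HMM assigns the highest log-likelihood wins.
    return max(models, key=lambda g: models[g].score(np.asarray(sequence)))
```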
The processor (108) is configured to execute instructions stored in the memory (106) and to perform various processes. The communicator (104) is configured for communicating internally between internal hardware components and with external devices via one or more networks.
The memory (106) also stores instructions to be executed by the processor (108). The memory (106) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory (106) may, in some examples, be considered a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted that the memory (106) is non-movable. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
Further, at least one of the plurality of hardware components may be implemented through an artificial intelligence (AI) model. A function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor (108). The processor (108) may include one or a plurality of processors. At this time, one or a plurality of processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
The one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
Here, being provided through learning means that, by applying a learning algorithm to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made. The learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values, and performs a layer operation through calculation of a previous layer and an operation on a plurality of weights. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
The learning algorithm is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
Although FIG. 2 shows various hardware components of the electronic device (100), it is to be understood that other embodiments are not limited thereto. In other embodiments, the electronic device (100) may include fewer or more components. Further, the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention. One or more components can be combined to perform the same or a substantially similar function to enable the virtual input on the electronic device (100).
FIGS. 3A to 3C are a flow chart (S300) illustrating a method for enabling virtual input on the electronic device (100), according to an embodiment as disclosed herein. The operations (S302-S340) are performed by the virtual input controller (112).
At S302, the method includes automatically activating the camera (102) of the electronic device (100). At S304, the method includes detecting whether the surface area corresponds to the normal surface area or the artificial surface area displayed in a field of view of the camera (102). If the surface area (400) corresponds to the artificial surface area displayed in the field of view of the camera (102), then, at S306, the method includes capturing the image of the artificial surface area displayed within the field of view of the camera (102). At S308, the method includes mapping the keyboard layout provided in the artificial surface area by processing the image of the artificial surface area.
At S310, the method includes capturing the image of the normal surface area displayed within the field of view of the camera (102). At S312, the method includes mapping the keyboard layout provided in the normal surface area. At S314, the method includes detecting one of a left hand of the user, a right hand of the user, and both left and right hands of the user. If the left hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then, at S316, the method includes mapping the left hand keyboard layout to the image of the normal surface area. If the right hand of the user is detected on the normal surface area displayed within the field of view of the camera (102), then, at S318, the method includes mapping the right hand keyboard layout to the image of the normal surface area. If both the left hand and the right hand of the user are detected on the normal surface area displayed within the field of view of the camera (102), then, at S320, the method includes mapping the split hand keyboard layout to the image of the normal surface area.
At S322, the method includes generating the virtual area. At S324, the method includes detecting the gesture performed by the user in the virtual area. At S326, the method includes measuring the sensor data when the gesture is performed in the virtual area. At S328, the method includes applying the machine learning model to detect the finger of the user based on the gesture performed by the user on the surface area (400). At S330, the method includes applying the machine learning model to detect a tip of the finger (300) of the user based on the gesture performed by the user on the surface area (400). At S332, the method includes detecting the typing event performed by the tip of the finger (300) of the user based on the sensor data. At S334, the method includes determining the portion at which the gesture is performed by the user on the surface area (400).
At S336, the method includes mapping the portion at which the gesture is performed with the key (200) of the keyboard of the electronic device (100). At S338, the method includes detecting the key (200) corresponding to the portion based on the mapping. At S340, the method includes performing the action based on the detected key (200). The action can be virtual typing, application control, or the like.
In the proposed method, the user of the electronic device (100) can make a custom layout and calibrate his/her own keyboard in a custom typing mode. The custom typing mode can include custom keys (e.g., predictions, play, pause, etc.) and a multilingual keyboard (English keys, Hindi keys, Korean keys, etc.).
The method is extendable to any type of typing over any tangible surface; the user can simply slide his/her fingers around the tangible surface.
The various actions, acts, blocks, steps, or the like in the flow chart (S300) may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
FIGS. 4A and 4B are an example scenario in which the electronic device (100) displays corresponding letters on a display (110) by detecting a location of a finger position on the virtual keyboard and mapping it to the keyboard key-map to realize the corresponding key press on the virtual keyboard, according to an embodiment as disclosed herein.
Consider that the user of the electronic device (100) draws the keys specific to his/her need or a keyboard layout of his/her choice; the electronic device (100) automatically identifies the layout (i.e., creates a virtual keyboard). Based on the proposed method, a front camera of the electronic device identifies the keyboard design drawn by the user and understands the layout of the keyboard design and the input keys on the keyboard. At S402, the virtual input controller (112) performs an image perspective transform after identifying the keyboard design drawn by the user. After performing the image perspective transform, at S404, the virtual input controller (112) generates a key-map using an image processing technique and an Optical Character Recognition (OCR) technique. The OCR technique is used to read custom characters from the printed keyboard and build the key map.
At S406, the virtual input controller (112) detects the tips of the finger (300) using the ML model. At S408, the virtual input controller (112) detects the finger tap using the accelerometer sensor (114a). Based on the finger tap, the virtual input controller (112) generates the desired key press using the finger tracking data, such that a character (L) is identified and displayed on the electronic device (100). The ML model predicts the key touch based on the current accelerometer X, Y, Z values and their median values. The notation "a" of FIG. 5 illustrates raw image transformation data. The notation "b" of FIG. 5 illustrates contour detection and OCR data. The notation "c" of FIG. 5 illustrates finger tracking data. Each letter has a different contour.
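A minimal sketch of steps S402 and S404 is shown below, using OpenCV for the perspective transform and contour detection and Tesseract (via pytesseract) for reading the drawn key labels; the corner points, output size, and area filter are assumptions for illustration.

```python
import cv2
import numpy as np
import pytesseract

def build_key_map(frame_gray, corners, out_size=(800, 300)):
    """corners: four (x, y) image points of the drawn keyboard, clockwise from top-left."""
    dst = np.float32([[0, 0], [out_size[0], 0], [out_size[0], out_size[1]], [0, out_size[1]]])
    matrix = cv2.getPerspectiveTransform(np.float32(corners), dst)
    warped = cv2.warpPerspective(frame_gray, matrix, out_size)       # top-down view (S402)
    _, binary = cv2.threshold(warped, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    key_map = {}
    for contour in contours:                                         # key outlines (S404)
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < 400:
            continue  # ignore specks that are too small to be drawn keys
        label = pytesseract.image_to_string(warped[y:y + h, x:x + w],
                                            config="--psm 10").strip()  # single character
        if label:
            key_map[(x, y, w, h)] = label
    return key_map
```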
FIG. 6 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by detecting the location of the finger position on the paper keyboard (400a) and mapping to the keyboard key-map to realize the corresponding key press on the virtual keyboard, according to an embodiment as disclosed herein.
Consider that the user of the electronic device (100) draws the keys specific to his/her need or the keyboard layout of his/her choice; the electronic device (100) automatically identifies the layout (i.e., creates a virtual keyboard). The method can be combined with the finger identification on the virtual keyboard using the camera (102) and the accelerometer sensor (114a). The virtual input controller (112) detects the location of the finger position and then maps it to the keyboard key-map to realize the corresponding key press on the keyboard. Based on the mapping, the virtual input controller (112) displays the corresponding letter on the display (110).
In another example, the electronic device (100) displays corresponding letters on the display (110) based on a swipe gesture on the keys (200) of the keyboard of the electronic device as shown in FIG. 7. In another example, the electronic device (100) displays corresponding letters on the display (110) by typing using the printed paper keyboard as shown in FIG. 8.
FIG. 9 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by typing using finger tracking, according to an embodiment as disclosed herein. The finger tips will be shown on the electronic device (100), while typing, so that the user of the electronic device (100) can easily track the finger (300) on the keyboard while typing.
In another example, the electronic device (100) displays corresponding letters on the display (110) by swiping on the custom keyboard to type as shown in the FIG. 10. In another example, the electronic device (100) displays corresponding letters on the display (110) by swiping on any surface to type as shown in the FIG. 11.
FIG. 12 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by recognizing handwriting on a plastic keyboard (400c) using a pen, according to an embodiment as disclosed herein.
In another example, the electronic device (100) displays corresponding letters on the display (110) by recognizing handwriting on the paper keyboard without using a pen. Similarly, the electronic device (100) displays corresponding letters on the display (110) by recognizing handwriting using the user's finger (300) on any surface to type.
In another example, the electronic device (100) displays corresponding letters on the display (110) by typing using finger tracking of two hands as shown in the FIG. 13. In another example, the electronic device (100) displays corresponding letters on the display (110) by typing on a custom keyboard (400d) as shown in the FIG. 14.
FIG. 15 is an example scenario in which the electronic device (100) maps the left hand keyboard layout (1500) to the image of the normal surface area, according to an embodiment as disclosed herein. As shown in the FIG. 15, the electronic device (100) detects the left hand of the user. After detecting the left hand of the user, the electronic device (100) maps the left hand keyboard layout (1500) to the image of the normal surface area.
FIG. 16 is an example scenario in which the electronic device (100) maps the right hand keyboard layout (1600) to the image of the normal surface area, according to an embodiment as disclosed herein. As shown in the FIG. 16, the electronic device (100) detects the right hand of the user. After detecting the right hand of the user, the electronic device (100) maps the right hand keyboard layout to the image of the normal surface area.
FIG. 17 is an example scenario in which the electronic device (100) maps a full hand keyboard layout (1700) to the image of the normal surface area, according to an embodiment as disclosed herein. As shown in FIG. 17, the electronic device (100) detects both left and right hands of the user. After detecting both left and right hands of the user, the electronic device (100) maps the full hand keyboard layout (1700) to the image of the normal surface area.
FIG. 18 is another example scenario in which the electronic device (100) maps the split hand keyboard layout (1802 and 1804) to the image of the normal surface area, according to an embodiment as disclosed herein. As shown in FIG. 18, the electronic device (100) detects both left and right hands of the user. After detecting both left and right hands of the user, the electronic device (100) maps the split hand keyboard layout (1802 and 1804) to the image of the normal surface area.
FIG. 19 is an example scenario in which the electronic device (100) identifies the finger events on the virtual keyboard using HBC, according to an embodiment as disclosed herein. The HBC uses body tissue as a transmission medium (1900) to transmit information. As shown in FIG. 19, an ECG signal from the chest is modulated into a micro-ampere electric current and coupled into the body by electrodes. The ECG signal is detected by receiving electrodes on a wrist. The transmitting and receiving electrodes result in galvanic-coupling signal transmission, and the signal propagation is based on ionic current. A small current-induced electric potential can be detected at a receiver by a high-gain differential amplifier.
FIG. 20 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) by the HBC, according to an embodiment as disclosed herein. As shown in FIG. 20, the user holds the electronic device (100) in one hand and types on the surface (2002) using the second hand. By using the HBC (2000), the electronic device (100) translates the finger events onto the display (110), and the corresponding letters are displayed on the display (110). This technique is also applicable to the ultrasonic method.
In another example, the user will slide fingers to type on the electronic device (100) while watching the display (110).
FIG. 21 is an example scenario in which the electronic device (100) identifies the finger events on the virtual keyboard using the prism (2102), according to an embodiment as disclosed herein. The prism (2102), together with the camera (102), is used to simulate conditions similar to having two cameras, in order to determine the depth map using stereo analysis. The purpose of using the prism (2102) is to capture different angles of perspective. The prism (2102) helps identify the finger tap events on the virtual keyboard more accurately.
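The sketch below illustrates the stereo idea under the assumption that the prism-split camera frame can be cut into a "left" and "right" half acting as two viewpoints; the block-matcher settings are illustrative and not the device's actual calibration.

```python
import cv2

def depth_from_prism_frame(frame_gray):
    """frame_gray: 8-bit grayscale frame whose two halves come from the prism split."""
    h, w = frame_gray.shape
    left, right = frame_gray[:, : w // 2], frame_gray[:, w // 2:]
    right = cv2.resize(right, (left.shape[1], left.shape[0]))   # match sizes
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)  # larger disparity = point closer to the camera
    return disparity
```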
FIG. 22 is another example scenario in which the electronic device (100) displays corresponding letters on the display (110) using a prediction technique, according to an embodiment as disclosed herein. The prediction technique can be an existing prediction technique.
FIG. 23 is an example scenario in which the electronic device controls a gallery application and a video application in a multi-window scenario by detecting a key (2302 and 2304) corresponding to the portion at which the tap is performed on the surface area, according to an embodiment as disclosed herein. The electronic device (100) detects both hands. After detecting both hands, the finger positions are detected and calibrated over the functionalities corresponding to each application. Based on the user performing an action using fingers (300) of either hand, the electronic device (100) controls the gallery application and the video application in the multi-window scenario.
In another example, the electronic device (100) controls the scrolling operation on a music application by detecting the key (2402) corresponding to the portion at which the tap is performed on the surface area as shown in FIG. 24.
In another example, the electronic device (100) controls the touch pad operation on a music application by detecting a key (2502) corresponding to the portion at which the tap is performed on the surface area as shown in FIG. 25. In another example, the user of the electronic device (100) controls earbud volume using a finger slide while listening to music. As shown in FIG. 26, the electronic device (100) edits links on a webpage by using the virtual input (2602) over the surface.
The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

Claims (15)

  1. A method for enabling virtual input by an electronic device (100), comprising:
    measuring, by the electronic device (100), sensor data corresponding to a gesture performed by a user on a surface area (400);
    detecting, by the electronic device (100), a virtual input event by at least one finger (300) of the user based on the sensor data;
    determining, by the electronic device (100), at least one portion at which the gesture is performed by the user on the surface area (400) based on the virtual input event;
    detecting, by the electronic device (100), at least one key (200) corresponding to the at least one portion at which the gesture is performed; and
    performing, by the electronic device (100), at least one action based on the at least one detected key (200).
  2. The method as claimed in claim 1, wherein the surface area (400) is one of a normal surface area and an artificial surface area.
  3. The method as claimed in claim 2, wherein the normal surface area is one of a glass surface, a rubber surface, a plastic surface, a wooden surface, and a mattress surface.
  4. The method as claimed in claim 2, wherein the artificial surface area is one of a printed keyboard and a handwritten keyboard on the normal surface area.
  5. The method as claimed in claim 1, wherein the sensor data comprises at least one of an accelerometer sensor data, a prism data, and an ultrasonic data.
  6. The method as claimed in claim 1, wherein the sensor data is obtained from at least one of a camera, a prism, and Human body communication (HBC).
  7. The method as claimed in claim 1, wherein the typing event comprises at least one of a tap of the at least one finger (300), a swipe of the at least one finger (300), and a handwriting of the user.
  8. The method as claimed in claim 1, wherein measuring, by the electronic device (100), the sensor data corresponding to the gesture performed by the user on the surface area comprises:
    automatically activating, by the electronic device (100), a camera (102) of the electronic device (100);
    detecting, by the electronic device (100), whether the surface area (400) corresponds to a normal surface area or an artificial surface area displayed in a field of view of the camera (102); and
    performing, by the electronic device (100), one of:
    capturing at least one image of the normal surface area displayed within the field of view of the camera (102) in response to determining that the surface area corresponds to the normal surface, generating a virtual area by mapping a keyboard layout of the electronic device (100) to the at least one image of the normal surface area, detecting the gesture performed by the user in the virtual area, and measuring the sensor data when the gesture is performed in the virtual area, and
    capturing at least one image of the artificial surface area displayed within the field of view of the camera (102) in response to determining that the surface area corresponds to the artificial surface, generating a keyboard layout provided in the artificial surface by processing the at least one image of the artificial surface area, and measuring the sensor data when the gesture is performed on the normal surface area.
  9. The method as claimed in claim 8, wherein mapping the keyboard layout of the electronic device (100) to the at least one image of the normal surface area comprises:
    detecting, by the electronic device (100), one of a left hand of the user, a right hand of the user and both left and right hands of the user; and
    performing, by the electronic device (100), one of:
    mapping a left hand keyboard layout to the at least one image of the normal surface area in response to detecting the left hand of the user,
    mapping a right hand keyboard layout to the at least one image of the normal surface area in response to detecting the right hand of the user, and
    mapping a split hand keyboard layout to the at least one image of the normal surface area in response to detecting both the left hand and the right hand of the user.
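Claim 9 selects a keyboard layout according to which hands the camera detects. One way to express that dispatch, with illustrative Flag values and layout names that are not taken from the disclosure:

```python
# Layout selection from claim 9 written as a simple dispatch; Hand and the layout
# strings are assumptions used only for illustration.
from enum import Flag, auto

class Hand(Flag):
    NONE = 0
    LEFT = auto()
    RIGHT = auto()

def select_layout(hands: Hand) -> str:
    if hands == Hand.LEFT | Hand.RIGHT:
        return "split-hand layout"    # both hands detected: split keyboard mapped to the image
    if hands == Hand.LEFT:
        return "left-hand layout"
    if hands == Hand.RIGHT:
        return "right-hand layout"
    return "no layout"                # no hand detected: nothing is mapped

print(select_layout(Hand.LEFT | Hand.RIGHT))   # -> "split-hand layout"
```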
  10. The method as claimed in claim 1, wherein detecting, by the electronic device (100), the virtual input event by at least one finger (300) of the user based on the sensor data comprises:
    applying, by the electronic device (100), a machine learning model to detect the at least one finger (300) of the user based on the gesture performed by the user on the surface area;
    applying, by the electronic device (100), the machine learning model to detect a tip of the at least one finger (300) of the user based on the gesture performed by the user on the surface area (400); and
    detecting, by the electronic device (100), the virtual input event performed by the tip of the at least one finger (300) of the user based on the sensor data.
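Claim 10 applies a machine learning model twice, first to find the finger and then its tip, before confirming the input event from the sensor data. The placeholder functions below stand in for those model calls; only the control flow follows the claim, and the landmark format and threshold are assumptions.

```python
# Claim 10 as a control-flow sketch: detect_fingers and detect_fingertip are stubs
# for whatever learned hand model an implementation would actually use.
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def detect_fingers(frame) -> List[dict]:
    """Stub for the learned finger detector (assumed to return one dict per finger)."""
    return [{"landmarks": [(0.40, 0.90), (0.42, 0.70), (0.45, 0.52)]}]

def detect_fingertip(finger: dict) -> Point:
    """Stub for the learned fingertip locator; here simply the last landmark."""
    return finger["landmarks"][-1]

def detect_input_event(frame, impact: float, threshold: float = 1.5) -> Optional[Point]:
    """Report the fingertip position only when the sensor data indicates a tap."""
    if impact < threshold:
        return None
    fingers = detect_fingers(frame)
    if not fingers:
        return None
    return detect_fingertip(fingers[0])

print(detect_input_event(frame=None, impact=2.0))   # -> (0.45, 0.52)
```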
  11. The method as claimed in claim 1, wherein detecting, by the electronic device (100), the at least one key (200) corresponding to the at least one portion comprises:
    detecting, by the electronic device (100), whether the surface area (400) corresponds to a normal surface area or an artificial surface area; and
    performing, by the electronic device (100), one of:
    mapping the at least one portion at which the gesture is performed with at least one key (200) of a keyboard of the electronic device (100) in response to determining that the surface area (400) corresponds to the normal surface area, and detecting the at least one key corresponding to the at least one portion based on the mapping, and
    mapping the at least one portion at which the gesture is performed to at least one key (200) of a key map provided in the artificial surface area in response to determining that the surface area (400) corresponds to the artificial surface area, and detecting the at least one key (200) corresponding to the at least one portion based on the mapping.
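Claim 11 resolves the touched portion to a key by consulting one of two key maps, depending on the surface type: the device keyboard's own map for a normal surface, or the key map recovered from the artificial surface. A simple lookup conveys the idea; both maps below are hypothetical single-key examples.

```python
# Claim 11 as a lookup: only the choice of key map differs between the two branches.
from typing import Dict, Optional, Tuple

Region = Tuple[float, float, float, float]            # x_min, x_max, y_min, y_max

DEVICE_KEY_MAP: Dict[str, Region] = {"A": (0.0, 0.1, 0.0, 0.1)}      # hypothetical device keyboard
ARTIFICIAL_KEY_MAP: Dict[str, Region] = {"Q": (0.0, 0.1, 0.0, 0.1)}  # recovered from the surface image

def key_at(portion: Tuple[float, float], surface: str) -> Optional[str]:
    key_map = DEVICE_KEY_MAP if surface == "normal" else ARTIFICIAL_KEY_MAP
    x, y = portion
    for key, (x0, x1, y0, y1) in key_map.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return key
    return None

print(key_at((0.05, 0.05), "normal"))       # -> "A"
print(key_at((0.05, 0.05), "artificial"))   # -> "Q"
```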
  12. An electronic device (100) for enabling virtual input, comprising:
    a memory (106);
    a processor (108);
    at least one sensor (114), coupled to the memory (106) and the processor (108), configured to measure sensor data corresponding to a gesture performed by a user on a surface area (400); and
    a virtual input controller (112), coupled to the memory (106) and the processor (108), configured to:
    detect a virtual input event by at least one finger (300) of the user based on the sensor data,
    determine at least one portion at which the gesture is performed by the user on the surface area (400) based on the virtual input event,
    detect at least one key (200) corresponding to the at least one portion at which the gesture is performed, and
    perform at least one action based on the at least one detected key (200).
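Claim 12 recites the same pipeline as an apparatus built from a memory, a processor, at least one sensor and a virtual input controller. Read as an object composition, it could be wired roughly as follows; every class name and threshold here is an assumption introduced for illustration, not the disclosed architecture.

```python
# A rough object composition mirroring the components of claim 12.
class Sensor:
    def read(self) -> dict:
        return {"impact": 2.0, "x": 0.15, "y": 0.05}   # stand-in for measured sensor data

class VirtualInputController:
    def __init__(self, sensor: Sensor):
        self.sensor = sensor

    def step(self) -> None:
        data = self.sensor.read()
        if data["impact"] >= 1.5:                      # detect the virtual input event (assumed threshold)
            portion = (data["x"], data["y"])           # portion of the surface area touched
            print(f"gesture at {portion} -> resolve key -> perform action")

class ElectronicDevice:
    def __init__(self):
        self.sensor = Sensor()
        self.controller = VirtualInputController(self.sensor)

ElectronicDevice().controller.step()
```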
  13. The electronic device (100) as claimed in claim 12, wherein the surface area is one of a normal surface area and an artificial surface area.
  14. The electronic device (100) as claimed in claim 13, wherein the normal surface area is one of a glass surface, a rubber surface, a plastic surface, a wooden surface, and a mattress surface.
  15. The electronic device (100) as claimed in claim 13, wherein the artificial surface area is one of a printed keyboard and a handwritten keyboard on the normal surface area.
PCT/KR2021/000213 2020-07-24 2021-01-07 Method and electronic device for enabling virtual input on electronic device WO2022019416A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041031683 2020-07-24
IN202041031683 2020-07-24

Publications (1)

Publication Number Publication Date
WO2022019416A1 true WO2022019416A1 (en) 2022-01-27

Family

ID=79729231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/000213 WO2022019416A1 (en) 2020-07-24 2021-01-07 Method and electronic device for enabling virtual input on electronic device

Country Status (1)

Country Link
WO (1) WO2022019416A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130257732A1 (en) * 2012-03-29 2013-10-03 Robert Duffield Adaptive virtual keyboard
US20150293695A1 (en) * 2012-11-15 2015-10-15 Oliver SCHÖLEBEN Method and Device for Typing on Mobile Computing Devices
KR101873842B1 (en) * 2015-03-11 2018-07-04 한양대학교 산학협력단 Apparatus for providing virtual input using depth sensor and method for using the apparatus
KR101749833B1 (en) * 2015-12-17 2017-06-21 연세대학교 산학협력단 Apparatus and method for providing virtual keyboard
US20180284982A1 (en) * 2017-04-01 2018-10-04 Intel Corporation Keyboard for virtual reality

Similar Documents

Publication Publication Date Title
WO2018217060A1 (en) Method and wearable device for performing actions using body sensor array
CN111382624B (en) Action recognition method, device, equipment and readable storage medium
KR100518824B1 (en) Motion recognition system capable of distinguishment a stroke for writing motion and method thereof
WO2016190634A1 (en) Touch recognition apparatus and control method thereof
WO2012141352A1 (en) Gesture recognition agnostic to device orientation
WO2021031843A1 (en) Object position adjustment method, and electronic apparatus
CN109074154A (en) Hovering touch input compensation in enhancing and/or virtual reality
EP1546993A1 (en) Method and system for three-dimensional handwriting recognition
KR20210069491A (en) Electronic apparatus and Method for controlling the display apparatus thereof
WO2022105692A1 (en) Gesture recognition method and apparatus
CN107272892B (en) Virtual touch system, method and device
US10296096B2 (en) Operation recognition device and operation recognition method
WO2019190076A1 (en) Eye tracking method and terminal for performing same
WO2022211271A1 (en) Electronic device for processing handwriting input on basis of learning, operation method thereof, and storage medium
WO2013133624A1 (en) Interface apparatus using motion recognition, and method for controlling same
WO2016072610A1 (en) Recognition method and recognition device
WO2022019416A1 (en) Method and electronic device for enabling virtual input on electronic device
WO2014133258A1 (en) Pen input apparatus and method for operating the same
WO2015064991A2 (en) Smart device enabling non-contact operation control and non-contact operation control method using same
WO2016085122A1 (en) Gesture recognition correction apparatus based on user pattern, and method therefor
CN111142772A (en) Content display method and wearable device
EP3274796A1 (en) Touch recognition apparatus and control method thereof
JPH06149468A (en) Handwritten character processing system and pen state input device
WO2020184890A1 (en) Method and system for supporting object control by using two-dimensional camera, and non-transitory computer-readable recording medium
Chen et al. ViWatch: harness vibrations for finger interactions with commodity smartwatches

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21845366

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21845366

Country of ref document: EP

Kind code of ref document: A1