CN116954387A - Terminal keyboard input interaction method, device, terminal and medium - Google Patents

Terminal keyboard input interaction method, device, terminal and medium

Info

Publication number
CN116954387A
CN116954387A (application number CN202310956953.4A)
Authority
CN
China
Prior art keywords
keyboard
terminal
target finger
keys
virtual keyboard
Prior art date
Legal status
Pending
Application number
CN202310956953.4A
Other languages
Chinese (zh)
Inventor
单体江
刘静
纪娜娜
Current Assignee
Weifang Goertek Electronics Co Ltd
Original Assignee
Weifang Goertek Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Weifang Goertek Electronics Co Ltd filed Critical Weifang Goertek Electronics Co Ltd
Priority to CN202310956953.4A priority Critical patent/CN116954387A/en
Publication of CN116954387A publication Critical patent/CN116954387A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233: Character input methods

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a terminal keyboard input interaction method, device, terminal and medium, relating to the technical field of virtual keyboards. The method comprises the following steps: when a keyboard input interaction instruction is received, an initial image captured by a camera is acquired; when it is determined that a target finger for input interaction is at a designated position in the initial image, a virtual keyboard is laid out with the spatial position of the target finger as the position of a designated letter key; the designated letter key of the physical keyboard on the display screen is initially marked; and keys in the physical keyboard are trigger-marked according to each frame of image captured by the camera, so as to mark the triggered keys. By arranging in space a virtual keyboard whose keys correspond one-to-one to the keys of the physical keyboard in the terminal display screen, triggering of the virtual keyboard can be mapped to triggering of the physical keyboard on the terminal, so that keyboard input interaction is not limited by the size of the physical keyboard and its accuracy is improved.

Description

Terminal keyboard input interaction method, device, terminal and medium
Technical Field
The present application relates to the field of virtual keyboards, and in particular, to a method, an apparatus, a terminal, and a medium for terminal keyboard input interaction.
Background
With the continuous development of mobile terminal technology, mobile terminal devices such as tablets, smart watches, fitness bands and smart glasses have become widely popular and bring great convenience to people's lives. At present, how to implement keyboard input interaction on devices with small display screens, such as smart watches, is an important research direction.
Currently, mobile terminal devices with small display screens usually lay out a physical keyboard on the display interface to realize keyboard input interaction. Because the display screens of these devices are small, this input mode is very prone to accidental touches, which gives users a poor experience.
In addition, input interaction can be performed by means of voice recognition. However, voice recognition is easily limited by environmental factors: when the environment is noisy, recognition errors occur easily, which limits the application range of the mobile terminal and degrades the user experience.
Therefore, how to improve the accuracy of keyboard input interaction and the user experience is a problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a method, a device, a terminal and a medium for terminal keyboard input interaction, which are used for improving the accuracy of keyboard input interaction and further improving the user experience.
In order to solve the technical problems, the application provides a method for terminal keyboard input interaction, which comprises the following steps:
when a keyboard input interaction instruction is received, acquiring an initial image captured by a camera;
when it is determined that a target finger for input interaction is at a designated position in the initial image, laying out a virtual keyboard with the spatial position of the target finger as the position of a designated letter key;
initially marking the designated letter key in the physical keyboard displayed on the display screen, so as to indicate that the virtual keyboard has been laid out; wherein the keys of the virtual keyboard correspond one-to-one to the keys of the physical keyboard on the display screen, and the size of the virtual keyboard is larger than that of the physical keyboard;
and trigger-marking keys in the physical keyboard according to each frame of image captured by the camera, so as to mark the triggered keys.
Preferably, laying out the virtual keyboard with the spatial position of the target finger as the position of the designated letter key comprises:
analyzing the initial image to determine the linear distance between the camera and the target finger;
determining the current attitude of the terminal according to data collected by an attitude sensor of the terminal;
establishing a spatial coordinate system to determine the spatial coordinates of the designated letter key;
determining attribute information of the virtual keyboard from the linear distance, the current attitude and the spatial coordinate system; the attribute information comprises the setting angle and size of the virtual keyboard and the spatial coordinates of each of its keys;
and laying out the virtual keyboard according to the spatial coordinates of the designated letter key and the spatial coordinates of each key of the virtual keyboard.
Preferably, trigger-marking keys in the physical keyboard according to each frame of image captured by the camera comprises:
analyzing each frame of image to obtain coordinate information of the current position of the target finger in the laid-out virtual keyboard and coordinate information of the next position it switches to;
and when it is determined from the coordinate information of the current position and the next position that key switching has occurred, trigger-marking the key corresponding to the next position.
Preferably, determining from the coordinate information of the current position and the next position that key switching has occurred comprises:
determining the distance between the current position and the next position according to their respective coordinate information;
when the distance is within a preset range, determining that key switching has occurred;
and correspondingly, when the distance is not within the preset range, determining that jitter has occurred and ignoring the corresponding frame image.
Preferably, determining that the target finger for input interaction is at the designated position in the initial image comprises:
analyzing the initial image to obtain an analysis result;
determining, according to the analysis result, whether the proportion of the initial image occupied by the target finger is within a preset range;
if so, determining whether the target finger is in a preset area of the initial image; and if the target finger is in the preset area, determining that the target finger is at the designated position in the initial image.
Preferably, when the proportion of the initial image occupied by the target finger is not within the preset range, the method further comprises:
sending a prompt signal for adjusting the spatial position of the target finger.
Preferably, the virtual keyboard is spatially parallel to the display screen.
In order to solve the technical problem, the application also provides a device for terminal keyboard input interaction, which comprises:
an acquisition module, used for acquiring an initial image captured by the camera when a keyboard input interaction instruction is received;
a layout module, used for laying out a virtual keyboard with the spatial position of the target finger as the position of a designated letter key when it is determined that a target finger for input interaction is at a designated position in the initial image;
a first marking module, used for initially marking the designated letter key in the physical keyboard displayed on the display screen, so as to indicate that the virtual keyboard has been laid out; wherein the keys of the virtual keyboard correspond one-to-one to the keys of the physical keyboard on the display screen, and the size of the virtual keyboard is larger than that of the physical keyboard;
and a second marking module, used for trigger-marking keys in the physical keyboard according to each frame of image captured by the camera, so as to mark the triggered keys.
In order to solve the above technical problems, the application also provides a terminal, comprising a memory for storing a computer program;
and a processor for implementing the steps of the above terminal keyboard input interaction method when executing the computer program.
In order to solve the above technical problems, the application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above terminal keyboard input interaction method.
The application provides a terminal keyboard input interaction method comprising the following steps: when a keyboard input interaction instruction is received, an initial image captured by a camera is acquired; when it is determined that a target finger for input interaction is at a designated position in the initial image, a virtual keyboard is laid out with the spatial position of the target finger as the position of a designated letter key; correspondingly, the designated letter key of the physical keyboard displayed on the display screen is initially marked so as to indicate that the virtual keyboard has been laid out successfully, wherein the keys of the virtual keyboard correspond one-to-one to the keys of the physical keyboard on the display screen, and the size of the virtual keyboard is larger than that of the physical keyboard. Then, keys in the physical keyboard are trigger-marked according to each frame of image captured by the camera, so as to mark the triggered keys. Thus, in the technical solution provided by the application, a virtual keyboard whose keys correspond one-to-one to the keys of the physical keyboard in the terminal display screen is arranged in space, and the virtual keyboard is larger than the physical keyboard, so that triggering of the virtual keyboard can be mapped to triggering of the physical keyboard on the terminal. This avoids accidental triggering of the display screen when a small terminal performs keyboard input interaction, frees keyboard input interaction from the size limit of the physical keyboard, improves the accuracy of keyboard input interaction, and thereby improves the user experience.
In addition, the application also provides a device, a terminal and a medium for terminal keyboard input interaction, which correspond to the above terminal keyboard input interaction method and have the same effects.
Drawings
For a clearer description of embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described, it being apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
FIG. 1 is a flowchart of a method for terminal keyboard input interaction in an embodiment of the application;
FIG. 2 is a block diagram of a system for terminal keyboard input interaction according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a terminal keyboard input interaction according to an embodiment of the present application;
FIG. 4 is a block diagram of a device for terminal keyboard input interaction provided in an embodiment of the present application;
fig. 5 is a block diagram of a terminal according to another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present application.
The application provides a method, a device, a terminal and a medium for terminal keyboard input interaction, which avoid false touch of a small-screen terminal during keyboard input interaction, improve the accuracy of keyboard input interaction and further improve the experience of a user.
In order to better understand the aspects of the present application, the present application will be described in further detail with reference to the accompanying drawings and detailed description.
With the continuous development of the times, small-screen terminal devices such as smart watches are popular with users for their small size and portability. However, when keyboard input interaction is performed on a small-screen terminal, the small screen makes accidental triggering very likely, giving users an extremely poor experience. If voice is used for interaction instead, recognition is easily limited by the use environment: in noisy conditions the voice content cannot be recognized, the input interaction fails, and the user experience suffers.
In order to solve these technical problems, improve the accuracy of keyboard input interaction on small-screen terminal devices and improve the user experience, an embodiment of the application provides a terminal keyboard input interaction method that lays out a virtual keyboard with the spatial position of the target finger used for input interaction as the position of a designated letter key. The virtual keyboard is larger than the physical keyboard in the terminal display screen, and keyboard input interaction is realized by mapping triggers of the virtual keyboard onto the physical keyboard, thereby avoiding the accidental triggering caused by directly operating the physical keyboard, improving the accuracy of keyboard input interaction and thereby improving the user experience.
Fig. 1 is a flowchart of a method for terminal keyboard input interaction in an embodiment of the present application, as shown in fig. 1, the method includes:
s10: when an interaction instruction input by a keyboard is received, acquiring an initial image acquired by a camera;
in a specific embodiment, if a user wants to perform keyboard input interaction, the user can trigger the keyboard to output an interaction instruction by pressing physical buttons (buttons) on a plurality of terminal devices, for example, in a smart watch, the volume of the buttons and the keyboard can be simultaneously turned off to trigger the keyboard to input the interaction instruction. In addition, the Touch on the terminal screen can be triggered to trigger the keyboard to input the interaction instruction, for example, when the physical keyboard is required to be flicked out of the terminal display screen, any Touch in the screen can be triggered again to trigger the keyboard to input the interaction instruction, the Touch triggering method is not particularly limited, and the Touch triggering method is only limited in the protection scope of the application as long as the Touch triggering method is based on the Touch triggering and then the keyboard inputting interaction instruction is triggered. Of course, the keyboard input interaction instruction may be triggered by a voice mode, for example, when the terminal device receives the target voice of "start keyboard", the terminal device determines to trigger the keyboard input interaction instruction, and it should be noted that the target voice is not specifically limited by the present application.
Fig. 2 is a block diagram of a system for terminal keyboard input interaction according to an embodiment of the present application. As shown in fig. 2, in an implementation, after receiving a keyboard input interaction instruction sent by the Button 3 or the Touch 5, the central processing unit 2 (CPU) in the terminal 1 acquires an initial image captured by the camera 4. For example, when the user triggers keyboard input on the terminal 1 through the Button 3 or the Touch 5, the Button 3 or the Touch 5 sends a keyboard input interaction instruction to the CPU 2; after receiving it, the CPU 2 sends a corresponding instruction to the camera 4, so that the camera 4 captures an initial image and returns it to the CPU 2.
S11: when it is determined that a target finger for input interaction is at a designated position in the initial image, laying out a virtual keyboard with the spatial position of the target finger as the position of a designated letter key;
Whether a target finger for input interaction exists in the initial image captured by the camera, and whether the target finger is at the designated position in the initial image, are analyzed; when the target finger is at the designated position in the initial image, the virtual keyboard is laid out with the spatial position of the target finger as the position of the designated letter key.
Fig. 3 is a schematic diagram of terminal keyboard input interaction provided in an embodiment of the present application. As shown in fig. 3, when a user performs keyboard input interaction through the virtual keyboard, the camera needs to capture an initial image containing the target finger, where the target finger may be an index finger, a middle finger, or another finger.
After the camera captures the initial image, depth processing is performed on it to determine whether it includes the target finger and, at the same time, whether the target finger is at the designated position. It can be understood that when the target finger is too close to the terminal display screen, it occupies a large area in the captured initial image; if the virtual keyboard were laid out at that spatial position, part of it would fall outside the camera's capture range, so triggers of the virtual keyboard could not be mapped to triggers of the physical keyboard in the display screen. Conversely, if the target finger is too far from the terminal display screen, the finger's movement produces only small changes in the image, so the accuracy of determining virtual-keyboard triggers from the frame images captured by the camera is low, which affects the accuracy of the input interaction.
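For illustration only, the occupancy check described above can be sketched as follows; the threshold values and function name are assumptions made for the sketch, not values disclosed by the application:

```python
def finger_ratio_ok(finger_pixels: int, image_pixels: int,
                    lo: float = 0.02, hi: float = 0.15) -> bool:
    """Return True when the target finger's share of the initial image
    lies within a preset range, so that a virtual keyboard laid out at
    the finger's position stays inside the camera's capture range."""
    ratio = finger_pixels / image_pixels
    return lo <= ratio <= hi
```

When the ratio falls outside the range, the terminal can send the prompt signal mentioned above, asking the user to adjust the spatial position of the target finger.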
Therefore, after it is determined that the initial image includes the target finger, the linear distance between the target finger and the display screen is determined based on the depth processing, and the current attitude of the terminal is determined according to data collected by the acceleration sensor. Then, a spatial coordinate system is established with the acceleration sensor as the origin, and the spatial coordinates of the designated letter key are determined; the setting angle and size of the virtual keyboard to be laid out and the spatial coordinates of each of its keys are determined from the linear distance, the current attitude and the spatial coordinate system, and the virtual keyboard is then laid out according to the spatial coordinates of the designated letter key and of each key.
S12: initially marking the designated letter key of the physical keyboard displayed on the display screen, so as to indicate that the virtual keyboard has been laid out; wherein the keys of the virtual keyboard correspond one-to-one to the keys of the physical keyboard on the display screen, and the size of the virtual keyboard is larger than that of the physical keyboard;
In order to confirm that the virtual keyboard layout has succeeded, the designated letter key of the physical keyboard on the terminal display screen is initially marked. It should be noted that the keys of the virtual keyboard correspond one-to-one to the keys of the physical keyboard on the display screen, so the reference key used to lay out the virtual keyboard (i.e., the designated letter key) is identical to the designated letter key of the physical keyboard.
In addition, to avoid the accidental triggering caused by the small physical keyboard on the terminal display screen, the virtual keyboard is arranged for input interaction, and therefore the size of the virtual keyboard is larger than that of the physical keyboard.
S13: trigger-marking keys in the physical keyboard according to each frame of image captured by the camera, so as to mark the triggered keys.
After the initial mark on the designated letter key of the physical keyboard in the terminal display screen confirms that the virtual keyboard layout has succeeded, the camera captures frame images in real time. After acquiring each frame image, the CPU analyzes it to determine the target finger's key switching in the virtual keyboard, and then maps a trigger mark onto the corresponding key of the physical keyboard in the display screen, so as to mark the triggered key.
Specifically, each frame of image is analyzed in the established coordinate system to obtain the coordinate information of the target finger as it switches from the current position to the next position in the virtual keyboard; when it is determined from this coordinate information that key switching has occurred, the key corresponding to the next position is trigger-marked.
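The per-frame mapping described above can be sketched roughly as follows; the key-coordinate table and the distance threshold are illustrative assumptions, and the vision pipeline that produces the finger positions from the frame images is omitted:

```python
import math

def frame_to_trigger(cur_pos, next_pos, key_coords, max_move=5.0):
    """Map a finger move between two frames onto a physical-keyboard
    trigger mark. Returns the letter to trigger-mark, or None when the
    frame should be ignored (implausibly large jump between frames)."""
    if math.dist(cur_pos, next_pos) > max_move:
        return None
    # the virtual key nearest the finger's new position maps one-to-one
    # to the corresponding key of the physical keyboard on the screen
    return min(key_coords, key=lambda k: math.dist(key_coords[k], next_pos))
```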
The method for terminal keyboard input interaction provided by the embodiment of the application comprises the following steps: when a keyboard input interaction instruction is received, an initial image captured by a camera is acquired; when it is determined that a target finger for input interaction is at a designated position in the initial image, a virtual keyboard is laid out with the spatial position of the target finger as the position of a designated letter key; correspondingly, the designated letter key of the physical keyboard displayed on the display screen is initially marked so as to indicate that the virtual keyboard has been laid out successfully, wherein the keys of the virtual keyboard correspond one-to-one to the keys of the physical keyboard on the display screen, and the size of the virtual keyboard is larger than that of the physical keyboard. Then, keys in the physical keyboard are trigger-marked according to each frame of image captured by the camera, so as to mark the triggered keys. Thus, by arranging in space a virtual keyboard whose keys correspond one-to-one to the keys of the physical keyboard in the terminal display screen, with the virtual keyboard larger than the physical keyboard, triggering of the virtual keyboard can be mapped to triggering of the physical keyboard on the terminal. This avoids accidental triggering of the display screen when a small terminal performs keyboard input interaction, frees keyboard input interaction from the size limit of the physical keyboard, improves the accuracy of keyboard input interaction, and thereby improves the user experience.
In a specific embodiment, when the virtual keyboard is laid out after the keyboard input interaction instruction is received, as shown in fig. 3, the camera captures an initial image, and when it is determined that the target finger for input interaction is at the designated position in the initial image, the virtual keyboard is laid out with the spatial position of the target finger as the position of the designated letter key.
Specifically, the initial image captured by the camera is analyzed to determine the linear distance between the camera and the target finger, shown by the dashed line in fig. 3. In general, the terminal device contains an attitude sensor, comprising at least a gyroscope sensor and an acceleration sensor, and the current attitude of the terminal relative to the ground can be determined from the data it collects. Meanwhile, a spatial coordinate system is established. It should be noted that the spatial coordinate system is obtained by combining, through a certain algorithm, the coordinate system of the terminal's attitude sensor with the coordinate system calibrated for the camera; the specific algorithm may follow an existing spatial-coordinate-system establishment algorithm or a mapping relation set according to the actual situation, which is not limited here. The spatial coordinates of the designated letter key can then be determined in this spatial coordinate system.
Further, the attribute information of the virtual keyboard to be laid out is determined from the linear distance between the target finger and the camera, the current attitude of the terminal and the established spatial coordinate system; the attribute information includes, but is not limited to, the setting angle and size of the virtual keyboard and the spatial coordinates of each of its keys.
It should be noted that the setting angle of the virtual keyboard refers to the angular relationship between the plane of the virtual keyboard and the plane of the terminal display screen. For convenience and comfort, the virtual keyboard is arranged spatially parallel to the display screen, though the setting angle can also be set according to the user's preference, which the application does not limit. When the terminal is a wrist-worn device such as the smart watch or smart bracelet shown in fig. 3, the plane of the virtual keyboard then also bears a perpendicular relationship to the arm wearing the watch.
In fact, after the attribute information of the virtual keyboard to be laid out, such as its setting angle, size and the spatial coordinates of each key, is determined, the virtual keyboard can be laid out based on this attribute information. Specifically, during layout the designated letter key serves as the reference, i.e., as the origin of the virtual keyboard: the coordinate relationship between the spatial coordinates of every other key and the designated letter key is determined, and the layout of the virtual keyboard is then realized according to these coordinate relationships.
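As a hypothetical sketch of this layout step, the snippet below anchors a designated letter key (here "F") at the target finger's spatial position and places every other key at a fixed offset from that origin; the QWERTY row data and the key pitch are illustrative assumptions, not values from the application:

```python
QWERTY_ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]

def layout_virtual_keyboard(origin, key="F", pitch=2.0):
    """Return {letter: (x, y, z)} with `key` anchored at `origin`.

    origin: spatial coordinates of the target finger in the established
            spatial coordinate system;
    pitch:  centre-to-centre key spacing in the virtual keyboard plane.
    """
    # locate the designated letter key's row and column
    for r, row in enumerate(QWERTY_ROWS):
        if key in row:
            r0, c0 = r, row.index(key)
            break
    ox, oy, oz = origin
    coords = {}
    for r, row in enumerate(QWERTY_ROWS):
        for c, letter in enumerate(row):
            # offsets in a plane parallel to the display screen (constant z)
            coords[letter] = (ox + (c - c0) * pitch,
                              oy - (r - r0) * pitch,
                              oz)
    return coords
```

Scaling `pitch` with the linear distance between the camera and the target finger would make the laid-out keyboard larger than the physical keyboard on the display screen, as the method requires.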
According to the terminal keyboard input interaction method provided by the embodiment of the application, the initial image captured by the camera is analyzed to determine the linear distance between the camera and the target finger, and the current attitude of the terminal is determined from the data collected by its attitude sensor. In addition, after the spatial coordinate system is established, the spatial coordinates of the designated letter key are determined, so that the setting angle and size of the virtual keyboard and the spatial coordinates of each of its keys can be determined from the linear distance, the current attitude and the spatial coordinate system, and the virtual keyboard is laid out according to the spatial coordinates of the designated letter key and of each key. Triggering the virtual keyboard can thus trigger the physical keyboard in the display screen, realizing keyboard input interaction on small-display-screen terminals and improving the accuracy of keyboard input interaction.
On the basis of the above embodiment, the camera captures images in real time, and each captured frame is analyzed to determine the coordinate information of the target finger switching from the current position to the next position in the virtual keyboard. It can be understood that, once the virtual keyboard layout is established as in the above embodiment, the coordinate information of the target finger's current position in the virtual keyboard and of its next position can be determined after depth processing of the captured frame image.
Further, whether key switching has occurred, i.e., whether the user is currently performing keyboard input interaction, can be determined from the coordinate information of the current position and the next position; if so, the mapped physical keyboard in the display screen is triggered, that is, the key corresponding to the next position is trigger-marked.
It will be appreciated that when the user operates the virtual keyboard, the finger may jitter. Therefore, before a key switch is registered, it is necessary to determine from the coordinate information of the current position and the next position whether the user really intends to switch keys. Specifically, the distance between the current position and the next position is calculated from their coordinate information; if this distance is within the preset range, it is determined that key switching has occurred; otherwise it is determined that the target finger has jittered, and the current frame image can be ignored.
It should be noted that the distance between different keys can be determined from the coordinate information of the current position and the next position after the virtual keyboard layout; if the moving distance of the target finger is not within the acceptable range of the distance between the current key and the other keys, shaking of the target finger is determined. For example, if the current position of the target finger is the position of the letter A and the next position is the position of the letter G, the distance y between the letters A and G can be determined from their coordinates. In practice the finger does not necessarily move along the straight-line distance, so a movement that fluctuates a certain amount above or below the distance y can still be regarded as a normal key trigger. Further, the actual moving distance of the target finger can be determined from the frame images: if that actual distance is within the fluctuation range around the distance y, that is, the distance between the current position and the next position is within the preset range, key switching is determined to have occurred; otherwise, shaking of the target finger is determined.
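The shake test described above can be sketched as follows, with the tolerance expressed as a relative band around the expected inter-key distance y. The +/-25% band and the function name are assumptions: the patent only speaks of a "preset range".

```python
import math

def is_key_switch(curr_xy, next_xy, expected_dist, tol=0.25):
    """Return True when the finger's travel from curr_xy to next_xy
    matches the straight-line distance between two key centres to
    within a relative band, i.e. a genuine key switch; otherwise the
    movement is treated as shake and the frame can be ignored."""
    moved = math.dist(curr_xy, next_xy)
    return expected_dist * (1.0 - tol) <= moved <= expected_dist * (1.0 + tol)

# Centres of 'A' and 'G' are y = 0.08 m apart; the finger moved ~0.085 m,
# so this counts as a key switch, while a 5 mm twitch does not.
switch = is_key_switch((0.0, 0.0), (0.06, 0.06), 0.08)
twitch = is_key_switch((0.0, 0.0), (0.005, 0.0), 0.08)
```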
According to the terminal keyboard input interaction method provided by the embodiment of the application, the frame images acquired by the camera that capture the movement of the target finger are analyzed to obtain the coordinate information of the target finger at its current position in the virtual keyboard after the virtual keyboard layout. Triggering of the virtual keyboard is determined from the coordinate information of the current position and of the next position, and is then mapped to triggering of the physical keyboard in the display screen. Keyboard input interaction is thus achieved on small-display-screen terminal devices, false triggering during key switching is avoided, and the user experience is improved.
In implementation, as shown in fig. 3, the target finger is not necessarily located at a designated position in the initial image acquired by the camera. It can be understood that when the target finger is too close to the camera lens, it occupies a larger area of the initial image, and the laid-out virtual keyboard may then fall partly outside the shooting range of the camera, so that during keyboard input interaction the frame images corresponding to individual virtual-keyboard keys triggered by the user may not be acquired.
Conversely, if the target finger is too far from the camera lens, it occupies too small an area of the collected frame images, its moving distance on a frame image is not obvious, the coordinate calculation error grows, and the accuracy of keyboard input interaction ultimately suffers. Of course, if the target finger is not in the initial image at all, the virtual keyboard cannot be laid out.
Accordingly, whether the initial position of the target finger, that is, the position of the designated letter key and hence the layout position of the virtual keyboard, is appropriate can be determined from the duty ratio of the target finger in the initial image. Specifically, after the initial image is analyzed to obtain an analysis result, it is determined from that result whether the duty ratio of the target finger in the initial image is within a preset duty-ratio range; if so, the distance between the target finger and the camera is deemed appropriate, and the target finger can serve as the position of the designated letter key. Further, it is judged whether the target finger is in a preset area of the initial image; if it is, the designated position of the target finger in the initial image is determined.
It should be noted that, when the duty ratio of the target finger in the initial image is not within the preset duty-ratio range, the current position of the target finger is determined to be unsuitable: it may be too close, too far, or outside the initial image. At this point a prompt signal for adjusting the spatial position of the target finger can be sent by voice and/or subtitles, so that the user can adjust the position of the target finger.
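The placement check above, area ratio first, then the central-region test, with a corrective prompt otherwise, can be sketched as below. All thresholds, the function name, and the prompt strings are illustrative assumptions, not values from the patent.

```python
def check_finger_placement(finger_area, image_area, finger_center, image_size,
                           ratio_range=(0.02, 0.10), margin=0.2):
    """Return 'ok' if the finger occupies an acceptable share of the
    initial image AND its centre lies in the central region; otherwise
    return a prompt telling the user how to adjust. Thresholds are
    illustrative assumptions."""
    if image_area == 0 or finger_area == 0:
        return "no finger detected - place your finger in view"
    ratio = finger_area / image_area
    if ratio > ratio_range[1]:
        return "finger too close - move it away from the camera"
    if ratio < ratio_range[0]:
        return "finger too far - move it toward the camera"
    w, h = image_size
    cx, cy = finger_center
    # Accept only centres inside the middle (1 - 2*margin) of each axis.
    if not (margin * w <= cx <= (1 - margin) * w and
            margin * h <= cy <= (1 - margin) * h):
        return "move finger toward the centre of the view"
    return "ok"
```

In a real system the prompt string would be routed to the voice and/or subtitle output mentioned in the text.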
According to the terminal keyboard input interaction method provided by the embodiment of the application, it is determined whether the target finger for input interaction is at the designated position in the initial image, and the spatial position of the target finger is used as the position of the designated letter key for the virtual keyboard layout. Triggering of the virtual keyboard can then be mapped to triggering of the physical keyboard in the display screen, keyboard input interaction of the terminal is achieved, false triggering on terminals with smaller display screens during keyboard input interaction is avoided, and the user experience is improved.
In the above embodiments, the method of terminal keyboard input interaction is described in detail; the application also provides corresponding embodiments of a device for terminal keyboard input interaction. It should be noted that the device embodiments are described from two perspectives: one based on functional modules and the other based on the hardware structure.
Fig. 4 is a block diagram of a device for terminal keyboard input interaction according to an embodiment of the present application, as shown in fig. 4, where the device includes:
the acquisition module 10 is used for acquiring an initial image acquired by the camera when receiving the interaction instruction input by the keyboard;
a layout module 11, configured to perform virtual keyboard layout by using a spatial position of a target finger for inputting interaction as a position of a designated letter key when determining the designated position of the target finger in the initial image;
a first marking module 12 for initially marking designated letter keys of the physical keyboard displayed on the display screen to mark the layout of the virtual keyboard; wherein, the keys on the virtual keyboard are in one-to-one correspondence with the keys of the physical keyboard on the display screen;
and the second marking module 13 is used for triggering and marking the keys in the physical keyboard according to each frame of image captured by the camera so as to mark the triggered keys.
Since the embodiments of the device portion correspond to the embodiments of the method portion, reference is made to the description of the method embodiments for details, which are not repeated herein.
The device for terminal keyboard input interaction provided by the embodiment of the application operates as follows: when an interaction instruction to be input by a keyboard is received, an initial image acquired by the camera is acquired; when the designated position in the initial image of the target finger for input interaction is determined, the spatial position of the target finger is used as the position of the designated letter key to perform the virtual keyboard layout. Correspondingly, the designated letter key of the physical keyboard displayed on the display screen is initially marked to indicate that the virtual keyboard layout has succeeded, where the keys of the virtual keyboard correspond one-to-one with the keys of the physical keyboard on the display screen and the virtual keyboard is larger than the physical keyboard. The keys of the physical keyboard are then trigger-marked according to each frame image captured by the camera, so as to mark the triggered keys. Thus, according to the technical scheme provided by the application, by laying out in space a virtual keyboard whose keys correspond one-to-one with the physical keyboard keys in the display screen of the terminal, and making the virtual keyboard larger than the physical keyboard, triggering of the virtual keyboard can be mapped to triggering of the physical keyboard on the terminal. False triggering of the display screen on smaller terminals during keyboard input interaction is avoided, keyboard input interaction is no longer limited by the size of the physical keyboard, the accuracy of keyboard input interaction is improved, and the user experience is further improved.
Fig. 5 is a block diagram of a terminal according to another embodiment of the present application, and as shown in fig. 5, the terminal includes: a memory 20 for storing a computer program;
a processor 21 for carrying out the steps of the method of terminal keyboard input interaction as mentioned in the above embodiments when executing a computer program.
The terminal provided in this embodiment may include, but is not limited to, a smart watch, a smart bracelet, and the like.
Processor 21 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 21 may be implemented in hardware in at least one of a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), and a programmable logic array (Programmable Logic Array, PLA). The processor 21 may also include a main processor and a coprocessor: the main processor, also called a CPU, processes data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 21 may be integrated with a graphics processor (Graphics Processing Unit, GPU) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may also include an artificial intelligence (Artificial Intelligence, AI) processor for handling computing operations related to machine learning.
Memory 20 may include one or more computer-readable storage media, which may be non-transitory. Memory 20 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In this embodiment, the memory 20 is at least used for storing a computer program 201, where the computer program, when loaded and executed by the processor 21, is capable of implementing the relevant steps of the method for terminal keyboard input interaction disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 20 may further include an operating system 202, data 203, and the like, where the storage manner may be transient storage or permanent storage. The operating system 202 may include Windows, unix, linux, among others. The data 203 may include, but is not limited to, related data involved in the method of terminal keyboard input interaction, and the like.
In some embodiments, the terminal may further include a display 22, an input-output interface 23, a communication interface 24, a power supply 25, and a communication bus 26.
Those skilled in the art will appreciate that the structure shown in fig. 5 does not limit the terminal, which may include more or fewer components than illustrated.
The terminal provided by the embodiment of the application comprises a memory and a processor; when the processor executes the program stored in the memory, it can implement the method of terminal keyboard input interaction described above.
According to the terminal provided by the embodiment of the application, the virtual keyboards with keys corresponding to the physical keyboard keys in the display screen of the terminal one by one are distributed in the space, and the size of the virtual keyboards is larger than that of the physical keyboards, so that the triggering of the virtual keyboards can be mapped to the triggering of the physical keyboards on the terminal, the false triggering of the display screen when the smaller terminal performs keyboard input interaction is avoided, the keyboard input interaction is not limited by the size of the physical keyboards, the accuracy of the keyboard input interaction is improved, and the experience of a user is further improved.
Finally, the application also provides a corresponding embodiment of the computer readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps as described in the method embodiments above.
It will be appreciated that the methods of the above embodiments, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored on a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, for performing all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The method, the device, the terminal and the medium for terminal keyboard input interaction provided by the application have been described in detail above. In the description, each embodiment is described in a progressive manner, each embodiment focusing on its differences from the others, so that identical or similar parts among the embodiments may be referred to one another. For the device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, the description is relatively brief, and reference is made to the description of the method section for the relevant points. It should be noted that various modifications and adaptations of the application can be made by those skilled in the art without departing from the principles of the application, and these modifications and adaptations are intended to fall within the scope of the application as defined in the following claims.
It should also be noted that in this specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method for terminal keyboard input interaction, comprising:
when an interaction instruction input by a keyboard is received, acquiring an initial image acquired by a camera;
when determining a designated position of a target finger for inputting interaction in the initial image, performing virtual keyboard layout by taking the spatial position of the target finger as the position of a designated letter key;
initial marking is carried out on the designated letter keys in the physical keyboard displayed on the display screen so as to mark the layout of the virtual keyboard; wherein, the keys on the virtual keyboard are in one-to-one correspondence with the keys of the physical keyboard on the display screen, and the size of the virtual keyboard is larger than that of the physical keyboard;
and triggering marks are carried out on keys in the physical keyboard according to each frame image captured by the camera so as to mark the triggered keys.
2. The method of terminal keyboard input interaction according to claim 1, wherein performing virtual keyboard layout using the spatial position of the target finger as the position of the designated letter key comprises:
analyzing the initial image to determine a linear distance between the camera and the target finger;
determining the current gesture of the terminal according to the data acquired by the gesture sensor of the terminal;
establishing a space coordinate system to determine the space coordinates of the designated letter keys;
determining attribute information of the virtual keyboard through the linear distance, the current gesture and the space coordinate system; the attribute information comprises a set angle, a set size and space coordinates of each key of the virtual keyboard;
and laying out the virtual keyboard according to the space coordinates of the designated letter keys and the space coordinates of all keys of the virtual keyboard.
3. The method of terminal keyboard input interaction of claim 1, wherein triggering a key in the physical keyboard from each frame of images captured by the camera comprises:
analyzing each frame image to obtain the coordinate information of the target finger, which corresponds to the current position of the target finger after the virtual keyboard is subjected to virtual keyboard layout, and switching to the coordinate information of the target finger at the next position;
and when the key switching is determined to occur according to the coordinate information corresponding to the current position and the next position, triggering and marking the key corresponding to the next position.
4. A method of terminal keyboard input interaction according to claim 3, wherein determining that key switching occurs according to the coordinate information corresponding to each of the current position and the next position comprises:
determining the distance between the current position and the next position according to the coordinate information corresponding to the current position and the next position;
when the distance is within a preset range, determining that key switching occurs;
correspondingly, when the distance is not within the preset range, the jitter is determined to occur, and the corresponding frame image is ignored.
5. The method of terminal keyboard input interaction of claim 1, wherein determining a specified location in the initial image of a target finger for input interaction comprises:
analyzing the initial image to obtain an analysis result;
determining whether the duty ratio of the target finger in the initial image is within a preset duty ratio range according to the analysis result;
if the duty ratio is within the preset duty ratio range, judging whether the target finger is in a preset area of the initial image; and if the target finger is in the preset area, determining the designated position of the target finger in the initial image.
6. The method of terminal keyboard input interaction of claim 5, further comprising, when the duty ratio of the target finger in the initial image is not within the duty ratio preset range:
and sending a prompt signal for adjusting the spatial position of the target finger.
7. The method of any of claims 1-6, wherein the virtual keyboard is spatially parallel to the display screen.
8. A device for terminal keyboard input interaction, comprising:
the acquisition module is used for acquiring an initial image acquired by the camera when receiving the interaction instruction input by the keyboard;
the layout module is used for carrying out virtual keyboard layout by taking the space position of the target finger as the position of a designated letter key when determining the designated position of the target finger in the initial image for inputting interaction;
the first marking module is used for initially marking the designated letter keys in the physical keyboard displayed on the display screen so as to mark the layout of the virtual keyboard; wherein, the keys on the virtual keyboard are in one-to-one correspondence with the keys of the physical keyboard on the display screen, and the size of the virtual keyboard is larger than that of the physical keyboard;
and the second marking module is used for marking the key in the physical keyboard in a triggering way according to each frame image captured by the camera so as to mark the triggered key.
9. A terminal comprising a memory for storing a computer program;
a processor for implementing the steps of the method of terminal keyboard input interaction of any of claims 1 to 7 when executing said computer program.
10. A computer readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the steps of the method of terminal keyboard input interaction of any of claims 1 to 7.
CN202310956953.4A 2023-07-31 2023-07-31 Terminal keyboard input interaction method, device, terminal and medium Pending CN116954387A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310956953.4A CN116954387A (en) 2023-07-31 2023-07-31 Terminal keyboard input interaction method, device, terminal and medium

Publications (1)

Publication Number Publication Date
CN116954387A true CN116954387A (en) 2023-10-27

Family

ID=88442453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310956953.4A Pending CN116954387A (en) 2023-07-31 2023-07-31 Terminal keyboard input interaction method, device, terminal and medium

Country Status (1)

Country Link
CN (1) CN116954387A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117835044A (en) * 2024-03-06 2024-04-05 凌云光技术股份有限公司 Debugging method and device of motion capture camera


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination