WO2023236052A1 - Method and apparatus for determining input information, device, and storage medium - Google Patents

Method and apparatus for determining input information, device, and storage medium

Info

Publication number
WO2023236052A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
input
target
hand
sub
Prior art date
Application number
PCT/CN2022/097433
Other languages
English (en)
Chinese (zh)
Inventor
曾学忠
豆子飞
王星言
李�诚
Original Assignee
北京小米移动软件有限公司
Priority date
Filing date
Publication date
Application filed by 北京小米移动软件有限公司
Priority to CN202280004383.2A (CN117546124A)
Priority to PCT/CN2022/097433 (WO2023236052A1)
Publication of WO2023236052A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present disclosure relates to the technical field of terminal equipment, and specifically to input information determination methods, devices, equipment and storage media.
  • In VR (Virtual Reality), AR (Augmented Reality), and MR (Mixed Reality) application scenarios, information is usually input through an external device (such as a keyboard) or a matching controller (such as a handheld controller).
  • Keyboards and controllers must be carried separately, which is inconvenient for users.
  • Moreover, input devices such as keyboards need to be placed on a flat object (such as a table) for use, and are therefore unsuitable for VR, AR or MR application scenarios where no flat surface is available.
  • When a controller is used to assist input on a virtual keyboard, the keys on the controller are usually small and the keys on the virtual keyboard are difficult to aim at, which easily leads to mis-presses.
  • Embodiments of the present disclosure provide an input information determination method, device, equipment and storage medium to address these defects in the related art.
  • a method for determining input information including:
  • the target sub-area touched by the thumb of the hand within the touch area of the hand is determined based on the target image, where the touch area is the area of the hand, excluding the thumb, that is touchable by the thumb;
  • Target information to be input is determined based on the target sub-region.
  • the touch area includes at least one pre-divided sub-area, and the at least one sub-area includes at least one of the following:
  • the area where the finger knuckles are, the area between the knuckles of two fingers, the area where the fingertips are, the area between the fingertips and the knuckles, the area in the palm near the base of the fingers, the divided areas between adjacent fingers, and the divided areas within the palm.
  • determining the target information to be input based on the target sub-area includes:
  • the preset correspondence relationship includes a relationship between at least one pre-divided sub-region in the touch area of the hand and different input information
  • the target information is determined based on the input information corresponding to the target sub-region.
  • the preset correspondence includes at least one of the following:
  • the corresponding relationship between the sub-region and input information representing an input function, where the input function includes at least one of the following: a delete function, an input mode switching function, a move-left function, a move-right function, a move-up function, and a move-down function.
  • the preset correspondence also includes:
  • a correspondence between each sub-region in a sub-region combination (composed of multiple sub-regions at adjacent or similar positions) and each piece of input information in an input information combination, where each piece of input information in the combination is associated with input content or an input function.
  • the input information combination includes at least one of the following:
  • An all-numeric input information combination in which each input information includes at least one number as the input content to be selected.
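As a minimal sketch of the preset correspondence described above, each pre-divided sub-region ID can map to its input information, i.e. candidate input contents and/or an input function. The region IDs, dictionary layout, and contents below are illustrative assumptions, not the patent's exact data:

```python
# Hypothetical preset correspondence: sub-region ID -> input information.
# IDs and contents are assumptions for illustration only.
PRESET_CORRESPONDENCE = {
    1: {"contents": ["1", ","], "function": None},
    2: {"contents": ["2", "a", "b", "c"], "function": None},
    15: {"contents": [], "function": "delete"},
}

def determine_target_info(target_sub_region: int):
    """Look up the input information for the sub-region touched by the thumb."""
    info = PRESET_CORRESPONDENCE.get(target_sub_region)
    if info is None:
        return None  # the touch fell outside any pre-divided sub-region
    # A single candidate content can be used directly as the target information;
    # multiple candidates are displayed for the user to choose from.
    return info
```

A touch on an unmapped region simply yields no input information, matching the idea that only pre-divided sub-regions carry meaning.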
  • determining the target sub-area touched by the thumb of the hand within the touch area of the hand based on the target image includes:
  • the hand in the target image is a single hand
  • when the hands in the target image are the left hand and the right hand, the target sub-area touched by the left thumb within the touch area of the left hand and the target sub-area touched by the right thumb within the touch area of the right hand are determined based on the target image.
  • determining the target information based on input information corresponding to the target sub-region includes:
  • displaying the input content included in the input information corresponding to the target sub-area, so as to determine the target information based on a result of selecting the displayed input content, where the input content includes at least one of the following: letters, numbers, and symbols;
  • displaying the input content included in the input information corresponding to the target sub-area to determine the target information based on a result of selecting the displayed input content includes:
  • the target information is determined according to a result of selecting each displayed input content to be selected.
  • determining the target information based on input information corresponding to the target sub-region includes:
  • the input information of the target sub-area includes first input content and second input content
  • Determining different input contents contained in the input information corresponding to the target sub-area as the target information based on different touch durations of the thumb on the target sub-area including:
  • in response to detecting that the duration of the thumb's touch on the target sub-area is less than or equal to a preset duration threshold, the target information is determined based on the first input content;
  • the target information is determined based on the second input content.
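The short-press/long-press rule above can be sketched as a simple threshold comparison. The 0.5 s threshold is an assumed value, not one specified by the disclosure:

```python
# Hedged sketch of the duration rule: touches at or below a preset duration
# threshold select the first input content, longer touches the second.
PRESET_DURATION_THRESHOLD = 0.5  # seconds (assumed value)

def select_content(touch_duration: float, first_content: str, second_content: str) -> str:
    if touch_duration <= PRESET_DURATION_THRESHOLD:
        return first_content  # short press
    return second_content     # long press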
  • the method further includes:
  • the operation of obtaining a target image of the hand is performed, where the first trigger instruction includes at least one of the following: a first set trigger gesture, a first set trigger sound, and a first set trigger button being triggered.
  • obtaining a target image of the hand includes:
  • input information corresponding to the sub-region is displayed in at least one sub-region within the touch area according to the preset correspondence relationship.
  • it also includes:
  • the target information is input, where the second trigger instruction includes at least one of the following: a second set trigger gesture, a second set trigger sound, and a second set trigger button being triggered.
  • an input information determining device including:
  • a target image acquisition module, configured to acquire the target image of the hand;
  • a sub-region determination module configured to determine, based on the target image, the target sub-region touched by the thumb of the hand within the touch area of the hand, where the touch area is the area of the hand, excluding the thumb, that is touchable by the thumb;
  • a target information determination module configured to determine target information to be input based on the target sub-area.
  • the touch area includes at least one pre-divided sub-area, and the at least one sub-area includes at least one of the following:
  • the area where the finger knuckles are, the area between the knuckles of two fingers, the area where the fingertips are, the area between the fingertips and the knuckles, the area in the palm near the base of the fingers, the divided areas between adjacent fingers, and the divided areas within the palm.
  • the target information determination module includes:
  • an input information determination unit configured to determine the input information corresponding to the target sub-area based on a preset correspondence relationship, the preset correspondence relationship including a relationship between at least one pre-divided sub-area within the touch area of the hand and different input information;
  • a target information determining unit configured to determine the target information based on input information corresponding to the target sub-region.
  • the preset correspondence includes at least one of the following:
  • the corresponding relationship between the sub-region and input information representing an input function, where the input function includes at least one of the following: a delete function, an input mode switching function, a move-left function, a move-right function, a move-up function, and a move-down function.
  • the preset correspondence also includes:
  • a correspondence between each sub-region in a sub-region combination (composed of multiple sub-regions at adjacent or similar positions) and each piece of input information in an input information combination, where each piece of input information in the combination is associated with input content or an input function.
  • the input information combination includes at least one of the following:
  • An all-numeric input information combination in which each input information includes at least one number as the input content to be selected.
  • the sub-region determination module includes:
  • a first determination unit configured to determine the target sub-area touched by the thumb within the touch area of the hand based on the target image when the hand in the target image is a single hand, where the single hand is the left hand or the right hand;
  • a second determination unit configured to determine, based on the target image, the target sub-area touched by the left thumb within the touch area of the left hand and the target sub-area touched by the right thumb within the touch area of the right hand when the hands in the target image are the left hand and the right hand.
  • the target information determining unit is also used to:
  • displaying the input content included in the input information corresponding to the target sub-area, so as to determine the target information based on a result of selecting the displayed input content, where the input content includes at least one of the following: letters, numbers, and symbols;
  • the target information determining unit is also used to:
  • the target information is determined according to a result of selecting each displayed input content to be selected.
  • the target information determining unit is further configured to determine different input contents contained in the input information corresponding to the target sub-area as the target information based on different touch durations of the thumb on the target sub-area.
  • the input information of the target sub-area includes first input content and second input content
  • the target information determining unit is also used to:
  • in response to detecting that the duration of the thumb's touch on the target sub-area is less than or equal to a preset duration threshold, the target information is determined based on the first input content;
  • the target information is determined based on the second input content.
  • the target image acquisition module is further configured to perform the operation of acquiring the target image of the hand in response to detecting a first trigger instruction for determining input information.
  • the first trigger instruction includes at least one of the following: a first set trigger gesture, a first set trigger sound, and a first set trigger button being triggered.
  • the target image acquisition module is further configured to, in response to detecting that fingers of the hand other than the thumb are in an open state, display the input information corresponding to each sub-area in at least one sub-area within the touch area according to the preset correspondence relationship.
  • it also includes:
  • a target information input module configured to input the target information in response to detecting a second trigger instruction for information input.
  • the second trigger instruction includes at least one of the following: a second set trigger gesture, a second set trigger sound, and a second set trigger button being triggered.
  • an input information determining device including:
  • the target sub-area touched by the thumb of the hand within the touch area of the hand is determined based on the target image, where the touch area is the area of the hand, excluding the thumb, that is touchable by the thumb;
  • Target information to be input is determined based on the target sub-region.
  • a computer-readable storage medium for storing a computer program.
  • when the computer program is executed by a processor, it implements:
  • the target sub-area touched by the thumb of the hand within the touch area of the hand is determined based on the target image, where the touch area is the area of the hand, excluding the thumb, that is touchable by the thumb;
  • Target information to be input is determined based on the target sub-region.
  • the present disclosure can realize accurate input of target information based on detection results from a target image of the user's hand. Since there is no need to rely on an external dedicated input device or controller, the trouble of carrying such devices is avoided; since the placement position of an input device need not be considered, the limitations of application scenarios are overcome; and since the user directly touches preset sub-areas of the hand with the thumb, the accuracy of information input is improved compared with related-art solutions that use a controller to assist input on a virtual keyboard, thereby also improving input efficiency.
  • Figure 1 is a flow chart of a method for determining input information according to an exemplary embodiment of the present disclosure
  • Figure 2A is a flowchart illustrating how to determine target information to be input based on the target sub-region according to an exemplary embodiment of the present disclosure
  • Figure 2B is a schematic diagram of the region division results of the right hand according to an exemplary embodiment of the present disclosure
  • FIG. 2C is a schematic diagram of the region division results of the left hand according to an exemplary embodiment of the present disclosure.
  • Figure 3 is a flowchart illustrating how to determine the target information according to the result of selecting the displayed input content according to an exemplary embodiment of the present disclosure
  • Figure 4 is a flow chart of an input information determination method according to yet another exemplary embodiment of the present disclosure.
  • Figure 5 is a schematic diagram of each sub-area and corresponding input information within the touch area of the right hand according to an exemplary embodiment of the present disclosure
  • Figure 6 is a block diagram of an information determining device according to an exemplary embodiment of the present disclosure.
  • Figure 7 is a block diagram of yet another information determining device according to an exemplary embodiment of the present disclosure.
  • FIG. 8 is a block diagram of an input information determining device according to an exemplary embodiment of the present disclosure.
  • Figure 1 is a flow chart of a method for determining input information according to an exemplary embodiment of the present disclosure; the method of this embodiment can be applied to mobile phones, tablet computers, wearable devices, smart home devices, smart office equipment and/or vehicle-mounted equipment, etc.
  • Smart home equipment includes but is not limited to: smart speakers, smart doors and windows, etc.
  • Wearable devices can be, for example, input information determination devices that implement VR, AR or MR application scenarios (such as helmets, glasses, gloves, etc.).
  • the method includes the following steps S101-S103:
  • step S101 a target image of the hand is acquired.
  • the device may acquire a target image of the hand of the user wearing the device in response to detecting the first trigger instruction for determining the input information.
  • the above-mentioned first triggering instruction can be sent to the device, and when the device detects the instruction, it can obtain the target image of the user's hand collected by a camera or other device.
  • the above-mentioned first trigger instruction may include at least one of the following: a first set trigger gesture (for example, a naturally open palm gesture, etc.), a first set trigger sound, and a first set trigger button being triggered.
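The trigger-instruction check can be sketched as a membership test over a set of configured trigger events. The event names are illustrative assumptions:

```python
# Hedged sketch: any of a set gesture, a set sound, or a set button press
# may start target image acquisition. Event names are assumptions.
FIRST_TRIGGERS = {"open_palm_gesture", "trigger_sound", "trigger_button"}

def should_acquire_image(event: str) -> bool:
    """Return True if the detected event is a configured first trigger instruction."""
    return event in FIRST_TRIGGERS
```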
  • the above-mentioned user's hand may be the user's right hand or left hand, which is not limited in this embodiment.
  • step S102 the target sub-area touched by the thumb of the hand within the touch area of the hand is determined based on the target image.
  • the target sub-region touched by the thumb of the hand within the touch area of the hand may be determined based on the target image.
  • the touch area may be an area formed by the hand other than the thumb that is touchable by the thumb.
  • the above-mentioned touch area may include at least one sub-area pre-divided on the human hand, and the at least one sub-area includes at least one of the following:
  • the area where the finger knuckles are, the area between the knuckles of two fingers, the area where the fingertips are, the area between the fingertips and the knuckles, the area in the palm near the base of the fingers, the divided areas between adjacent fingers, and the divided areas within the palm.
  • the hand in the above target image may be the user's left hand and/or right hand.
  • When the hand in the target image is a single hand, the target sub-area touched by the thumb of the single hand within the touch area of the single hand is determined, where the single hand is the left hand or the right hand; and when the hands in the target image are the left hand and the right hand, the target sub-area touched by the left thumb within the touch area of the left hand and the target sub-area touched by the right thumb within the touch area of the right hand can be determined based on the above target image.
  • step S202 may include any one of the following (1) to (3):
  • the input content includes at least one of the following: letters, numbers, and symbols.
  • If the input information corresponding to the target sub-area includes two input contents, such as 0 and Space, the input contents can be displayed for the user to select, and the target information to be input is then determined according to the selection result.
  • the input information corresponding to the target sub-region includes only one input content, such as the number 1, then the input information can be directly used as the target information to be input.
  • If the input information corresponding to the target sub-area represents a function, the function can be determined as the target information, and the function can then be executed when the user confirms inputting the target information.
  • step S103 target information to be input is determined based on the target sub-area.
  • the target information to be input can be determined based on the target sub-region.
  • the target information to be input may include at least one of letters, numbers, symbols, and functions required for information input (such as deletion, Chinese-English conversion, and/or direction keys, etc.).
  • the above method of determining the target information to be input based on the target sub-region may also refer to the embodiment shown in FIG. 2A below, and will not be described in detail here.
  • The method of this embodiment can achieve accurate input of target information based on detection results from the target image of the user's hand. Since there is no need to rely on an external dedicated input device or controller, the trouble of carrying such devices is avoided; since the placement position of an input device need not be considered, the limitations of application scenarios are overcome; and because the user directly touches preset sub-areas of the hand with the thumb, the accuracy of information input is improved compared with related-art solutions that use a controller to assist input on a virtual keyboard, thereby also improving input efficiency.
  • Figure 2A is a flowchart illustrating how to determine target information to be input based on the target sub-area according to an exemplary embodiment of the present disclosure. As shown in Figure 2A, determining the target information to be input based on the target sub-area described in step S103 above may include the following steps S201-S202:
  • step S201 the input information corresponding to the target sub-region is determined based on the preset correspondence relationship
  • step S202 the target information is determined based on the input information corresponding to the target sub-region.
  • the above-mentioned preset corresponding relationship includes a relationship between at least one pre-divided sub-area within the touch area of the hand and different input information.
  • the above-mentioned preset correspondence may include at least one of the following:
  • the input function includes at least one of the following: a delete function, an input mode switching function, a move-left function, a move-right function, a move-up function, and a move-down function.
  • the above-mentioned preset corresponding relationships may also include:
  • a correspondence between each sub-region in a sub-region combination (composed of multiple sub-regions at adjacent or similar positions) and each piece of input information in an input information combination, where each piece of input information in the combination is associated with input content or an input function.
  • the above input information combination may include at least one of the following:
  • a combination of input information in the form of a nine-square grid (for example, a combination of the key information of numeric keys 0-9 on a nine-square-grid keyboard), in which each input information includes at least one of the following candidate input contents: multiple letters, a number, or one or more symbols;
  • a combination of all-letter input information (for example, a combination of key information for letters A-Z on a full-key keyboard), in which each input information includes at least one letter as input content to be selected;
  • An all-numeric input information combination (for example, a combination of numeric key information of numeric keys 0-9 in a nine-square grid keyboard or a full-key keyboard), in which each input information includes at least one number as the input content to be selected.
  • FIG. 2B is a schematic diagram of the area division result of the right hand according to an exemplary embodiment of the present disclosure
  • FIG. 2C is a schematic diagram of the area division result of the left hand according to an exemplary embodiment of the present disclosure.
  • the target area of the left or right hand of the user can be divided into 15 sub-areas 1-15.
  • A correspondence between each input information and each of the plurality of sub-regions can be constructed, so that after the target sub-area touched by the thumb of the user's hand within the touch area is subsequently detected, the input information corresponding to the target sub-area is determined based on this correspondence.
  • a corresponding relationship between the input information of the numeric keys 1-9 and the first to ninth sub-regions of the plurality of sub-regions arranged in a preset order may be constructed respectively.
  • the first to ninth sub-regions may be sub-regions 1 to 9, sub-regions 4 to 12, or sub-regions 10 to 6, etc. in FIG. 2B or 2C.
  • the embodiment will be described by taking the first to ninth sub-regions as sub-regions 1 to 9 in FIG. 2B or 2C as an example.
  • the input information of numeric key 1 can include the number 1 and preset punctuation marks (such as comma, semicolon, etc.); the input information of numeric key 2 can include the number 2 and the letters a, b, c; and the input information of numeric key 3 may include the number 3 and the letters d, e, f.
  • the input information of the numeric key 4 may include the number 4 and the letters g, h, i.
  • the input information of the numeric key 5 may include the number 5 and the letters j, k, l.
  • the input information of key 6 may include the number 6 and the letters m, n, and o.
  • The input information of numeric key 7 may include the number 7 and the letters p, q, r, s; the input information of numeric key 8 may include the number 8 and the letters t, u, v; and the input information of numeric key 9 includes the number 9 and the letters w, x, y, z.
  • the above-mentioned preset order may include: from left to right, from top to bottom; or from top to bottom, from right to left; or from top to bottom, from left to right, etc. in a continuous order.
  • this example does not limit this.
  • Matching the input information of numeric keys 1-9 with the first to ninth sub-regions arranged in a preset order makes it easy for the user to associate the sub-regions with the input information, makes these sub-areas convenient to touch with the thumb, and better matches users' information input habits.
  • The correspondence between the input information of numeric key 0 and the tenth sub-region (such as sub-region 11 in FIG. 2B or 2C) among the plurality of sub-regions can also be constructed, where the input information of numeric key 0 includes the number 0 and Space.
  • a corresponding relationship between the input information of the delete key and the eleventh sub-region (eg, sub-region 15 in FIG. 2B or FIG. 2C) among the plurality of sub-regions may also be constructed.
  • The currently selected input information may also be deleted in response to detecting that the user's hand performs a target predefined posture (for example, the hand is flipped from a first posture with the palm facing the user's face to a second posture with the back of the hand facing the user's face, and then flipped back to the first posture).
  • a correspondence relationship between the input information of the Chinese and English switching keys and the twelfth sub-region (eg, sub-region 13 in Figure 2B or Figure 2C) among the plurality of sub-regions can also be constructed.
  • By touching the twelfth sub-region, the current information input method can be switched between Chinese mode and English mode.
  • After switching from the Chinese mode to the English mode, the user can input English information based on the above embodiment; and after switching from the English mode to the Chinese mode, the user can input Chinese information based on the above embodiment.
  • a correspondence relationship between input information for selecting a Pinyin key and a thirteenth sub-region (eg, sub-region 14 in FIG. 2B or FIG. 2C ) among the plurality of sub-regions may also be constructed.
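The example correspondence described above (FIG. 2B/2C) can be encoded as lookup tables: sub-regions 1-9 carry the nine-grid numeric keys 1-9 with their letters, sub-region 11 carries numeric key 0 (the number 0 and Space), sub-region 15 the delete key, sub-region 13 the Chinese/English switch, and sub-region 14 the pinyin-selection key. The dictionary layout and function names are illustrative assumptions, not the patent's data format:

```python
# Sub-region ID -> candidate input contents (nine-grid layout from the text).
SUB_REGION_KEYS = {
    1: ["1", ",", ";"],  # number 1 plus preset punctuation marks
    2: ["2", "a", "b", "c"],
    3: ["3", "d", "e", "f"],
    4: ["4", "g", "h", "i"],
    5: ["5", "j", "k", "l"],
    6: ["6", "m", "n", "o"],
    7: ["7", "p", "q", "r", "s"],
    8: ["8", "t", "u", "v"],
    9: ["9", "w", "x", "y", "z"],
    11: ["0", " "],  # numeric key 0: the number 0 and Space
}

# Sub-region ID -> input function (function names are assumptions).
SUB_REGION_FUNCTIONS = {
    13: "switch_zh_en",   # Chinese/English switching key
    14: "select_pinyin",  # pinyin selection key
    15: "delete",         # delete key
}
```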
  • Determining the target sub-area touched by the thumb of the hand within the touch area of the hand based on the target image in step S102 may include: determining, in chronological order, the target sub-area touched by the thumb once or multiple times continuously, where the target sub-areas touched multiple times continuously can be the same sub-area or different sub-areas.
  • FIG. 3 is a flowchart illustrating how to determine the target information according to the result of selecting the displayed input content according to an exemplary embodiment of the present disclosure.
  • the input content included in the input information corresponding to the target sub-area is displayed to determine the target information based on the result of selecting the displayed input content, which may include the following steps S301-S303:
  • step S301 the target input information is determined based on the input information corresponding to the target sub-area touched by the thumb once or multiple times in chronological order;
  • step S302 display each candidate input content included in the target input information
  • step S303 the target information is determined according to the result of selecting each displayed input content to be selected.
  • When the thumb touches one or more sub-areas in chronological order, the input information corresponding to these sub-areas can be determined as the target input information. If the target input information includes pinyin composed of candidate input contents (such as bei, ce, etc.), the candidate input contents are displayed at a set position, for example in a preset selection sub-area, which can be an area divided within the above-mentioned touch area, or an area near the hand that the user can see while wearing the current device.
  • the device can determine the corresponding order of the two or more target sub-areas based on the time sequence of the touches.
  • Based on the input information of the touched sub-areas, at least one alternative pinyin is determined and displayed. On this basis, the user can touch, with the thumb, the thirteenth sub-region (that is, the sub-region corresponding to the input information of the pinyin selection key); in response to detecting this touch, the device determines the alternative pinyin currently selected by the user from the at least one alternative pinyin currently displayed, determines at least one text corresponding to the currently selected alternative pinyin as at least one piece of alternative information, and displays that alternative information. The at least one text corresponding to the alternative pinyin includes the pinyin itself and/or the Chinese characters corresponding to the pinyin, etc., which is not limited in this embodiment. That is, the user can touch the thirteenth sub-area with the thumb to switch the currently selected alternative pinyin, and the device displays the text corresponding to the currently selected alternative pinyin.
  • the input content to be selected can be determined based on the input information corresponding to the target sub-areas touched multiple times, in chronological order, by either the left thumb or the right thumb.
  • determining the target information based on the input information corresponding to the target sub-area in the above step S202 may also include: based on different touch durations of the thumb on the target sub-area, determining different input contents included in the input information corresponding to the target sub-area as the target information. Further, the input content included in the input information of the target sub-area can be divided into first input content and second input content.
  • when it is detected that the duration of the thumb's touch on the target sub-area is less than or equal to a preset duration threshold, the target information can be determined based on the first input content; and when it is detected that the duration is greater than the preset duration threshold, the target information can be determined based on the second input content.
  • the preset duration threshold may be a duration threshold used to distinguish a long press from a short press.
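The threshold comparison above reduces to a simple selection rule. A minimal sketch, assuming a hypothetical 0.5 s threshold value (the disclosure does not fix a concrete number):

```python
PRESS_THRESHOLD_S = 0.5  # hypothetical long/short press boundary, in seconds

def select_content(first_content, second_content, touch_duration_s):
    """Short press (<= threshold) selects the first input content;
    a long press (> threshold) selects the second input content."""
    if touch_duration_s <= PRESS_THRESHOLD_S:
        return first_content
    return second_content
```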
  • a first trigger instruction for determining the input information can be sent to the device, and the device can then obtain the target image of the user's hand when detecting the instruction.
  • based on the target image, the target sub-area touched by the thumb of the hand and the corresponding touch duration are detected. It can be understood that, since the collection time interval between frames of the target image is fixed, the touch duration of the thumb on the target sub-area can be obtained by counting the image frames in which the thumb touches the target sub-area.
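The frame-counting idea above can be sketched as follows. This is an illustrative sketch assuming a hypothetical 30 fps capture rate; the input is a per-frame boolean sequence marking whether the thumb touches the target sub-area.

```python
FRAME_INTERVAL_S = 1 / 30  # hypothetical capture interval (30 fps)

def touch_duration(touch_flags, frame_interval_s=FRAME_INTERVAL_S):
    """Estimate the touch duration as the longest run of consecutive
    touching frames multiplied by the fixed frame interval."""
    longest = run = 0
    for touching in touch_flags:
        run = run + 1 if touching else 0
        longest = max(longest, run)
    return longest * frame_interval_s
```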
  • the input information of the numeric keys 0-9 can be divided into first input information and second input information. For example, the first input information may include the numbers 0-9 and the second input information may include the letters a-z and a space; alternatively, the first input information may include the letters a-z and a space and the second input information may include the numbers 0-9.
  • in this way, the first input information or the second input information can be determined as the alternative information according to the type of the thumb's touch operation on the target sub-area, which can further improve the accuracy of displaying the alternative information and make it easier for the user to select the target information to be input.
  • the correspondence between the input information of the left-move function and the input information of the right-move function and, respectively, the fourteenth sub-region and the fifteenth sub-region among the plurality of sub-regions (for example, sub-area 10 and sub-area 12 shown in FIG. 2B or FIG. 2C) can also be constructed.
  • when the user sees the at least one piece of alternative information currently displayed, he or she can touch the fourteenth sub-region and/or the fifteenth sub-region with the thumb, and the device can then, in response to detecting this touch operation, determine the currently selected alternative information from the at least one piece of alternative information currently displayed.
  • specifically, the user can touch the fourteenth sub-region and/or the fifteenth sub-region with the thumb to move an information selection mark over the at least one piece of candidate information currently displayed to the currently selected candidate information; that is, when the thumb touches the fourteenth or the fifteenth sub-region, the information selection mark is moved left or right accordingly, thereby determining the currently selected alternative information.
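The selection-mark movement described above amounts to shifting an index over the displayed candidates. A minimal sketch (the clamping behavior at the ends of the list is an assumption, not specified by the disclosure):

```python
def move_selection(index, candidates, direction):
    """Move the selection mark left (direction=-1) or right (direction=+1),
    clamped so it stays within the candidate list."""
    return max(0, min(len(candidates) - 1, index + direction))
```

For example, repeatedly touching the right-move sub-region would advance the mark one candidate at a time until it reaches the last candidate.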
  • Figure 4 is a flow chart of an input information determination method according to another exemplary embodiment of the present disclosure; the method of this embodiment can be applied to input information determination devices (such as helmets, glasses, gloves, etc.) that implement VR, AR or MR application scenarios.
  • the method includes the following steps S401-S404:
  • step S401 obtain the target image of the hand
  • step S402 a target sub-area touched by the thumb of the hand within the touch area of the hand is determined based on the target image, where the touch area is the area, formed by the part of the hand other than the thumb, that can be touched by the thumb.
  • step S403 determine the target information to be input based on the target sub-area
  • step S404 the target information is input in response to detecting the second trigger instruction for information input.
  • the above-mentioned second triggering instruction may include at least one of the following: a second setting triggering gesture, a second setting triggering sound, or a second setting triggering key being triggered.
  • the device may determine the currently selected alternative information as the target information and input it.
  • the device may determine that the above-mentioned second trigger instruction is detected in response to detecting that the hand exhibits a second set trigger gesture (such as a fist gesture, etc.).
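Steps S401-S404 above can be fitted together as a simple detection loop. This is a high-level sketch under assumed interfaces: `capture_frame`, `detect_target_subarea`, `lookup_target_info` and `second_trigger_detected` are hypothetical callables, not names defined by this disclosure.

```python
def input_information_loop(capture_frame, detect_target_subarea,
                           lookup_target_info, second_trigger_detected):
    """Run the S401-S404 cycle until no more frames are available."""
    entered = []
    while True:
        image = capture_frame()                   # S401: target image of the hand
        if image is None:
            break
        sub_area = detect_target_subarea(image)   # S402: touched target sub-area
        if sub_area is None:
            continue
        target = lookup_target_info(sub_area)     # S403: target information
        if second_trigger_detected(image):        # S404: input on second trigger
            entered.append(target)
    return entered
```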
  • Figure 5 is a schematic diagram of each sub-area and corresponding input information within the touch area of the right hand according to an exemplary embodiment of the present disclosure; as shown in Figure 5, this embodiment can track the position of the thumb tip of the user's hand in real time based on a hand key point detection algorithm (the dots on the hand in the figure).
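One plausible way to relate the tracked thumb tip to a sub-area, sketched here under assumptions: the sub-area centres and the distance tolerance below are hypothetical image coordinates, and the disclosure does not prescribe a nearest-centre rule.

```python
import math

SUB_AREA_CENTERS = {          # hypothetical centres of pre-divided sub-areas
    "1": (0.30, 0.20),
    "2": (0.50, 0.20),
    "3": (0.70, 0.20),
}

def touched_sub_area(thumb_tip, centers=SUB_AREA_CENTERS, max_dist=0.15):
    """Return the sub-area whose centre is nearest to the thumb tip,
    or None when every centre is farther than max_dist (no touch)."""
    name, center = min(centers.items(),
                       key=lambda kv: math.dist(thumb_tip, kv[1]))
    return name if math.dist(thumb_tip, center) <= max_dist else None
```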
  • when the device detects that the posture of the user's hand is the palm-open posture, this can be regarded as detecting the first trigger instruction for determining the input information; optionally, when the device detects that the fingers of the user's hand other than the thumb are in the open state, it can also display, according to the preset correspondence (that is, the relationship between at least one pre-divided sub-area within the touch area of the hand and different input information), the input information corresponding to at least one sub-area in the touch area, that is, the letters, numbers, symbols, input functions, etc. displayed on each sub-area as shown in Figure 5.
  • when the fingertip of the thumb touches a target sub-region among the plurality of preset sub-regions and then returns to the initial open-palm state, this can be regarded as one information input by the user.
  • for example, when it is detected that the thumb long-presses a sub-area corresponding to "0" to "9", the corresponding number can be entered; when it is detected that the thumb briefly presses the sub-area corresponding to "0", a space can be entered; when it is detected that the thumb presses the sub-area corresponding to "1", punctuation marks can be input; and when it is detected that the thumb presses a sub-area corresponding to "2" to "9", pinyin or English can be input.
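The Figure 5 key behavior just described can be sketched as a lookup table. This is a hedged illustration: the letter groups are assumed to follow the common T9 layout, and the punctuation set is a hypothetical example.

```python
SHORT_PRESS = {
    "0": " ",      # short press on "0": space
    "1": ",.?!",   # short press on "1": punctuation candidates (hypothetical set)
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def key_output(key, long_press):
    """Long press enters the digit itself; a short press yields the key's
    letter / space / punctuation content for pinyin or English input."""
    return key if long_press else SHORT_PRESS[key]
```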
  • Figure 6 is a block diagram of an information determination device according to an exemplary embodiment of the present disclosure; the device of this embodiment can be applied to input information determination devices (such as helmets, glasses, gloves, etc.) that implement VR, AR or MR application scenarios.
  • the device includes: a target image acquisition module 110, a sub-region determination module 120 and a target information determination module 130, wherein:
  • the target image acquisition module 110 is used to acquire the target image of the hand
  • the sub-region determination module 120 is configured to determine, based on the target image, a target sub-region touched by the thumb of the hand within the touch area of the hand, where the touch area is the area, formed by the part of the hand other than the thumb, that can be touched by the thumb;
  • the target information determining module 130 is configured to determine the target information to be input based on the target sub-area.
  • the device of this embodiment can accurately input target information based on the detection result of the target image of the user's hand. Since there is no need to rely on an external dedicated input device or controller, the inconvenience of carrying an input device or controller can be avoided, and there is no need to consider the placement of the input device, so the limitations of application scenarios can be overcome. Moreover, since the user directly touches each preset sub-area of the hand with the thumb, compared with the related-art solution of using a controller to assist input on a virtual keyboard, the accuracy of information input can be improved, thereby improving the efficiency of information input.
  • FIG. 7 is a block diagram of yet another information determining device according to an exemplary embodiment of the present disclosure.
  • the device of this embodiment can be applied to input information determination devices (such as helmets, glasses, gloves, etc.) that implement VR, AR or MR application scenarios.
  • the target image acquisition module 210, the sub-region determination module 220 and the target information determination module 230 have the same functions as the target image acquisition module 110, the sub-region determination module 120 and the target information determination module 130 in the embodiment shown in FIG. 6, and will not be described again here.
  • the above-mentioned touch area may include at least one pre-divided sub-area, and the at least one sub-area includes at least one of the following:
  • the area where a finger knuckle is located, the area between the knuckles of two fingers, the area where a fingertip is located, the area between a fingertip and a finger knuckle, the area in the palm near the base of the fingers, the area divided between two closed fingers, and a divided area within the palm.
  • the target information determination module 230 may include:
  • the input information determining unit 231 is configured to determine the input information corresponding to the target sub-area based on a preset correspondence, where the preset correspondence includes the relationship between at least one pre-divided sub-area within the touch area of the hand and different input information;
  • the target information determining unit 232 is configured to determine the target information based on the input information corresponding to the target sub-region.
  • the above-mentioned preset correspondence may include at least one of the following:
  • the corresponding relationship between the sub-region and the input information representing the input function, where the input function includes at least one of the following: delete function, input mode switching function, left-move function, right-move function, up-move function, and down-move function.
  • the preset correspondence also includes:
  • the correspondence between each sub-region in a sub-region combination composed of multiple adjacent or closely positioned sub-regions and each piece of input information in an input information combination, where the input information in the input information combination is input information associated with input content or an input function.
  • the above input information combination may include at least one of the following:
  • An all-numeric input information combination in which each input information includes at least one number as the input content to be selected.
  • the sub-region determination module 220 may include:
  • the first determination unit 221 is configured to determine, based on the target image, the target sub-area touched by the thumb in the touch area of the hand when the hand in the target image is a single hand, the single hand being the left hand or the right hand;
  • the second determination unit 222 is configured to determine, based on the target image, when the hands in the target image are the left hand and the right hand, the target sub-area touched by the left thumb in the touch area of the left hand and the target sub-area touched by the right thumb in the touch area of the right hand.
  • the target information determining unit 232 may also be used to:
  • display the input content included in the input information corresponding to the target sub-area, and determine the target information based on the result of selecting the displayed input content, where the input content includes at least one of the following: letters, numbers, and symbols;
  • the above-mentioned target information determining unit 232 can also be used to:
  • the target information is determined according to a result of selecting each displayed input content to be selected.
  • the target information determining unit 232 may also be configured to determine, based on different touch durations of the thumb on the target sub-area, different input contents contained in the input information corresponding to the target sub-area as the target information.
  • the input information of the target sub-area may include first input content and second input content
  • the target information determining unit 232 may also be used to:
  • in response to detecting that the duration of the thumb's touch on the target sub-area is less than or equal to a preset duration threshold, determine the target information based on the first input content;
  • in response to detecting that the duration of the thumb's touch on the target sub-area is greater than the preset duration threshold, determine the target information based on the second input content.
  • the target image acquisition module 210 may also be configured to perform the operation of acquiring the target image of the hand in response to detecting a first trigger instruction for determining input information, where the first trigger instruction includes at least one of the following: a first set trigger gesture, a first set trigger sound, or a first set trigger button being triggered.
  • the target image acquisition module 210 may also be configured to, in response to detecting that the fingers of the hand other than the thumb are in an open state, display, according to the preset correspondence, the input information corresponding to at least one sub-area in the touch area.
  • the above device may also include:
  • the target information input module 240 is configured to input the target information in response to detecting a second trigger instruction for information input.
  • the second trigger instruction includes at least one of the following: a second set trigger gesture, a second set trigger sound, or a second set trigger button being triggered.
  • FIG. 8 is a block diagram of an input information determining device according to an exemplary embodiment.
  • the device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • device 900 may include one or more of the following components: a processing component 902, a memory 904, a power supply component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and communications component 916.
  • Processing component 902 generally controls the overall operations of device 900, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 902 may include one or more processors 920 to execute instructions to complete all or part of the steps of the above method.
  • processing component 902 may include one or more modules that facilitate interaction between processing component 902 and other components.
  • processing component 902 may include a multimedia module to facilitate interaction between multimedia component 908 and processing component 902.
  • Memory 904 is configured to store various types of data to support operations at device 900 . Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 904 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • Power supply component 906 provides power to the various components of device 900 .
  • Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 900 .
  • Multimedia component 908 includes a screen that provides an output interface between the device 900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 908 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each of the front-facing camera and the rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
  • Audio component 910 is configured to output and/or input audio signals.
  • audio component 910 includes a microphone (MIC) configured to receive external audio signals when device 900 is in operating modes, such as call mode, recording mode, and speech recognition mode. The received audio signals may be further stored in memory 904 or sent via communications component 916 .
  • audio component 910 also includes a speaker for outputting audio signals.
  • the I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 914 includes one or more sensors that provide various aspects of status assessment for device 900 .
  • the sensor component 914 can detect the open/closed state of the device 900 and the relative positioning of components, such as the display and keypad of the device 900; the sensor component 914 can also detect a change in position of the device 900 or a component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and temperature changes of the device 900.
  • Sensor assembly 914 may also include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 916 is configured to facilitate wired or wireless communications between device 900 and other devices.
  • the device 900 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G, or a combination thereof.
  • the communication component 916 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 916 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • device 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components for performing the above method.
  • a non-transitory computer-readable storage medium including instructions, such as the memory 904 including instructions, is also provided; the instructions are executable by the processor 920 of the device 900 to complete the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

The present disclosure relates to an input information determination method and apparatus, a device, and a storage medium. The method comprises: acquiring a target image of a hand; determining, on the basis of the target image, a target sub-area touched by the thumb of the hand within a touch area of the hand, the touch area being an area, formed by the hand other than the thumb, that can be touched by the thumb; and determining, on the basis of the target sub-area, target information to be input. By means of the present disclosure, target information can be input accurately on the basis of a detection result of a target image of a user's hand, so that the inconvenience of carrying an input device or a controller is avoided, thereby improving both the accuracy and the efficiency of information input.
PCT/CN2022/097433 2022-06-07 2022-06-07 Procédé et appareil de détermination d'informations d'entrée, dispositif, et support de stockage WO2023236052A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280004383.2A CN117546124A (zh) 2022-06-07 2022-06-07 输入信息确定方法、装置、设备及存储介质
PCT/CN2022/097433 WO2023236052A1 (fr) 2022-06-07 2022-06-07 Procédé et appareil de détermination d'informations d'entrée, dispositif, et support de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/097433 WO2023236052A1 (fr) 2022-06-07 2022-06-07 Procédé et appareil de détermination d'informations d'entrée, dispositif, et support de stockage

Publications (1)

Publication Number Publication Date
WO2023236052A1 true WO2023236052A1 (fr) 2023-12-14

Family

ID=89117395

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/097433 WO2023236052A1 (fr) 2022-06-07 2022-06-07 Procédé et appareil de détermination d'informations d'entrée, dispositif, et support de stockage

Country Status (2)

Country Link
CN (1) CN117546124A (fr)
WO (1) WO2023236052A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215375A1 (fr) * 2016-06-12 2017-12-21 齐向前 Dispositif et procédé d'entrée d'informations
CN111078002A (zh) * 2019-11-20 2020-04-28 维沃移动通信有限公司 一种悬空手势识别方法及终端设备
WO2020170581A1 (fr) * 2019-02-18 2020-08-27 株式会社Nttドコモ Système de commande d'entrée
CN114418865A (zh) * 2020-10-28 2022-04-29 北京小米移动软件有限公司 图像处理方法、装置、设备及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215375A1 (fr) * 2016-06-12 2017-12-21 齐向前 Dispositif et procédé d'entrée d'informations
WO2020170581A1 (fr) * 2019-02-18 2020-08-27 株式会社Nttドコモ Système de commande d'entrée
CN111078002A (zh) * 2019-11-20 2020-04-28 维沃移动通信有限公司 一种悬空手势识别方法及终端设备
CN114418865A (zh) * 2020-10-28 2022-04-29 北京小米移动软件有限公司 图像处理方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN117546124A (zh) 2024-02-09

Similar Documents

Publication Publication Date Title
CN105446646B (zh) 基于虚拟键盘的内容输入方法、装置及触控设备
CN103885632B (zh) 输入方法和装置
JP2017505969A (ja) アプリケーション制御方法、装置、プログラム及び記録媒体
CN105260115A (zh) 实现单手模式的方法、装置及智能终端
CN107168566B (zh) 操作模式控制方法、装置及终端电子设备
CN111522498A (zh) 触控响应方法、装置及存储介质
US20230393732A1 (en) Virtual keyboard setting method and apparatus, and storage medium
CN111638810B (zh) 触控方法、装置及电子设备
WO2023125155A1 (fr) Procédé d'entrée et appareil d'entrée
EP4290338A1 (fr) Procédé et appareil d'entrée d'informations et support d'informations
WO2023236052A1 (fr) Procédé et appareil de détermination d'informations d'entrée, dispositif, et support de stockage
CN112650437B (zh) 光标控制方法及装置、电子设备、存储介质
CN107168631B (zh) 应用程序关闭方法、装置及终端电子设备
CN111092971A (zh) 一种显示方法、装置和用于显示的装置
CN114296628A (zh) 显示页面控制方法、装置、键盘、电子设备和存储介质
CN115543064A (zh) 界面显示控制方法、界面显示控制装置及存储介质
CN106775329B (zh) 触发点击事件的方法及装置、电子设备
CN112068730A (zh) 报点输出控制方法、报点输出控制装置及存储介质
CN113377250A (zh) 应用程序界面显示方法及装置、终端设备
CN106843691B (zh) 移动终端的操作控制方法及装置
CN113495666B (zh) 终端控制方法、终端控制装置及存储介质
CN105373333B (zh) 触控响应方法及装置
CN113434080B (zh) 信息输入方法和装置
WO2022127063A1 (fr) Procédé et dispositif d'entrée, et dispositif permettant l'entrée
CN114527919B (zh) 一种信息展示方法、装置和电子设备

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202280004383.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22945199

Country of ref document: EP

Kind code of ref document: A1