CN107193373B - Information processing method and electronic equipment - Google Patents


Info

Publication number
CN107193373B
CN107193373B (application CN201710348619.5A)
Authority
CN
China
Prior art keywords
object set
operation body
action
inputtable
objects
Prior art date
Legal status
Active
Application number
CN201710348619.5A
Other languages
Chinese (zh)
Other versions
CN107193373A (en)
Inventor
蒋晓华
金正操
陆建强
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201710348619.5A
Publication of CN107193373A
Application granted
Publication of CN107193373B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/02 — Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 — Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 — Character input methods
    • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Abstract

The invention discloses an information processing method and an electronic device. The information processing method includes: collecting, through a collection unit, a first action of an operation body in a collection area; determining, according to the first action, a corresponding candidate object set from an inputtable object set; displaying the candidate object set in a display area; collecting, through the collection unit, a second action of the operation body in the collection area; determining, according to the second action, a corresponding selection object set from the candidate object set; and triggering an operation instruction corresponding to an inputtable object in the selection object set. In this scheme, the first and second actions issued in sequence by the operation body gradually narrow the selectable range of inputtable objects, finally locating the required inputtable object and triggering the corresponding operation instruction, so that an operation instruction corresponding to an inputtable object in the electronic device is accurately triggered through spatial action interaction.

Description

Information processing method and electronic equipment
This application is a divisional application of the application filed on September 3, 2012, with application number 201210322235.3 and entitled "Information processing method and electronic device".
Technical Field
The present invention relates to the field of electronic devices, and in particular, to an information processing method and an electronic device.
Background
With the development of science and technology, electronic products keep enriching and facilitating public life, and people's ever-growing application demands drive the expansion of the functions of electronic devices.
In the prior art, interaction based on spatial actions is widely applied to smart televisions, head-mounted devices, large-screen display scenarios, and the like. A user can control an electronic device simply by issuing a specified spatial action, without manually operating the related control elements of the device (such as a remote controller or a keyboard), thereby improving user experience. For example, swinging an arm upwards can raise the volume of a smart television, and swinging an arm leftwards can switch its television channel.
As demands grow, people hope to accurately locate an inputtable object in a display area of the electronic device by issuing a corresponding spatial action, and then trigger the operation instruction corresponding to that inputtable object. For example, specific spatial actions could locate certain keys of a keyboard displayed on a screen, thereby inputting the characters and/or instructions on those keys. Therefore, how to accurately trigger, through spatial action interaction, the operation instruction corresponding to an inputtable object in an electronic device has become a problem of great concern.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide an information processing method and an electronic device, so as to accurately trigger, through spatial action interaction, an operation instruction corresponding to an inputtable object in the electronic device. The technical scheme is as follows:
in one aspect, an embodiment of the present invention provides an information processing method, which is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area; the method comprises the following steps:
collecting a first action of an operation body in the collection area through the collection unit;
determining a corresponding candidate object set from the inputtable object set according to the first action, wherein the candidate object set belongs to the inputtable object set;
displaying the set of candidate objects in the display area;
collecting a second action of the operation body in the collection area through the collection unit;
determining a corresponding selection object set from the candidate object set according to the second action, wherein the selection object set belongs to the candidate object set;
and triggering an operation instruction corresponding to an inputtable object in the selection object set.
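As a minimal illustration of the two-stage narrowing described in the steps above, the following Python sketch models actions as strings and inputtable objects as set elements. The action names and mapping tables are hypothetical assumptions for illustration only, not part of the disclosed method.

```python
# Hypothetical sketch of the two-stage narrowing: the action encodings and
# the mapping tables are illustrative assumptions, not part of the patent.

INPUTTABLE_OBJECTS = {"A", "B", "C", "D", "E", "F"}

# The first action narrows the inputtable object set to a candidate set.
FIRST_ACTION_MAP = {
    "move_left": {"A", "B", "C"},
    "move_right": {"D", "E", "F"},
}

# The second action narrows the candidate set to the selection set.
SECOND_ACTION_MAP = {
    "bend_index": {"A", "D"},
    "bend_middle": {"B", "E"},
}

def select_objects(first_action: str, second_action: str) -> set:
    candidates = FIRST_ACTION_MAP[first_action] & INPUTTABLE_OBJECTS
    assert candidates <= INPUTTABLE_OBJECTS  # candidate set belongs to inputtable set
    selected = SECOND_ACTION_MAP[second_action] & candidates
    assert selected <= candidates  # selection set belongs to candidate set
    return selected
```

For instance, `select_objects("move_left", "bend_middle")` yields `{"B"}`: the first action keeps A, B and C, and the second keeps only B among them.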
On the other hand, an embodiment of the present invention further provides an electronic device, including a display unit and a collection unit, where the display unit has a display area, and the collection unit has a collection area, and the electronic device further includes:
the first action acquisition unit is used for acquiring a first action of the operation body in the acquisition area through the acquisition unit;
a candidate object determining unit, configured to determine, according to the first action, a corresponding candidate object set from an inputtable object set, where the candidate object set belongs to the inputtable object set;
a candidate object display unit configured to display the candidate object set in the display area;
the second action acquisition unit is used for acquiring a second action of the operation body in the acquisition area through the acquisition unit;
a selectable object determining unit, configured to determine, according to the second action, a corresponding selected object set from the candidate object sets, where the selected object set belongs to the candidate object set;
and the instruction triggering unit is used for triggering the operation instruction corresponding to the inputtable object in the selection object set.
According to the technical scheme provided by the embodiment of the invention, a corresponding candidate object set is determined from the inputtable object set according to a first action, collected by the collection unit, that the operation body issues in the collection area, and the candidate object set is displayed; a corresponding selection object set is then determined from the candidate object set according to a second action of the operation body collected by the collection unit in the collection area, and the operation instruction corresponding to the inputtable object in the selection object set is triggered. In this scheme, the first and second actions issued in sequence by the operation body gradually narrow the selectable range of inputtable objects, finally locating the required inputtable object and triggering the corresponding operation instruction, so that an operation instruction corresponding to an inputtable object in the electronic device is accurately triggered through spatial action interaction.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a first flowchart of an information processing method according to an embodiment of the present invention;
fig. 2 is a second flowchart of an information processing method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of an information processing method according to an embodiment of the present invention;
FIG. 4 is a fourth flowchart of an information processing method according to an embodiment of the present invention;
fig. 5 is a fifth flowchart of an information processing method according to an embodiment of the present invention;
fig. 6 is a sixth flowchart of an information processing method according to an embodiment of the present invention;
fig. 7 is a seventh flowchart of an information processing method according to an embodiment of the present invention;
fig. 8 is an eighth flowchart of an information processing method according to an embodiment of the present invention;
fig. 9 is a ninth flowchart of an information processing method according to an embodiment of the present invention;
fig. 10 is a tenth flowchart of an information processing method according to an embodiment of the present invention;
fig. 11 is a schematic diagram illustrating an application scenario of manipulating a QWERTY keyboard according to an embodiment of the present invention;
FIG. 12 is another diagram illustrating an application scenario for manipulating a QWERTY keyboard according to an embodiment of the present invention;
fig. 13 is a schematic diagram of an application scenario of manipulating a cursor according to an embodiment of the present invention;
fig. 14 is another schematic diagram of an application scenario of manipulating a cursor according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 16 is a top view of the basic configuration of a head-mounted device provided by the present invention;
FIG. 17 is a top view of an example situation in which a user is wearing a head mounted device;
fig. 18 is a side view of an example situation where a user wears a head mounted device.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to achieve the purpose of accurately triggering an operation instruction corresponding to an inputtable object in electronic equipment through spatial action interaction, the embodiment of the invention provides an information processing method and the electronic equipment.
First, an information processing method according to an embodiment of the present invention will be described below.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. In practical application, the electronic device may be a smart television, a head-mounted device, a notebook, or the like, which has a camera and a display screen.
As shown in fig. 1, an information processing method may include:
s101, collecting a first action of an operation body in the collection area through the collection unit;
s102, according to the first action, determining a corresponding candidate object set from the inputtable object set;
when an operation instruction corresponding to an inputtable object in the electronic device needs to be triggered through spatial action interaction, the operation body issues a first action in the collection area of the collection unit. The electronic device collects this first action through the collection unit and, according to it, screens the corresponding inputtable objects out of the inputtable object set to form the candidate object set. The candidate object set belongs to the inputtable object set; that is, the candidate object set contains no more inputtable objects than the inputtable object set does.
It should be noted that the inputtable objects in the inputtable object set differ across application scenarios. For example, when a character and/or command input means (e.g., a keyboard) is operated by a spatial action, the inputtable objects may be key objects; when the cursor in the display unit is operated by a spatial action, the inputtable objects may be the operable positions of the cursor in the display area; and when an application icon or a menu in the display area is operated by a spatial action, the inputtable objects may be application icons or menu options. Meanwhile, provided the operation body can issue distinguishable actions, it may be both hands of a user, one hand, or another specific device; any of these is reasonable.
S103, displaying the candidate object set in the display area;
after the candidate object set is determined from the first action collected by the collection unit, the candidate object set can be displayed in the display area, so that the inputtable objects in the candidate object set are shown to the user and the user has a better visual experience.
Wherein, in the display area, displaying the candidate object set may include:
displaying a candidate object set in the inputtable object set in the display area, and hiding inputtable objects in the inputtable object set except the candidate object set.
It is to be understood that, when the inputtable object set is displayed in the display area before the first action is issued, the inputtable objects outside the candidate object set may be hidden once the candidate object set is determined, so that only the inputtable objects in the candidate object set remain in the display area. When the inputtable object set is not displayed before the first action is issued, only the inputtable objects in the candidate object set need be displayed once the candidate object set is determined.
Furthermore, in order for the user to clearly see all inputtable objects in the inputtable object set while distinguishing those in the candidate object set, and thus to have a better user experience, displaying the candidate object set in the display area may include:
in the display area, the set of candidate objects is displayed with a first effect, and inputtable objects other than the set of candidate objects in the set of inputtable objects are displayed with a second effect, wherein the first effect is different from the second effect.
The first effect and the second effect may be distinguished by solid versus dashed rendering, by color, by brightness, and the like; any such distinction is reasonable.
It will be understood by those skilled in the art that the above-mentioned display manner of the candidate object set is only an example, and should not be construed as limiting the embodiments of the present invention.
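The two display effects can be pictured, for example, as two text renderings: candidates shown bracketed (a stand-in for the first, highlighted effect) and the remaining inputtable objects plain (the second, dimmed effect). The markers below are purely illustrative assumptions; any visual distinction would serve.

```python
# Illustrative rendering of the two effects: candidates are shown bracketed
# ("highlighted", the first effect) and the rest plain ("dimmed", the second
# effect). The markers are assumptions; any visual distinction would do.
def render_labels(inputtable, candidates):
    lines = []
    for obj in sorted(inputtable):
        if obj in candidates:
            lines.append(f"[{obj}]")  # first effect
        else:
            lines.append(f" {obj} ")  # second effect
    return lines
```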
S104, collecting a second action of the operation body in the collection area through the collection unit;
s105, according to the second action, determining a corresponding selected object set from the candidate object set;
after the candidate object set is displayed in the display area, the operation body can continue to issue a second action in the collection area. The electronic device collects the second action through the collection unit and then screens the corresponding inputtable objects out of the candidate object set according to the second action to form the selection object set, completing the selection of inputtable objects. The selection object set belongs to the candidate object set; that is, the selection object set contains no more inputtable objects than the candidate object set does.
It will be appreciated that this second action is different from the first action to ensure that the determination of the inputtable object is accurately completed after the current action is acquired.
And S106, triggering an operation instruction corresponding to the inputtable object in the selection object set.
When the selection object set is determined from the second action collected by the collection unit, the operation instruction corresponding to the inputtable object in the selection object set can be triggered. For example, when the inputtable object is a key object of a character and/or command input device, determining the selection object set can trigger the character and/or command input instruction corresponding to that key object; when the inputtable object is an operable position of the cursor, determining the selection object set can trigger the cursor selection instruction corresponding to that operable position; and when the inputtable object is an application icon in the display area, determining the selection object set can trigger the program start instruction corresponding to that application icon.
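The instruction-triggering step can be sketched as a dispatch on the type of the selected inputtable object, mirroring the three examples in the text (key object, cursor position, application icon). The type labels and returned instruction strings below are illustrative assumptions.

```python
# Hypothetical dispatch from a selected inputtable object to its operation
# instruction, mirroring the three examples in the text; the type labels and
# returned instruction strings are illustrative assumptions.
def trigger(obj_type: str, obj: str) -> str:
    if obj_type == "key":
        return f"input:{obj}"          # character and/or command input
    if obj_type == "cursor_pos":
        return f"cursor_select:{obj}"  # cursor selection instruction
    if obj_type == "app_icon":
        return f"launch:{obj}"         # program start instruction
    raise ValueError(f"unknown object type: {obj_type}")
```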
According to the technical scheme provided by the embodiment of the invention, a corresponding candidate object set is determined from the inputtable object set according to a first action, collected by the collection unit, that the operation body issues in the collection area, and the candidate object set is displayed; a corresponding selection object set is then determined from the candidate object set according to a second action of the operation body collected by the collection unit in the collection area, and the operation instruction corresponding to the inputtable object in the selection object set is triggered. In this scheme, the first and second actions issued in sequence by the operation body gradually narrow the selectable range of inputtable objects, finally locating the required inputtable object and triggering the corresponding operation instruction, so that an operation instruction corresponding to an inputtable object in the electronic device is accurately triggered through spatial action interaction.
The following describes the information processing method provided by an embodiment of the present invention, taking as an example the case where the first action is a movement, in a given direction, of the operation body in a first form within the collection area.
The first form is one of a plurality of forms that the operation body can take, and different operation bodies can take different forms. For example, when the operation body is a single hand of a user, the forms it can take include at least a clenched fist, five fingers spread, five fingers together, and the like; when the operation body is another specific device, its forms may include a first state of the device, a second state of the device, and so on.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. In practical application, the electronic device may be a smart television, a head-mounted device, a notebook, or the like, which has a camera and a display screen.
As shown in fig. 2, an information processing method may include:
s201, when the operation body moves in the collection area in the first form, collecting the current movement direction of the operation body in the collection area through the collection unit;
in this embodiment, the preset trigger condition for acquiring the first action is: the operating body moves in the acquisition region in a first configuration. When the operation body moves in the acquisition area in the first form, the electronic device may acquire, by the acquisition unit, a current movement direction of the operation body in the acquisition area, where the current movement direction may include: up, down, left, right, left up, right up, left down, right down, etc.
S202, taking an adjacent set in the current motion direction of a preset initial set in the input object set as a candidate object set;
if the operation body merely appears in the collection area in the first form without moving, a preset initial set is used as the candidate object set, regardless of where the operation body appears in the collection area. When the operation body is detected moving in the collection area in the first form, its current movement direction is detected, and the set adjacent to the preset initial set in that movement direction is used as the candidate object set.
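One way to picture "the adjacent set in the current movement direction" is to assume the inputtable object set is partitioned into a 3x3 grid of subsets with the preset initial set at the centre; each of the eight movement directions from S201 then selects the neighbouring cell. The grid layout, cell names, and direction encodings below are assumptions for illustration only.

```python
from typing import Optional

# The inputtable object set is assumed (for illustration) to be partitioned
# into a 3x3 grid of subsets, named here by compass labels.
GRID = [
    ["NW", "N", "NE"],
    ["W",  "C", "E"],
    ["SW", "S", "SE"],
]
INITIAL = (1, 1)  # the preset initial set sits at the centre cell

# Eight movement directions, as listed in S201, mapped to grid offsets.
DIRECTIONS = {
    "up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1),
    "up_left": (-1, -1), "up_right": (-1, 1),
    "down_left": (1, -1), "down_right": (1, 1),
}

def candidate_cell(direction: Optional[str]) -> str:
    """Name of the candidate subset; None means the body appeared but did not move."""
    if direction is None:
        row, col = INITIAL  # no movement: the preset initial set itself
    else:
        d_row, d_col = DIRECTIONS[direction]
        row, col = INITIAL[0] + d_row, INITIAL[1] + d_col
    return GRID[row][col]
```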
S203, in the display area, displaying the candidate object set with a first effect, and displaying inputtable objects except the candidate object set in the inputtable object set with a second effect;
the first effect is different from the second effect; they may be distinguished by solid versus dashed rendering, by color, by brightness, and the like, any of which is reasonable.
Of course, after the candidate object set is obtained, the candidate object set may be displayed in other manners, for example: displaying a candidate object set in the inputtable object set in the display area, and hiding inputtable objects in the inputtable object set except the candidate object set.
S204, when it is detected that the operation body forms a second form in the collection area, collecting the motion change of the operation body through the collection unit;
the preset trigger condition for collecting the second action is that the operation body forms a second form in the collection area, where the second form is one of the plurality of forms the operation body can take. It will be appreciated that the second form is different from the first form. For example, when the operation body is a single hand of a user, the first form may be a clenched fist and the second form five fingers spread, or the first form a clenched fist and the second form five fingers together; when the operation body is another specific device, the first form may be a first state of the device and the second form a second state of the device. Any of these is reasonable.
When it is detected that the operation body forms the second form in the collection area, the motion change of the operation body can be collected through the collection unit, and subsequent processing is performed according to the collected motion change.
S205, determining the specific part of the operation body which changes according to the motion change of the operation body;
wherein the specific portion belongs to the operation body. For example: when the operation body is a single hand of a user, the specific part which is changed can be a finger which is bent, or a finger of which the bending degree exceeds a threshold value, and the like; when the operating body is a specific device, the specific portion that changes may be a portion that can be changed by the specific device.
S206, forming a selection object set from the inputtable objects, in the candidate object set, that correspond to the changed specific part of the operation body;
it should be noted that the inputtable objects corresponding to each specific part of the operation body that can change are preset; once the changed specific part is determined, the corresponding inputtable objects can be determined, and these constitute the selection object set. For example, when the inputtable objects are the keys of a QWERTY keyboard (i.e., a full keyboard) and the operation body is both hands of the user, bending the middle finger of the left hand may correspond to the keys E, D and C, and bending the middle finger of the right hand may correspond to the keys I, K and , (comma).
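The preset correspondence between a changed specific part and its inputtable objects can be sketched as a lookup table, following the QWERTY example in the paragraph above. The tuple keys `(hand, finger)` are an assumed encoding, not part of the disclosure.

```python
# Illustrative preset table from (hand, bent finger) to key objects, following
# the QWERTY example in the text; the tuple keys are an assumed encoding.
FINGER_KEY_MAP = {
    ("left", "middle"): {"E", "D", "C"},
    ("right", "middle"): {"I", "K", ","},
}

def selection_set(candidates: set, hand: str, finger: str) -> set:
    """Keys of the candidate set that correspond to the changed specific part."""
    return FINGER_KEY_MAP[(hand, finger)] & candidates
```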
And S207, triggering an operation instruction corresponding to the inputtable object in the selection object set.
When the selection object set is determined, the operation instruction corresponding to the inputtable object in the selection object set can be triggered. For example, when the inputtable object is a key object of a character and/or command input device, determining the selection object set can trigger the character and/or command input instruction corresponding to that key object; when the inputtable object is an operable position of the cursor, determining the selection object set can trigger the cursor selection instruction corresponding to that operable position; and when the inputtable object is an application icon in the display area, determining the selection object set can trigger the program start instruction corresponding to that application icon.
Therefore, in this scheme, the selectable range of inputtable objects is gradually narrowed according to the movement direction of the operation body moving in the first form and the action change of the operation body after it forms the second form; the required inputtable object is finally located and the corresponding operation instruction is triggered, so that an operation instruction corresponding to an inputtable object in the electronic device is accurately triggered through spatial action interaction.
The following describes an information processing method provided by an embodiment of the present invention, taking as an example the case where the first action is the position of the operation body while it is stationary in the collection area.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. In practical application, the electronic device may be a smart television, a head-mounted device, a notebook, or the like, which has a camera and a display screen.
As shown in fig. 3, an information processing method may include:
s301, when the operation body is in a static state in the acquisition area, acquiring the current position information of the operation body in the acquisition area through the acquisition unit;
in this embodiment, the preset trigger condition for acquiring the first action is as follows: the operating body is in a static state in the collection area. When the operation body is detected to be in a static state in the acquisition area, the acquisition unit can acquire the current position information of the operation body in the acquisition area, and further the current position information is utilized to determine a subsequent candidate object set.
It is understood that the position information of the operation body may be its position relative to the display unit, for example: the operation body is located to the front-left, front-right, upper-front, or lower-front of the display unit; alternatively, it may be its position relative to parts of the user's body, for example: the operation body is located in front of the user's head, neck, chest, and so on.
Further, for more accuracy, the trigger condition for acquiring the first action may be set as: the operating body is in a static state in the collection area in a first form, wherein the first form is one of a plurality of forms which can be formed by the operating body.
S302, obtaining the distribution of the inputtable object set and its mapping relation with the position information of the operation body;
S303, forming the inputtable objects in the inputtable object set that correspond to the current position information into a candidate object set according to the mapping relation and the current position information;
When the current position information of the operation body is acquired, the preset distribution of the inputtable object set and its mapping relation with the position information of the operation body can be obtained, where the mapping relation indicates the inputtable objects respectively corresponding to different positions of the operation body.
After the mapping relation is determined, the inputtable objects corresponding to the current position information can be obtained from the inputtable object set according to the mapping relation and the current position information, and the obtained inputtable objects form the candidate object set.
S304, displaying the candidate object set in the inputtable object set in the display area, and hiding the inputtable objects in the inputtable object set other than the candidate object set;
After the candidate object set is obtained, it may be displayed in the display area while the inputtable objects in the inputtable object set other than the candidate object set are hidden.
Of course, after the candidate object set is obtained, the candidate object set may be displayed in other manners, for example: the set of candidate objects is displayed with a first effect and inputtable objects in the set of inputtable objects other than the set of candidate objects are displayed with a second effect.
S305, when the operation body is detected to form the second form in the acquisition area, acquiring the action change of the operation body through the acquisition unit;
Wherein the second form is one of the plurality of forms that the operation body can form.
S306, determining the specific part of the operation body that changes according to the action change of the operation body;
Wherein the specific part belongs to the operation body.
S307, forming the inputtable objects in the candidate object set that correspond to the changed specific part of the operation body into a selection object set;
S308, triggering an operation instruction corresponding to the inputtable object in the selection object set.
In this embodiment, steps S305 to S308 are similar to steps S204 to S207 in the above embodiment, and are not repeated herein.
Thus, in this scheme, the selectable range of inputtable objects is gradually narrowed according to the current position information of the operation body while stationary in the acquisition area and the action change of the operation body after it forms the second form; the required inputtable object is finally located and the corresponding operation instruction is triggered, achieving the purpose of accurately triggering, through spatial action interaction, the operation instruction corresponding to an inputtable object in the electronic device.
An information processing method provided by an embodiment of the present invention is described below for the specific application scenario in which the inputtable object set consists of the key objects in a character and/or command input device of the electronic device.
The character and/or command input device may be a qwerty keyboard, a numeric keyboard, an abc keyboard, or the like, where the qwerty keyboard is the full keyboard of a notebook computer, and the abc keyboard is a mobile-phone multi-tap keyboard in which abc corresponds to the number 2 and def corresponds to the number 3. The key objects in the inputtable object set may include character keys (e.g., letter keys such as Q, W, R, Y, and G, and numeric keys such as 1, 2, 3, and 4) and command keys (e.g., the Shift key, the Ctrl key, the Enter key, etc.). The selection object set may consist of at least one character key, or of at least one command key, or of a combination of character keys and command keys, for example: Shift + A, Ctrl + X, Ctrl + S, and so on; all of these are reasonable.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. In practical application, the electronic device may be a smart television, a head-mounted device, a notebook, or the like, which has a camera and a display screen.
As shown in fig. 4, an information processing method may include:
S401, when the operation body moves in the acquisition area in the first form, acquiring, through the acquisition unit, the current movement direction of the operation body in the acquisition area;
In this embodiment, the preset trigger condition for acquiring the first action is that the operation body moves in the acquisition area in the first form. When this occurs, the current movement direction of the operation body in the acquisition area may be acquired by the acquisition unit, where the current movement direction may include: up, down, left, right, upper-left, upper-right, lower-left, lower-right, and so on.
The first form is one of the plurality of forms that the operation body can form, and different operation bodies can form different forms. For example: when the operation body is a single hand of the user, the forms it can form at least include a fist form, a five-fingers-spread form, a five-fingers-together form, and the like; when the operation body is another specific device, the forms it can form may include a first state of the specific device, a second state of the specific device, and so on.
S402, forming a candidate object set by key objects corresponding to adjacent lines in the current motion direction of a preset initial line in the character and/or command input device;
If the operation body merely appears in the acquisition area in the first form without moving, then regardless of where it appears, the key objects corresponding to a preset initial row in the character and/or command input device form the candidate object set; when the operation body is detected to move in the acquisition area in the first form, its current movement direction is detected, and the key objects corresponding to the row adjacent to the preset initial row in that movement direction form the candidate object set.
For example, in the qwerty keyboard shown in fig. 11, the inputtable object set includes the key objects in row 1, row 2, and row 3, and row 2 is preset as the initial row. When the operation body merely appears in the acquisition area in the first form without moving, the key objects in row 2 form the candidate object set; when the operation body moves upward in the first form, the key objects in row 1 form the candidate object set; and when the operation body moves downward in the first form, the key objects in row 3 form the candidate object set.
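The row selection of steps S401 and S402 can be sketched as follows; the row contents and the rule that unlisted directions fall back to the initial row are illustrative assumptions:

```python
# A sketch of steps S401-S402 under the fig. 11 layout: rows 1-3 of a
# qwerty keyboard with row 2 preset as the initial row. Moving up selects
# the row above the initial row; moving down, the row below.
# Row contents are illustrative.

ROWS = {
    1: ["Q", "W", "E", "R", "T", "Y", "U", "I", "O", "P"],
    2: ["A", "S", "D", "F", "G", "H", "J", "K", "L"],
    3: ["Z", "X", "C", "V", "B", "N", "M"],
}
INITIAL_ROW = 2

def candidate_row(direction):
    """Pick the candidate object set from the movement direction.

    direction is None when the operation body appears in the first form
    but does not move; the preset initial row is then used.
    """
    if direction is None:
        row = INITIAL_ROW
    elif direction == "up":
        row = INITIAL_ROW - 1
    elif direction == "down":
        row = INITIAL_ROW + 1
    else:
        row = INITIAL_ROW  # other directions keep the initial row (assumption)
    return ROWS.get(row, [])
```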
S403, in the display area, displaying the candidate object set with a first effect, and displaying the key objects in the inputtable object set other than the candidate object set with a second effect;
The first effect differs from the second effect; the two effects may be distinguished by solid versus translucent rendering, by color, by brightness, and so on, all of which are reasonable.
Of course, after the candidate object set is obtained, it may also be displayed in other manners, for example: displaying the candidate object set in the display area and hiding the key objects in the inputtable object set other than the candidate object set.
S404, when the operation body is detected to form the second form in the acquisition area, acquiring the action change of the operation body through the acquisition unit;
The preset trigger condition for acquiring the second action is that the operation body forms the second form in the acquisition area, where the second form is one of the plurality of forms the operation body can form. It will be appreciated that the second form differs from the first form. For example: when the operation body is a single hand of the user, the first form may be a fist form and the second form a five-fingers-spread form, or the first form may be a fist form and the second form a five-fingers-together form; alternatively, when the operation body is another specific device, the first form may be a first state of the specific device and the second form a second state of that device, which is also reasonable.
S405, determining the specific part of the operation body that changes according to the action change of the operation body;
Wherein the specific part belongs to the operation body. For example: when the operation body is a single hand of the user, the changed specific part may be a finger that bends, or a finger whose bending degree exceeds a threshold, and so on; when the operation body is a specific device, the changed specific part may be any part of that device that can change.
S406, forming the key objects in the candidate object set that correspond to the changed specific part of the operation body into a selection object set;
It should be noted that the inputtable objects corresponding to each specific part of the operation body are preset, so once the changed specific part is determined, the corresponding inputtable objects can be determined and formed into the selection object set. For example: when the inputtable objects are the keys of a qwerty keyboard (i.e., a full keyboard) and the operation body is both hands of the user, the middle finger of the left hand, when bent, may correspond to the keys E, D, and C, and the middle finger of the right hand, when bent, may correspond to the keys I, K, and , (<).
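Step S406 amounts to intersecting a preset finger-to-keys mapping with the current candidate object set. The finger assignments below follow the touch-typing columns named in the text; they are illustrative, not an exhaustive layout:

```python
# A sketch of step S406: a preset mapping from the changed specific part of
# the operation body (here, a bent finger) to key objects, intersected with
# the current candidate object set. Only a small illustrative subset of
# fingers is mapped.

FINGER_TO_KEYS = {
    "left_middle":  {"E", "D", "C"},
    "right_middle": {"I", "K", ","},
}

def selection_object_set(bent_fingers, candidate_set):
    """Form the selection object set from the changed specific parts."""
    selected = set()
    for finger in bent_fingers:
        # Keep only keys that are both assigned to this finger and
        # present in the candidate object set.
        selected |= FINGER_TO_KEYS.get(finger, set()) & set(candidate_set)
    return selected
```

For instance, with the candidate set equal to the top letter row, a bent left middle finger contributes only E, because D and C lie in other rows.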
In practical applications, when some specific parts of the operation body change, other parts may move along with them, which can make the selection object set inaccurate. For example: when the operation body is one or both hands of the user and the user intends to bend the middle finger, the ring finger or the index finger may bend involuntarily with it; or when the user intends to bend the little finger, the ring finger may follow. Therefore, to further improve the accuracy of constructing the selection object set, determining the selection object set from the candidate object set according to the determined action change of the operation body may further include:
determining, according to the action change of the operation body, the specific part whose change amplitude exceeds a preset amplitude threshold, where the specific part belongs to the operation body; and
forming the key objects in the candidate object set that correspond to the specific part whose change amplitude exceeds the preset amplitude threshold into the selection object set;
or,
determining, according to the action change of the operation body, the specific part with the largest change amplitude among the changed specific parts, where the specific part belongs to the operation body; and
forming the key objects in the candidate object set that correspond to the specific part with the largest change amplitude into the selection object set.
The preset amplitude threshold is set according to the actual application scenario and is not described further here.
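The two refinements above can be sketched as follows; the amplitude scale and threshold value are illustrative assumptions about what the acquisition unit reports:

```python
# A sketch of the two accuracy refinements: keep only the specific parts
# whose change amplitude exceeds a preset threshold, or keep only the
# single part with the largest amplitude. Amplitudes are assumed to be
# normalized bend measurements in [0, 1] supplied by the acquisition unit.

AMPLITUDE_THRESHOLD = 0.5  # preset amplitude threshold (assumed scale)

def parts_over_threshold(amplitudes):
    """amplitudes: dict mapping specific part -> change amplitude."""
    return {part for part, amp in amplitudes.items() if amp > AMPLITUDE_THRESHOLD}

def part_with_max_amplitude(amplitudes):
    """Return the single specific part whose change amplitude is largest."""
    return max(amplitudes, key=amplitudes.get)
```

The first variant can select several parts at once (e.g. for key combinations); the second always yields exactly one part, suppressing any involuntary linked movement.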
S407, triggering the input instruction of the character and/or command corresponding to each key object in the selection object set.
After the selection object set is determined, the input instruction of the character and/or command corresponding to each key object in the set can be triggered. When the selection object set includes only character keys, the input instructions of the corresponding characters can be triggered; when it includes only command keys, the input instructions of the corresponding commands can be triggered; and when it includes both a character key and a command key, the operation instruction of the corresponding character-and-command combination can be triggered.
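The dispatch in step S407 depends only on the composition of the selection object set. A minimal sketch, in which the key classification and instruction strings are illustrative assumptions:

```python
# A sketch of step S407: decide what to trigger from the composition of the
# selection object set. Character keys trigger character input, command keys
# trigger command input, and a mixed set triggers the combination
# (e.g. Shift + A). The command-key list is a small illustrative subset.

COMMAND_KEYS = {"Shift", "Ctrl", "Enter"}

def build_instruction(selection_set):
    commands = sorted(k for k in selection_set if k in COMMAND_KEYS)
    characters = sorted(k for k in selection_set if k not in COMMAND_KEYS)
    if commands and characters:
        # Character key plus command key: trigger the combination.
        return "combo:" + "+".join(commands + characters)
    if commands:
        return "command:" + "+".join(commands)
    return "characters:" + "".join(characters)
```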
Thus, in this scheme, the selectable range of key objects is gradually narrowed according to the movement direction of the operation body moving in the first form and the action change of the operation body after it forms the second form; the required key object is finally located and the corresponding input instruction is triggered, achieving the purpose of accurately triggering, through spatial action interaction, the input instruction corresponding to a key object in the electronic device.
It should be noted that one electronic device may provide two types of keyboards; for example, a notebook corresponds to both a qwerty keyboard and a numeric keyboard. To improve the user experience, different keyboards can be operated through different types of operation bodies. Accordingly, after the first action is acquired and before the candidate object set is determined, the information processing method may further include:
determining the type of the operation body issuing the first action;
when the operation body issuing the first action is of a first type, using a first keyboard as the character and/or command input device;
when the operation body issuing the first action is of a second type, using a second keyboard as the character and/or command input device;
wherein the first type differs from the second type, and the first keyboard differs from the second keyboard.
For example: when the operation body issuing the first action is both hands of the user, the qwerty keyboard may serve as the character and/or command input device, and when it is one hand of the user, the numeric keyboard may serve as the character and/or command input device; alternatively, when the operation body issuing the first action is a first specific device, the qwerty keyboard may serve as the character and/or command input device, and when it is a second specific device, the numeric keyboard may serve as the character and/or command input device.
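The keyboard-selection branch above can be sketched as a small dispatch on the operation-body type; the type labels and device names are illustrative:

```python
# A sketch of the keyboard-selection branch: the type of operation body
# issuing the first action picks which keyboard acts as the character
# and/or command input device. Type labels follow the two-hands/one-hand
# example from the text and are illustrative.

def select_input_device(operator_type):
    if operator_type == "two_hands":    # first type
        return "qwerty_keyboard"        # first keyboard
    if operator_type == "one_hand":     # second type
        return "numeric_keyboard"       # second keyboard
    raise ValueError("unrecognized operation body type: " + operator_type)
```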
An information processing method provided by an embodiment of the present invention is again described below for the specific application scenario in which the inputtable object set consists of the key objects in a character and/or command input device of the electronic device.
The character and/or command input device may be a qwerty keyboard, a numeric keyboard, an abc keyboard, or the like, where the qwerty keyboard is the full keyboard of a notebook computer, and the abc keyboard is a mobile-phone multi-tap keyboard in which abc corresponds to the number 2 and def corresponds to the number 3. The key objects in the inputtable object set may include character keys (e.g., letter keys such as Q, W, R, Y, and G, and numeric keys such as 1, 2, 3, and 4) and command keys (e.g., the Shift key, the Ctrl key, the Enter key, etc.). The selection object set may consist of at least one character key, or of at least one command key, or of a combination of character keys and command keys, for example: Shift + A, Ctrl + X, Ctrl + S, and so on; all of these are reasonable.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. In practical application, the electronic device may be a smart television, a head-mounted device, a notebook, or the like, which has a camera and a display screen.
As shown in fig. 5, an information processing method may include:
S501, when the operation body is in a stationary state in the acquisition area, acquiring, through the acquisition unit, the current position information of the operation body in the acquisition area;
In this embodiment, the preset trigger condition for acquiring the first action is that the operation body is in a stationary state in the acquisition area. When this is detected, the acquisition unit can acquire the current position information of the operation body in the acquisition area, which is then used to determine the subsequent candidate object set.
It is understood that the position information of the operation body may be its position relative to the display unit, for example: the operation body is located to the front-left, front-right, upper-front, or lower-front of the display unit; alternatively, it may be its position relative to parts of the user's body, for example: the operation body is located in front of the user's head, neck, chest, and so on.
Further, for more accuracy, the trigger condition for acquiring the first action may be set as: the operating body is in a static state in the collection area in a first form, wherein the first form is one of a plurality of forms which can be formed by the operating body.
S502, obtaining the mapping relation between the row labels of the character and/or command input device and the position information of the operation body;
S503, determining the row label corresponding to the current position information according to the mapping relation and the current position information;
S504, forming all key objects corresponding to the determined row label into a candidate object set;
When the current position information of the operation body is acquired, the mapping relation between the row labels of the character and/or command input device and the position information of the operation body can be obtained, where the mapping relation indicates the row labels respectively corresponding to different positions of the operation body.
After the mapping relation is determined, the key objects corresponding to the current position information can be obtained from the inputtable object set according to the mapping relation and the current position information, and the obtained key objects form the candidate object set.
For example, in the qwerty keyboard shown in fig. 12, the inputtable object set includes the key objects in rows 1, 2, 3, and 4, and the mapping relation between row labels and the position of the operation body relative to the user's body parts is pre-built as follows: the operation body in front of the head corresponds to row 1, in front of the neck to row 2, in front of the chest to row 3, and in front of the abdomen to row 4. When the current position information indicates that the operation body is in front of the user's head, all key objects in row 1 form the candidate object set; when it indicates that the operation body is in front of the user's abdomen, the key objects in row 4 form the candidate object set.
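Steps S502 to S504 with the fig. 12 mapping can be sketched as a two-stage lookup: position to row label, then row label to key objects. The row contents below are an illustrative qwerty layout, not the figure itself:

```python
# A sketch of steps S502-S504: the position of the operation body relative
# to the user's body parts selects a row label (per the fig. 12 mapping in
# the text), and all key objects of that row form the candidate object set.
# Row contents are illustrative qwerty rows.

POSITION_TO_ROW = {
    "front_of_head":    1,
    "front_of_neck":    2,
    "front_of_chest":   3,
    "front_of_abdomen": 4,
}

ROW_TO_KEYS = {
    1: ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0"],
    2: ["Q", "W", "E", "R", "T", "Y", "U", "I", "O", "P"],
    3: ["A", "S", "D", "F", "G", "H", "J", "K", "L"],
    4: ["Z", "X", "C", "V", "B", "N", "M"],
}

def candidate_set_from_position(position):
    row = POSITION_TO_ROW.get(position)
    return ROW_TO_KEYS.get(row, [])
```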
S505, displaying the candidate object set in the inputtable object set in the display area, and hiding the key objects in the inputtable object set other than the candidate object set;
After the candidate object set is obtained, it may be displayed in the display area while the key objects in the inputtable object set other than the candidate object set are hidden.
Of course, after the candidate object set is obtained, the candidate object set may be displayed in other manners, for example: the set of candidate objects is displayed with a first effect and key objects in the set of inputtable objects other than the set of candidate objects are displayed with a second effect.
S506, when the operation body is detected to form the second form in the acquisition area, acquiring the action change of the operation body through the acquisition unit;
S507, determining the specific part of the operation body that changes according to the action change of the operation body;
Wherein the specific part belongs to the operation body.
S508, forming the key objects in the candidate object set that correspond to the changed specific part of the operation body into a selection object set;
S509, triggering the input instruction of the character and/or command corresponding to each key object in the selection object set.
In this embodiment, steps S506 to S509 are similar to steps S404 to S407 of the above embodiment and are not repeated here.
Thus, in this scheme, the selectable range of key objects is gradually narrowed according to the current position information of the operation body while stationary in the acquisition area and the action change of the operation body after it forms the second form; the required key object is finally located and the corresponding input instruction is triggered, achieving the purpose of accurately triggering, through spatial action interaction, the input instruction corresponding to a key object in the electronic device.
It should be noted that one electronic device may provide two types of keyboards; for example, a notebook corresponds to both a qwerty keyboard and a numeric keyboard. To improve the user experience, different keyboards can be operated through different types of operation bodies. Accordingly, after the first action is acquired and before the candidate object set is determined, the information processing method may further include:
determining the type of the operation body issuing the first action;
when the operation body issuing the first action is of a first type, using a first keyboard as the character and/or command input device;
when the operation body issuing the first action is of a second type, using a second keyboard as the character and/or command input device;
wherein the first type differs from the second type, and the first keyboard differs from the second keyboard.
For example: when the operation body issuing the first action is both hands of the user, the qwerty keyboard may serve as the character and/or command input device, and when it is one hand of the user, the numeric keyboard may serve as the character and/or command input device; alternatively, when the operation body issuing the first action is a first specific device, the qwerty keyboard may serve as the character and/or command input device, and when it is a second specific device, the numeric keyboard may serve as the character and/or command input device.
In the application scenario where the inputtable object set consists of the key objects in a character and/or command input device of the electronic device, an example is described below in which the operation body is the user's two hands and the character and/or command input device is a qwerty keyboard.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display screen and a camera, the display screen has a display area, and the camera has an acquisition area. In practical application, the electronic device may be a smart television, a head-mounted device, a notebook, or the like, which has a camera and a display screen.
As shown in fig. 6, an information processing method may include:
S601, when the user's two hands are in a stationary state in the acquisition area, acquiring, through the camera, the current position information of the two hands in the acquisition area as being in front of the user's neck;
S602, obtaining the mapping relation between the row labels of the qwerty keyboard and the position information of the two hands;
S603, determining, according to the mapping relation and the current position information, that the row label corresponding to the current position information is row 2;
as shown in fig. 12, it is assumed that the preset mapping relationship between the logo of the qwerty keyboard and the position information of the two hands is as follows: two hands in front of the head corresponding to row 1, two hands in front of the neck corresponding to row 2, two hands in front of the chest corresponding to row 3, and two hands in front of the abdomen corresponding to row 4.
When it is detected that the current position information of the two hands indicates that the two hands are positioned in front of the neck of the user, at this time, according to the mapping relation between the row mark of the qwerty keyboard and the position information of the two hands, the row 2 corresponding to the current position information can be determined.
S604, forming all key objects in row 2 into a candidate object set;
S605, displaying the candidate object set in the display area, and hiding the key objects in the inputtable object set other than the candidate object set;
S606, when the two hands are detected to form a five-fingers-spread form in the acquisition area, acquiring the action changes of the two hands through the camera;
S607, determining, according to the action changes of the two hands, that the bent fingers whose bending amplitude exceeds the preset amplitude threshold are the ring finger of the left hand and the middle finger of the right hand;
S608, forming the key objects in the candidate object set that respectively correspond to the ring finger of the left hand and the middle finger of the right hand into a selection object set;
According to common touch-typing habits, the ring finger of the left hand corresponds to the W key in the candidate object set, and the middle finger of the right hand corresponds to the I key, so the W key and the I key constitute the selection object set.
S609, triggering the input instruction of the character corresponding to each key object in the selection object set.
Since the selection object set includes the W key and the I key, the input instructions corresponding to the W key and the I key can be triggered.
Thus, in this scheme, the selectable range of key objects is gradually narrowed according to the current position information of the two hands while stationary in the acquisition area and the action change of the two hands after they form the second form; the required key objects are finally located and the corresponding input instructions are triggered, achieving the purpose of accurately triggering, through spatial action interaction, the input instruction corresponding to a key object in the electronic device.
An information processing method provided by an embodiment of the present invention is described below for the specific application scenario of operating a cursor in the display area.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. The display area of the display unit is divided in advance into at least two sub-regions, and the inputtable objects in the inputtable object set are the operable positions of the cursor in the at least two sub-regions. In practical application, the electronic device may be a smart television, a head-mounted device, a notebook, or the like, which has a camera and a display screen.
As shown in fig. 7, an information processing method may include:
S701, acquiring, through the acquisition unit, a first action of the operation body in the acquisition area;
S702, determining a target sub-region from the at least two sub-regions according to the first action;
S703, moving the cursor to the target sub-region, and forming the operable positions of the cursor in the target sub-region into a candidate object set;
When an instruction corresponding to an operable position of the cursor in the electronic device needs to be triggered through spatial action interaction, the operation body issues a first action in the acquisition area of the acquisition unit. The electronic device acquires this first action through the acquisition unit, determines a target sub-region from the at least two sub-regions according to it, moves the cursor to the target sub-region, and forms the operable positions of the cursor in the target sub-region into a candidate object set.
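The two-stage cursor flow (first action picks a sub-region, second action picks a position within it) can be sketched as follows; the 2x2 sub-region split, the position names, and the action labels are illustrative assumptions:

```python
# A sketch of the cursor flow S701-S707: a first action selects a target
# sub-region of the display area, and a second action selects one operable
# position within that sub-region. The sub-region layout, position names,
# and action labels are illustrative.

SUB_REGIONS = {
    "top_left":     ["pos_a", "pos_b"],
    "top_right":    ["pos_c", "pos_d"],
    "bottom_left":  ["pos_e", "pos_f"],
    "bottom_right": ["pos_g", "pos_h"],
}

def candidate_positions(first_action):
    """First action -> candidate object set: operable positions in the target sub-region."""
    return SUB_REGIONS.get(first_action, [])

def target_position(second_action, candidates):
    """Second action -> the target cursor operable position (the selection object set)."""
    index = {"first": 0, "second": 1}.get(second_action)
    if index is None or index >= len(candidates):
        return None
    return candidates[index]
```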
S704, displaying the candidate object set on the display unit;
After the candidate object set is determined through the first action acquired by the acquisition unit, it can be displayed in the display area to show the user the operable positions of the cursor in the determined candidate object set, giving the user a better visual experience.
Wherein, in the display area, displaying the candidate object set may include:
and displaying the candidate object set in the display area, and hiding the operable position of the cursor outside the candidate object set in the at least two sub-areas.
Furthermore, in order to make the user clearly understand the operable position of the cursor in the candidate object set while clearly knowing all the operable positions of the cursor in at least two sub-regions, and thus have a better user experience, wherein, in the display region, displaying the candidate object set may include:
in the display area, the candidate object set is displayed with a first effect, and the operable position of the cursor outside the candidate object set in the at least two sub-areas is displayed with a second effect, wherein the first effect is different from the second effect.
The first effect and the second effect may reasonably be distinguished by transparency, by color, by brightness, and the like.
It will be understood by those skilled in the art that the above-mentioned display manner of the candidate object set is only an example, and should not be construed as limiting the embodiments of the present invention.
S705, acquiring a second action of the operation body in the acquisition area through the acquisition unit;
S706, determining a target cursor operable position from the cursor operable positions in the candidate object set according to the second action;
S707, moving the cursor to the target cursor operable position, the target cursor operable position forming a selection object set;
After the candidate object set is displayed in the display area, the operation body can then perform a second action in the acquisition area. The electronic device acquires the second action through the acquisition unit and, according to the second action, screens out a target cursor operable position from the candidate object set to form a selection object set, thereby completing the selection of a cursor operable position. It will be appreciated that the second action is different from the first action, so that once an action is acquired the corresponding determination of the inputtable object can be completed accurately.
S708, triggering a cursor selection instruction corresponding to the operable position of the target cursor in the selection object set.
When the selection object set is determined through the second action acquired by the acquisition unit, a cursor selection instruction corresponding to the operable position of the target cursor in the selection object set can be triggered.
Therefore, in the scheme, the selectable range of the operable position of the cursor is gradually narrowed through the first action and the second action which are sequentially sent by the operation body, and finally the cursor is positioned to the required operable position of the cursor and a corresponding cursor selection instruction is triggered, so that the aim of accurately triggering the cursor selection instruction corresponding to the operable position of the cursor in the electronic equipment through space action interaction is fulfilled.
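The two-stage narrowing summarized above — a first action that selects the candidate object set, a second action that selects the target within it — can be sketched in Python. This is an illustrative sketch only; the function and parameter names (`select_cursor_position`, `pick_region`, `pick_position`) are assumptions and do not appear in the patent.

```python
# Illustrative sketch of the two-stage narrowing: a first action picks a
# target sub-region (yielding the candidate object set), a second action
# picks one cursor-operable position inside it (the selection object set).

def select_cursor_position(sub_regions, pick_region, pick_position):
    """sub_regions: dict mapping sub-region id -> list of operable positions.
    pick_region / pick_position: callables standing in for the decoded
    first and second actions of the operation body."""
    target_region = pick_region(sub_regions)    # coarse step (first action)
    candidates = sub_regions[target_region]     # candidate object set
    target = pick_position(candidates)          # fine step (second action)
    return [target]                             # selection object set

# Usage: nine sub-regions with three operable positions each; the first
# action selects region 2, the second action selects its first position.
regions = {i: [f"r{i}p{j}" for j in range(3)] for i in range(1, 10)}
selection = select_cursor_position(regions, lambda r: 2, lambda c: c[0])
```

A cursor selection instruction would then be triggered for the single position in `selection`.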
In the following, under the specific application scenario of operating a cursor on the display area, the information processing method provided by an embodiment of the present invention is described taking as an example the case where the first action is the movement direction of the operation body moving in a first form in the acquisition area.
The first form is one of a plurality of forms that the operation body can take, and different operation bodies can take different forms. For example, when the operation body is a single hand of a user, the forms the operation body can take at least include a fist-making form, a five-finger-separated form, a five-finger-closed form, and the like; when the operation body is another specific device, the forms the operation body can take include a first state of the specific device, a second state of the specific device, and the like.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. The display area of the display unit is divided into at least two sub-areas in advance, and the inputtable object in the inputtable object set is the operable position of the cursor in the at least two sub-areas. In practical application, the electronic device can be an intelligent television with a camera and a display screen, a head-mounted device, a notebook computer and the like.
As shown in fig. 8, an information processing method may include:
S801, when the operation body moves in the acquisition area in the first form, acquiring the current movement direction of the operation body in the acquisition area through the acquisition unit;
In this embodiment, the preset trigger condition for acquiring the first action is that the operation body moves in the acquisition area in the first form. When the operation body moves in the acquisition area in the first form, the current movement direction of the operation body in the acquisition area can be acquired through the acquisition unit, where the current movement direction may include: up, down, left, right, upper left, upper right, lower left, lower right, and the like.
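The eight directions named above can be obtained by quantizing the operation body's displacement between two acquired frames. The sketch below is an assumption about how such quantization might be done — the patent does not specify the sector boundaries; 45-degree sectors centered on each direction are used here for illustration.

```python
import math

# Illustrative quantization of a displacement vector (dx, dy) into the
# eight directions listed above. Coordinates are chosen so that a
# positive dy means upward movement.
DIRECTIONS = ["right", "upper right", "up", "upper left",
              "left", "lower left", "down", "lower right"]

def quantize_direction(dx, dy):
    angle = math.degrees(math.atan2(dy, dx)) % 360   # 0..360, 0 = right
    sector = int(((angle + 22.5) % 360) // 45)       # 45-degree sectors
    return DIRECTIONS[sector]
```

For example, a displacement of (1, 1) quantizes to "upper right", and (0, -1) to "down".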
S802, taking the adjacent sub-region, in the current movement direction, of the preset initial sub-region among the at least two sub-regions as the target sub-region;
S803, moving the cursor to the target sub-region, the operable positions of the cursor in the target sub-region forming a candidate object set;
If the operation body merely appears in the acquisition area in the first form without moving, then no matter where it appears, the preset initial sub-region is taken as the target sub-region, the cursor is moved to the target sub-region, and the operable positions of the cursor in the target sub-region form the candidate object set. When the operation body is detected to move in the acquisition area in the first form, the current movement direction of the operation body is detected, the adjacent sub-region of the preset initial sub-region in the current movement direction is taken as the target sub-region, the cursor is moved to the target sub-region, and the operable positions of the cursor in the target sub-region form the candidate object set.
It should be noted that, after the target sub-region is determined, the cursor is placed at a preset cursor operable position when it is moved into the target sub-region.
For example, fig. 13 shows a display area divided into nine sub-areas, where sub-area 5 is designated as the initial sub-area. When the operation body is detected to move upward in the acquisition area in the first form, sub-area 2 above sub-area 5 is taken as the target sub-area; when the operation body is detected to move toward the lower right in the acquisition area in the first form, sub-area 9 to the lower right of sub-area 5 is taken as the target sub-area.
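For the nine-sub-area layout of fig. 13 (sub-areas numbered 1..9 row by row, initial sub-area 5), the target-sub-area lookup can be sketched as follows. The clamping behavior at the grid edge is an assumption — the patent does not say what happens when the movement direction points outside the grid.

```python
# Sketch of the target sub-area lookup for a 3x3 grid of sub-areas,
# numbered 1..9 in row-major order, with sub-area 5 as the preset
# initial sub-area.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1),
         "upper left": (-1, -1), "upper right": (-1, 1),
         "lower left": (1, -1), "lower right": (1, 1)}

def target_sub_area(direction, initial=5, rows=3, cols=3):
    row, col = divmod(initial - 1, cols)
    dr, dc = MOVES[direction]
    row = min(max(row + dr, 0), rows - 1)   # clamp: stay inside the grid
    col = min(max(col + dc, 0), cols - 1)
    return row * cols + col + 1
```

With this layout, moving up from sub-area 5 selects sub-area 2, and moving toward the lower right selects sub-area 9, matching the fig. 13 example.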
S804, in the display area, displaying the candidate object set with a first effect, and displaying the operable position of the cursor outside the candidate object set in the at least two sub-areas with a second effect;
The first effect is different from the second effect, and the two effects may reasonably be distinguished by transparency, by color, by brightness, and the like.
Of course, after the candidate object set is obtained, the candidate object set may be displayed in other manners, for example: and displaying the candidate object set in the display area, and hiding the operable position of the cursor outside the candidate object set in the at least two sub-areas.
S805, when the operation body moves in the acquisition area in a second form, acquiring the current movement direction of the operation body in the acquisition area through the acquisition unit;
S806, taking the adjacent cursor operable position, in the current movement direction, of the current cursor operable position of the cursor in the candidate object set as the target cursor operable position;
S807, moving the cursor to the target cursor operable position, the target cursor operable position forming a selection object set;
Here, the preset trigger condition for acquiring the second action is that the operation body moves in the acquisition area in a second form, where the second form is one of the plurality of forms that the operation body can take. It will be appreciated that the second form is different from the first form. For example, when the operation body is a single hand of a user, the first form may be a fist-making form and the second form a five-finger-separated form, or the first form may be a fist-making form and the second form a five-finger-closed form; when the operation body is another specific device, the first form may reasonably be a first state of the specific device and the second form a second state of the specific device.
When the operation body is detected to move in the acquisition area in the second form, the acquisition unit acquires the current movement direction of the operation body in the acquisition area, the current cursor operable position of the cursor in the candidate object set is determined, and the adjacent cursor operable position in the current movement direction of that position is taken as the target cursor operable position; the cursor is then moved to the target cursor operable position, and the target cursor operable position constitutes the selection object set.
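The fine selection step — moving from the current cursor operable position to the adjacent one in the current movement direction — can be sketched as below. The assumption that the operable positions in the candidate set form a single horizontal row is illustrative only; a real layout could be two-dimensional, in which case the same neighbor-stepping idea would apply per axis.

```python
# Sketch of the fine selection step: step from the current operable
# position to the adjacent one in the movement direction, clamping at
# the ends of the (assumed one-row) candidate set.

def adjacent_position(candidates, current_index, direction):
    step = {"left": -1, "right": 1}.get(direction, 0)
    new_index = min(max(current_index + step, 0), len(candidates) - 1)
    return candidates[new_index]   # the target cursor operable position
```

The returned position constitutes the selection object set, for which the cursor selection instruction is then triggered.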
S808, triggering a cursor selection instruction corresponding to the operable position of the target cursor in the selection object set.
When the selection object set is determined, a cursor selection instruction corresponding to the operable position of the target cursor in the selection object set can be triggered.
Therefore, in the scheme, the selectable range of the operable position of the cursor is gradually narrowed through the movement direction of the operation body moving in the first form and the movement direction of the operation body moving in the second form, and finally the cursor is positioned to the required operable position of the cursor and a corresponding cursor selection instruction is triggered, so that the aim of accurately triggering the cursor selection instruction corresponding to the operable position of the cursor in the electronic equipment through space action interaction is fulfilled.
In the following, still under the specific application scenario of operating a cursor on the display area, the information processing method provided by an embodiment of the present invention is described taking as an example the case where the first action is the operation body being in a static state in the acquisition area.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display unit and a collection unit, the display unit has a display area, and the collection unit has a collection area. The display area of the display unit is divided into at least two sub-areas in advance, and the inputtable object in the inputtable object set is the operable position of the cursor in the at least two sub-areas. In practical application, the electronic device can be an intelligent television with a camera and a display screen, a head-mounted device, a notebook computer and the like.
As shown in fig. 9, an information processing method may include:
S901, when the operation body is in a static state in the acquisition area, acquiring the current position information of the operation body in the acquisition area through the acquisition unit;
In this embodiment, the preset trigger condition for acquiring the first action is that the operation body is in a static state in the acquisition area. When the operation body is detected to be in a static state in the acquisition area, the acquisition unit can acquire the current position information of the operation body in the acquisition area, and this current position information is then used to determine the candidate object set.
It is understood that the position information of the operation body may be a relative position of the operation body and the display unit, for example: the operation body is positioned at the left front, the right front, the upper front, the lower front and the like of the display unit; alternatively, the position information of the operation body may be the position of the operation body relative to each part of the user body, for example: the operation body is positioned in front of the head, the neck, the chest, and the like of the user.
Further, for more accuracy, the trigger condition for acquiring the first action may be set as: the operating body is in a static state in the collection area in a first form, wherein the first form is one of a plurality of forms which can be formed by the operating body.
S902, acquiring the mapping relation between positions of the operation body and the sub-regions;
S903, taking the sub-region corresponding to the current position as the target sub-region according to the mapping relation and the current position information of the operation body;
S904, moving the cursor to the target sub-region, the operable positions of the cursor in the target sub-region forming a candidate object set;
After the current position information of the operation body is acquired, the mapping relation between positions of the operation body and the sub-regions can be obtained, where the mapping relation indicates the sub-regions corresponding to different positions of the operation body. Once the mapping relation is determined, the sub-region corresponding to the current position is taken as the target sub-region according to the mapping relation and the current position information, the cursor is moved to the target sub-region, and the operable positions of the cursor in the target sub-region form the candidate object set. It should be noted that, after the target sub-region is determined, the cursor is placed at a preset cursor operable position when it is moved into the target sub-region.
For example, the display area shown in fig. 14 is divided into nine sub-areas, and a mapping relation between the sub-areas and the positions of the operation body relative to the display unit is pre-established: the operation body being at the upper left front, upper front, upper right front, left front, directly in front, right front, lower left front, lower front, and lower right front of the display unit corresponds to sub-area 1, sub-area 2, sub-area 3, sub-area 4, sub-area 5, sub-area 6, sub-area 7, sub-area 8, and sub-area 9, respectively. When the operation body is detected to be directly in front of the display unit, sub-area 5 is taken as the target sub-area, and the operable positions of the cursor in that sub-area form the candidate object set; when the operation body is detected to be at the upper left front of the display unit, sub-area 1 is taken as the target sub-area, and the operable positions of the cursor in that sub-area form the candidate object set.
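A mapping relation like the fig. 14 example can be represented directly as a lookup table. The position labels below follow the ones used in the text; the dict itself and the function name `candidate_sub_area` are an illustrative sketch, not part of the patent.

```python
# Illustrative mapping from the operation body's position relative to the
# display unit to the corresponding sub-area (fig. 14 example, nine
# sub-areas numbered 1..9 in row-major order).
POSITION_TO_SUB_AREA = {
    "upper left front": 1, "upper front": 2, "upper right front": 3,
    "left front": 4, "directly in front": 5, "right front": 6,
    "lower left front": 7, "lower front": 8, "lower right front": 9,
}

def candidate_sub_area(current_position):
    # The operable positions of the cursor in the returned sub-area
    # form the candidate object set.
    return POSITION_TO_SUB_AREA[current_position]
```

Detecting the operation body directly in front of the display unit thus selects sub-area 5 as the target sub-area.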
S905, displaying the candidate object set in the display area, and hiding the operable positions of the cursor outside the candidate object set in the at least two sub-areas;
After the candidate object set is obtained, the candidate object set can be displayed in the display area, and the operable positions of the cursor outside the candidate object set in the at least two sub-areas can be hidden.
Of course, after the candidate object set is obtained, the candidate object set may be displayed in other manners, for example: the set of candidate objects is displayed with a first effect and the operable position of the cursor outside the set of candidate objects in the at least two sub-regions is displayed with a second effect.
S906, when the operation body moves in the acquisition area in the second form, acquiring the current movement direction of the operation body in the acquisition area through the acquisition unit;
S907, taking the adjacent cursor operable position, in the current movement direction, of the current cursor operable position of the cursor in the target sub-area as the target cursor operable position;
S908, moving the cursor to the target cursor operable position, the target cursor operable position forming a selection object set;
S909, triggering a cursor selection instruction corresponding to the operable position of the target cursor in the selection object set.
In this embodiment, steps S906 to S909 are similar to steps S805 to S808 of the above embodiment, and are not described again here.
Therefore, in this scheme, the selectable range of cursor operable positions is gradually narrowed through the current position information of the operation body at rest in the acquisition area and the movement direction of the operation body moving in the second form; finally the cursor is positioned at the required cursor operable position and the corresponding cursor selection instruction is triggered, so that the cursor selection instruction corresponding to an operable position of the cursor in the electronic device is accurately triggered through spatial action interaction.
In the following, under the specific application scenario of operating a cursor on the display area, the information processing method provided by an embodiment of the present invention is described taking as an example the case where the operation body is a single hand of a user.
It should be noted that the method provided by the embodiment of the present invention is applicable to an electronic device, where the electronic device includes a display screen and a camera, the display screen has a display area, and the camera has an acquisition area. The display area of the display screen is divided into at least two sub-areas in advance, and the inputtable object in the inputtable object set is the operable position of the cursor in the at least two sub-areas. In practical application, the electronic device can be an intelligent television with a camera and a display screen, a head-mounted device, a notebook computer and the like.
As shown in fig. 10, an information processing method may include:
S1001, when one hand of the user moves in the acquisition area in a fist-making form, acquiring, through the camera, that the current movement direction of the user's hand is upward;
S1002, taking the adjacent sub-area 2 in the upward direction of the preset initial sub-area 5 among the at least two sub-areas as the target sub-area;
As shown in fig. 13, sub-area 5 is set in advance as the initial sub-area, and when the user's hand moves upward, sub-area 2 located above sub-area 5 is taken as the target sub-area.
S1003, moving the cursor to the target sub-area 2, the operable positions of the cursor in the target sub-area 2 forming a candidate object set;
It should be noted that, after the target sub-area 2 is determined, the cursor is placed at the cursor operable position at the center of the sub-area when it is moved into the target sub-area 2.
S1004, in the display area, displaying the candidate object set with a first effect, and displaying the operable position of the cursor outside the candidate object set in the at least two sub-areas with a second effect;
S1005, when the user's hand moves in the acquisition area in a five-finger-separated form, acquiring, through the camera, that the current movement direction of the user's hand in the acquisition area is downward;
S1006, taking the adjacent cursor operable position in the downward direction of the current cursor operable position of the cursor in the candidate object set as the target cursor operable position;
S1007, moving the cursor to the target cursor operable position, the target cursor operable position forming a selection object set;
S1008, triggering the cursor selection instruction corresponding to the operable position of the target cursor in the selection object set.
Therefore, in this scheme, the selectable range of cursor operable positions is gradually narrowed through the movement direction of the user's hand moving first in the fist-making form and then in the five-finger-separated form; finally the cursor is positioned at the required cursor operable position and the corresponding cursor selection instruction is triggered, so that the cursor selection instruction corresponding to an operable position of the cursor in the electronic device is accurately triggered through spatial action interaction.
Through the above description of the method embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media that can store program codes, such as Read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and so on.
Corresponding to the above method embodiments, an embodiment of the present invention further provides an electronic device. As shown in fig. 15, the electronic device may include:
a first action acquisition unit 1510 configured to acquire, by the acquisition unit, a first action of an operation body in the acquisition area;
a candidate object determining unit 1520, configured to determine a corresponding candidate object set from an inputtable object set according to the first action, wherein the candidate object set belongs to the inputtable object set;
a candidate display unit 1530 for displaying the candidate set in the display area;
a second action obtaining unit 1540, configured to collect, by the collection unit, a second action of the operation body in the collection area;
a selectable object determining unit 1550, configured to determine, according to the second action, a corresponding selected object set from the candidate object sets, where the selected object set belongs to the candidate object set;
and the instruction triggering unit 1560 is configured to trigger an operation instruction corresponding to an object in the selected object set.
It is understood that in practical applications, the electronic device may be a smart television, a head-mounted device, a notebook, etc.
According to the electronic device provided by the embodiment of the present invention, a corresponding candidate object set is determined from the inputtable object set according to the first action performed by the operation body in the acquisition area and acquired by the acquisition unit, and the candidate object set is displayed; a corresponding selection object set is then determined from the candidate object set according to the second action of the operation body in the acquisition area acquired by the acquisition unit, and the operation instruction corresponding to an inputtable object in the selection object set is triggered. Therefore, according to this scheme, the selectable range of inputtable objects is gradually narrowed through the first action and the second action performed successively by the operation body, the required inputtable object is finally located, and the corresponding operation instruction is triggered, so that the operation instruction corresponding to an inputtable object in the electronic device is accurately triggered through spatial action interaction.
The candidate object display unit 1530 may specifically be configured to:
displaying a set of candidate objects in the set of inputtable objects in the display area and hiding inputtable objects in the set of inputtable objects other than the set of candidate objects;
alternatively,
in the display area, the set of candidate objects is displayed with a first effect, and inputtable objects other than the set of candidate objects in the set of inputtable objects are displayed with a second effect, wherein the first effect is different from the second effect.
The first action obtaining unit 1510 may specifically be configured to:
when the operation body moves in the collection area in a first form, collecting the current movement direction of the operation body in the collection area through the collection unit, wherein the first form is one of a plurality of forms which can be formed by the operation body;
accordingly, the candidate object determining unit 1520 may be specifically configured to:
taking the neighboring set, in the current movement direction, of a preset initial set among the inputtable object set as the candidate object set.
In another embodiment of the present invention, the first action obtaining unit 1510 may specifically be configured to:
when the operation body is in a static state in the acquisition area, acquiring the current position information of the operation body in the acquisition area through the acquisition unit;
accordingly, the candidate object determination unit 1520 may be specifically configured to:
obtaining the mapping relation between the distribution of the inputtable object set and the position information of the operation body;
and forming a candidate object set from the inputtable objects in the inputtable object set that correspond to the current position information, according to the mapping relation and the current position information.
The second action obtaining unit 1540 may be specifically configured to:
when the operation body is detected to form a second form in the acquisition area, acquiring the action change of the operation body through the acquisition unit, where the second form is one of the plurality of forms that the operation body can take;
Accordingly, the selectable object determining unit 1550 may be specifically configured to:
determine, according to the action change of the operation body, a specific part of the operation body that has changed, where the specific part belongs to the operation body;
and form a selection object set from the inputtable objects in the candidate object set corresponding to the specific part of the operation body that has changed.
The electronic device provided by the embodiment of the invention is described below with a head-mounted device as a specific device form.
The electronic device provided by the invention can comprise:
the display unit is provided with a display area and is used for displaying information;
the acquisition unit is provided with an acquisition area and is used for acquiring information;
the first action acquisition unit is used for acquiring a first action of the operation body in the acquisition area through the acquisition unit;
a candidate object determining unit, configured to determine, according to the first action, a corresponding candidate object set from an inputtable object set, where the candidate object set belongs to the inputtable object set;
a candidate object display unit configured to display the candidate object set in the display area;
the second action acquisition unit is used for acquiring a second action of the operation body in the acquisition area through the acquisition unit;
a selectable object determining unit, configured to determine, according to the second action, a corresponding selected object set from the candidate object sets, where the selected object set belongs to the candidate object set;
the instruction triggering unit is used for triggering an operation instruction corresponding to an object in the selected object set;
a fixing unit for wearing the electronic device on the head;
the display device comprises a connecting unit and a fixing unit, wherein the connecting unit is used for connecting the display unit and the fixing unit, and has a first state and a second state, in the first state, the connecting unit enables at least a first part in the display unit to have a first positional relation relative to the fixing unit, and in the second state, the connecting unit enables the display unit to have a second positional relation relative to the fixing unit.
The fixing unit may include a wearing part such as a helmet, a headband, or the like. The display unit may include a display screen and an optical system: the display screen, which can be a miniature display screen with a small size, displays the corresponding information content, and the optical system receives the light emitted from the display screen and performs light-path conversion on it to form an enlarged virtual image. That is, the optical system has positive refractive power, and through it the user views an enlarged virtual image of the display content; the user can therefore clearly view the displayed content, and the size of the display content viewed by the user is not limited by the size of the display unit.
The display unit is connected with the first position of the connecting unit; in the first state, the connection unit makes at least a first portion of the display unit away from the first position, and in the second state, the connection unit makes the display unit close to the first position. When the electronic equipment is worn on the head of a user through the fixing unit, at least the first part of the display unit is positioned in a visible area of the user and faces the user when the connecting unit is in the first state.
That is, when the electronic apparatus is worn on the head by the user through the fixing unit, at least a first portion of the display unit is located in a first region with respect to the fixing unit in a first state of the connection unit; and when the connecting unit is in the second state, at least a first part in the display unit is positioned in a second area relative to the fixing unit, wherein the first area and the second area are different, and the first area is a visible area of a user when the electronic equipment is worn on the head of the user through the fixing unit. It will be appreciated that the display unit may comprise a display screen of smaller size, the display unit being located in the viewable area of the user and facing the user in the first state for viewing by the user.
Further, the fixing unit may further comprise a rotating member. For example, in the case where the fixing unit is a helmet, in the first state the rotating member may be rotated such that the display unit is located in the visible area in front of the user's eyes, and in the second state the rotating member may be rotated such that the display unit is moved up into an interlayer of the helmet or above the helmet.
Fig. 16 is a top view of an exemplary basic configuration of a head-mounted device provided by the present invention.
The head-mounted device includes a headband component 1610, a first sub-connecting unit 1620, a first sub-display unit 1630, a second sub-connecting unit 1640, and a second sub-display unit 1650. Of course, the head-mounted device further includes the units that perform the spatial action processing: the acquisition unit, the first action acquisition unit, the candidate object determining unit, the candidate object display unit, the second action acquisition unit, the selectable object determining unit, and the instruction triggering unit, each of which can be integrated in the headband component 1610 or in an external component.
It is understood that the headband component 1610 is an example of a fixing unit, the first sub-connection unit 1620 and the second sub-connection unit 1640 are examples of connection units, and the first sub-display unit 1630 and the second sub-display unit 1650 are examples of display units. The first sub-connection unit 1620 connects a first end of the headband component 1610 with the first sub-display unit 1630, and the second sub-connection unit 1640 connects a second end of the headband component 1610 with the second sub-display unit 1650.
The first sub-connection unit 1620 and the second sub-connection unit 1640 each have a first state and a second state. In the first state, the first sub-connection unit 1620 and the second sub-connection unit 1640 cause the first sub-display unit 1630 and the second sub-display unit 1650 to have a first positional relationship with respect to the headband component 1610; in the second state, they cause at least a first portion of the first sub-display unit 1630 and the second sub-display unit 1650 to have a second positional relationship with respect to the headband component 1610.
Specifically, when the headband component 1610 is worn on the head, it is flexibly deformable, so that in the second state the first sub-connection unit 1620 and the first sub-display unit 1630 press against the left ear, and the second sub-connection unit 1640 and the second sub-display unit 1650 press against the right ear.
Meanwhile, when the head-mounted device is worn on the user's head through the headband component 1610, and the first sub-connection unit 1620 and the second sub-connection unit 1640 are in the first state, at least a first portion of the first sub-display unit 1630 and the second sub-display unit 1650 is positioned within the user's visible region and faces the user. Alternatively, the states of the first sub-connection unit 1620 and the second sub-connection unit 1640 may be controlled separately.
In the first state, the first sub-connection unit 1620 may serve as a first supporting arm of the first sub-display unit 1630, and the second sub-connection unit 1640 may serve as a second supporting arm of the second sub-display unit 1650. As shown in fig. 16, a second included angle is formed between the first supporting arm and the first sub-display unit 1630, and between the second supporting arm and the second sub-display unit 1650.
Fig. 17 is a top view of an example in which the user wears the head-mounted device, and fig. 18 is a side view of the same. As shown in figs. 17 and 18, when the first sub-connection unit 1620 and the second sub-connection unit 1640 are in the first state, at least a first portion of the first sub-display unit 1630 and the second sub-display unit 1650 is located in the user's visible region and faces the user. The first sub-display unit 1630 and the second sub-display unit 1650 may each include a display screen that displays information content and an optical system that receives the light emitted from the display screen and performs optical path conversion on it to form an enlarged virtual image, so that the user can view the enlarged virtual image 1710 of the first image through the first sub-display unit 1630 and the second sub-display unit 1650.
In the head-mounted device, the display unit is connected through the connection unit to the fixing unit worn on the user's head, so that the weight of the display unit is borne mainly by the head rather than by the bridge of the nose and the ears, reducing the load on the nose bridge and ears.
Since the device and system embodiments substantially correspond to the method embodiments, reference may be made to the method embodiments for parts of their description. The device and system embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which one of ordinary skill in the art can understand and implement without inventive effort.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways without departing from the spirit and scope of the present application; the embodiments described above are exemplary only and should not be taken as limiting the application. For example, the division into units or sub-units is only one kind of logical function division; in actual implementation there may be other division manners, for example, a plurality of units or sub-units may be combined together. In addition, various elements or components may be combined or integrated into another system, or some features may be omitted or not implemented.
Additionally, the systems, apparatus, and methods described, as well as the illustrations of various embodiments, may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present application. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The foregoing is directed to embodiments of the present invention, and it is understood that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the invention.

Claims (9)

1. An information processing method, applicable to an electronic device, wherein the electronic device comprises a display unit and an acquisition unit, the display unit has a display area, and the acquisition unit has an acquisition area; the method comprises:
acquiring, through the acquisition unit, a first action of an operation body in the acquisition area;
determining a corresponding candidate object set from the inputtable object set according to the first action, wherein the candidate object set belongs to the inputtable object set; when the first action of the operation body is the current motion direction of the operation body, in a first form, in the acquisition area, the determining a corresponding candidate object set from the inputtable object set according to the first action comprises: forming a candidate object set from the key objects corresponding to the adjacent row, in the current motion direction, of a preset initial row in a character and/or command input device serving as the inputtable objects; or, when the first action of the operation body is the current position information of the operation body in a static state in the acquisition area, the determining a corresponding candidate object set from the inputtable object set according to the first action comprises: obtaining a mapping relation between the row marks of the character and/or command input device serving as the inputtable objects and the position information of the operation body; determining the row mark corresponding to the current position information according to the mapping relation and the current position information; and forming a candidate object set from all key objects corresponding to the determined row mark;
displaying the set of candidate objects in the display area;
acquiring, through the acquisition unit, a second action of the operation body in the acquisition area;
determining a corresponding selection object set from the candidate object set according to the second action, wherein the selection object set belongs to the candidate object set;
and triggering an operation instruction corresponding to an inputtable object in the selection object set.
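Read as an input pipeline, claim 1 describes two narrowing steps: a first action selects a row of keys as the candidate object set, and a second action selects one key object from that row. The following Python sketch illustrates the position-based variant of the claim; the keyboard layout, function names, and the index-based selection rule are illustrative assumptions, not part of the claimed method:

```python
# Illustrative sketch of the two-step selection in claim 1 (assumed names).
# Step 1: the operation body's static position maps to a row mark, whose
#         key objects form the candidate object set.
# Step 2: a second action picks one key object from the candidate set.

KEYBOARD_ROWS = {  # assumed row-mark -> key-object layout
    0: list("QWERTYUIOP"),
    1: list("ASDFGHJKL"),
    2: list("ZXCVBNM"),
}

def candidate_set_from_position(y, region_height, rows=KEYBOARD_ROWS):
    """Map the current position of a static operation body to a row mark
    and return that row's key objects as the candidate object set."""
    row_mark = min(int(y / region_height * len(rows)), len(rows) - 1)
    return row_mark, rows[row_mark]

def select_key(candidates, index):
    """Second action: determine the selection object set (here one key)."""
    return candidates[index % len(candidates)]

row_mark, candidates = candidate_set_from_position(0.5, 1.0)
key = select_key(candidates, 3)  # e.g. a fourth-finger change -> fourth key
```

In this sketch, an operation body held mid-height in the acquisition area selects the middle row, and the second action then indexes into that row; the claim itself leaves the concrete mapping open.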
2. The method of claim 1,
the displaying the set of candidate objects comprises:
displaying the candidate object set in the display area and hiding the inputtable objects in the inputtable object set other than the candidate object set;
alternatively,
displaying, in the display area, the candidate object set with a first effect and displaying, with a second effect, the inputtable objects in the inputtable object set other than the candidate object set, wherein the first effect is different from the second effect;
and/or,
the acquiring, by the acquiring unit, a first action of the operation body in the acquiring region includes:
when the operation body is in a static state in the acquisition area, acquiring the current position information of the operation body in the acquisition area through the acquisition unit;
the determining a corresponding candidate object set from the inputtable object set according to the first action comprises:
obtaining a mapping relation between the distribution of the inputtable object set and the position information of the operation body;
and forming a candidate object set from the inputtable objects, in the inputtable object set, that correspond to the current position information, according to the mapping relation and the current position information.
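The position-mapping branch of claim 2 can be sketched as a lookup: a distribution of the inputtable object set over the acquisition region is queried with the operation body's current position. The zone boundaries and object names below are hypothetical, chosen only to make the mapping concrete:

```python
# Assumed distribution of the inputtable object set over the acquisition
# region: each horizontal zone holds a subset of the inputtable objects.
DISTRIBUTION = [
    ((0.0, 0.5), ["A", "B", "C"]),  # left half (hypothetical zone)
    ((0.5, 1.0), ["D", "E", "F"]),  # right half (hypothetical zone)
]

def candidate_set(x, distribution=DISTRIBUTION):
    """Form the candidate object set from the inputtable objects whose
    zone contains the operation body's current x position."""
    for (lo, hi), objects in distribution:
        if lo <= x < hi:
            return objects
    return []  # position outside every mapped zone
```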
3. The method according to any one of claims 1-2, wherein the inputtable objects in the set of inputtable objects are key objects in a character and/or command input device of the electronic device;
accordingly,
the triggering an operation instruction corresponding to an inputtable object in the selection object set comprises:
and triggering an input instruction of the character and/or command corresponding to each key object in the selection object set.
4. The method of claim 3, wherein after acquiring the first action, and prior to determining the set of candidate objects, the method further comprises:
determining the type of the operation body that issued the first action;
when the operation body that issued the first action is of a first type, using a first keyboard as the character and/or command input device;
when the operation body that issued the first action is of a second type, using a second keyboard as the character and/or command input device;
wherein the first type and the second type are different, and the first keyboard and the second keyboard are different.
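Claim 4's device switching amounts to a lookup from operation-body type to keyboard. The concrete types ("hand", "stylus") and keyboards below are assumptions for illustration; the claim only requires that the two types differ and that the two keyboards differ:

```python
# Hypothetical operation-body types and keyboards (the claim only requires
# that the first/second types and first/second keyboards each differ).
FIRST_KEYBOARD = "full QWERTY keyboard"
SECOND_KEYBOARD = "numeric keypad"

def pick_input_device(operator_type):
    """Choose the character and/or command input device based on the type
    of the operation body that issued the first action."""
    if operator_type == "hand":    # assumed first type
        return FIRST_KEYBOARD
    if operator_type == "stylus":  # assumed second type
        return SECOND_KEYBOARD
    raise ValueError(f"unknown operation body type: {operator_type}")
```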
5. The method of claim 1, wherein the second action of the operation body is a motion change occurring after the operation body appears in the acquisition area in a second form;
the determining a corresponding selection object set from the candidate object set according to the second action comprises:
determining, according to the motion change of the operation body, a specific part of the operation body that changes, wherein the specific part belongs to the operation body;
forming a selection object set from the key objects, in the candidate object set, corresponding to the changed specific part of the operation body;
alternatively,
determining, according to the motion change of the operation body, a specific part of the operation body whose change amplitude exceeds a preset amplitude threshold, wherein the specific part belongs to the operation body;
forming a selection object set from the key objects, in the candidate object set, corresponding to the specific part whose change amplitude exceeds the preset amplitude threshold;
alternatively,
determining, according to the motion change of the operation body, the specific part with the largest change amplitude among the changed specific parts of the operation body, wherein the specific part belongs to the operation body;
and forming a selection object set from the key objects, in the candidate object set, corresponding to the specific part with the largest change amplitude.
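The three alternatives in claim 5 differ only in how the changed part is filtered: any change, change above a threshold, or the largest change. A sketch of the third alternative follows; the part names ("index", "middle", "ring") and the part-to-key bindings are hypothetical:

```python
def select_by_max_amplitude(amplitudes, part_to_key):
    """Among the parts of the operation body that changed, find the part
    with the largest change amplitude and return the key object bound to
    it as the selection object set.

    amplitudes:  part name -> measured change amplitude (0 means unchanged)
    part_to_key: part name -> key object in the candidate object set
    """
    changed = {part: a for part, a in amplitudes.items() if a > 0}
    if not changed:
        return []  # no part changed, so no selection
    strongest = max(changed, key=changed.get)
    return [part_to_key[strongest]]

# Hypothetical finger amplitudes and finger-to-key bindings:
amps = {"index": 0.8, "middle": 0.3, "ring": 0.0}
bindings = {"index": "J", "middle": "K", "ring": "L"}
selection = select_by_max_amplitude(amps, bindings)
```

The first two alternatives would replace the `max` step with `changed.keys()` or a threshold filter, returning possibly several key objects.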
6. An information processing method, applicable to an electronic device, wherein the electronic device comprises a display unit and an acquisition unit, the display unit has a display area, and the acquisition unit has an acquisition area; the method comprises:
acquiring, through the acquisition unit, a first action of an operation body in the acquisition area;
determining, according to the first action, a corresponding candidate object set from an inputtable object set, comprising: determining a target sub-area from at least two sub-areas according to the first action; and moving a cursor into the target sub-area and forming a candidate object set from the cursor-operable positions in the target sub-area; wherein the display area comprises the at least two sub-areas, an inputtable object in the inputtable object set is a cursor-operable position in a sub-area, and the candidate object set belongs to the inputtable object set;
displaying the set of candidate objects in the display area;
acquiring, through the acquisition unit, a second action of the operation body in the acquisition area, and determining, according to the second action, a target cursor operable position from the cursor operable positions of the candidate object set;
moving the cursor to the target cursor operable position, and forming the target cursor operable position into a selection object set;
determining a corresponding selection object set from the candidate object set according to the second action, wherein the selection object set belongs to the candidate object set;
the triggering an operation instruction corresponding to an inputtable object in the selection object set comprises: triggering a cursor selection instruction corresponding to the target cursor operable position in the selection object set.
7. The method of claim 6,
when the first action of the operation body is the current motion direction of the operation body, in a first form, in the acquisition area, the determining a target sub-area corresponding to the cursor from the at least two sub-areas according to the first action comprises:
taking an adjacent sub-area in the current motion direction of a preset initial sub-area in the at least two sub-areas as a target sub-area;
alternatively,
when the first action of the operation body is the current position information of the operation body in a static state in the acquisition area, the determining a target sub-area corresponding to the cursor from the at least two sub-areas according to the first action comprises:
acquiring a mapping relation between the position of an operation body and a sub-region;
and taking the sub-region corresponding to the current position as a target sub-region corresponding to the cursor according to the mapping relation and the current position information of the operation body.
8. The method of claim 7,
the acquiring, through the acquisition unit, a second action of the operation body in the acquisition area comprises:
when the operating body moves in a second form in the acquisition area, acquiring the current movement direction of the operating body in the acquisition area through the acquisition unit;
the determining, according to the second action, a target cursor operable position corresponding to the cursor from the cursor operable positions of the candidate object set comprises:
and taking the adjacent cursor operable position in the current motion direction of the current cursor operable position corresponding to the cursor in the candidate object set as the target cursor operable position.
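Claim 8's cursor step can be sketched as an index move over the ordered cursor-operable positions of the candidate object set. The clamping at the edges below is an assumption, since the claim does not specify what happens at a boundary:

```python
def step_cursor(positions, current_index, direction):
    """Move the cursor to the adjacent cursor-operable position in the
    current motion direction (direction = +1 or -1), clamped to the
    candidate object set's positions (clamping is an assumption)."""
    return max(0, min(len(positions) - 1, current_index + direction))

slots = ["pos0", "pos1", "pos2", "pos3"]  # hypothetical operable positions
dest = step_cursor(slots, 1, +1)  # adjacent position in motion direction
```

A wrap-around policy at the edges would be equally consistent with the claim; only the "adjacent position in the current motion direction" behavior is claimed.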
9. An electronic device, comprising a display unit and an acquisition unit, wherein the display unit has a display area and the acquisition unit has an acquisition area, the electronic device further comprising:
the first action acquisition unit is used for acquiring a first action of the operation body in the acquisition area through the acquisition unit;
a candidate object determination unit configured to determine, according to the first action, a corresponding candidate object set from an inputtable object set, wherein the candidate object set belongs to the inputtable object set; when the first action of the operation body is the current motion direction of the operation body, in a first form, in the acquisition area, the candidate object determination unit is specifically configured to form a candidate object set from the key objects corresponding to the adjacent row, in the current motion direction, of a preset initial row in a character and/or command input device serving as the inputtable objects; or, when the first action of the operation body is the current position information of the operation body in a static state in the acquisition area, the candidate object determination unit is specifically configured to obtain a mapping relation between the row marks of the character and/or command input device serving as the inputtable objects and the position information of the operation body, determine the row mark corresponding to the current position information according to the mapping relation and the current position information, and form a candidate object set from all key objects corresponding to the determined row mark;
a candidate object display unit configured to display the candidate object set in the display area;
the second action acquisition unit is used for acquiring a second action of the operation body in the acquisition area through the acquisition unit;
a selectable object determination unit configured to determine, according to the second action, a corresponding selection object set from the candidate object set, wherein the selection object set belongs to the candidate object set;
and an instruction triggering unit configured to trigger an operation instruction corresponding to an inputtable object in the selection object set.
CN201710348619.5A 2012-09-03 2012-09-03 Information processing method and electronic equipment Active CN107193373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710348619.5A CN107193373B (en) 2012-09-03 2012-09-03 Information processing method and electronic equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210322235.3A CN103677224B (en) 2012-09-03 2012-09-03 Method for processing information and electronic device
CN201710348619.5A CN107193373B (en) 2012-09-03 2012-09-03 Information processing method and electronic equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201210322235.3A Division CN103677224B (en) 2012-09-03 2012-09-03 Method for processing information and electronic device

Publications (2)

Publication Number Publication Date
CN107193373A CN107193373A (en) 2017-09-22
CN107193373B true CN107193373B (en) 2020-04-24

Family

ID=50315047

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710348619.5A Active CN107193373B (en) 2012-09-03 2012-09-03 Information processing method and electronic equipment
CN201210322235.3A Active CN103677224B (en) 2012-09-03 2012-09-03 Method for processing information and electronic device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201210322235.3A Active CN103677224B (en) 2012-09-03 2012-09-03 Method for processing information and electronic device

Country Status (1)

Country Link
CN (2) CN107193373B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107466396A (en) * 2016-03-22 2017-12-12 深圳市柔宇科技有限公司 Head-mounted display apparatus and its control method
CN109243046A (en) * 2018-07-09 2019-01-18 黄廉镇 A kind of business data processing method based on application program

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101344816A (en) * 2008-08-15 2009-01-14 华南理工大学 Human-machine interaction method and device based on sight tracing and gesture discriminating
CN101727181A (en) * 2009-12-30 2010-06-09 刘坤 Method for realizing computer input and output through 3D technology
CN102541401A (en) * 2010-12-21 2012-07-04 联想(北京)有限公司 Information processing equipment and method for processing information

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2007079425A2 (en) * 2005-12-30 2007-07-12 Apple Inc. Portable electronic device with multi-touch input
CN102335510B (en) * 2010-07-16 2013-10-16 华宝通讯股份有限公司 Human-computer interaction system
US9104306B2 (en) * 2010-10-29 2015-08-11 Avago Technologies General Ip (Singapore) Pte. Ltd. Translation of directional input to gesture
CN102591458A (en) * 2011-12-27 2012-07-18 上海聚力传媒技术有限公司 Method, device and equipment for executing video control operation based on human motion


Also Published As

Publication number Publication date
CN107193373A (en) 2017-09-22
CN103677224A (en) 2014-03-26
CN103677224B (en) 2017-04-19

Similar Documents

Publication Publication Date Title
CN106687889B (en) Display portable text entry and editing
US9830071B1 (en) Text-entry for a computing device
CN100407118C (en) 3D pointing method, 3D display control method, 3D pointing device, 3D display control device, 3D pointing program, and 3D display control program
KR101947034B1 (en) Apparatus and method for inputting of portable device
KR101323281B1 (en) Input device and method for inputting character
US8316319B1 (en) Efficient selection of characters and commands based on movement-inputs at a user-inerface
US20130069883A1 (en) Portable information processing terminal
US9857971B2 (en) System and method for receiving user input and program storage medium thereof
KR102247020B1 (en) Keyboard Typing System and Keyboard Typing Method with Finger Gesture
CN103733115A (en) Wearable computer with curved display and navigation tool
CN101452356A (en) Input device, display device, input method, display method, and program
CN101006493A (en) Virtual keypad input device
WO2014050147A1 (en) Display control device, display control method and program
WO2015050322A1 (en) Method by which eyeglass-type display device recognizes and inputs movement
US10621766B2 (en) Character input method and device using a background image portion as a control region
WO2022267760A1 (en) Key function execution method, apparatus and device, and storage medium
US20130241896A1 (en) Display control apparatus and control method therefor
CA2707917C (en) Virtual keyboard of a mobile terminal
CN107193373B (en) Information processing method and electronic equipment
CN109739349A (en) A kind of palm dummy keyboard input method, system and input sensing identifier
US20130227477A1 (en) Semaphore gesture for human-machine interface
CN106406567B (en) Switch the method and apparatus of user&#39;s input method on touch panel device
KR20120068259A (en) Method and apparatus for inpputing character using touch input
CN117480483A (en) Text input method for augmented reality device
JP6232694B2 (en) Information processing apparatus, control method thereof, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant