CN112650390A - Input method, related device and input system

Input method, related device and input system

Info

Publication number
CN112650390A
Authority
CN
China
Prior art keywords
user
input
operation object
entity
information
Prior art date
Legal status
Pending
Application number
CN202011534697.2A
Other languages
Chinese (zh)
Inventor
束珉鑫
余飞
闫珂
蒋昌军
戴晓楠
Current Assignee
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202011534697.2A
Publication of CN112650390A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 Character input methods

Abstract

The application provides an input method, a related device, and an input system. The method includes: collecting an input operation performed by a user on an entity operation object, the entity operation object being an object on which the user performs input operations; determining input operation information corresponding to the collected user input operation, the input operation information including click operation information performed by the user on the entity operation object or trajectory information written by the user on the entity operation object; and executing, on a virtual operation object corresponding to the entity operation object in a virtual reality scene, an action corresponding to the input operation information. This input mode lets the user enter information in the virtual reality scene in the same way as in the real scene, and can therefore improve the efficiency of information input in the virtual reality scene.

Description

Input method, related device and input system
Technical Field
The present application relates to the field of information input technologies, and in particular, to an input method, a related device, and an input system.
Background
The mainstream input mode of virtual reality devices currently on the market is to display an input-method view within the virtual reality view in which the user is immersed; using a handheld controller, the user points a ray at a particular key in that view and then presses a button to enter it.
This input mode requires cumbersome user operations, and its input efficiency is low.
Disclosure of Invention
Given this state of the art, the application provides an input method, a related device, and an input system that can improve the efficiency with which a user inputs information into a virtual reality scene.
In order to achieve the above purpose, the technical solution proposed by the present application is specifically as follows:
an input method, comprising:
collecting input operation executed by a user on an entity operation object; the entity operation object is used for enabling a user to execute input operation;
determining input operation information corresponding to the acquired user input operation; the input operation information comprises click operation information executed by a user on the entity operation object or track information written by the user on the entity operation object;
and executing an action corresponding to the input operation information on a virtual operation object corresponding to the entity operation object in a virtual reality scene.
An input system, comprising:
the system comprises an entity operation object, an input monitoring device and virtual reality equipment;
the entity operation object is used for enabling a user to execute input operation;
the input monitoring device is used for acquiring input operation executed by a user on the entity operation object, determining input operation information according to the acquired user input operation, and sending the input operation information to the virtual reality equipment;
and the virtual reality equipment is used for executing the action corresponding to the input operation information on the virtual operation object corresponding to the entity operation object in the virtual reality scene according to the input operation information.
An input device, comprising:
the operation acquisition unit is used for acquiring input operation executed by a user on the entity operation object; the entity operation object is used for enabling a user to execute input operation;
the analysis processing unit is used for determining input operation information corresponding to the acquired user input operation; the input operation information comprises click operation information executed by a user on the entity operation object or track information written by the user on the entity operation object;
and the input display unit is used for executing the action corresponding to the input operation information on the virtual operation object corresponding to the entity operation object in the virtual reality scene.
An input electronic device comprising:
a memory and a processor;
the memory is connected with the processor and used for storing programs;
the processor is used for realizing the input method by running the program in the memory.
A storage medium having stored thereon a computer program which, when executed by a processor, implements the input method described above.
According to the input method of the application, the entity operation object is mapped into the virtual reality space, input operation information is determined from the input operation the user performs on the entity operation object, and the action corresponding to that information is executed on the virtual operation object corresponding to the entity operation object; in other words, the user's input operation on the entity operation object is mapped onto the virtual operation object. This lets the user input information into the virtual reality scene using the same input mode as in the real scene, which improves the efficiency of information input; because this mode matches the user's existing input habits, it also improves the input experience.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a schematic structural diagram of an input system provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an input method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of an input operation capture scenario provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of another input operation capture scenario provided by an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a mapping relationship between an entity operation object and a virtual operation object provided in the embodiment of the present application;
fig. 6 is a schematic diagram of coordinates of an area a in a virtual reality scene provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of a trace written by a user on a physical operation object according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another input system provided in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an input device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an input electronic device according to an embodiment of the present application.
Detailed Description
The technical solution of the embodiments of the application is suitable for information input in virtual reality scenes: by adopting it, a user can perform input operations in the real scene to input information into the virtual reality scene.
For example, the technical solution of the embodiment of the present application can be applied to the information input system shown in fig. 1. The information input system comprises an entity operation object 1, an input monitoring device 2 and a virtual reality device 3.
The entity operation object 1 may be any physical object on which the user can perform an input operation, for example writing or clicking. Illustratively, the entity operation object may be a physical keyboard, a handwriting pad, writing paper, a keyboard drawing made by the user, the user's palm, and so on. In principle, any physical entity on which the user can perform an input operation can serve as the entity operation object.
It should be noted that, in order to clarify the specific input content corresponding to the clicking operation by the user, the input content corresponding to each area on the physical operation object may be predetermined, and when the user clicks a certain area on the physical operation object, the input content corresponding to the clicking operation by the user may be determined according to the predetermined input content corresponding to the area.
The input monitoring device 2 is used for monitoring and collecting the input operation of the user on the entity operation object. The input monitoring device also has a data analysis function, and can analyze the input operation executed by the user on the entity operation object, determine the object area clicked by the user, determine the input content of the user, record the writing track of the user and the like.
The input monitoring device 2 generates input operation information based on the collected user input operation, and transmits the input operation information to the virtual reality apparatus 3.
The virtual reality device 3 is used for generating a virtual reality scene, and a virtual operation object corresponding to the entity operation object is displayed in the virtual reality scene. The virtual reality equipment receives the input operation information sent by the input monitoring device, and executes the action corresponding to the input operation information in the virtual operation object in the virtual reality scene, thereby achieving the purpose of inputting the information in the virtual reality scene.
Based on the functions of the parts of the information input system, the embodiment of the application provides an input method, which can collect the input operation executed by the user on the entity operation object, and realize information input in the virtual reality scene based on the input operation.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 2, an input method provided in an embodiment of the present application includes:
s201, collecting input operation executed by a user on an entity operation object.
The entity operation object may be any physical entity capable of enabling a user to perform an input operation, such as a general entity keyboard, a handwriting board, a keyboard designed by the user in a customized manner, a keyboard drawn on paper or other paintable entity by the user, or any physical entity without drawing content, for example, paper, a carton, a washboard, a desktop panel, a book, or a part of a body of the user, for example, a palm, an arm, and the like.
In theory, any physical entity on which a user can perform an input operation may be the physical operation object.
The input operation performed by the user on the physical operation object may be a conventional input operation the user performs in a real scene, such as typing information on a computer keyboard or a mobile phone keyboard, or writing on a handwriting pad or on paper.
Users' habits for performing input operations in real scenes are well established: a user who works with a computer and a mobile phone year-round and writes every day has formed skilled keyboard-input and handwriting habits. Performing the input operation on the entity operation object therefore matches the user's existing input habits, can be done proficiently, and yields high input efficiency.
For collecting the input operation performed by the user on the entity operation object, the embodiments of the application provide two feasible implementations: first, the operation can be monitored and captured by a camera device; second, the user can hold a handheld input device to perform the input operation on the entity operation object, and the handheld input device collects the operation.
For example, referring to fig. 3, in the first implementation the camera device is fixed so that it remains stable and produces clear images, and the entity operation object is placed within its field of view to obtain an image of the object. When the user performs an input operation on the entity operation object, for example tapping a key on a keyboard or writing on a handwriting pad or writing paper, the camera device captures the operation.
The above-mentioned camera device may include an ordinary optical camera and an infrared ranging camera; the two cameras cooperate to monitor the user's input operation in three dimensions, so that key presses and writing operations can be captured more accurately.
Because the camera device is fixed and the entity operation object is stable, the real input environment stays constant, which helps consolidate the user's input habits and improves input efficiency.
Referring to fig. 4, in the second implementation the user holds a handheld input device (the auxiliary input device in the figure) to perform the input operation on the entity operation object, touching or approaching it with the device, for example approaching or contacting a part of the palm (with the correspondence between each palm region and its input content defined in advance). A camera, distance sensor, or similar component on the handheld input device senses the operation and determines which position or region of the entity operation object the user touched or approached.
Collecting the input operation with a handheld input device offers greater flexibility: the user can perform input operations on a movable entity operation object, which satisfies information-input needs in more active scenarios.
And S202, determining input operation information corresponding to the acquired user input operation.
The input operation information comprises click operation information executed by a user on the entity operation object or track information written by the user on the entity operation object.
In general, a user's input operations in a real scene are performed by clicking keys, by writing, or by voice; voice input, however, does not need to be displayed in the virtual reality scene, or at least its display mapping between the real scene and the virtual reality scene is relatively simple.
Accordingly, based on the collected user input operation, click operation information performed by the user on the entity operation object or trajectory information written by the user on the entity operation object can be determined.
The click operation information executed by the user on the entity operation object includes, but is not limited to, position area information clicked by the user, key information clicked by the user, the number of times the user clicks the key, the click duration, and other click action information.
The trajectory information written by the user on the physical operator object may include coordinate information of the trajectory written by the user.
To improve the efficiency of recording and processing the information, the input operation information is preferably recorded in a lightweight form. For example, when the user input operation is a click operation, the input operation information may carry only key-related event information, that is, which key (or position area) the user pressed and whether the press was long or short; if the user long-presses the Q key, the input operation information may be represented as [Q, long press].
When the user input operation is a writing operation, the input operation information may include user writing trace coordinate information, such as position coordinate information of key points on the writing trace, and the like.
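As a concrete illustration (a minimal sketch only; the record layout and field names below are assumptions for illustration, not defined by the application), such lightweight input operation information might be represented as simple tagged records:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ClickEvent:
    """Lightweight record of one click operation, e.g. [Q, long press]."""
    key: str    # key or position-area identifier, e.g. "Q"
    press: str  # "long" or "short"

@dataclass
class WriteEvent:
    """Lightweight record of one writing operation on the entity operation object."""
    points: List[Tuple[float, float, float]]  # (x, y, z) coordinates of key points on the trace

# Example: the user long-presses the Q key, then writes a short stroke.
events = [
    ClickEvent(key="Q", press="long"),
    WriteEvent(points=[(0.10, 0.20, 0.0), (0.12, 0.24, 0.0), (0.15, 0.26, 0.0)]),
]
```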
And S203, executing the action corresponding to the input operation information on the virtual operation object corresponding to the entity operation object in the virtual reality scene.
Specifically, in the embodiment of the present application, the entity operation object is mapped to a virtual reality scene, so that a virtual operation object corresponding to the entity operation object is displayed in the virtual reality scene.
For example, as shown in fig. 5, after the above-mentioned physical operation object is captured by the imaging device, the imaging result is transmitted to the virtual reality device, and the virtual reality device maps the physical operation object to the generated virtual reality scene. The virtual reality device may map the entity operation object to a virtual reality scene through spatial coordinate transformation, such as translation, rotation, scaling, and the like, to obtain a virtual operation object.
The display form of the virtual operation object may match the display form of the entity operation object in the real scene, or it may be adjusted relative to the real scene, as long as the overall area layout of the virtual operation object matches the entity operation object, or each area of the virtual operation object has a clear correspondence with an area of the entity operation object; the embodiments of the application place no strict limit on this.
Based on the mapping relationship between the entity operation object and the virtual operation object in the virtual reality scene, when the input operation information of the user is determined by collecting the input operation performed by the user on the entity operation object, an action corresponding to the input operation information is performed on the virtual operation object in the virtual reality scene, that is, an entity operation object action corresponding to the input operation of the user is performed on the virtual operation object.
For example, if the physical operand is a physical keyboard, the virtual operand corresponding to the physical operand is a virtual keyboard.
When the user clicks the Q key on the physical keyboard, the input operation information [Q, short press] is determined from the collected input operation, and based on it a short-press action on the Q key is performed on the virtual keyboard in the virtual reality scene, reproducing the physical keyboard's response to the user's short press of the Q key.
As can be seen from the above description, the input method provided in this embodiment maps the entity operation object into the virtual reality space, determines input operation information from the input operation the user performs on the entity operation object, and executes the corresponding action on the virtual operation object corresponding to the entity operation object; that is, the user's input operation on the entity operation object is mapped onto the virtual operation object. This lets the user input information into the virtual reality scene using the same input mode as in the real scene, which improves the efficiency of information input; because this mode matches the user's existing input habits, it also improves the input experience.
As described above, in general, a user inputs information into a virtual reality scene by way of a click input or a handwriting input. Next, the present embodiment introduces a specific processing procedure of the input method proposed in the present embodiment from two aspects, namely, a click input mode and a handwriting input mode.
When the user performs the click input operation on the entity operation object, the position coordinates of the click of the user on the entity operation object can be determined according to the collected click operation of the user. The position coordinate is a position coordinate in a space coordinate system established based on the physical operation object, and a certain position coordinate on the physical operation object represents a position of a certain point on the physical operation object.
And determining the input operation information according to the position coordinates clicked and pressed by the user on the entity operation object and the clicking action executed by the user on the entity operation object.
The click action performed by the user on the entity operation object specifically refers to the type of the click action performed by the user on the entity operation object, and may be, for example, a long click, a short click, a single click, a double click, and the like.
For example, the input operation information may be composed of position coordinates of a click made by the user on the physical operation object and a click action performed by the user on the physical operation object.
Alternatively, since the purpose is to display the user's input operation in the virtual reality scene, the embodiment of the application determines, from the position coordinate the user clicked on the physical operation object, the corresponding virtual operation area on the virtual operation object; that is, it determines which operation area of the virtual operation object the user clicked, so that the clicked area can be displayed on the virtual operation object.
After the virtual operation area is determined from the virtual operation object, the input operation information is generated by using the information of the virtual operation area, for example, the area identifier of the virtual operation area, or the position information of the virtual operation area, and the click action information executed by the user on the physical operation object. Based on the input operation information, the type of the click action by the user and the area of the virtual operation object corresponding to the click action can be specified.
Further, in order to facilitate a user to perform an input operation on the entity operation object to implement information input, the embodiment of the present application further defines in advance a corresponding relationship between different object regions of the entity operation object and input content.
For example, for the palm shown in fig. 4, input contents corresponding to different areas on the palm are predefined. When the entity operation object is an entity keyboard or a user-defined drawn keyboard, each key on the keyboard corresponds to an input content, for example, each key corresponds to a character and the like.
According to the position coordinates clicked by the user on the entity operation object, the input content of the user can be determined by combining the corresponding relation between different object areas on the entity operation object and the input content. That is, the input content corresponding to the object area where the position coordinate pressed by the user is located is the user input content.
After a virtual operation area corresponding to the position coordinate is determined on a virtual operation object corresponding to the entity operation object according to the position coordinate clicked on the entity operation object by the user, the information of the virtual operation area, such as the identification of the virtual operation area, the position coordinate of the virtual operation area and the like, is combined with the input content of the user and the click action information executed on the entity operation object by the user to generate the input operation information. Based on the input operation information, the type of the click action of the user, the region of the virtual operation object corresponding to the click action, and the user input content can be specified.
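A minimal sketch of this combination step, assuming a dictionary-based correspondence between object areas and input content (the area identifiers and contents are illustrative only):

```python
# Hypothetical correspondence between object areas on the entity operation
# object and their input content (cf. the palm regions of fig. 4).
AREA_CONTENT = {"A": "1", "B": "2", "C": "3"}

def build_click_info(area_id: str, press: str) -> dict:
    """Combine the virtual operation area, the input content looked up from
    the area/content correspondence, and the click action type into one
    input-operation-information record."""
    return {
        "area": area_id,
        "content": AREA_CONTENT.get(area_id),  # None if no content is defined for the area
        "action": press,
    }

print(build_click_info("A", "short"))  # {'area': 'A', 'content': '1', 'action': 'short'}
```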
As an exemplary implementation manner, the above-mentioned determining, according to the position coordinate clicked by the user on the physical operation object, the virtual operation area corresponding to the position coordinate on the virtual operation object corresponding to the physical operation object can be implemented in two manners as follows:
the first implementation mode comprises the following steps:
and determining a virtual operation area corresponding to the position coordinate on the virtual operation object according to the position coordinate clicked by the user on the entity operation object and the spatial corresponding relation between the entity operation object and the virtual operation object.
As described in the foregoing embodiments, the virtual operation object displayed in the virtual reality scene by the virtual reality device is obtained by translating, rotating, scaling, and otherwise spatially transforming an image of the physical operation object. There is therefore a clear conversion between the coordinate system of the physical operation object and that of the virtual operation object: translating, rotating, and scaling the former yields the latter. That is, an explicit spatial correspondence exists between the physical operation object and the virtual operation object.
After the position coordinate clicked by the user on the entity operation object is determined, the virtual operation area corresponding to the position coordinate can be determined on the virtual operation object according to the spatial corresponding relation between the entity operation object and the virtual operation object.
For example, based on the spatial correspondence between the physical operation object and the virtual operation object, a transformation matrix Ts between the two coordinate systems may be determined in advance. Then, from the position coordinates (x, y, z) the user clicked on the physical operation object, the corresponding position on the virtual operation object is computed as (x', y', z') = (x, y, z) × Ts, and the virtual operation area containing that position is determined.
Assume the user clicks (x11, y11, z11) on the physical operation object; the corresponding position on the virtual operation object, computed with the conversion matrix Ts, is (x11', y11', z11').
Further, suppose the vertex coordinates of area A on the virtual operation object are as shown in fig. 6. If x11' > x21 and x11' < x22, and y11' > y22 and y11' < y21, the clicked position falls within area A in the x-y plane. Let dz = z11' - z21 be the distance between the clicked position and the plane of area A; when dz < Δz, the user is determined to have clicked area A.
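A short sketch of this computation, assuming Ts is a 4×4 homogeneous transform applied to row vectors (the application does not fix the matrix form) and using illustrative numbers:

```python
import numpy as np

def map_to_virtual(p_entity, Ts):
    """Map a clicked point on the physical operation object into the virtual
    operation object's coordinate system: (x', y', z') = (x, y, z, 1) @ Ts."""
    x, y, z = p_entity
    p = np.array([x, y, z, 1.0]) @ Ts
    return p[:3] / p[3]  # back from homogeneous coordinates

def clicked_area_a(p_virtual, x21, x22, y21, y22, z21, delta_z):
    """Hit-test against area A of the virtual operation object (fig. 6):
    inside A's x/y bounds, and within delta_z of A's plane."""
    x, y, z = p_virtual
    dz = z - z21
    return (x21 < x < x22) and (y22 < y < y21) and (dz < delta_z)

# Illustrative numbers: identity transform, area A spanning the unit square.
Ts = np.eye(4)
p = map_to_virtual((0.5, 0.5, 0.01), Ts)
print(clicked_area_a(p, x21=0.0, x22=1.0, y21=1.0, y22=0.0, z21=0.0, delta_z=0.05))  # True
```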
The second implementation mode comprises the following steps:
inputting the position coordinates clicked by a user on the entity operation object into a position mapping model trained in advance, and determining a virtual operation area corresponding to the position coordinates on a virtual operation object corresponding to the entity operation object;
the position mapping model is obtained by training a position coordinate clicked and pressed by a user on an entity operation object as a training sample and a virtual operation area corresponding to the position coordinate clicked and pressed by the user on the entity operation object as a sample label.
Specifically, in the embodiment of the present application, a position mapping model is trained in advance, and the position mapping model is used for calculating a virtual operation area corresponding to a position coordinate according to the position coordinate clicked by a user on an entity operation object.
During training, the position coordinates of click input operations performed by the user on the entity operation object serve as training samples and the corresponding virtual operation areas serve as labels; the model parameters are updated with a backpropagation algorithm until the model can accurately compute the virtual operation area corresponding to an input position coordinate.
For example, the position mapping model may be built based on a deep neural network.
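A minimal sketch of such a position mapping model as a small feed-forward classifier trained with backpropagation; PyTorch, the layer sizes, and the placeholder data are illustrative assumptions, not specified by the application:

```python
import torch
import torch.nn as nn

n_areas = 30  # e.g. number of keys/regions on the virtual operation object

# Small feed-forward network: clicked (x, y, z) -> virtual operation area,
# treated as a classification over the n_areas regions.
model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_areas),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training samples: position coordinates the user clicked on the entity
# operation object; labels: the virtual operation area of each click.
coords = torch.rand(256, 3)                 # placeholder sample coordinates
labels = torch.randint(0, n_areas, (256,))  # placeholder area labels

for _ in range(100):  # update parameters by backpropagation
    optimizer.zero_grad()
    loss = loss_fn(model(coords), labels)
    loss.backward()
    optimizer.step()

# Inference: the virtual operation area predicted for one clicked coordinate.
predicted_area = model(coords[:1]).argmax(dim=1)
```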
When a user performs writing input operation on an entity operation object, determining the writing track coordinate written on the entity operation object by the user according to the collected writing operation performed on the entity operation object by the user.
For example, as shown in fig. 7, when a user writes on a palm with a finger or other operation body, such as a handheld input device, a writing track of the user may be collected by a camera or the handheld input device, and position coordinates of each point on the writing track may be calculated and determined.
Input operation information can be generated by using writing track information written on the entity operation object by the user, and the writing track of the user can be clarified based on the input operation information.
Furthermore, according to the writing track coordinates written on the entity operation object by the user, the text content written by the user can be determined.
For example, if the trajectory the user writes on the palm is as shown in fig. 7, analysis of the writing trajectory can determine the text content written by the user, which may be any one of several candidate characters (for example, the digit 7 or a similarly shaped character).
When generating the input operation information, the text content written by the user and the coordinates of the trajectory written on the entity operation object may together form the input operation information. From this input operation information, both the user's writing trajectory and the written text content can be determined.
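A brief sketch of assembling such writing-based input operation information; recognize_text is a hypothetical stand-in for whatever handwriting recognition step produces the written text content:

```python
from typing import List, Tuple

def recognize_text(trace: List[Tuple[float, float]]) -> List[str]:
    """Placeholder recognizer returning candidate characters for a trace,
    ordered by confidence; a real system would run handwriting recognition."""
    return ["7"]  # illustrative result for a trace like the one in fig. 7

def build_write_info(trace: List[Tuple[float, float]]) -> dict:
    candidates = recognize_text(trace)
    return {
        "trace": trace,                  # writing track coordinates on the entity operation object
        "text": candidates[0],           # best-guess written content
        "alternatives": candidates[1:],  # other recognition candidates, if any
    }

trace = [(0.10, 0.20), (0.14, 0.18), (0.12, 0.10)]
print(build_write_info(trace))
```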
Based on the above description of the implementation of the input method, the structure and function of the input system proposed in the embodiment of the present application are described as follows:
referring to fig. 1, an input system provided in an embodiment of the present application includes:
the system comprises an entity operation object 1, an input monitoring device 2 and a virtual reality device 3.
The physical object 1 is used for a user to perform an input operation, and may be a physical object of any form on which the user can perform an input operation, such as a click operation or a writing operation.
For example, in order to clarify specific input content corresponding to the clicking operation of the user, input content corresponding to each area on the physical operation object may be predetermined, and when the user clicks a certain area on the physical operation object, the input content corresponding to the clicking operation of the user may be determined according to the predetermined input content corresponding to the area.
The input monitoring device 2 is configured to collect an input operation performed by a user on the entity operation object 1. The input monitoring device also has a data analysis function, and can analyze the input operation executed by the user on the entity operation object 1, determine the object area clicked by the user, determine the input content of the user, record the writing track of the user and the like.
The input monitoring device 2 generates input operation information based on the collected user input operation, and transmits the input operation information to the virtual reality apparatus 3.
The input monitoring device 2 may be a camera device, such as a general optical camera, an infrared distance measuring camera, or a portable handheld input device.
When the input monitoring device 2 is a camera device, it is fixed in place to monitor and photograph the entity operation object 1, thereby collecting the user's input operations on it.
When the input monitoring device 2 is a handheld input device that can be moved freely, the user holds it to perform the input operation on the entity operation object 1; during the operation, the device collects information about the user's input, such as click operations or writing.
And the virtual reality device 3 is used for generating a virtual reality scene, and displaying a virtual operation object correspondingly matched with the entity operation object in the virtual reality scene. The virtual reality device 3 receives the input operation information transmitted from the input monitoring apparatus 2, and executes an action corresponding to the input operation information in a virtual operation object in a virtual reality scene.
The input monitoring device 2 and the virtual reality device 3 may be connected in a wired or wireless manner such as a wireless network or bluetooth communication.
The input system provided in the embodiment of the application maps the entity operation object into the virtual reality space, determines input operation information from the input operation the user performs on the entity operation object, and executes the corresponding action on the virtual operation object corresponding to the entity operation object; that is, the user's input operation on the entity operation object is mapped onto the virtual operation object. This lets the user input information into the virtual reality scene using the same input mode as in the real scene, which improves the efficiency of information input; because this mode matches the user's existing input habits, it also improves the input experience.
Illustratively, referring to fig. 8, the input monitoring device 2 described above includes:
an information acquisition module 21 and a data processing module 22.
The information acquisition module 21 includes monitoring devices such as an optical camera and an infrared distance measurement camera, and is configured to acquire an input operation performed by a user on the entity operation object 1, and send acquired information of the user input operation to the data processing module 22.
And the data processing module 22 is configured to analyze and process the information sent by the information acquisition module 21 to obtain input operation information. For example, the position coordinates of the user click, the type of the user click action, the coordinates of the user writing trajectory, the analysis of the user input content, and the like are determined, and the analysis result is composed as input operation information and sent to the virtual reality device 3.
The virtual reality device 3 includes an information processing module 31 and a display module 32.
The information processing module 31 receives the image of the entity operation object 1 transmitted by the input monitoring device 2 and uses it to display a virtual operation object corresponding to the entity operation object in the virtual reality scene.
The information processing module 31 is further configured to receive input operation information sent by the input monitoring device 2, and perform analysis processing on the received input operation information, for example, determine a user click area, analyze a user writing trajectory, and determine display content and a display position on a virtual operation object.
The information processing module 31 uses the analysis result to drive the display module 32 to display, on the virtual operation object of the virtual reality scene, the action corresponding to the received input operation information. For example, if the information processing module 31 receives the input operation information [Q, short press], representing a short press of the keyboard's Q key, it determines that a short-press action on the Q key should be performed on the virtual keyboard of the virtual reality scene, and controls the display module 32 to display the effect of key Q being short-pressed, for example the key flashing once.
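A minimal sketch of this dispatch path; the class and method names are hypothetical, chosen only to mirror the modules of fig. 8:

```python
class DisplayModule:
    """Stand-in for display module 32: renders key actions on the virtual keyboard."""
    def flash_key(self, key: str) -> None:
        print(f"virtual key {key} flashes once")     # short-press effect

    def hold_key(self, key: str) -> None:
        print(f"virtual key {key} shown held down")  # long-press effect

class InformationProcessingModule:
    """Stand-in for information processing module 31: interprets input
    operation information and drives the display module."""
    def __init__(self, display: DisplayModule) -> None:
        self.display = display

    def handle(self, info: dict) -> None:
        if info["action"] == "short":
            self.display.flash_key(info["key"])
        elif info["action"] == "long":
            self.display.hold_key(info["key"])

# Example: the record [Q, short press] arrives from the input monitoring device.
InformationProcessingModule(DisplayModule()).handle({"key": "Q", "action": "short"})
```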
Further, specific functions and working contents of the structures of the parts of the input system may also refer to processing contents described in the above embodiment of the input method, and the embodiment of the present application is not described repeatedly.
An embodiment of the present application further provides an input device, as shown in fig. 9, the input device including:
an operation acquisition unit 100 for acquiring an input operation performed by a user on an entity operation object; the entity operation object is used for enabling a user to execute input operation;
an analysis processing unit 110 configured to determine input operation information corresponding to the acquired user input operation; the input operation information comprises click operation information executed by a user on the entity operation object or track information written by the user on the entity operation object;
and an input display unit 120 configured to execute an action corresponding to the input operation information on a virtual operation object corresponding to the physical operation object in a virtual reality scene.
The input device provided in the embodiment of the application maps the entity operation object into the virtual reality space, determines input operation information from the input operation the user performs on the entity operation object, and executes the corresponding action on the virtual operation object corresponding to the entity operation object; that is, the user's input operation on the entity operation object is mapped onto the virtual operation object. This lets the user input information into the virtual reality scene using the same input mode as in the real scene, which improves the efficiency of information input; because this mode matches the user's existing input habits, it also improves the input experience.
Optionally, the determining input operation information corresponding to the collected user input operation includes:
according to the collected click operation executed by the user on the entity operation object, determining the position coordinate of the click operation of the user on the entity operation object;
and determining input operation information according to the position coordinates clicked and pressed by the user on the entity operation object and the clicking action executed by the user on the entity operation object.
Optionally, the determining input operation information according to the position coordinate clicked by the user on the entity operation object and the click action executed by the user on the entity operation object includes:
according to the position coordinates clicked by the user on the entity operation object, determining a virtual operation area corresponding to the position coordinates on the virtual operation object corresponding to the entity operation object;
and generating input operation information by using the information of the virtual operation area and the click action information executed by the user on the entity operation object.
Optionally, the entity operation object is an operation object defining a correspondence between different object areas and input content;
the determining input operation information according to the position coordinates clicked and pressed by the user on the entity operation object and the clicking action executed by the user on the entity operation object comprises:
according to the position coordinates clicked by the user on the entity operation object, determining a virtual operation area corresponding to the position coordinates on the virtual operation object corresponding to the entity operation object;
determining user input content according to the position coordinates clicked by the user on the entity operation object and the corresponding relation between different object areas of the entity operation object and the input content;
and generating input operation information by using the information of the virtual operation area, the user input content and the click action information executed by the user on the entity operation object.
Optionally, the determining, according to the position coordinate clicked by the user on the entity operation object, a virtual operation area corresponding to the position coordinate on the virtual operation object corresponding to the entity operation object includes:
determining a virtual operation area corresponding to the position coordinate on the virtual operation object according to the position coordinate clicked by a user on the entity operation object and the spatial corresponding relation between the entity operation object and the virtual operation object;
alternatively,
inputting the position coordinates clicked by the user on the entity operation object into a position mapping model trained in advance, so as to determine a virtual operation area corresponding to the position coordinates on a virtual operation object corresponding to the entity operation object;
the position mapping model is obtained by training a position coordinate clicked and pressed by a user on the entity operation object as a training sample and a virtual operation area corresponding to the position coordinate clicked and pressed by the user on the entity operation object as a sample label.
Optionally, the determining input operation information corresponding to the collected user input operation includes:
determining the writing track coordinate written on the entity operation object by the user according to the acquired writing operation executed on the entity operation object by the user;
and generating input operation information at least by utilizing the writing track coordinates written on the entity operation object by the user.
Optionally, the generating, by using at least a writing trajectory coordinate written on the entity operation object by the user, input operation information includes:
determining the text content written by the user according to the writing track coordinates written by the user on the entity operation object;
and generating input operation information by using the text content written by the user and the writing track coordinates written on the entity operation object by the user.
Optionally, the acquiring an input operation performed by a user on the entity operation object includes:
acquiring input operation executed by a user on an entity operation object through a camera device;
alternatively,
and acquiring input operation executed on the entity operation object by a user holding the handheld input device through the handheld input device.
Specifically, please refer to the contents of the above method embodiments for the specific working contents of each unit of the input device, which is not repeated here.
Another embodiment of the present application further provides an input electronic device, as shown in fig. 10, including:
a memory 200 and a processor 210;
wherein, the memory 200 is connected to the processor 210 for storing programs;
the processor 210 is configured to implement the processing steps of the input method disclosed in any of the above embodiments by running the program stored in the memory 200.
Specifically, the input electronic device may further include: a bus, a communication interface 220, an input device 230, and an output device 240.
The processor 210, the memory 200, the communication interface 220, the input device 230, and the output device 240 are connected to each other through a bus. Wherein:
a bus may include a path that transfers information between components of a computer system.
The processor 210 may be a general-purpose processor, such as a general-purpose central processing unit (CPU) or microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control execution of the programs of the present solution. It may also be a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The processor 210 may include a main processor and may also include a baseband chip, modem, and the like.
The memory 200 stores programs for executing the technical solution of the present invention, and may also store an operating system and other key services. In particular, the program may include program code including computer operating instructions. More specifically, memory 200 may include a read-only memory (ROM), other types of static storage devices that may store static information and instructions, a Random Access Memory (RAM), other types of dynamic storage devices that may store information and instructions, a disk storage, a flash, and so forth.
The input device 230 may include a means for receiving data and information input by a user, such as a keyboard, mouse, camera, scanner, light pen, voice input device, touch screen, pedometer, or gravity sensor, among others.
Output device 240 may include equipment that allows output of information to a user, such as a display screen, a printer, speakers, and the like.
Communication interface 220 may include any device that uses any transceiver or the like to communicate with other devices or communication networks, such as an ethernet network, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc.
The processor 210 executes the programs stored in the memory 200 and invokes the other devices described above, which together implement the steps of the input method provided by the embodiments of the present application.
Another embodiment of the present application further provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the input method provided in any of the above embodiments.
Specifically, the specific working content of each part of the input electronic device and the specific processing content of the computer program on the storage medium when being executed by the processor can refer to the content of each embodiment of the input method, which is not described herein again.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps in the method of each embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and technical features described in each embodiment may be replaced or combined.
The modules and sub-modules in the device and the terminal in the embodiments of the application can be combined, divided and deleted according to actual needs.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of a module or a sub-module is only one logical division, and there may be other divisions when the terminal is actually implemented, for example, a plurality of sub-modules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules or sub-modules described as separate parts may or may not be physically separate, and parts that are modules or sub-modules may or may not be physical modules or sub-modules, may be located in one place, or may be distributed over a plurality of network modules or sub-modules. Some or all of the modules or sub-modules can be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, each functional module or sub-module in the embodiments of the present application may be integrated into one processing module, or each module or sub-module may exist alone physically, or two or more modules or sub-modules may be integrated into one module. The integrated modules or sub-modules may be implemented in the form of hardware, or may be implemented in the form of software functional modules or sub-modules.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. A software unit may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. An input method, comprising:
collecting an input operation executed by a user on an entity operation object, wherein the entity operation object is provided for the user to execute input operations;
determining input operation information corresponding to the collected user input operation, wherein the input operation information comprises click operation information executed by the user on the entity operation object or track information written by the user on the entity operation object;
and executing, on a virtual operation object corresponding to the entity operation object in a virtual reality scene, an action corresponding to the input operation information.
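For illustration only and not part of the claims: the three steps of claim 1 can be read as a collect–interpret–replay pipeline. The minimal Python sketch below models that pipeline; every class, function, and field name in it is a hypothetical stand-in, not terminology from this application.

```python
from dataclasses import dataclass

@dataclass
class InputOperationInfo:
    """Structured result of interpreting one user input operation."""
    kind: str      # "click" or "trajectory"
    payload: dict  # coordinates and action type, or track points

class VirtualOperationObject:
    """Stand-in for the object rendered in the virtual reality scene."""
    def apply(self, info: InputOperationInfo) -> None:
        print(f"virtual object performs {info.kind}: {info.payload}")

def collect_operation() -> dict:
    # Placeholder for a camera- or handheld-device-based capture step.
    return {"kind": "click", "payload": {"x": 0.42, "y": 0.17, "action": "press"}}

def determine_info(raw: dict) -> InputOperationInfo:
    # Resolve the raw capture into structured input operation information.
    return InputOperationInfo(kind=raw["kind"], payload=raw["payload"])

def run_pipeline(virtual_obj: VirtualOperationObject) -> None:
    raw = collect_operation()    # step 1: collect the input operation
    info = determine_info(raw)   # step 2: determine input operation information
    virtual_obj.apply(info)      # step 3: execute the matching action in VR

run_pipeline(VirtualOperationObject())
```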
2. The method according to claim 1, wherein collecting the input operation executed by the user on the entity operation object comprises:
acquiring, through a camera device, the input operation executed by the user on the entity operation object;
or,
acquiring, through a handheld input device, the input operation executed on the entity operation object by the user holding the handheld input device.
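As a non-limiting illustration of claim 2, the two acquisition channels can share one collector interface, with a camera-based and a handheld-device-based implementation behind it. The sketch below is a hypothetical structure; a real implementation would involve fingertip tracking in camera frames or a device sensor stream.

```python
from abc import ABC, abstractmethod

class OperationCollector(ABC):
    @abstractmethod
    def collect(self) -> dict:
        """Return one raw input operation captured on the entity operation object."""

class CameraCollector(OperationCollector):
    def collect(self) -> dict:
        # A real implementation would locate the fingertip in camera frames.
        return {"source": "camera", "x": 0.30, "y": 0.55, "action": "press"}

class HandheldDeviceCollector(OperationCollector):
    def collect(self) -> dict:
        # A real implementation would read the handheld device's sensor stream.
        return {"source": "handheld", "x": 0.31, "y": 0.54, "action": "press"}

def acquire(collector: OperationCollector) -> dict:
    return collector.collect()

print(acquire(CameraCollector()))
print(acquire(HandheldDeviceCollector()))
```

Either collector feeds the same downstream interpretation step, which is one reason to keep the acquisition channel behind an interface.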
3. The method of claim 1, wherein determining the input operation information corresponding to the collected user input operation comprises:
determining, according to the collected click operation executed by the user on the entity operation object, the position coordinates of the user's click on the entity operation object;
and determining the input operation information according to the position coordinates of the user's click on the entity operation object and the click action executed by the user on the entity operation object.
4. The method according to claim 3, wherein determining the input operation information according to the position coordinates of the user's click on the entity operation object and the click action executed by the user on the entity operation object comprises:
determining, according to the position coordinates of the user's click on the entity operation object, a virtual operation area corresponding to the position coordinates on the virtual operation object corresponding to the entity operation object;
and generating the input operation information by using information of the virtual operation area and information of the click action executed by the user on the entity operation object.
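One plausible, purely illustrative reading of claim 4: quantize the click's normalized coordinates into a grid of virtual operation areas and package the result with the click action. The grid size below is invented for the example.

```python
# Assumed layout: the virtual operation object is divided into a 4x10
# grid of virtual operation areas (an invented configuration).
GRID_ROWS, GRID_COLS = 4, 10

def to_virtual_area(x: float, y: float) -> int:
    """Map normalized (x, y) on the entity operation object to an area index."""
    col = min(int(x * GRID_COLS), GRID_COLS - 1)
    row = min(int(y * GRID_ROWS), GRID_ROWS - 1)
    return row * GRID_COLS + col

def build_input_info(x: float, y: float, action: str) -> dict:
    """Combine the virtual operation area with the click action information."""
    return {"area": to_virtual_area(x, y), "action": action}

print(build_input_info(0.42, 0.17, "press"))  # {'area': 4, 'action': 'press'}
```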
5. The method according to claim 3, wherein the entity operation object is an operation object on which correspondences between different object areas and input content are defined; and
determining the input operation information according to the position coordinates of the user's click on the entity operation object and the click action executed by the user on the entity operation object comprises:
determining, according to the position coordinates of the user's click on the entity operation object, a virtual operation area corresponding to the position coordinates on the virtual operation object corresponding to the entity operation object;
determining the user input content according to the position coordinates of the user's click on the entity operation object and the correspondences between the different object areas of the entity operation object and the input content;
and generating the input operation information by using the information of the virtual operation area, the user input content, and the information of the click action executed by the user on the entity operation object.
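Claim 5 additionally reads the input content off predefined object areas, so a lookup table keyed by area is one natural sketch. The strip layout and its contents below are invented for illustration; a real entity operation object might be, for example, a card printed with a keyboard layout.

```python
# Invented example: five horizontal strips on the entity operation
# object, each pre-assigned to a piece of input content.
AREA_CONTENT = {0: "A", 1: "B", 2: "C", 3: "space", 4: "delete"}

def area_of(x: float) -> int:
    # Five equal strips along the x axis, purely for illustration.
    return min(int(x * 5), 4)

def build_input_info(x: float, action: str) -> dict:
    area = area_of(x)
    return {"area": area, "content": AREA_CONTENT[area], "action": action}

print(build_input_info(0.65, "press"))
# {'area': 3, 'content': 'space', 'action': 'press'}
```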
6. The method according to claim 4 or 5, wherein determining, according to the position coordinates of the user's click on the entity operation object, the virtual operation area corresponding to the position coordinates on the virtual operation object corresponding to the entity operation object comprises:
determining the virtual operation area corresponding to the position coordinates on the virtual operation object according to the position coordinates of the user's click on the entity operation object and a spatial correspondence between the entity operation object and the virtual operation object;
or,
inputting the position coordinates of the user's click on the entity operation object into a pre-trained position mapping model to determine the virtual operation area corresponding to the position coordinates on the virtual operation object corresponding to the entity operation object;
wherein the position mapping model is trained by taking position coordinates of user clicks on the entity operation object as training samples and taking the virtual operation areas corresponding to those position coordinates as sample labels.
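The two alternatives in claim 6 correspond to a fixed geometric mapping and a learned one. In the hedged sketch below, the geometric branch is a simple affine map, and a 1-nearest-neighbour lookup stands in for the pre-trained position mapping model; the training data, labels, and transform parameters are all invented, and a real system might use a regression or classification network instead.

```python
import numpy as np

# Alternative 1: a fixed spatial correspondence, sketched here as an
# affine map (scale + offset) from entity-object to virtual coordinates.
SCALE = np.array([2.0, 2.0])
OFFSET = np.array([0.5, -0.5])

def map_by_geometry(point) -> np.ndarray:
    return SCALE * np.asarray(point) + OFFSET

# Alternative 2: a stand-in for the pre-trained position mapping model.
# Click coordinates are the training samples; the virtual operation
# areas they fall in are the sample labels.
train_coords = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9], [0.9, 0.9]])
train_labels = ["area_0", "area_1", "area_2", "area_3"]

def map_by_model(point) -> str:
    distances = np.linalg.norm(train_coords - np.asarray(point), axis=1)
    return train_labels[int(np.argmin(distances))]

print(map_by_geometry([0.25, 0.75]))  # geometric correspondence -> [1. 1.]
print(map_by_model([0.2, 0.8]))       # learned correspondence -> 'area_2'
```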
7. The method of claim 1, wherein determining the input operation information corresponding to the collected user input operation comprises:
determining, according to the collected writing operation executed by the user on the entity operation object, writing track coordinates written by the user on the entity operation object;
and generating the input operation information by using at least the writing track coordinates written by the user on the entity operation object.
8. The method according to claim 7, wherein generating the input operation information by using at least the writing track coordinates written by the user on the entity operation object comprises:
determining text content written by the user according to the writing track coordinates written by the user on the entity operation object;
and generating the input operation information by using the text content written by the user and the writing track coordinates written by the user on the entity operation object.
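For claims 7 and 8, one illustrative shape of the data flow is: writing-track coordinates in, recognized text plus the track out. The recognize() function below is a toy stand-in for a handwriting recognition model, not the recognizer this application describes.

```python
# Toy stand-in for handwriting recognition: classify a stroke as a
# horizontal or vertical mark from its endpoints. A real system would
# run a trained handwriting recognition model here.
def recognize(track: list[tuple[float, float]]) -> str:
    (x0, y0), (x1, y1) = track[0], track[-1]
    return "-" if abs(x1 - x0) >= abs(y1 - y0) else "|"

def build_input_info(track: list[tuple[float, float]]) -> dict:
    """Bundle the recognized text with the raw writing track coordinates."""
    return {"track": track, "text": recognize(track)}

stroke = [(0.10, 0.50), (0.40, 0.52), (0.80, 0.51)]
print(build_input_info(stroke))  # text: '-'
```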
9. An input system, comprising:
an entity operation object, an input monitoring device, and a virtual reality device; wherein
the entity operation object is provided for a user to execute input operations;
the input monitoring device is configured to collect an input operation executed by the user on the entity operation object, determine input operation information according to the collected user input operation, and send the input operation information to the virtual reality device;
and the virtual reality device is configured to execute, according to the input operation information, an action corresponding to the input operation information on a virtual operation object corresponding to the entity operation object in a virtual reality scene.
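To make the division of labour in claim 9 concrete, the hypothetical sketch below wires the three components together, with a queue standing in for whatever link carries input operation information from the input monitoring device to the virtual reality device; neither the class names nor the transport choice come from this application.

```python
from queue import Queue

class InputMonitoringDevice:
    def __init__(self, channel: Queue):
        self.channel = channel

    def on_user_click(self, x: float, y: float) -> None:
        info = {"x": x, "y": y, "action": "press"}  # determine operation info
        self.channel.put(info)                      # send it to the VR device

class VirtualRealityDevice:
    def __init__(self, channel: Queue):
        self.channel = channel

    def step(self) -> None:
        info = self.channel.get()
        print(f"render click on virtual object at ({info['x']}, {info['y']})")

channel: Queue = Queue()
monitor = InputMonitoringDevice(channel)
vr = VirtualRealityDevice(channel)
monitor.on_user_click(0.42, 0.17)  # user presses on the entity operation object
vr.step()                          # the VR device replays it in the scene
```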
10. An input device, comprising:
an operation acquisition unit, configured to collect an input operation executed by a user on an entity operation object, wherein the entity operation object is provided for the user to execute input operations;
an analysis processing unit, configured to determine input operation information corresponding to the collected user input operation, wherein the input operation information comprises click operation information executed by the user on the entity operation object or track information written by the user on the entity operation object;
and an input display unit, configured to execute, on a virtual operation object corresponding to the entity operation object in a virtual reality scene, an action corresponding to the input operation information.
11. An electronic input device, comprising:
a memory and a processor;
wherein the memory is connected to the processor and configured to store a program;
and the processor is configured to implement the input method according to any one of claims 1 to 8 by running the program stored in the memory.
12. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the input method according to any one of claims 1 to 8.
CN202011534697.2A 2020-12-22 2020-12-22 Input method, related device and input system Pending CN112650390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011534697.2A CN112650390A (en) 2020-12-22 2020-12-22 Input method, related device and input system

Publications (1)

Publication Number Publication Date
CN112650390A 2021-04-13

Family

ID=75359327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011534697.2A Pending CN112650390A (en) 2020-12-22 2020-12-22 Input method, related device and input system

Country Status (1)

Country Link
CN (1) CN112650390A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106873783A (en) * 2017-03-29 2017-06-20 联想(北京)有限公司 Information processing method, electronic equipment and input unit
CN107291222A (en) * 2017-05-16 2017-10-24 阿里巴巴集团控股有限公司 Interaction processing method, device, system and the virtual reality device of virtual reality device
CN107357434A (en) * 2017-07-19 2017-11-17 广州大西洲科技有限公司 Information input equipment, system and method under a kind of reality environment
CN108269307A (en) * 2018-01-15 2018-07-10 歌尔科技有限公司 A kind of augmented reality exchange method and equipment
CN109491586A (en) * 2018-11-14 2019-03-19 网易(杭州)网络有限公司 Virtual object control method and device, electronic equipment, storage medium
CN111766937A (en) * 2019-04-02 2020-10-13 广东虚拟现实科技有限公司 Virtual content interaction method and device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
US8086971B2 (en) Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
CN102904996B (en) The method and device of a kind of handset touch panel performance test, system
CN104169920B (en) For drawing the system of chemical constitution, method and apparatus using touch and gesture
CN109407954B (en) Writing track erasing method and system
CN105210012A (en) Virtual tools for use with touch-sensitive surfaces
CN102707827A (en) Electronic device and method for calibration of a touch screen
CN102693000B (en) In order to perform calculation element and the method for multi-finger gesture function
CN111401318B (en) Action recognition method and device
US20150169134A1 (en) Methods circuits apparatuses systems and associated computer executable code for providing projection based human machine interfaces
Sathiyanarayanan et al. Map navigation using hand gesture recognition: A case study using myo connector on apple maps
EP1228480B1 (en) Method for digitizing writing and drawing with erasing and/or pointing capability
CN105447897A (en) Server apparatus and data integration method
CN112506340A (en) Device control method, device, electronic device and storage medium
JP2015158900A (en) Information processing device, information processing method and information processing program
US20220350404A1 (en) Method for image display and related products
CN104407696A (en) Virtual ball simulation and control method of mobile device
CN113961107B (en) Screen-oriented augmented reality interaction method, device and storage medium
US20230135661A1 (en) Image processing method and apparatus for smart pen, and electronic device
CN112650390A (en) Input method, related device and input system
CN111258413A (en) Control method and device of virtual object
CN110750193B (en) Scene topology determination method and device based on artificial intelligence
GB2377607A (en) Analysing and displaying motion of hand held instrument
US11853483B2 (en) Image processing method and apparatus for smart pen including pressure switches, and electronic device
CN115033170A (en) Input control system and method based on virtual keyboard and related device
CN113535055B (en) Method, equipment and storage medium for playing point-to-read based on virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination