CN116069169A - Data processing method and system for inputting virtual text based on a smart watch - Google Patents


Info

Publication number
CN116069169A
Authority
CN
China
Prior art keywords: user, virtual, text, interaction, interactive
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310319289.2A
Other languages
Chinese (zh)
Inventor
吴贤荣
曾贤富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Light Speed Times Technology Co ltd
Original Assignee
Shenzhen Light Speed Times Technology Co ltd
Application filed by Shenzhen Light Speed Times Technology Co., Ltd.
Priority to CN202310319289.2A
Publication of CN116069169A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F 3/0483: Interaction with page-structured environments, e.g. book metaphor
    • G06F 3/0487: Interaction using specific features provided by the input device
    • G06F 3/0488: Interaction using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Input of data by handwriting, e.g. gesture or text
    • G06F 3/04886: Partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a data processing method and system for inputting virtual text based on a smart watch, applied in the field of virtual data processing. The smart watch and a VR device are connected and kept in data synchronization, so that the sensing devices built into the smart watch can be used within the virtual application environment: the user's hand data are synchronized into that environment, and when virtual text is input in the virtual application scene the user's hands can accurately hit the virtual keyboard. The user's output efficiency when exchanging text information in the virtual application scene is therefore not impaired, and the user's VR interaction experience is effectively improved.

Description

Data processing method and system for inputting virtual text based on a smart watch
Technical Field
The invention relates to the field of virtual data processing, and in particular to a data processing method and system for inputting virtual text based on a smart watch.
Background
VR technology belongs to the field of virtual reality. Through integrated development with interaction technology, three-dimensional imaging technology, sensing technology and the like, it enhances the immersion and interactivity of virtual reality and expands the user's imagination. Since 2016, VR devices, regarded as the core hardware of virtual reality, have entered the mainstream consumer market, and applications of virtual reality technology have been continually extended and newly launched. However, although the display technology for three-dimensional graphics is relatively mature, real-time dynamic interactive generation and the popularization of virtual reality hardware have not yet been realized.
At present, when VR interaction is implemented with Unity3D and a user needs to input virtual text in a virtual application scene, the hands often cannot hit the virtual keyboard accurately. This greatly reduces the efficiency of text output in the virtual application scene and degrades the user's VR interaction experience.
Disclosure of Invention
The invention aims to solve the problem that, when a user needs to input virtual text in a virtual application scene, text output efficiency in the scene is greatly reduced because the hands cannot accurately hit the virtual keyboard, and to this end provides a data processing method and system for inputting virtual text based on a smart watch.
The invention adopts the following technical means to solve this technical problem:
the invention provides a data processing method for inputting virtual text based on a smart watch, comprising the following steps:
identifying the current virtual application scene of a user, and judging whether the virtual application scene requires text interaction;
if so, performing data synchronization between a VR device and a pre-connected smart watch, acquiring hand data of the user through the sensing devices of the smart watch, synchronizing the hand data into the virtual application scene, identifying the position information of the hand data in the virtual application scene, generating a corresponding virtual drop point in the virtual application scene according to the position information, and generating at least one virtual keyboard in the virtual application scene according to the virtual drop point, wherein the sensing devices comprise an accelerometer and an acceleration sensor, and the virtual keyboard specifically takes the user's index finger as the fixed drop point of the virtual drop point;
identifying the text interaction efficiency of the user, and judging whether the text interaction efficiency reaches a preset interaction frequency;
if not, taking the initial interaction keyword entered through the virtual keyboard when the user performs text interaction, inputting it into a prediction model, generating at least one additional keyword through the model's prediction, pre-combining the additional keywords with the initial interaction keyword in the virtual application environment, generating corresponding selectable sequences according to a preset priority ranking, selecting among the selectable sequences by recognizing the sliding swing direction of the user's hand, and confirming the selection through the virtual keyboard, wherein the priority ranking is specifically ordered by the number of times the user has used each interaction keyword, and the hand swing directions specifically comprise up, down, left and right.
Further, the step of inputting the initial interaction keyword into a prediction model and generating at least one additional keyword in the virtual application environment through the model's prediction comprises:
recording the interactive content input when the user performs text interaction, uploading the interactive content to a preset database, identifying at least one text feature in the interactive content, and counting the number of times each text feature is recognized;
and prioritizing the pre-generated interactive text content according to the interactive text the user has used most often among the recognition counts, generating at least one interactive copy commonly used by the user.
Further, before the step of generating at least one virtual keyboard in the virtual application scene according to the virtual drop point, the method includes:
acquiring visual field information of the user in the virtual application scene;
judging whether hand data of the user are detected in the visual field information;
and if so, generating a virtual keyboard in the virtual application scene based on the hand data and the virtual drop point.
Further, the step of identifying the text interaction efficiency of the user and judging whether the text interaction efficiency reaches a preset interaction frequency further comprises:
collecting the text data input by the user on the virtual keyboard, generating the text interaction information to be uploaded by the user based on the text data, and simultaneously recording the period of time consumed in generating the text interaction information to be uploaded;
judging whether the consumed period of time exceeds a preset period;
and if so, providing an interaction assistance function for the user: displaying pre-combined text auxiliary data in the virtual application environment while the user inputs the text data, and generating interaction auxiliary text after the user selects among the text auxiliary data.
Further, before the step of identifying the text interaction efficiency of the user, the method includes:
tracking the changing position of the hand data in real time, and judging whether the distance between the changed position and the virtual keyboard reaches a preset distance threshold;
and if so, adjusting the generation position of the virtual keyboard in the virtual application environment according to the changed position, wherein the generation position always tracks the virtual drop point.
Further, the step of identifying the current virtual application scene of the user comprises:
tracking the gaze point of the user in the virtual application scene, and judging whether the gaze point falls on an interactive object;
and if so, acquiring the interaction content output by the interactive object, generating corresponding interaction options based on the interaction content, and presenting to the user the interaction mode corresponding to whichever interaction option the user selects, wherein the interaction modes comprise text interaction, touch interaction and gesture interaction.
Further, the step of identifying the position information of the hand data in the virtual application scene and generating the corresponding virtual drop point in the virtual application scene according to the position information further comprises:
acquiring the type of interactive content to be input in the text interaction step, and presenting a corresponding preset range for the virtual drop point based on the interactive content type, wherein the interactive content types comprise numbers, symbols, directions and letters;
and providing corresponding hand prompts for the user according to the preset range of the virtual drop point, so that the user selects through the hand prompts the interactive content corresponding to each hand, wherein one hand specifically corresponds to interactive content of the symbol and letter types, and the other to interactive content of the number and direction types.
The invention further provides a data processing system for inputting virtual text based on a smart watch, comprising:
a judging module, configured to identify the current virtual application scene of a user and judge whether the virtual application scene requires text interaction;
an execution module, configured to, if so, perform data synchronization between a VR device and a pre-connected smart watch, acquire hand data of the user through the sensing devices of the smart watch, synchronize the hand data into the virtual application scene, identify the position information of the hand data in the virtual application scene, generate a corresponding virtual drop point in the virtual application scene according to the position information, and generate at least one virtual keyboard in the virtual application scene according to the virtual drop point, wherein the sensing devices comprise an accelerometer and an acceleration sensor, and the virtual keyboard specifically takes the user's index finger as the fixed drop point of the virtual drop point;
a second judging module, configured to identify the text interaction efficiency of the user and judge whether the text interaction efficiency reaches a preset interaction frequency;
and a second execution module, configured to, if not, take the initial interaction keyword entered through the virtual keyboard when the user performs text interaction, input it into a prediction model, generate at least one additional keyword in the virtual application environment through the model's prediction, pre-combine the additional keywords with the initial interaction keyword, generate corresponding selectable sequences according to a preset priority ranking, select among the selectable sequences by recognizing the sliding swing direction of the user's hand, and confirm the selection through the virtual keyboard, wherein the priority ranking is specifically ordered by the number of times the user has used each interaction keyword, and the hand swing directions specifically comprise up, down, left and right.
Further, the second execution module further includes:
a recording unit, configured to record the interactive content input when the user performs text interaction, upload the interactive content to a preset database, identify at least one text feature in the interactive content, and count the number of times each text feature is recognized;
and a generating unit, configured to prioritize the interactive content corresponding to each text feature according to the interactive text the user has used most often among the recognition counts, and to generate at least one interactive copy commonly used by the user.
Further, the system further comprises:
an acquisition module, configured to acquire visual field information of the user in the virtual application scene;
a third judging module, configured to judge whether hand data of the user are detected in the visual field information;
and a third execution module, configured to, if so, generate a virtual keyboard in the virtual application scene based on the hand data and the virtual drop point.
The data processing method and system for inputting virtual text based on a smart watch provided by the invention have the following beneficial effects:
the smart watch and the VR device are connected and kept in data synchronization, so that the user can make use of the sensing devices built into the smart watch within the virtual application environment; the user's hand data are synchronized into the virtual application environment, so that when virtual text is input in the virtual application scene the user's hands can accurately hit the virtual keyboard. The user's output efficiency when exchanging text information in the virtual application scene is therefore not impaired, and the user's VR interaction experience is effectively improved.
Drawings
FIG. 1 is a flow chart of an embodiment of a data processing method for inputting virtual text based on a smart watch according to the present invention;
FIG. 2 is a block diagram illustrating an embodiment of a data processing system for inputting virtual text based on a smart watch according to the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention; the achievement of the objects, the functional features and the advantages of the invention are further described with reference to the embodiments and the accompanying drawings.
The technical solutions in the embodiments of the invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Referring to FIG. 1, a data processing method for inputting virtual text based on a smart watch according to an embodiment of the invention comprises:
S1: identifying the current virtual application scene of a user, and judging whether the virtual application scene requires text interaction;
S2: if so, performing data synchronization between a VR device and a pre-connected smart watch, acquiring hand data of the user through the sensing devices of the smart watch, synchronizing the hand data into the virtual application scene, identifying the position information of the hand data in the virtual application scene, generating a corresponding virtual drop point in the virtual application scene according to the position information, and generating at least one virtual keyboard in the virtual application scene according to the virtual drop point, wherein the sensing devices comprise an accelerometer and an acceleration sensor, and the virtual keyboard specifically takes the user's index finger as the fixed drop point of the virtual drop point;
S3: identifying the text interaction efficiency of the user, and judging whether the text interaction efficiency reaches a preset interaction frequency;
S4: if not, taking the initial interaction keyword entered through the virtual keyboard when the user performs text interaction, inputting it into a prediction model, generating at least one additional keyword through the model's prediction, pre-combining the additional keywords with the initial interaction keyword in the virtual application environment, generating corresponding selectable sequences according to a preset priority ranking, selecting among the selectable sequences by recognizing the sliding swing direction of the user's hand, and confirming the selection through the virtual keyboard, wherein the priority ranking is specifically ordered by the number of times the user has used each interaction keyword, and the hand swing directions specifically comprise up, down, left and right.
In this embodiment, the system identifies the virtual application scene shown in the VR glasses worn by the user and judges whether the user needs text interaction in that scene, so as to execute the corresponding steps. For example, when the system determines that the user is not currently interacting with any object in the virtual application scene, it keeps observing the user in real time until the user needs to interact with an object in the scene. When the system determines that the user currently needs to perform text interaction with an object in the virtual application scene, it synchronizes data between the VR glasses and the smart watch worn by the user. Because the VR glasses are connected to the smart watch in advance via Bluetooth, the system can obtain the user's hand movement data through the sensing devices on the smart watch, convert them into the user's hand data within the virtual application scene, and use them in place of the virtual hand data generated by the VR glasses. By synchronizing the user's hand data into the virtual application scene in real time and identifying their position within the scene, the system generates a virtual drop point at that position and at least one virtual keyboard anchored to the drop point. The system then identifies the user's text interaction efficiency and judges whether it reaches the preset interaction frequency. For example, when the system determines that the user's current text interaction efficiency reaches the preset interaction frequency, it does not apply the interaction assistance function to help the user interact with objects in the virtual application environment. When the system determines that the current efficiency does not reach the preset frequency, it identifies the initial interaction keyword the user has entered on the virtual keyboard while interacting with an object, inputs the keyword into the prediction model, and generates at least one additional keyword in the virtual application environment. The additional keywords are pre-combined with the initial keyword into corresponding selectable sequences ordered by the preset priority ranking; the system selects among the sequences by recognizing the sliding swing direction of the user's hand, and the user confirms the choice on the virtual keyboard. For instance, if the initial interaction keyword entered through the virtual keyboard is "you" and the prediction model generates additional keywords such as "good", "mini" and "people", pre-combining the additional keywords with the initial keyword yields the selectable sequences "hello", "mini" and "your". The system ranks the sequences by the number of times the user has used each: if "hello" has been used 2 times, "mini" 1 time and "your" 0 times, the up, down, left and right hand swing directions address the best, second-best, third-best and fourth-best candidates respectively. Since only three selectable sequences currently exist, the user selects among them simply through different hand swing directions and presses the confirmation key on the virtual keyboard to complete the assisted interaction content.
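The hand-data synchronization this embodiment describes can be sketched as follows in Python. Acceleration samples received from the smart watch over the pre-established Bluetooth link are integrated into a hand position in scene space, from which the virtual drop point is derived. The dead-reckoning integration, the finger offset and all names are illustrative assumptions, not the patent's actual implementation.

```python
# Hedged sketch: synchronizing smart-watch sensor data into the virtual
# application scene. The double-integration scheme is an assumption.

from dataclasses import dataclass

@dataclass
class WatchSample:
    accel: tuple  # (ax, ay, az) acceleration in m/s^2, gravity removed
    dt: float     # seconds since the previous sample

class HandTracker:
    """Integrates watch acceleration samples into a hand position that
    replaces the virtual hand data generated by the VR device."""

    def __init__(self, origin=(0.0, 0.0, 0.0)):
        self.position = list(origin)
        self.velocity = [0.0, 0.0, 0.0]

    def push(self, sample: WatchSample):
        # Double integration: acceleration -> velocity -> position.
        for i in range(3):
            self.velocity[i] += sample.accel[i] * sample.dt
            self.position[i] += self.velocity[i] * sample.dt
        return tuple(self.position)  # synchronized hand position in the scene

def virtual_drop_point(hand_position, finger_offset=(0.0, 0.0, 0.08)):
    """Derive the virtual drop point from the synchronized hand position;
    the index-finger offset is an assumed constant."""
    return tuple(p + o for p, o in zip(hand_position, finger_offset))

tracker = HandTracker()
pos = tracker.push(WatchSample(accel=(0.0, 0.0, 0.5), dt=0.02))
print(virtual_drop_point(pos))  # drop point derived from the synced position
```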
It should be noted that the position of the virtual keyboard changes with the position of the virtual drop point, and the position of the virtual drop point follows the landing point of the user's index finger; the virtual drop point therefore tracks the user's hand position when generating at least one virtual keyboard, and because the hand position keeps changing over the course of the interaction, at least one virtual keyboard is generated.
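A short sketch of the anchoring rule in the note above: the keyboard is created at the index-finger drop point and keeps tracking it on every update. The field names are assumptions for illustration; threshold-based regeneration is shown separately further below.

```python
# Minimal sketch of anchoring the virtual keyboard to the index-finger drop
# point, per the note above. Field names are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class HandData:
    index_fingertip: tuple  # (x, y, z) of the index finger in scene space

@dataclass
class VirtualKeyboard:
    anchor: tuple  # the virtual drop point the keyboard is built around

def update_keyboard(hand: HandData,
                    keyboard: Optional[VirtualKeyboard]) -> VirtualKeyboard:
    """Create the keyboard at the index-finger drop point, or move an
    existing keyboard so it keeps tracking the finger's landing point."""
    drop_point = hand.index_fingertip
    if keyboard is None:
        return VirtualKeyboard(anchor=drop_point)
    keyboard.anchor = drop_point
    return keyboard

kb = update_keyboard(HandData(index_fingertip=(0.2, 1.1, 0.35)), None)
print(kb.anchor)  # keyboard generated at the current virtual drop point
```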
In this embodiment, the step S4 of inputting the initial interaction keyword into a prediction model and generating at least one additional keyword in the virtual application environment through the model's prediction comprises:
S41: recording the interactive content input when the user performs text interaction, uploading the interactive content to a preset database, identifying at least one text feature in the interactive content, and counting the number of times each text feature is recognized;
S42: prioritizing the pre-generated interactive text content according to the interactive text the user has used most often among the recognition counts, and generating at least one interactive copy commonly used by the user.
In this embodiment, the system records the interactive content the user enters on the virtual keyboard during text interaction and uploads it to the pre-provided database. Based on the database, it identifies at least one text feature in the interactive content and at the same time counts the number of times each text feature is recognized, thereby obtaining the text content the user has input most often in the virtual application environment. The system then ranks the pre-generated interactive text content by priority, based on the recognition count of each text feature and on the text content the user has input most often in the virtual application scene, and generates at least one interactive copy that matches the user's everyday usage. For example, suppose the system recognizes text features in the interactive contents "hello", "you" and "your", with recognition counts of 1, 3 and 2 respectively, and knows from its records that the interactive text used most often after the user types "you" is "hello". The system then ranks the interactive content the user has already used by recognition count and generates three interactive copies, ordered by priority, for the user's text-assisted selection in the virtual application environment.
It should be noted that, because the text feature is specifically the first word of the interactive content, when several pieces of interactive content share the same first word the system performs text feature recognition on the following word, so different recognition counts can be recorded for different interactive contents that begin identically. The interactive copy is specifically a text interaction assistance function: when the system detects that the user's interaction efficiency is low, it generates the interactive copies of this embodiment for the user's text-assisted selection, so as to improve the user's text interaction efficiency. The interactive copy is only an assistance function, not text interaction content the user is obliged to select, so the user may still ignore the interactive copies and continue text interaction normally.
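As an illustration of the recording and ranking steps, here is a hedged sketch in which the text feature is taken to be the first character of each piece of interactive content and interactive copies are ranked by how often matching content was used. The class and method names are assumptions, and the preset database is reduced to in-memory structures.

```python
# Hedged sketch of text-feature counting and interactive-copy ranking.
# Ranking by full-content use count is a simplification of the description.

from collections import Counter

class InteractionHistory:
    """In-memory stand-in for the preset database of interactive content."""

    def __init__(self):
        self.contents = []               # every interactive content recorded
        self.feature_counts = Counter()  # text feature -> recognition count

    def record(self, content: str):
        """Record one piece of interactive content and count its text
        feature, taken here as the first character, per the note above."""
        self.contents.append(content)
        if content:
            self.feature_counts[content[0]] += 1

    def interactive_copies(self, prefix: str, limit: int = 3):
        """Rank previously used contents that extend the typed prefix by use
        count and return the top ones as interactive copies."""
        usage = Counter(c for c in self.contents if c.startswith(prefix))
        return [text for text, _ in usage.most_common(limit)]

history = InteractionHistory()
for msg in ["hello", "hello", "your turn", "you there"]:
    history.record(msg)
print(history.interactive_copies("you"))  # -> ['your turn', 'you there']
```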
In this embodiment, before step S2 of generating at least one virtual keyboard in the virtual application scene according to the virtual drop point, the method includes:
S201: acquiring visual field information of the user in the virtual application scene;
S202: judging whether hand data of the user are detected in the visual field information;
S203: if so, generating a virtual keyboard in the virtual application scene based on the hand data and the virtual drop point.
In this embodiment, the system acquires the visual field information of the user in the virtual application scene, that is, the view visible to both of the user's eyes once the VR device is worn, and judges whether the user's hand data appear within that visual field, so as to execute the corresponding step. For example, when the system determines that no hand data of the user appear in the visual field information, it considers that the user has no immediate need for text interaction in the current virtual application environment; because there are no hand data within the user's visual field, the system cannot fix the position of the virtual drop point and therefore cannot generate a virtual keyboard. When the system determines that hand data of the user do appear in the visual field information, it considers that the user needs to perform text interaction in the current virtual application environment; the system obtains the user's hand data, generates a virtual drop point at the specific position of the index finger within the hand data, and at the same time generates a complete virtual keyboard from the virtual drop point, so that the user can carry out text interaction with the VR virtual object in the current virtual application environment.
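A simplified sketch of the visibility gate this embodiment describes. The view-frustum test is reduced to a horizontal field-of-view angle check; the half-angle value and all names are assumptions for illustration.

```python
# Sketch: generate the keyboard only when the synchronized hand data fall
# inside the user's visual field. Horizontal-angle test is a simplification.

import math

def hand_in_view(head_pos, head_yaw_deg, hand_pos, half_fov_deg=55.0):
    """Return True if hand_pos lies within the horizontal field of view of
    a headset at head_pos facing head_yaw_deg (scene yaw in degrees)."""
    dx, dz = hand_pos[0] - head_pos[0], hand_pos[2] - head_pos[2]
    angle_to_hand = math.degrees(math.atan2(dx, dz))
    delta = (angle_to_hand - head_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= half_fov_deg

def maybe_generate_keyboard(head_pos, head_yaw_deg, hand_pos):
    """Place the keyboard at the hand's drop point only when the hand is
    inside the visual field; otherwise no keyboard can be generated."""
    if hand_in_view(head_pos, head_yaw_deg, hand_pos):
        return {"anchor": hand_pos}   # stand-in for real keyboard creation
    return None

print(maybe_generate_keyboard((0.0, 1.6, 0.0), 0.0, (0.1, 1.2, 0.4)))
```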
In this embodiment, the step S3 of identifying the text interaction efficiency of the user and judging whether the text interaction efficiency reaches a preset interaction frequency further comprises:
S31: collecting the text data input by the user on the virtual keyboard, generating the text interaction information to be uploaded by the user based on the text data, and simultaneously recording the period of time consumed in generating the text interaction information to be uploaded;
S32: judging whether the consumed period of time exceeds a preset period;
S33: if so, providing an interaction assistance function for the user: displaying pre-combined text auxiliary data in the virtual application environment while the user inputs the text data, and generating interaction auxiliary text after the user selects among the text auxiliary data.
In this embodiment, the system collects the text data the user inputs on the virtual keyboard, generates from it the text interaction information the user intends to upload, records the period consumed while the upload has not yet been completed, and judges whether that period exceeds the preset period, so as to execute the corresponding step. For example, when the system detects that the period consumed entering the text data does not exceed the preset period, it considers that the user's text interaction efficiency does not require the text assistance function, and the user completes conventional text interaction with the VR virtual object at his or her own pace. When the system detects that the period consumed entering the text data exceeds the preset period, it considers the user's text interaction efficiency too low and enables the text assistance function: while the user inputs text data, the system displays the pre-combined text auxiliary data in the virtual application environment for the user to select, after which the interaction auxiliary text is generated. Once selected, the interaction auxiliary text is placed directly into the interaction content without the user typing it on the virtual keyboard, which effectively improves the efficiency of text interaction between the user and the VR virtual object.
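A minimal sketch of the timing check that triggers the assistance function: if the pending text interaction message has been under composition for longer than the preset period, assistance is offered. The threshold value and names are assumptions, since the description does not fix a concrete period.

```python
# Sketch of the consumed-period check behind the assistance trigger.

import time

PRESET_PERIOD_S = 15.0  # assumed threshold, not specified in the patent

class TextSession:
    """Tracks how long the not-yet-uploaded text interaction has taken."""

    def __init__(self):
        self.started_at = time.monotonic()

    def needs_assistance(self) -> bool:
        """True when composition has exceeded the preset period, signalling
        low text interaction efficiency."""
        return time.monotonic() - self.started_at > PRESET_PERIOD_S

session = TextSession()
# ... the user keeps typing on the virtual keyboard ...
if session.needs_assistance():
    print("display pre-combined text auxiliary data for selection")
```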
In this embodiment, before the step S3 of identifying the text interaction efficiency of the user, the method comprises:
S301: tracking the changing position of the hand data in real time, and judging whether the distance between the changed position and the virtual keyboard reaches a preset distance threshold;
S302: if so, adjusting the generation position of the virtual keyboard in the virtual application environment according to the changed position, wherein the generation position always tracks the virtual drop point.
In this embodiment, the system tracks in real time the changing position of the user's hand data in the virtual application environment and judges whether the new position stays within a preset distance threshold of the generated virtual keyboard, so as to execute the corresponding step. For example, if the system detects that the hand position has changed but remains within the threshold, it considers that the hand has only drifted slightly from the virtual keyboard, and there is no need to search for a new virtual drop point or regenerate the keyboard. If the system detects that the hand position has moved beyond the threshold, it searches the virtual application environment again for the position of the virtual drop point according to the new hand position and, once the drop point is regenerated, generates another virtual keyboard accordingly. This avoids the situation in which a user who must turn his or her body to interact with several VR virtual objects, thereby changing the visual field, is left with only a single virtual keyboard, which would hinder text interaction with those objects.
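The distance test above can be sketched as follows: small drifts keep the existing keyboard, while larger moves regenerate it at the new drop point. The 0.30 m threshold is an invented placeholder, as the description does not specify a value.

```python
# Sketch of threshold-based keyboard regeneration. Threshold is assumed.

import math

DISTANCE_THRESHOLD = 0.30  # metres, illustrative assumption

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def track_keyboard(keyboard_anchor, new_drop_point):
    """Keep the existing keyboard while the hand stays within the threshold;
    regenerate it at the new drop point once the hand strays farther."""
    if distance(keyboard_anchor, new_drop_point) <= DISTANCE_THRESHOLD:
        return keyboard_anchor        # minor drift: keyboard stays put
    return new_drop_point             # regenerate at the new virtual drop point

print(track_keyboard((0.0, 1.1, 0.3), (0.5, 1.1, 0.3)))  # -> new drop point
```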
In this embodiment, the step S1 of identifying the current virtual application scene of the user comprises:
S11: tracking the gaze point of the user in the virtual application scene, and judging whether the gaze point falls on an interactive object;
S12: if so, acquiring the interaction content output by the interactive object, generating corresponding interaction options based on the interaction content, and presenting to the user the interaction mode corresponding to whichever interaction option the user selects, wherein the interaction modes comprise text interaction, touch interaction and gesture interaction.
In this embodiment, the system tracks the gaze point of the user's eyes in the virtual application environment and judges whether the VR virtual object at the gaze point is an interactive object, so as to execute the corresponding step. For example, when the system detects that the VR virtual object at the user's gaze point is not an interactive object, it does not enable the corresponding interaction functions, including text interaction, touch interaction and gesture interaction, because that object cannot be interacted with. When the system detects that the VR virtual object at the user's gaze point is an interactive object, it acquires the interaction content output by that object, generates corresponding interaction options for the user to choose from, and presents the interaction mode matching whichever option the user selects, including text interaction, touch interaction and gesture interaction; these interaction functions capture the user's hand data through the sensing devices of the smart watch and synchronize them to the VR device to interact with the VR virtual object.
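A hedged sketch of the gaze-point dispatch: only interactive objects yield interaction options, and the selected option determines the interaction mode presented. The object fields and option strings are illustrative assumptions.

```python
# Sketch of the gaze-point check and interaction-mode dispatch above.

from dataclasses import dataclass, field

@dataclass
class VRObject:
    name: str
    interactive: bool
    interaction_content: list = field(default_factory=list)

MODES = {"text": "text interaction", "touch": "touch interaction",
         "gesture": "gesture interaction"}

def on_gaze(obj: VRObject, chosen_option: str):
    """Return the interaction mode to present, or None when the gazed-at
    object is not interactive or the option is unavailable."""
    if not obj.interactive:
        return None
    if chosen_option not in obj.interaction_content:
        return None
    return MODES.get(chosen_option)

npc = VRObject("shopkeeper", True, ["text", "gesture"])
print(on_gaze(npc, "text"))  # -> "text interaction"
```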
It should be noted that text interaction, touch interaction and gesture interaction are three different interaction modes, each with its own advantages and application scenarios, specifically:
Advantages of text interaction:
it can handle complex data input and processing, such as long text, numbers and codes;
it is easy to use, since the user can directly type the required content without converting it by other means;
information can be exchanged and transferred across different languages and different regions.
Advantages of touch interaction:
it can provide rich interactive feedback and visual effects, as the user interacts with the VR virtual interface through touch operations and obtains more intuitive feedback;
it is easy to operate, since the user can accomplish complex operations through simple gestures;
it can meet the requirements of various VR virtual scenarios and devices, such as mobile phones, tablet computers and game consoles.
Advantages of gesture interaction:
interaction through natural gesture language can improve the user's sense of immersion and operating efficiency in the VR virtual scene;
it can provide a more personalized and customized way of interacting.
In this embodiment, the step S2 of identifying the position information of the hand data in the virtual application scene and generating the corresponding virtual drop point in the virtual application scene according to the position information further comprises:
S21: acquiring the type of interactive content to be input in the text interaction step, and presenting a corresponding preset range for the virtual drop point based on the interactive content type, wherein the interactive content types comprise numbers, symbols, directions and letters;
S22: providing corresponding hand prompts for the user according to the preset range of the virtual drop point, so that the user selects through the hand prompts the interactive content corresponding to each hand, wherein one hand specifically corresponds to interactive content of the symbol and letter types, and the other to interactive content of the number and direction types.
In this embodiment, the system acquires the types of interactive content the user needs to input in the text interaction step with the VR virtual object, presents the corresponding preset range of the virtual drop point on the virtual keyboard according to those types, and provides matching hand prompts on the virtual keyboard. Different hand prompts correspond to different interactive content on the virtual keyboard: the left side of the keyboard belongs to the interaction range of symbols and letters, while the right side belongs to the interaction range of numbers and directions. The system generates the virtual drop point on the left or right side of the main interaction content according to the current interactive content type, and the user can also input symbols, letters, numbers and directions simultaneously with both hands, so as to combine different pieces of text interaction content.
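The left/right split described above can be sketched as a simple mapping from interactive content type to the side of the virtual keyboard, and hence the hand, to prompt; the function names are assumptions, while the mapping itself follows the description.

```python
# Sketch of mapping interactive content types to keyboard sides/hands.

CONTENT_ZONES = {
    "symbol": "left", "letter": "left",        # left side of the keyboard
    "number": "right", "direction": "right",   # right side of the keyboard
}

def hand_prompt(content_type: str) -> str:
    """Return which side of the virtual keyboard (and so which hand) should
    be prompted for the given interactive content type."""
    side = CONTENT_ZONES.get(content_type)
    if side is None:
        raise ValueError(f"unknown interactive content type: {content_type}")
    return f"use the {side}-hand range of the virtual keyboard"

print(hand_prompt("letter"))  # -> "use the left-hand range of the virtual keyboard"
```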
Referring to FIG. 2, a data processing system for inputting virtual text based on a smart watch according to an embodiment of the invention comprises:
a judging module 10, configured to identify the current virtual application scene of a user and judge whether the virtual application scene requires text interaction;
an execution module 20, configured to, if so, perform data synchronization between a VR device and a pre-connected smart watch, acquire hand data of the user through the sensing devices of the smart watch, synchronize the hand data into the virtual application scene, identify the position information of the hand data in the virtual application scene, generate a corresponding virtual drop point in the virtual application scene according to the position information, and generate at least one virtual keyboard in the virtual application scene according to the virtual drop point, wherein the sensing devices comprise an accelerometer and an acceleration sensor, and the virtual keyboard specifically takes the user's index finger as the fixed drop point of the virtual drop point;
a second judging module 30, configured to identify the text interaction efficiency of the user and judge whether the text interaction efficiency reaches a preset interaction frequency;
and a second execution module 40, configured to, if not, take the initial interaction keyword entered through the virtual keyboard when the user performs text interaction, input it into a prediction model, generate at least one additional keyword in the virtual application environment through the model's prediction, pre-combine the additional keywords with the initial interaction keyword, generate corresponding selectable sequences according to a preset priority ranking, select among the selectable sequences by recognizing the sliding swing direction of the user's hand, and confirm the selection through the virtual keyboard, wherein the priority ranking is specifically ordered by the number of times the user has used each interaction keyword, and the hand swing directions specifically comprise up, down, left and right.
In this embodiment, the system identifies the virtual application scene shown in the VR glasses worn by the user, and the judging module 10 judges whether the user needs text interaction in that scene, so as to execute the corresponding steps. For example, when the system determines that the user is not currently interacting with any object in the virtual application scene, it keeps observing the user in real time until the user needs to interact with an object in the scene. When the system determines that the user currently needs to perform text interaction with an object in the virtual application scene, the execution module 20 synchronizes data between the VR glasses and the smart watch worn by the user. Because the VR glasses are connected to the smart watch in advance via Bluetooth, the system can obtain the user's hand movement data through the sensing devices on the smart watch, convert them into the user's hand data within the virtual application scene, and use them in place of the virtual hand data generated by the VR glasses. By synchronizing the user's hand data into the virtual application scene in real time and identifying their position within the scene, the system generates a virtual drop point at that position and at least one virtual keyboard anchored to the drop point. The second judging module 30 identifies the user's text interaction efficiency and judges whether it reaches the preset interaction frequency. For example, when the system determines that the user's current text interaction efficiency reaches the preset interaction frequency, it does not apply the interaction assistance function to help the user interact with objects in the virtual application environment. When the system determines that the current efficiency does not reach the preset frequency, the second execution module 40 identifies the initial interaction keyword the user has entered on the virtual keyboard while interacting with an object, inputs the keyword into the prediction model, and generates at least one additional keyword in the virtual application environment. The additional keywords are pre-combined with the initial keyword into corresponding selectable sequences ordered by the preset priority ranking; the system selects among the sequences by recognizing the sliding swing direction of the user's hand, and the user confirms the choice on the virtual keyboard. For instance, if the initial interaction keyword entered through the virtual keyboard is "you" and the prediction model generates additional keywords such as "good", "mini" and "people", pre-combining the additional keywords with the initial keyword yields the selectable sequences "hello", "mini" and "your". The system ranks the sequences by the number of times the user has used each: if "hello" has been used 2 times, "mini" 1 time and "your" 0 times, the up, down, left and right hand swing directions address the best, second-best, third-best and fourth-best candidates respectively. Since only three selectable sequences currently exist, the user selects among them simply through different hand swing directions and presses the confirmation key on the virtual keyboard to complete the assisted interaction content.
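The candidate ranking and swing selection walked through above can be sketched as follows; the slot mapping (up addresses the best candidate, and so on) and all names are illustrative assumptions, not the patent's design.

```python
# Sketch: rank selectable sequences by historical use count and address
# them by hand swing direction, per the example above.

SWING_SLOTS = {"up": 0, "down": 1, "left": 2, "right": 3}

def rank_sequences(initial, additions, use_counts):
    """Pre-combine the initial interaction keyword with each additional
    keyword and sort the resulting sequences by use count, highest first."""
    sequences = [initial + extra for extra in additions]
    return sorted(sequences, key=lambda s: use_counts.get(s, 0), reverse=True)

def select_by_swing(ranked, direction):
    """Return the candidate addressed by the swing direction, or None when
    that slot holds no candidate (fewer sequences than directions)."""
    slot = SWING_SLOTS.get(direction)
    if slot is None or slot >= len(ranked):
        return None
    return ranked[slot]

counts = {"youA": 2, "youB": 1, "youC": 0}       # assumed prior use counts
ranked = rank_sequences("you", ["A", "B", "C"], counts)
print(select_by_swing(ranked, "up"))              # -> "youA", the most used
```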
It should be noted that the position of the virtual keyboard changes with the position of the virtual drop point, and the position of the virtual drop point follows the landing point of the user's index finger; the virtual drop point therefore tracks the user's hand position when generating at least one virtual keyboard, and because the hand position keeps changing over the course of the interaction, at least one virtual keyboard is generated.
In this embodiment, the second execution module further includes:
a recording unit, configured to record the interactive content input when the user performs text interaction, upload the interactive content to a preset database, identify at least one text feature in the interactive content, and count the number of times each text feature is recognized;
and a generating unit, configured to prioritize the interactive content corresponding to each text feature according to the interactive text the user has used most often among the recognition counts, and to generate at least one interactive copy commonly used by the user.
In this embodiment, the system records the interactive content the user enters on the virtual keyboard during text interaction and uploads it to the pre-provided database. Based on the database, it identifies at least one text feature in the interactive content and at the same time counts the number of times each text feature is recognized, thereby obtaining the text content the user has input most often in the virtual application environment. The system then ranks the pre-generated interactive text content by priority, based on the recognition count of each text feature and on the text content the user has input most often in the virtual application scene, and generates at least one interactive copy that matches the user's everyday usage. For example, suppose the system recognizes text features in the interactive contents "hello", "you" and "your", with recognition counts of 1, 3 and 2 respectively, and knows from its records that the interactive text used most often after the user types "you" is "hello". The system then ranks the interactive content the user has already used by recognition count and generates three interactive copies, ordered by priority, for the user's text-assisted selection in the virtual application environment.
It should be noted that, because the text feature is specifically the first word of the interactive content, when several pieces of interactive content share the same first word the system performs text feature recognition on the following word, so different recognition counts can be recorded for different interactive contents that begin identically. The interactive copy is specifically a text interaction assistance function: when the system detects that the user's interaction efficiency is low, it generates the interactive copies of this embodiment for the user's text-assisted selection, so as to improve the user's text interaction efficiency. The interactive copy is only an assistance function, not text interaction content the user is obliged to select, so the user may still ignore the interactive copies and continue text interaction normally.
In this embodiment, the system further comprises:
an acquisition module, configured to acquire visual field information of the user in the virtual application scene;
a third judging module, configured to judge whether hand data of the user are detected in the visual field information;
and a third execution module, configured to, if so, generate a virtual keyboard in the virtual application scene based on the hand data and the virtual drop point.
In this embodiment, the system acquires the visual field information of the user in the virtual application scene, that is, the view visible to both of the user's eyes once the VR device is worn, and judges whether the user's hand data appear within that visual field, so as to execute the corresponding step. For example, when the system determines that no hand data of the user appear in the visual field information, it considers that the user has no immediate need for text interaction in the current virtual application environment; because there are no hand data within the user's visual field, the system cannot fix the position of the virtual drop point and therefore cannot generate a virtual keyboard. When the system determines that hand data of the user do appear in the visual field information, it considers that the user needs to perform text interaction in the current virtual application environment; the system obtains the user's hand data, generates a virtual drop point at the specific position of the index finger within the hand data, and at the same time generates a complete virtual keyboard from the virtual drop point, so that the user can carry out text interaction with the VR virtual object in the current virtual application environment.
In this embodiment, the second judging module further comprises:
an acquisition unit, configured to collect the text data input by the user on the virtual keyboard, generate the text interaction information to be uploaded by the user based on the text data, and record the period of time consumed in generating the text interaction information to be uploaded;
a judging unit, configured to judge whether the consumed period of time exceeds a preset period;
and an execution unit, configured to, if so, provide an interaction assistance function for the user, display pre-combined text auxiliary data in the virtual application environment while the user inputs the text data, and generate interaction auxiliary text after the user selects among the text auxiliary data.
In this embodiment, the system collects the text data the user inputs on the virtual keyboard, generates from it the text interaction information the user intends to upload, records the period consumed while the upload has not yet been completed, and judges whether that period exceeds the preset period, so as to execute the corresponding step. For example, when the system detects that the period consumed entering the text data does not exceed the preset period, it considers that the user's text interaction efficiency does not require the text assistance function, and the user completes conventional text interaction with the VR virtual object at his or her own pace. When the system detects that the period consumed entering the text data exceeds the preset period, it considers the user's text interaction efficiency too low and enables the text assistance function: while the user inputs text data, the system displays the pre-combined text auxiliary data in the virtual application environment for the user to select, after which the interaction auxiliary text is generated. Once selected, the interaction auxiliary text is placed directly into the interaction content without the user typing it on the virtual keyboard, which effectively improves the efficiency of text interaction between the user and the VR virtual object.
In this embodiment, the system further comprises:
a fourth judging module, configured to track the transformed azimuth of the hand data in real time and judge whether the transformed azimuth remains within a preset distance threshold of the virtual keyboard;
and a fourth execution module, configured to adjust, if the transformed azimuth falls outside the preset distance threshold, the generation azimuth of the virtual keyboard in the virtual application environment according to the transformed azimuth, wherein the generation azimuth always tracks the virtual drop point.
In this embodiment, the system tracks in real time the azimuth to which the user's hand data moves in the virtual application environment and determines whether that transformed azimuth remains within a preset distance threshold of the generated virtual keyboard, so as to execute the corresponding step. If the hand data has moved but is still within the preset distance threshold, the system concludes that the hand has only drifted slightly from the virtual keyboard, and there is no need to search for a new virtual drop point or generate another keyboard from the hand data. If the hand data has moved beyond the preset distance threshold, the system searches the virtual application environment again for the azimuth of the virtual drop point according to the transformed azimuth and, once the drop point is regenerated, generates another virtual keyboard accordingly. This avoids the situation in which a user who turns his or her body to interact with several VR virtual objects, thereby changing the field of view, is left with only one virtual keyboard, which would impair text interaction with those objects.
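The distance test described here reduces to a simple comparison; the sketch below assumes a 0.5-unit threshold and three-dimensional position tuples, neither of which is specified in the patent.

```python
import math

# Assumed value: the patent names a preset distance threshold but no number.
DISTANCE_THRESHOLD = 0.5

def update_keyboard_anchor(keyboard_pos: tuple, hand_pos: tuple) -> tuple:
    """Keep the keyboard in place for small drift; re-anchor it at the
    hand's new azimuth once the preset distance threshold is exceeded."""
    if math.dist(keyboard_pos, hand_pos) <= DISTANCE_THRESHOLD:
        return keyboard_pos  # slight deviation: reuse the existing keyboard
    return hand_pos          # large move: regenerate at the new drop point

# Example: a 0.1-unit drift keeps the keyboard; a 2-unit turn re-anchors it.
print(update_keyboard_anchor((0.0, 0.0, 0.0), (0.1, 0.0, 0.0)))  # unchanged
print(update_keyboard_anchor((0.0, 0.0, 0.0), (2.0, 0.0, 0.0)))  # re-anchored
```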
In this embodiment, the judging module further includes:
the second judging unit is used for tracking the gaze point of the user in the virtual application scene and judging whether the object at the gaze point is an interactive object;
and the second execution unit is used for acquiring, if so, the interaction content output by the interactive object, generating corresponding interaction options based on the interaction content, and presenting, according to the interaction option selected by the user, the corresponding interaction mode, wherein the interaction modes comprise text interaction, touch interaction, and gesture interaction.
In this embodiment, the system tracks the gaze point of the user's eyes in the virtual application environment and determines whether the VR virtual object at the gaze point is an interactive object, so as to execute the corresponding step. When the object at the gaze point is not interactive, the system enables no interaction function, whether text interaction, touch interaction, or gesture interaction, because the object cannot be interacted with. When the object at the gaze point is interactive, the system acquires the interaction content output by that object, generates corresponding interaction options for the user to select from that content, and presents the interaction mode matching whichever option the user selects, again covering text, touch, and gesture interaction. These interaction functions capture the user's hand data through the sensing equipment of the smart watch and synchronize it to the VR device for interaction with the VR virtual object.
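A gaze-dispatch step of this kind might look like the following sketch, where VirtualObject, is_interactive, and the mode strings are all invented stand-ins for whatever the VR runtime actually provides.

```python
from dataclasses import dataclass

# The three interaction modes named in the description.
INTERACTION_MODES = ("text", "touch", "gesture")

@dataclass
class VirtualObject:
    """Hypothetical stand-in for the VR scene object at the gaze point."""
    is_interactive: bool
    interaction_content: str = ""

def options_for_gaze(obj: VirtualObject) -> list:
    """Generate interaction options only when the gazed object is interactive."""
    if not obj.is_interactive:
        return []  # non-interactive object: no interaction function is enabled
    return [{"content": obj.interaction_content, "mode": mode}
            for mode in INTERACTION_MODES]

# Example: an interactive object yields one option per interaction mode;
# a non-interactive one yields none.
print(options_for_gaze(VirtualObject(True, "greeting")))
print(options_for_gaze(VirtualObject(False)))  # -> []
```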
In this embodiment, the execution module further includes:
the acquisition unit is used for acquiring the type of interactive content to be input in the text interaction link and presenting a corresponding virtual drop point preset range based on that type, wherein the interactive content types comprise numbers, symbols, directions, and letters;
and the selection unit is used for providing corresponding hand prompts for the user according to the virtual drop point preset range, through which the user selects the interactive content assigned to each hand; specifically, the left hand handles the symbol and letter content types, and the right hand handles the number and direction content types.
In this embodiment, the system acquires the types of interactive content the user needs to input in the text interaction link with the VR virtual object and presents the corresponding virtual drop point preset range in the virtual keyboard for each type. According to that preset range, the system provides matching hand prompts in the virtual keyboard, with different prompts corresponding to different interactive content: the left side of the virtual keyboard belongs to the interaction range of symbols and letters, and the right side belongs to the interaction range of numbers and directions. The system generates the virtual drop point on the left or right side of the main interactive content in the virtual keyboard according to the current content type, and because both hands can operate simultaneously, the user can input symbols, letters, numbers, and directions at the same time to combine different text interaction content.
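The left/right assignment of content types is fixed by the description above; the sketch below simply encodes that mapping in a lookup table (the HAND_CONTENT name and the string labels are illustrative only).

```python
# Left/right assignment of interactive content types, as stated above.
HAND_CONTENT = {
    "left":  {"symbol", "letter"},     # left side of the virtual keyboard
    "right": {"number", "direction"},  # right side of the virtual keyboard
}

def hand_for(content_type: str) -> str:
    """Return the hand (and so the keyboard side) that handles a content type."""
    for hand, types in HAND_CONTENT.items():
        if content_type in types:
            return hand
    raise ValueError(f"unknown interactive content type: {content_type}")

# Both hands can operate at once, so mixed input such as a letter plus a
# number is composed from the left and right ranges simultaneously.
assert hand_for("letter") == "left"
assert hand_for("number") == "right"
```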
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A data processing method for inputting virtual text based on a smart watch, characterized by comprising the following steps:
identifying a current virtual application scene of a user, and judging whether the virtual application scene needs text interaction or not;
if necessary, carrying out data synchronization between the VR device and a pre-connected smart watch, acquiring hand data of the user through sensing equipment of the smart watch, synchronizing the hand data into the virtual application scene, identifying azimuth information of the hand data in the virtual application scene, generating a corresponding virtual drop point in the virtual application scene according to the azimuth information, and generating at least one virtual keyboard in the virtual application scene according to the virtual drop point, wherein the sensing equipment comprises an accelerometer and an acceleration sensor, and the virtual keyboard specifically takes the index finger of the user as the fixed reference for the virtual drop point;
identifying the text interaction efficiency of the user, and judging whether the text interaction efficiency reaches a preset interaction frequency;
and if not, inputting into a prediction model the initial interaction keyword the user enters through the virtual keyboard during text interaction, predicting at least one additional keyword through the prediction model, pre-combining the additional keyword with the initial interaction keyword in the virtual application environment, generating a corresponding selectable sequence according to a preset priority ranking, selecting from the selectable sequence by recognizing the sliding swing direction of the user's hand, and confirming the selection through the virtual keyboard, wherein the priority ranking is specifically ordered by the number of times the user has used each interaction keyword, and the hand swing directions specifically comprise up, down, left, and right.
2. The data processing method for inputting virtual text based on a smart watch according to claim 1, wherein the step of inputting the initial interaction keyword into a prediction model and predicting through the prediction model to generate at least one additional keyword in the virtual application environment comprises:
recording the interactive content input when the user performs text interaction, uploading the interactive content to a preset database, identifying at least one text feature in the interactive content, and counting the recognition times of each text feature;
and prioritizing the pre-generated interactive text content based on the interactive text the user has used most often according to the recognition times, so as to generate at least one interactive copy commonly used by the user.
3. The data processing method for inputting virtual text based on a smart watch according to claim 1, wherein before the step of generating at least one virtual keyboard in the virtual application scene according to the virtual drop point, the method comprises:
acquiring visual field information of the user in the virtual application scene;
judging whether hand data of the user is detected in the visual field information;
and if so, generating a virtual keyboard in the virtual application scene based on the hand data and the virtual drop point.
4. The data processing method for inputting virtual text based on a smart watch according to claim 1, wherein the step of identifying the text interaction efficiency of the user and judging whether the text interaction efficiency reaches the preset interaction frequency further comprises:
collecting text data input by the user on the virtual keyboard, generating text interaction information to be uploaded by the user based on the text data, and simultaneously recording the time period consumed in generating the text interaction information to be uploaded;
judging whether the consumed time period is greater than a preset time period;
if yes, providing an interaction auxiliary function for the user, displaying pre-combined text auxiliary data in the virtual application environment when the user inputs the text data, and generating an interaction auxiliary text after the user selects the text auxiliary data.
5. The data processing method for inputting virtual text based on a smart watch according to claim 1, wherein before the step of identifying the text interaction efficiency of the user, the method comprises:
tracking the transformed azimuth of the hand data in real time, and judging whether the distance between the transformed azimuth and the virtual keyboard exceeds a preset distance threshold;
and if so, adjusting the generation azimuth of the virtual keyboard in the virtual application environment according to the transformed azimuth, wherein the generation azimuth always tracks the virtual drop point.
6. The data processing method for inputting virtual text based on a smart watch according to claim 1, wherein the step of identifying the current virtual application scene of the user comprises:
tracking the gaze point of the user in the virtual application scene, and judging whether the object at the gaze point is an interactive object;
and if so, acquiring the interaction content output by the interactive object, generating corresponding interaction options based on the interaction content, and presenting, according to the interaction option selected by the user, the corresponding interaction mode, wherein the interaction modes comprise text interaction, touch interaction, and gesture interaction.
7. The data processing method for inputting virtual text based on a smart watch according to claim 1, wherein the step of identifying the azimuth information of the hand data in the virtual application scene and generating the corresponding virtual drop point in the virtual application scene according to the azimuth information further comprises:
acquiring the type of interactive content to be input in the text interaction link, and presenting a corresponding virtual drop point preset range based on the interactive content type, wherein the interactive content types comprise numbers, symbols, directions, and letters;
and providing corresponding hand prompts for the user according to the virtual drop point preset range, through which the user selects the interactive content assigned to each hand, wherein specifically the left hand handles the symbol and letter content types and the right hand handles the number and direction content types.
8. A data processing system for inputting virtual text based on a smart watch, characterized by comprising:
the judging module is used for identifying the current virtual application scene of the user and judging whether the virtual application scene needs text interaction or not;
the execution module is used for carrying out, if needed, data synchronization between the VR device and a pre-connected smart watch, acquiring hand data of the user through sensing equipment of the smart watch, synchronizing the hand data into the virtual application scene, identifying azimuth information of the hand data in the virtual application scene, generating a corresponding virtual drop point in the virtual application scene according to the azimuth information, and generating at least one virtual keyboard in the virtual application scene according to the virtual drop point, wherein the sensing equipment comprises an accelerometer and an acceleration sensor, and the virtual keyboard specifically takes the index finger of the user as the fixed reference for the virtual drop point;
the second judging module is used for identifying the text interaction efficiency of the user and judging whether the text interaction efficiency reaches a preset interaction frequency or not;
and the second execution module is used for inputting into a prediction model, if the preset interaction frequency is not reached, the initial interaction keyword the user enters through the virtual keyboard during text interaction, predicting at least one additional keyword through the prediction model, pre-combining the additional keyword with the initial interaction keyword in the virtual application environment, generating a corresponding selectable sequence according to a preset priority ranking, selecting from the selectable sequence by recognizing the sliding swing direction of the user's hand, and confirming the selection through the virtual keyboard, wherein the priority ranking is specifically ordered by the number of times the user has used each interaction keyword, and the hand swing directions specifically comprise up, down, left, and right.
9. The data processing system for inputting virtual text based on a smart watch according to claim 8, wherein the second execution module further comprises:
the recording unit is used for recording the interactive content input when the user performs text interaction, uploading the interactive content to a preset database, identifying at least one text feature in the interactive content, and counting the recognition times of each text feature;
and the generating unit is used for prioritizing, based on the recognition times, the interactive content corresponding to the text features the user has used most often, and generating at least one interactive copy commonly used by the user.
10. The data processing system for inputting virtual text based on a smart watch according to claim 8, further comprising:
the acquisition module is used for acquiring the visual field information of the user in the virtual application scene;
the third judging module is used for judging whether hand data of the user is detected in the visual field information;
and the third execution module is used for generating, if so, a virtual keyboard in the virtual application scene based on the hand data and the virtual drop point.
CN202310319289.2A 2023-03-29 2023-03-29 Data processing method and system for inputting virtual text based on intelligent watch Pending CN116069169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310319289.2A CN116069169A (en) 2023-03-29 2023-03-29 Data processing method and system for inputting virtual text based on intelligent watch

Publications (1)

Publication Number Publication Date
CN116069169A true CN116069169A (en) 2023-05-05

Family

ID=86171743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310319289.2A Pending CN116069169A (en) 2023-03-29 2023-03-29 Data processing method and system for inputting virtual text based on intelligent watch

Country Status (1)

Country Link
CN (1) CN116069169A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224069A (en) * 2014-07-03 2016-01-06 王登高 The device of a kind of augmented reality dummy keyboard input method and use the method
CN106980362A (en) * 2016-10-09 2017-07-25 阿里巴巴集团控股有限公司 Input method and device based on virtual reality scenario
CN107357434A (en) * 2017-07-19 2017-11-17 广州大西洲科技有限公司 Information input equipment, system and method under a kind of reality environment
CN109828672A (en) * 2019-02-14 2019-05-31 亮风台(上海)信息科技有限公司 It is a kind of for determining the method and apparatus of the human-machine interactive information of smart machine
CN114047872A (en) * 2021-10-11 2022-02-15 北京理工大学 Text input method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117350696A (en) * 2023-12-05 2024-01-05 深圳市光速时代科技有限公司 Method and system for eliminating overdue task data by smart watch
CN117350696B (en) * 2023-12-05 2024-05-31 深圳市光速时代科技有限公司 Method and system for eliminating overdue task data by smart watch

Similar Documents

Publication Publication Date Title
CN102789313B (en) User interaction system and method
CN107533360B (en) Display and processing method and related device
CN104428732A (en) Multimodal interaction with near-to-eye display
CN106569613A (en) Multi-modal man-machine interaction system and control method thereof
CN104571823B (en) A kind of contactless visual human's machine interaction method based on intelligent television
KR20170014353A (en) Apparatus and method for screen navigation based on voice
CN102789312B (en) A kind of user interactive system and method
CN105824409A (en) Interactive control method and device for virtual reality
WO2014025711A1 (en) Search user interface using outward physical expressions
CN110517685A (en) Audio recognition method, device, electronic equipment and storage medium
CN116069169A (en) Data processing method and system for inputting virtual text based on intelligent watch
CN110442233A (en) A kind of augmented reality key mouse system based on gesture interaction
CN111695408A (en) Intelligent gesture information recognition system and method and information data processing terminal
CN109426342B (en) Document reading method and device based on augmented reality
US20230400918A1 (en) Systems and Methods for Hands-Free Scrolling Based on a Detected User Reading Activity
WO2024050260A1 (en) One-handed zoom operation for ar/vr devices
CN108829329B (en) Operation object display method and device and readable medium
Jiang et al. A brief analysis of gesture recognition in VR
Wang et al. Multi-channel augmented reality interactive framework design for ship outfitting guidance
CN116243785A (en) Multi-person collaborative sand table command system based on mixed reality
Babu et al. Controlling Computer Features Through Hand Gesture
Carrino et al. Gesture-based hybrid approach for HCI in ambient intelligent environmments
CN115951787B (en) Interaction method of near-eye display device, storage medium and near-eye display device
CN111459288B (en) Method and device for realizing voice input by using head control
JP2019144822A (en) Explicit knowledge formalization system and method of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230505