CN115079882B - Human-computer interaction processing method and system based on virtual reality - Google Patents

Human-computer interaction processing method and system based on virtual reality

Info

Publication number
CN115079882B
CN115079882B (grant publication of application CN202210679021.5A; earlier publication CN115079882A)
Authority
CN
China
Prior art keywords
recognition
data
identification
evaluation result
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210679021.5A
Other languages
Chinese (zh)
Other versions
CN115079882A (en)
Inventor
邓湛波
秦华军
宛汝国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Guowei Culture Technology Co ltd
Original Assignee
Guangzhou Guowei Culture Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Guowei Culture Technology Co ltd filed Critical Guangzhou Guowei Culture Technology Co ltd
Priority to CN202210679021.5A
Publication of CN115079882A
Application granted
Publication of CN115079882B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

According to the virtual-reality-based human-computer interaction processing method and system, an initial recognition result of the human-computer interaction data to be recognized is obtained through a designated reference data recognition thread. The initial recognition result is parsed and the similarity of every two recognition ranges is determined, which effectively reduces data errors and allows the category matching degree to be determined accurately. For every group of i recognition ranges whose category matching degree meets the specified matching condition, the corresponding reference recognition ranges are determined one by one, and the corresponding reference recognition results are determined based on the recognition constraint conditions covered by each reference recognition range. In this way, data recognition errors caused by excessively similar data can be effectively reduced, improving the accuracy of human-computer interaction.

Description

Human-computer interaction processing method and system based on virtual reality
Technical Field
The application relates to the technical field of data processing, in particular to a man-machine interaction processing method and system based on virtual reality.
Background
Human-computer interaction technology refers to technology that enables efficient dialogue between people and computers through input and output devices. It covers both directions of the exchange: the machine provides information, prompts, and requests to the user through an output or display device, and the user supplies information, answers questions, and responds to prompts through an input device. Human-computer interaction technology is one of the important elements of computer user interface design.
Ongoing advances in science and technology allow human-computer interaction to be completed more quickly, improving working efficiency. In practice, however, a large amount of interference information may be present during the interaction, which lowers its accuracy. A technical solution is therefore needed to address this problem.
Disclosure of Invention
To address the above problems in the related art, the application provides a virtual-reality-based human-computer interaction processing method and system.
In a first aspect, a virtual-reality-based human-computer interaction processing method is provided. The method is applied to a human-computer interaction processing system and at least comprises the following steps: acquiring an initial recognition result of the human-computer interaction data to be recognized through a designated reference data recognition thread, wherein the initial recognition result at least covers the recognition ranges covered by the human-computer interaction data to be recognized, and each recognition range at least covers a recognition constraint condition generated according to the set interaction content; for every group of i recognition ranges among those recognition ranges, executing the following steps one by one: determining the category basis of each of the corresponding i recognition constraint conditions based on the human-computer interaction data expressions of the recognition constraint conditions respectively covered by the i recognition ranges, and generating, in combination with a designated human-computer interaction data set, the category matching degree corresponding to the category basis set of the i recognition constraint conditions, wherein i=2; and determining, one by one, each group of i recognition ranges whose category matching degree meets the specified matching condition as corresponding reference recognition ranges, and determining corresponding reference recognition results based on the recognition constraint conditions covered by each reference recognition range.
In an independent embodiment, before acquiring the initial recognition result of the human-computer interaction data to be recognized through the designated reference data recognition thread, the method further includes: obtaining an interaction data debugging set, wherein each interaction data debugging item comprises corresponding key features and a local interaction basis determined according to not less than i category bases, and the key features at least cover a template human-computer interaction data cluster in which each piece of template human-computer interaction data covers the recognition constraint conditions corresponding to not less than i templates determined according to the set template interaction content; performing repeated optimization on a designated data recognition thread in combination with the interaction data debugging items in the interaction data debugging set, and outputting a data optimization thread when the specified requirement is met; wherein, in each optimization pass, the following steps are executed: acquiring, in combination with the data recognition thread, a first evaluation result based on the key features in the interaction data debugging, and debugging the vector of the data recognition thread through the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis.
In an independently implemented embodiment, the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis is determined in the following manner: generating a first quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through a specified risk evaluation thread; generating a second quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through a specified difference compensation thread; generating a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through a specified classification prediction thread, wherein the classification prediction thread is determined through the category matching degree of the category basis set of the recognition constraint conditions corresponding to each group of i templates in the one interaction data debugging item; and processing the first, second and third quantitative evaluation results through their respective first credibilities to obtain the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis.
In an independently implemented embodiment, the first evaluation result at least covers each interaction data recognition result determined by the data recognition thread from the interaction data debugging; and generating the third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through the specified classification prediction thread comprises: for each piece of template human-computer interaction data in the interaction data debugging, executing the following steps one by one: generating, one by one, the target values between the recognition constraint conditions corresponding to each group of i templates based on the constraint ranges of the recognition constraint conditions corresponding to not less than i templates covered by the template human-computer interaction data; determining a reference template range from the template human-computer interaction data through the acquired target values, and determining a reference judgment range corresponding to the reference template range; and generating the third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis by combining the acquired target values and the reference judgment range with the specified classification prediction thread.
In an independent embodiment, generating, one by one, the target values between the recognition constraint conditions corresponding to each group of i templates based on the constraint ranges of the recognition constraint conditions corresponding to not less than i templates covered by the template human-computer interaction data includes: generating, one by one, the corresponding local differences between the recognition constraint conditions corresponding to each group of i templates based on the constraint ranges of the recognition constraint conditions corresponding to not less than i templates covered by one piece of template human-computer interaction data, and determining the local differences as the target values corresponding to the recognition constraint conditions of the i templates; or, generating, one by one, the corresponding local comparison vectors between the recognition constraint conditions corresponding to each group of i templates based on those constraint ranges, and determining the local comparison vectors as the target values corresponding to the recognition constraint conditions of the i templates.
In an independent embodiment, generating the third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis by combining the acquired target values and the reference judgment range with the specified classification prediction thread includes: determining, one by one, the target values that are covered by the reference judgment range and matched with the recognition constraint conditions corresponding to the not less than i specified templates as the reference matching values corresponding to the reference judgment range; performing corresponding depolarization processing on the acquired reference matching values and on the target values, respectively, to obtain a corresponding first matching average value and a corresponding second matching average value; and generating the third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis by combining the first matching average value and the second matching average value with the specified classification prediction thread.
In an independent embodiment, after outputting the data optimization thread, the method further comprises: continuing to perform repeated optimization on the data optimization thread through the interaction data debugging items in the interaction data debugging set, and outputting the reference data recognition thread when the specified matching requirement is met; wherein, in each optimization pass, the following steps are executed: acquiring, in combination with the data optimization thread, a second evaluation result through the key features in the interaction data debugging, and debugging the vector of the reference data recognition thread through the quantitative evaluation result between the second evaluation result and the corresponding local interaction basis; the quantitative evaluation result between the second evaluation result and the corresponding local interaction basis is obtained by allocating, through a designated second updating thread, corresponding second credibilities to the first, second and third quantitative evaluation results respectively, and then processing the allocated second credibilities.
In an independently implemented embodiment, the initial recognition result further comprises the credibility of each recognition range, and determining the corresponding reference recognition result based on the recognition constraint condition covered by each reference recognition range includes: for each reference recognition range, executing the following steps one by one: determining the credibility of the reference recognition range; if the credibility of the reference recognition range is smaller than the designated judgment value, determining the recognition constraint condition covered by the reference recognition range as an abnormal recognition constraint condition, and cleaning the abnormal recognition constraint condition out of the initial recognition result; and if the credibility of the reference recognition range is not less than the designated judgment value, determining the recognition constraint condition covered by the reference recognition range as a corresponding reference recognition constraint condition, and determining the category basis corresponding to that reference recognition constraint condition as the corresponding reference recognition result.
In a second aspect, a man-machine interaction processing system based on virtual reality is provided, including a processor and a memory in communication with each other, where the processor is configured to read a computer program from the memory and execute the computer program to implement the method described above.
According to the virtual-reality-based human-computer interaction processing method and system, an initial recognition result of the human-computer interaction data to be recognized is obtained through a designated reference data recognition thread, the initial recognition result is parsed, and the similarity of every two recognition ranges is determined, which effectively reduces data errors and allows the category matching degree to be determined accurately. For every group of i recognition ranges whose category matching degree meets the specified matching condition, the corresponding reference recognition ranges are determined one by one, and the corresponding reference recognition results are determined based on the recognition constraint conditions covered by each reference recognition range. In this way, data recognition errors caused by excessively similar data can be effectively reduced, thereby improving the accuracy of human-computer interaction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope, and that other related drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
Fig. 1 is a flowchart of a man-machine interaction processing method based on virtual reality according to an embodiment of the present application.
Fig. 2 is a block diagram of a man-machine interaction processing device based on virtual reality according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a man-machine interaction processing system based on virtual reality according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions described above, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features of the embodiments of the present application are detailed descriptions of the technical solutions of the present application and do not limit them, and that the technical features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, a human-computer interaction processing method based on virtual reality is shown, and the method may include the following technical solutions described in steps S101 and S102.
S101: acquiring an initial recognition result of man-machine interaction data to be recognized through a designated reference data recognition thread, wherein the initial recognition result at least covers: each recognition range covered by the man-machine interaction data to be recognized at least covers: generating an identification constraint condition according to the set interactive content; and executing the following steps one by one according to the similarity of each i identification ranges in the identification ranges: based on the human-computer interaction data expression of the i recognition ranges, which respectively cover the recognition constraint conditions, determining the respective category basis of the corresponding i recognition constraint conditions, and generating the category basis set of the i recognition constraint conditions by combining the designated human-computer interaction data set, and the corresponding category matching degree; where i=2.
S102: and determining the similarity of each identification range of which the determined category matching degree meets the specified matching condition one by one as a corresponding reference identification range, and determining a corresponding reference identification result based on the identification constraint condition covered by each reference identification range.
It can be understood that, when executing the contents described in S101 and S102, the initial recognition result of the human-computer interaction data to be recognized is obtained through the designated reference data recognition thread, the initial recognition result is parsed, and the similarity of every two recognition ranges is determined, which effectively reduces data errors and allows the category matching degree to be determined accurately. For every group of i recognition ranges whose category matching degree meets the specified matching condition, the corresponding reference recognition ranges are determined one by one, and the corresponding reference recognition results are determined based on the recognition constraint conditions covered by each reference recognition range. In this way, data recognition errors caused by excessively similar data can be effectively reduced, thereby improving the accuracy of human-computer interaction.
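To make the above flow easier to picture, the following is a minimal Python sketch of the pairwise screening of recognition ranges by category matching degree. The patent does not define concrete data structures, so the RecognitionRange class, the projection used to form a category basis, the cosine-style matching degree, and the 0.8 threshold are all illustrative assumptions rather than the claimed implementation.

```python
# Illustrative sketch only: the data types, the projection used for the category basis,
# the cosine-style matching degree, and the threshold are hypothetical stand-ins.
from dataclasses import dataclass
from itertools import combinations
import numpy as np

@dataclass
class RecognitionRange:
    constraint: str          # recognition constraint generated from the set interaction content
    features: np.ndarray     # human-computer interaction data expression of this range
    credibility: float       # confidence reported by the reference data recognition thread

def category_basis(rng: RecognitionRange, reference_set: np.ndarray) -> np.ndarray:
    # Hypothetical: project the range's expression onto a designated human-computer
    # interaction data set to obtain its category basis.
    return reference_set @ rng.features

def category_matching_degree(basis_a: np.ndarray, basis_b: np.ndarray) -> float:
    # Hypothetical: cosine similarity between the two category bases of a pair (i = 2).
    denom = np.linalg.norm(basis_a) * np.linalg.norm(basis_b) + 1e-9
    return float(basis_a @ basis_b / denom)

def select_reference_ranges(ranges, reference_set, match_threshold=0.8):
    """Return every pair of recognition ranges whose category matching degree
    satisfies the (assumed) specified matching condition, i.e. exceeds a threshold."""
    reference_pairs = []
    for a, b in combinations(ranges, 2):   # similarity of every i = 2 recognition ranges
        degree = category_matching_degree(
            category_basis(a, reference_set), category_basis(b, reference_set))
        if degree >= match_threshold:
            reference_pairs.append((a, b, degree))
    return reference_pairs
```

In this reading, the pairs returned by select_reference_ranges play the role of the reference recognition ranges that are subsequently resolved into reference recognition results.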
S201: obtaining an interactive data debugging set, wherein one interactive data debugging set comprises the following steps: and the key features and the local interaction bases which are determined according to at least i category bases.
For example, in order to obtain, through configuration, a reference data recognition thread for key content recognition, a designated data recognition thread is configured in combination with the interaction data debugging items that each cover the important content corresponding to the key content. Further, within an interaction data debugging item, each template variable covering important content corresponding to the key content is annotated with corresponding template human-computer interaction data according to the relative spatial relationships in that debugging item, wherein each piece of template human-computer interaction data covers the recognition constraint conditions corresponding to not less than i templates determined by the template variable.
S202: and combining the interactive data debugging in the interactive data debugging set to repeatedly optimize the appointed data identification thread, and outputting the data optimization thread when the appointed requirement is met.
Wherein, in each optimization pass, the following steps are executed: in combination with the data recognition thread, a first evaluation result is acquired based on the key features in the interaction data debugging, and the vector of the data recognition thread is debugged based on the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis.
Further, the interaction data debugging set is taken as the calculation parameter of the artificial-intelligence thread, so that repeated optimization is performed on the designated data recognition thread until thread processing is completed. In each configuration pass, the specified quantitative evaluation is used to determine the quantitative evaluation result between the first evaluation result output by the data recognition thread and the corresponding interaction data debugging item. The corresponding local interaction basis represents the real-time category basis determined according to the recognition constraint conditions of the corresponding templates, so the quality of the evaluation result output by the data recognition thread can be assessed by comparing the recognized category basis with that real-time category basis, and the relevant vector of the data recognition thread is debugged accordingly.
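The repeated optimization pass described above can be pictured with the loop sketched below. It is only an assumption-laden illustration: the data recognition thread is reduced to a parameter vector, and the evaluate and quantify callables, the learning rate, and the stopping criterion stand in for the unspecified thread, quantitative evaluation, and "specified requirement".

```python
# Minimal sketch of one possible reading of the repeated optimization process; the
# gradient-style update and the stopping rule are assumptions, not the claimed method.
import numpy as np

def optimize_recognition_thread(thread_vector, debug_set, evaluate, quantify,
                                learning_rate=0.01, max_rounds=100, target=1e-3):
    """debug_set: list of (key_features, local_interaction_basis) pairs.
    evaluate:  callable(thread_vector, key_features) -> first evaluation result.
    quantify:  callable(first_eval, local_basis) -> (score, gradient)."""
    thread_vector = np.asarray(thread_vector, dtype=float)
    for _ in range(max_rounds):
        total = 0.0
        for key_features, local_basis in debug_set:
            first_eval = evaluate(thread_vector, key_features)         # first evaluation result
            score, gradient = quantify(first_eval, local_basis)        # quantitative evaluation
            thread_vector = thread_vector - learning_rate * gradient   # debug the thread's vector
            total += score
        if total / max(len(debug_set), 1) <= target:                   # specified requirement met
            break
    return thread_vector                                               # the data optimization thread
```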
Further, to improve the accuracy of the evaluation result output by the thread, the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis is determined in the following manner, which includes the steps below.
S301: and determining a first quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through the designated risk evaluation thread.
S302: and determining a second quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through the specified difference compensation thread.
S303: and determining a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through the specified classification prediction thread.
In the embodiment of the invention, to improve the accuracy of the recognition result, a corresponding classification prediction thread is built according to the category matching degree of the category basis set of the recognition constraint conditions corresponding to each group of i templates in each interaction data debugging item, and the data recognition thread is then debugged with it.
If the number of categories of the recognition constraint conditions corresponding to the templates is 2, the corresponding classification prediction thread is built based on the category matching degree of the category basis sets of those two categories.
For example, assuming that the interaction data debugging set covers two different category bases, the classification prediction thread is determined in the following manner, which includes the items below.
S701: aiming at the man-machine interaction data of each template in the interaction data debugging, the following steps are executed one by one: and generating target values among the recognition constraint conditions corresponding to each i templates one by one based on the constraint ranges of the recognition constraint conditions corresponding to at least i specified templates covered by the template man-machine interaction data.
(1) Based on the constraint ranges of the recognition constraint conditions corresponding to not less than i specified templates covered by the template human-computer interaction data, generating, one by one, the local differences between the recognition constraint conditions corresponding to each group of i templates, and determining the local differences as the target values between the recognition constraint conditions of the corresponding i templates.
(2) Based on the constraint ranges of the recognition constraint conditions corresponding to not less than i specified templates covered by the template human-computer interaction data, generating, one by one, the local comparison vectors corresponding to the recognition constraint conditions of each group of i templates, and determining the local comparison vectors as the target values between the recognition constraint conditions of the corresponding i templates. Both options are sketched in the example below.
S702: and determining a reference template range from the template man-machine interaction data through each acquired target value, and determining a reference judgment range corresponding to the reference template range.
Further, after the target values are determined in the above manner, the template human-computer interaction data X covering the maximum target value is determined from those target values as the corresponding reference template range; then, after the data recognition thread outputs its recognition results, the interaction data recognition result X corresponding to the reference template range (template human-computer interaction data X) is identified among the interaction data recognition results and is determined as the corresponding reference judgment range.
S703: and determining a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis by combining the acquired target value and the reference judgment range with the specified classification prediction thread.
Preferably, based on S702, the target values matched by the template-corresponding recognition constraint condition X and the template-corresponding recognition constraint condition Y covered by the reference judgment range (the interaction data recognition result X) are determined one by one as the reference matching values corresponding to the reference judgment range, and depolarization processing is performed on the obtained reference matching values and on all target values, respectively, to obtain the corresponding first matching average value and second matching average value.
If the number of categories of the recognition constraint conditions corresponding to the templates exceeds 2, the corresponding classification prediction thread is built based on the category matching degree of every two category basis sets among the category bases.
For example, assuming that the interaction data debugging set covers more than two different category bases, the target values of every two category bases are calculated based on either of mode (1) and mode (2) above, and the corresponding second matching average value is calculated from the obtained target values; further, after the corresponding reference judgment range is determined, the corresponding first matching average value is determined based on the reference matching values corresponding to that reference judgment range.
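Putting S701 to S703 together, one possible reading of the reference template range, the reference matching values, and the two matching averages is sketched below; the use of an argmax over target values, plain averaging for "depolarization", and the final ratio are assumptions made for illustration only.

```python
# Assumed reading of S701 to S703 with scalar target values; container shapes are illustrative.
import numpy as np

def third_quantitative_evaluation(per_template_targets: dict, recognition_results: dict) -> float:
    """per_template_targets: {template_id: {constraint_pair: scalar target value}}
    recognition_results:   {template_id: set of constraint pairs matched by the thread}."""
    # S702: the template human-computer interaction data covering the maximum target value
    # is taken as the reference template range.
    ref_template = max(per_template_targets,
                       key=lambda t: max(per_template_targets[t].values()))
    ref_judgment_range = recognition_results[ref_template]   # corresponding recognition result

    # S703: target values matched inside the reference judgment range become the
    # reference matching values.
    ref_matching = [v for pair, v in per_template_targets[ref_template].items()
                    if pair in ref_judgment_range]
    all_targets = [v for targets in per_template_targets.values() for v in targets.values()]

    # "Depolarization" is read here as simple averaging (an assumption).
    first_avg = float(np.mean(ref_matching)) if ref_matching else 0.0    # first matching average
    second_avg = float(np.mean(all_targets)) if all_targets else 0.0     # second matching average

    # Hypothetical classification-prediction step: compare the two averages.
    return first_avg / (second_avg + 1e-9)
```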
S304: based on the first quantitative evaluation result, the second quantitative evaluation result and the third quantitative evaluation result, the respective first credibility of the first quantitative evaluation result is processed, and the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis is obtained.
After the reference data identification thread is obtained based on the above process, the embodiment of the invention provides a man-machine interaction processing method based on virtual reality, which can comprise the following steps.
S1001: acquiring an initial recognition result of the man-machine interaction data to be recognized through a designated reference data recognition thread, wherein the initial recognition result at least covers: each recognition range covered by the man-machine interaction data to be recognized at least covers: and generating an identification constraint condition according to the set interaction content.
For example, it is assumed that human-computer interaction data to be identified is input into the above-mentioned reference data identification thread, and an initial identification result output by the reference data identification thread is obtained.
S1002: for the similarity of every i identification ranges in each identification range, the following steps are executed one by one: based on the human-computer interaction data expression of the recognition constraint conditions covered by the similarity of the i recognition ranges, generating the corresponding type basis of the i recognition constraint conditions one by one, and determining the type basis set of the i recognition constraint conditions by combining with the specified human-computer interaction data set, wherein the corresponding type matching degree is determined.
S1003: and determining the similarity of each identification range of which the determined category matching degree meets the specified matching condition one by one as a corresponding reference identification range, and determining a corresponding reference identification result based on the identification constraint condition covered by each reference identification range.
Based on the above steps, the credibility of each reference recognition range is checked, the reference recognition ranges whose credibility meets the specified judgment value are determined, and the initial recognition result is updated according to the category bases corresponding to the recognition constraint conditions represented by those reference recognition ranges, thereby guaranteeing the accuracy of the output reference recognition result.
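The credibility screening described above can be illustrated with the short sketch below; it assumes each reference recognition range carries a scalar credibility and that the specified judgment value is a plain threshold, both of which are assumptions.

```python
# Assumed final filtering step: ranges below the judgment value are treated as abnormal
# and cleaned out; the remaining category bases become the reference recognition results.
def finalize_reference_results(reference_ranges, judgment_value: float = 0.5) -> dict:
    """reference_ranges: iterable of objects exposing .credibility, .constraint and
    .category_basis (e.g. the RecognitionRange above, extended with a category basis)."""
    reference_results = {}
    for rng in reference_ranges:
        if rng.credibility < judgment_value:
            # Abnormal recognition constraint: cleaned out of the initial recognition result.
            continue
        # Reference recognition constraint: its category basis becomes the reference result.
        reference_results[rng.constraint] = rng.category_basis
    return reference_results
```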
On the basis of the above, please refer to fig. 2 in combination, a man-machine interaction processing device 200 based on virtual reality is provided, which is applied to a man-machine interaction processing system based on virtual reality, and the device includes:
the result identifying module 210 is configured to acquire, through a designated reference data recognition thread, an initial recognition result of the human-computer interaction data to be recognized, wherein the initial recognition result at least covers the recognition ranges covered by the human-computer interaction data to be recognized, and each recognition range at least covers a recognition constraint condition generated according to the set interaction content; and, for every group of i recognition ranges among those recognition ranges, to execute the following steps one by one: determining the category basis of each of the corresponding i recognition constraint conditions based on the human-computer interaction data expressions of the recognition constraint conditions respectively covered by the i recognition ranges, and generating, in combination with a designated human-computer interaction data set, the category matching degree corresponding to the category basis set of the i recognition constraint conditions, wherein i=2;
the result determining module 220 is configured to determine, one by one, each group of i recognition ranges whose category matching degree meets the specified matching condition as corresponding reference recognition ranges, and to determine corresponding reference recognition results based on the recognition constraint conditions covered by each reference recognition range.
On the basis of the above, please refer to fig. 3 in combination, a man-machine interaction processing system 300 based on virtual reality is shown, which includes a processor 310 and a memory 320 in communication with each other, wherein the processor 310 is configured to read and execute a computer program from the memory 320, so as to implement the above method.
On the basis of the above, there is also provided a computer-readable storage medium on which a computer program is stored, and the computer program, when run, implements the above method.
In summary, based on the above scheme, the initial recognition result of the human-computer interaction data to be recognized is obtained through the designated reference data recognition thread, the initial recognition result is parsed, and the similarity of every two recognition ranges is determined, which effectively reduces data errors and allows the category matching degree to be determined accurately. For every group of i recognition ranges whose category matching degree meets the specified matching condition, the corresponding reference recognition ranges are determined one by one, and the corresponding reference recognition results are determined based on the recognition constraint conditions covered by each reference recognition range. In this way, data recognition errors caused by excessively similar data can be effectively reduced, thereby improving the accuracy of human-computer interaction.
It should be appreciated that the systems and modules thereof shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may then be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only with hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also with software, such as executed by various types of processors, and with a combination of the above hardware circuitry and software (e.g., firmware).
It should be noted that, the advantages that may be generated by different embodiments may be different, and in different embodiments, the advantages that may be generated may be any one or a combination of several of the above, or any other possible advantages that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations of the present application may occur to those skilled in the art. Such modifications, improvements, and adaptations are intended to be suggested by this application and are therefore within the spirit and scope of its exemplary embodiments.
Meanwhile, the present application uses specific words to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the invention are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the present application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated through any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
The computer program code necessary for operation of portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or a service such as software as a service (SaaS) in a cloud computing environment may be used.
Furthermore, the order in which the elements and sequences are presented, the use of numerical letters, or other designations are used in the application and are not intended to limit the order in which the processes and methods of the application are performed unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure, by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the present application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation disclosed herein and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not intended to imply that the claimed subject matter requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in the description of the embodiments are, in some examples, qualified by the modifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for adaptive variation. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a method for preserving the general number of digits. Although the numerical ranges and parameters set forth herein are approximations that may be used in some embodiments to confirm the breadth of a range, in particular embodiments such numerical values are set as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this application is hereby incorporated by reference in its entirety, except for any application history documents that are inconsistent with or conflict with the content of this application, and except for any documents (currently or later attached to this application) that would limit the broadest scope of the claims of this application. It is noted that if the descriptions, definitions, and/or use of terms in the materials attached to this application are inconsistent with or conflict with those stated in this application, the descriptions, definitions, and/or use of terms in this application shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of this application. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present application may be considered in keeping with the teachings of the present application. Accordingly, embodiments of the present application are not limited to only the embodiments explicitly described and depicted herein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (6)

1. The human-computer interaction processing method based on virtual reality is characterized by being applied to a human-computer interaction processing system, and at least comprises the following steps:
acquiring an initial recognition result of man-machine interaction data to be recognized through a designated reference data recognition thread, wherein the initial recognition result at least covers: each recognition range covered by the man-machine interaction data to be recognized at least covers: generating an identification constraint condition according to the set interactive content; and executing the following steps one by one according to the similarity of each i identification ranges in the identification ranges: based on the human-computer interaction data expression of the i recognition ranges, which respectively cover the recognition constraint conditions, determining the respective category basis of the corresponding i recognition constraint conditions, and generating the category basis set of the i recognition constraint conditions by combining the designated human-computer interaction data set, and the corresponding category matching degree; wherein i=2;
the similarity of each identification range of which the determined category matching degree meets the specified matching condition is determined one by one to be a corresponding reference identification range, and a corresponding reference identification result is determined based on the identification constraint condition covered by each reference identification range;
the method further comprises the steps of:
obtaining an interactive data debugging set, wherein one interactive data debugging set comprises the following steps: the key features and the local interaction bases which are determined according to at least i kinds of bases are corresponding, and at least the key features cover: template man-machine interaction data clusters, wherein each template man-machine interaction data cover identification constraint conditions corresponding to at least i templates determined according to set template interaction content;
combining the interactive data debugging in the interactive data debugging set to repeatedly optimize the appointed data identification thread, and outputting a data optimization thread when the appointed requirement is met; wherein, in the primary optimization process, the following steps are executed: combining the data identification thread, acquiring a first evaluation result based on key features in interactive data debugging, and debugging vectors of the data identification thread through a quantitative evaluation result between the first evaluation result and a corresponding local interaction basis;
the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis is determined by combining the following modes: generating a first quantitative evaluation result between the first evaluation result and a corresponding local interaction basis through a specified risk evaluation thread;
generating a second quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through a specified difference compensation thread;
generating a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis through a specified classification prediction thread, wherein the classification prediction thread is determined through the category matching degree of the category basis set of the recognition constraint conditions corresponding to each group of i templates in the one interaction data debugging item; and processing the first, second and third quantitative evaluation results through their respective first credibilities to obtain the quantitative evaluation result between the first evaluation result and the corresponding local interaction basis;
after outputting the data optimization thread, the method further comprises: continuing to perform repeated optimization on the data optimization thread through the interaction data debugging items in the interaction data debugging set, and outputting a reference data identification thread when the specified matching requirement is met; wherein, in each optimization pass, the following steps are executed: acquiring, in combination with the data optimization thread, a second evaluation result through the key features in the interaction data debugging, and debugging the vector of the reference data identification thread through the quantitative evaluation result between the second evaluation result and the corresponding local interaction basis; the quantitative evaluation result between the second evaluation result and the corresponding local interaction basis is obtained by allocating, through a designated second updating thread, corresponding second credibilities to the first, second and third quantitative evaluation results respectively, and then processing the allocated second credibilities.
2. The method of claim 1, wherein the first evaluation result comprises at least: each interactive data identification result determined by the data identification thread through the interactive data debugging; generating a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis by the specified classification pre-estimation thread comprises:
in the interactive data debugging, the human-computer interaction data of each template are executed one by one, and the following steps are executed: generating target values of the recognition constraint conditions corresponding to each i templates one by one based on the constraint ranges of the recognition constraint conditions corresponding to at least i templates covered by the template man-machine interaction data; determining a reference template range from the template man-machine interaction data through each acquired target value, and determining a reference judgment range corresponding to the reference template range;
and generating a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis by combining the acquired target values and the reference judgment range with the specified classification pre-estimation thread.
3. The method according to claim 2, wherein generating, one by one, the target values between the recognition constraint conditions corresponding to each group of i templates based on the constraint ranges of the recognition constraint conditions corresponding to not less than i templates covered by the template man-machine interaction data comprises:
generating corresponding local differences among the recognition constraint conditions corresponding to each i templates one by one based on the constraint ranges of the recognition constraint conditions corresponding to at least i templates covered by one template man-machine interaction data, and determining the local differences as target values corresponding to the recognition constraint conditions corresponding to the i templates;
or, based on the constraint ranges of the recognition constraint conditions corresponding to at least i templates covered by the template man-machine interaction data, generating corresponding local comparison vectors among the recognition constraint conditions corresponding to each i template one by one, and determining the local comparison vectors as target values corresponding to the recognition constraint conditions corresponding to the i templates.
4. A method according to claim 2 or 3, wherein the generating, by each target value and the reference determination range obtained, a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis in combination with a specified classification prediction thread, comprises:
the target values which are covered by the reference judging range and are matched with the recognition constraint conditions corresponding to the not less than i specified templates are determined one by one as reference matching values corresponding to the reference judging range;
respectively carrying out corresponding depolarization treatment through each acquired reference matching value and each target value to acquire a corresponding first matching average value and a corresponding second matching average value;
and generating a third quantitative evaluation result between the first evaluation result and the corresponding local interaction basis by combining the first matching average value and the second matching average value with a specified classification pre-estimation thread.
5. A method according to claim 3, wherein the initial recognition result further comprises: the respective credibility of each identification range, determining a corresponding reference identification result based on the respective identification constraint condition covered by each reference identification range, includes:
through each reference identification range, the following steps are executed one by one: determining the credibility of a reference identification range;
if the credibility of the reference identification range is smaller than the designated judgment value, determining the identification constraint condition covered by the reference identification range as an abnormal identification constraint condition, and cleaning the abnormal identification constraint condition in the initial identification result;
if the credibility of the reference recognition range is not less than the designated judgment value, determining the recognition constraint condition covered by the reference recognition range as a corresponding reference recognition constraint condition, and determining the category basis corresponding to the reference recognition constraint condition as a corresponding reference recognition result.
6. A virtual reality based human-machine interaction processing system comprising a processor and a memory in communication with each other, the processor being adapted to read a computer program from the memory and execute it to implement the method of any of claims 1-5.
CN202210679021.5A (priority date 2022-06-16, filing date 2022-06-16): Human-computer interaction processing method and system based on virtual reality. Status: Active. Granted as CN115079882B (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210679021.5A CN115079882B (en) 2022-06-16 2022-06-16 Human-computer interaction processing method and system based on virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210679021.5A CN115079882B (en) 2022-06-16 2022-06-16 Human-computer interaction processing method and system based on virtual reality

Publications (2)

Publication Number Publication Date
CN115079882A CN115079882A (en) 2022-09-20
CN115079882B (en) 2024-04-05

Family

ID=83253957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210679021.5A Active CN115079882B (en) 2022-06-16 2022-06-16 Human-computer interaction processing method and system based on virtual reality

Country Status (1)

Country Link
CN (1) CN115079882B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104267835A (en) * 2014-09-12 2015-01-07 西安闻泰电子科技有限公司 Self-adaption gesture recognition method
CN108319421A (en) * 2018-01-29 2018-07-24 维沃移动通信有限公司 A kind of display triggering method and mobile terminal
CN110136701A (en) * 2018-02-09 2019-08-16 阿里巴巴集团控股有限公司 Interactive voice service processing method, device and equipment
CN110597446A (en) * 2018-06-13 2019-12-20 北京小鸟听听科技有限公司 Gesture recognition method and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201322070A (en) * 2011-11-21 2013-06-01 Novatek Microelectronics Corp Noise filtering method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104267835A (en) * 2014-09-12 2015-01-07 西安闻泰电子科技有限公司 Self-adaption gesture recognition method
CN108319421A (en) * 2018-01-29 2018-07-24 维沃移动通信有限公司 A kind of display triggering method and mobile terminal
CN110136701A (en) * 2018-02-09 2019-08-16 阿里巴巴集团控股有限公司 Interactive voice service processing method, device and equipment
CN110597446A (en) * 2018-06-13 2019-12-20 北京小鸟听听科技有限公司 Gesture recognition method and electronic equipment

Also Published As

Publication number Publication date
CN115079882A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN115481197B (en) Distributed data processing method, system and cloud platform
CN115079882B (en) Human-computer interaction processing method and system based on virtual reality
CN116112746B (en) Online education live video compression method and system
CN115473822B (en) 5G intelligent gateway data transmission method, system and cloud platform
CN115373688B (en) Optimization method and system of software development thread and cloud platform
CN113626538B (en) Medical information intelligent classification method and system based on big data
CN117037982A (en) Medical big data information intelligent acquisition method and system
CN115457340A (en) Image recognition processing method and system based on artificial intelligence
CN113947709A (en) Image processing method and system based on artificial intelligence
CN113627490B (en) Operation and maintenance multi-mode decision method and system based on multi-core heterogeneous processor
CN115409510B (en) Online transaction security system and method
CN115756576B (en) Translation method of software development kit and software development system
CN116912351B (en) Correction method and system for intracranial structure imaging based on artificial intelligence
CN114611478B (en) Information processing method and system based on artificial intelligence and cloud platform
CN115631829B (en) Network connection multi-mode detection method and system based on acupoint massage equipment
CN114691830B (en) Network security analysis method and system based on big data
CN115292301B (en) Task data abnormity monitoring and processing method and system based on artificial intelligence
CN115345194A (en) Signal processing method and system based on mixed tree algorithm
CN113918963B (en) Authority authorization processing method and system based on business requirements
CN113609362B (en) Data management method and system based on 5G
CN113610117B (en) Underwater sensing data processing method and system based on depth data
CN113643818B (en) Method and system for integrating medical data based on regional data
CN113643701B (en) Method and system for intelligently recognizing voice to control home
CN113590952B (en) Data center construction method and system
CN113596849B (en) Wireless communication channel dynamic allocation method and system for smart home

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant