CN115079881A - Virtual reality-based picture correction method and system - Google Patents


Info

Publication number
CN115079881A
Authority
CN
China
Prior art keywords
picture
interaction
interactive
information
confidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210678903.XA
Other languages
Chinese (zh)
Inventor
宛汝国
邓湛波
秦华军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Guowei Culture Technology Co ltd
Original Assignee
Guangzhou Guowei Culture Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Guowei Culture Technology Co ltd filed Critical Guangzhou Guowei Culture Technology Co ltd
Priority to CN202210678903.XA priority Critical patent/CN115079881A/en
Publication of CN115079881A publication Critical patent/CN115079881A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The virtual reality-based picture correction method and system provided by this application obtain, according to the picture interaction decision basis of the pending picture interaction information and in combination with the picture motion vectors, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded. That confidence is then corrected using the obtained confidence difference of an interaction picture carrying one of the picture motion vectors after it records the pending picture interaction information, yielding a corrected confidence. Because the confidence difference refers in parallel to two factors, the interaction picture description and the comparison result, both the real-time confidence and the prediction confidence are associated with these two factors; interference from various factors can thus be avoided as far as possible, and the prediction accuracy of the confidence can be effectively improved, so that the at least one piece of picture interaction information corresponding to the virtual reality interaction picture can be obtained more reliably.

Description

Virtual reality-based picture correction method and system
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and a system for correcting a picture based on virtual reality.
Background
Virtual reality (VR) is a practical technology developed in the 20th century. It draws on computing, electronic information, and simulation technology, and its basic implementation is that a computer simulates a virtual environment to give a person a sense of immersion in that environment. With the continuous development of social productivity and of science and technology, demand for VR technology keeps growing across industries. VR technology has made great progress and is gradually becoming a new field of science and technology.
At present, virtual reality technology is applied in an increasingly wide range of fields. As users' living standards continue to rise, so do their requirements for virtual reality technology: clearer and more vivid pictures are needed to improve the user experience. A technical solution is therefore needed to address these problems.
Disclosure of Invention
In order to solve the technical problems in the related art, the application provides a picture correction method and system based on virtual reality.
In a first aspect, a virtual reality-based picture correction method is provided. The method at least includes: obtaining at least one picture motion vector of the virtual reality interaction picture; generating, for one of the picture motion vectors, at least one piece of picture interaction information with the best similarity to that motion vector, and determining it as the pending picture interaction information corresponding to that motion vector; obtaining the picture interaction decision basis of the pending picture interaction information, and obtaining, in combination with the at least one picture motion vector, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded; correcting that confidence according to the obtained confidence difference of an interaction picture carrying one of the picture motion vectors after it records the pending picture interaction information, to obtain the corrected confidence of the virtual reality interaction picture recording the pending picture interaction information, where the confidence difference is obtained from the real-time confidence and the prediction confidence of an example interaction picture carrying one of the picture motion vectors after it records the pending picture interaction information; and clustering the pending picture interaction information based on the corrected confidence, and determining, from the pending picture interaction information according to the clustering result, the at least one piece of picture interaction information corresponding to the virtual reality interaction picture.
In an independently implemented embodiment, determining the at least one piece of picture interaction information with the best similarity to the one picture motion vector as the pending picture interaction information further includes: obtaining the processed interaction themes of at least one example interaction picture, and constructing an event distribution case according to those processed interaction themes, where a processed interaction theme includes the real-time optimization mode of the example interaction picture after picture interaction information is recorded; the picture text of an event distribution case is one interaction picture description of the example interaction picture determined from the processed interaction theme, and each structure in the event distribution case represents one piece of picture interaction information, determined from the processed interaction theme, with the best similarity to the picture text of that case. On this basis, determining the at least one piece of picture interaction information with the best similarity to the one picture motion vector as the pending picture interaction information includes: searching for the event distribution case whose picture text is the one picture motion vector, and determining it as the target event distribution case; and determining the picture interaction information represented by the structures in the target event distribution case as the pending picture interaction information.
In a separate embodiment, the processed interaction theme further includes the prediction confidence after the example interaction picture records picture interaction information. Before correcting the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded, the method further includes: generating, based on one example interaction picture description, the prediction confidence after the example interaction picture records picture interaction information, and the real-time optimization mode, the confidence difference of the example interaction picture carrying that description after it records picture interaction information; and determining the picture interaction information represented by the structures in the event distribution case whose picture text is that example interaction picture description, and loading into each corresponding structure the confidence difference of the example interaction picture carrying that description after it records the picture interaction information the structure represents. Correcting the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded then includes: obtaining, from the structures in the target event distribution case, the confidence difference of the interaction picture carrying the picture motion vector after it records the pending picture interaction information.
In an embodiment, constructing the event distribution case according to the processed interaction themes of at least one example interaction picture includes: performing convolution processing on the processed interaction theme of the at least one example interaction picture to obtain the example interaction picture description of each example interaction picture; loading the example interaction picture description into a previously configured optimization thread, and obtaining at least one piece of picture interaction information, output by the optimization thread, with the best similarity to the example interaction picture description; and constructing an event distribution case whose picture text is the example interaction picture description, and constructing in it a corresponding number of structures according to the number of pieces of picture interaction information with the best similarity to that description, where each structure represents one such piece of picture interaction information.
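The construction above can be sketched as a lookup table keyed by the example interaction picture description (the picture text), with one list entry per structure. The `best_matches` callable below stands in for the pre-configured optimization thread; its name and signature are assumptions, since the patent does not specify an interface.

```python
# Sketch: build event distribution cases keyed by example interaction
# picture description; each "structure" holds one best-similarity piece
# of picture interaction information. `best_matches` stands in for the
# pre-configured optimization thread (an assumption).
def build_event_distribution(processed_themes, best_matches):
    cases = {}
    for theme in processed_themes:
        description = theme["description"]      # picture text of the case
        structures = list(best_matches(description))
        cases[description] = structures
    return cases

def lookup_pending_info(cases, picture_text):
    """Find the target event distribution case whose picture text matches,
    and return the interaction information its structures represent."""
    return cases.get(picture_text, [])
```

With the table built once from the processed interaction themes, step S102 reduces to a dictionary lookup per motion vector description.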
In an independently implemented embodiment, generating the confidence difference of the example interaction picture carrying the one example interaction picture description after it records picture interaction information, based on that description, the prediction confidence after the example interaction picture records picture interaction information, and the real-time optimization mode, includes: constructing, based on the processed interaction theme of the example interaction picture and the one example interaction picture description, an association among that description, the recorded picture interaction information, the number of records, the number of optimizations, and the prediction confidence of the example interaction picture; generating, based on the association and for one piece of picture interaction information related to the processed interaction theme, the global record quantity, the global optimization quantity, and the depolarization variable of the prediction confidence after all example interaction pictures carrying that description record the picture interaction information; obtaining, based on the global record quantity and the global optimization quantity, the real-time confidence after all example interaction pictures carrying that description record the picture interaction information; and obtaining, based on that real-time confidence and the prediction-confidence depolarization variable, the confidence difference of the example interaction picture carrying that description after it records the picture interaction information.
In an embodiment, obtaining the confidence difference of the example interaction picture carrying the one example interaction picture description after it records the picture interaction information, based on the real-time confidence and the prediction-confidence depolarization variable after all example interaction pictures carrying that description record the picture interaction information, includes: determining the comparison result of that real-time confidence and the prediction-confidence depolarization variable as the confidence difference of the example interaction picture carrying that description after it records the picture interaction information.
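A minimal sketch of this comparison, under two interpretive assumptions: the real-time confidence is read as the optimization rate (global optimization quantity over global record quantity), and the "depolarization variable" is read as a plain mean of the per-record prediction confidences. The patent does not fix either reading.

```python
# Sketch: confidence difference for one example-interaction-picture
# description. Real-time confidence is taken as optimized_count /
# record_count, and the "depolarization variable" as a plain mean of
# the per-record prediction confidences; both readings are assumptions.
def confidence_difference(record_count, optimized_count, predicted):
    if record_count == 0 or not predicted:
        return 0.0
    realtime = optimized_count / record_count
    predicted_mean = sum(predicted) / len(predicted)
    # Comparison result of real-time vs. predicted confidence.
    return realtime - predicted_mean
```

The resulting value is what a structure in the event distribution case would carry for its piece of picture interaction information.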
In an independently implemented embodiment, correcting the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded, according to the obtained confidence difference of the interaction picture carrying the one picture motion vector after it records the pending picture interaction information, includes: obtaining the confidence differences, related to all picture motion vectors of the virtual reality interaction picture, of the interaction pictures carrying those motion vectors after they record the pending picture interaction information, and determining them as the to-be-processed confidence differences; performing depolarization-variable weight processing on the to-be-processed confidence differences, and determining the result as the confidence difference of the virtual reality interaction picture after the pending picture interaction information is recorded; and correcting, based on that confidence difference, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded.
In an independently implemented embodiment, correcting the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded, based on the confidence difference of the virtual reality interaction picture after the pending picture interaction information is recorded, includes: obtaining the corrected confidence of the virtual reality interaction picture recording the pending picture interaction information based on the fusion result of that confidence difference and the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded.
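The two correction embodiments above can be sketched together as follows, assuming the "depolarization variable weight processing" is a simple mean over the per-motion-vector confidence differences and the fusion is additive; the patent does not fix either choice.

```python
# Sketch of S104: correct the predicted confidence of recording one
# piece of pending picture interaction information. The per-motion-
# vector confidence differences are averaged (reading the
# "depolarization variable weight processing" as a mean, which is an
# assumption) and fused additively with the predicted confidence.
def corrected_confidence(predicted_confidence, per_vector_differences):
    if not per_vector_differences:
        return predicted_confidence
    mean_diff = sum(per_vector_differences) / len(per_vector_differences)
    return predicted_confidence + mean_diff
```

Since the confidence difference compares real-time against predicted values, adding it nudges the predicted confidence toward the real-time confidence, which is the stated goal of the correction.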
In a second aspect, a virtual reality-based picture correction system is provided, which includes a processor and a memory that communicate with each other, the processor being configured to read a computer program from the memory and execute it to implement the method described above.
The picture correction method and system based on virtual reality provided by the embodiments of this application obtain the picture motion vectors of the virtual reality interaction picture, generate the picture interaction information with the best similarity to each picture motion vector, and determine it as the pending picture interaction information. According to the picture interaction decision basis of the pending picture interaction information and in combination with the picture motion vectors, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded is then obtained, and that confidence is corrected by the obtained confidence difference of an interaction picture carrying one of the picture motion vectors, yielding the corrected confidence. Because the confidence difference refers in parallel to two factors, the interaction picture description and the comparison result, both the real-time confidence and the prediction confidence are associated with these two factors; interference from various factors can thus be avoided as far as possible, and the prediction accuracy of the confidence can be effectively improved, so that the at least one piece of picture interaction information corresponding to the virtual reality interaction picture can be obtained more reliably.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of this application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of this application and should therefore not be regarded as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a flowchart of a virtual reality-based picture correction method according to an embodiment of the present disclosure.
Fig. 2 is a block diagram of a virtual reality-based image correction apparatus according to an embodiment of the present disclosure.
Fig. 3 is an architecture diagram of a virtual reality-based picture correction system according to an embodiment of the present disclosure.
Detailed Description
In order to better understand the technical solutions above, they are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of this application are detailed descriptions of its technical solutions, not limitations of them, and that the technical features in the embodiments and examples may be combined with each other where no conflict arises.
Referring to fig. 1, a virtual reality-based picture correction method is shown, which may include the following technical solutions described in steps S101 to S104.
S101, at least one picture motion vector of the virtual reality interactive picture is obtained.
S102, generating, for one of the picture motion vectors, at least one piece of picture interaction information with the best similarity to that motion vector, and determining it as the pending picture interaction information corresponding to that motion vector.
In current picture correction systems that determine picture interaction information, an optimization thread is usually configured from big data. By analyzing, through the optimization thread, the interaction picture description of each interaction picture in the big data together with the picture interaction information of high similarity to each picture, the picture interaction information corresponding to each interaction picture description can be determined.
Therefore, for each picture motion vector, at least one piece of picture interaction information with the best similarity can be obtained. For example, if the virtual reality interaction picture carries three picture motion vectors X1, X2, and X3, then at least one piece of picture interaction information is determined for each of the three motion vectors in turn: if the information with the best similarity to X1 is Y1 and Y2, to X2 is Y3 and Y4, and to X3 is Y5, then five pieces of picture interaction information are obtained for the virtual reality interaction picture. The embodiment of the invention does not limit the number of pieces of picture interaction information of high similarity for each picture motion vector.
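The candidate-generation step in this example can be sketched as follows; cosine similarity and the fixed 0.5 threshold are illustrative assumptions, since the embodiment only requires at least one best-similarity match per motion vector and does not name a similarity measure.

```python
# Sketch of S102: for each picture motion vector, select the candidate
# picture interaction information with the best similarity. Cosine
# similarity and the 0.5 threshold are illustrative assumptions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def pending_interaction_info(motion_vectors, candidates, threshold=0.5):
    """candidates: mapping of interaction-info name -> feature vector."""
    pending = []
    for mv in motion_vectors:
        best = [name for name, feat in candidates.items()
                if cosine(mv, feat) >= threshold]
        pending.extend(best)
    return pending
```

With three motion vectors X1, X2, X3 and five candidate features, this reproduces the text's example of Y1 and Y2 matching X1, Y3 and Y4 matching X2, and Y5 matching X3.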
S103, obtaining the picture interaction decision basis of the pending picture interaction information, and obtaining, in combination with the at least one picture motion vector, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded.
It should be understood that, similar to an interaction picture, picture interaction information also carries a feature representing a local tag, and this feature is determined as the picture interaction decision basis.
After the picture interaction decision basis of the pending picture interaction information is obtained, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded is obtained according to the picture motion vectors of the virtual reality interaction picture and that decision basis.
S104, correcting, according to the obtained confidence difference of an interaction picture carrying one of the picture motion vectors after it records the pending picture interaction information, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded, and obtaining the corrected confidence of the virtual reality interaction picture recording the pending picture interaction information.
The confidence difference in the embodiment of the invention is obtained in advance from the real-time confidence and the prediction confidence of an example interaction picture carrying one picture motion vector after it records the pending picture interaction information, where the real-time confidence can be determined from the number of records of the pending picture interaction information by that example interaction picture together with the optimization mode that follows.
Furthermore, when the processed interaction theme of the example interaction picture is collected, the confidence after each recording of picture interaction information by the example interaction picture can be predicted; the depolarization variable is then taken over the prediction confidences of different example interaction pictures that share the same example interaction picture description, giving the prediction confidence after picture interaction information is recorded by example interaction pictures with that description.
When the confidence difference is calculated, the interaction picture description and the comparison result are referred to in parallel, so that the real-time confidence and the prediction confidence are both associated with these two factors; interference from various factors can thus be avoided as far as possible, and the prediction accuracy of the confidence can be effectively improved.
Since the confidence obtained in step S103 is a prediction confidence, it follows from the definition of the confidence difference that the corrected confidence is closer to the real-time confidence. When picture interaction information is then determined using this confidence closer to the real-time confidence, the requirements of the virtual reality interaction picture are better satisfied.
S105, clustering the pending picture interaction information according to the corrected confidence, and determining, from the pending picture interaction information according to the clustering result, the at least one piece of picture interaction information corresponding to the virtual reality interaction picture.
For example, suppose three pieces of pending picture interaction information are clustered and the clustering result orders them as the second, the first, and the third pending picture interaction information. If only one piece of picture interaction information is to be recommended to the virtual reality interaction picture, the second piece is sent to it.
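The selection step can be sketched as a ranking by corrected confidence; treating the clustering result as a simple descending sort is an assumption made for illustration, since the patent does not name a clustering algorithm.

```python
# Sketch of S105: rank the pending picture interaction information by
# corrected confidence (a simple stand-in for the clustering step,
# which is an assumption) and return the top entries to recommend.
def select_interaction_info(corrected, top_n=1):
    """corrected: mapping of pending-info name -> corrected confidence."""
    ranked = sorted(corrected, key=corrected.get, reverse=True)
    return ranked[:top_n]
```

With corrected confidences where the second pending piece scores highest, this reproduces the example above: only that piece is recommended when `top_n` is 1.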
The virtual reality-based picture correction method of the embodiment of the invention obtains the picture motion vectors of the virtual reality interaction picture, generates the picture interaction information with the best similarity to each picture motion vector, and determines it as the pending picture interaction information. According to the picture interaction decision basis of the pending picture interaction information and in combination with the picture motion vectors, the confidence of the virtual reality interaction picture after the pending picture interaction information is recorded is then obtained, and that confidence is corrected by the obtained confidence difference of an interaction picture carrying one of the picture motion vectors after it records the pending picture interaction information, yielding the corrected confidence. Because the confidence difference refers in parallel to two factors, the interaction picture description and the comparison result, both the real-time confidence and the prediction confidence are associated with these two factors; interference from various factors can thus be avoided as far as possible, and the prediction accuracy of the confidence can be effectively improved, so that the at least one piece of picture interaction information corresponding to the virtual reality interaction picture can be obtained more reliably.
Further, generating the at least one piece of picture interaction information with the best similarity to one picture motion vector and determining it as the pending picture interaction information also includes the following steps: obtaining the processed interaction themes of at least one example interaction picture, and building an event distribution case according to those processed interaction themes.
As seen from the foregoing embodiments, the processed interaction theme includes the real-time optimization mode after the example interaction picture records the picture interaction information; the real-time optimization mode may be that the interaction picture has no optimization mode, or that the interaction picture generates an optimization mode. It can be understood that the more diversified the comparison results and example interaction pictures involved in the processed interaction themes obtained by the embodiment of the invention, the more accurately the picture interaction information can be obtained.
By analyzing the processed interactive theme of the example interactive picture, the interactive picture description, the comparison result characteristics, the similarity between the interactive picture description and different comparison results, and other information of the example interactive picture can be determined, and further the event distribution condition can be established by using the information. The event distribution case is described as screen text by one interactive screen of the example interactive screen determined from the interactive theme after the processing is finished, and the framework in the event distribution case is used for representing the screen interactive information which is determined from the interactive theme after the processing is finished and has the best similarity with the screen text of the event distribution case.
Correspondingly, generating at least one piece of picture interaction information with the best similarity to one of the picture motion vectors and determining it as the pending picture interaction information comprises the following steps: searching for the event distribution situation whose picture text matches the one picture motion vector, and determining it as the target event distribution situation; and determining the picture interaction information represented by the frameworks in the target event distribution situation as the pending picture interaction information.
The event distribution situation takes the interactive picture description as its picture text and uses frameworks to represent the picture interaction information with the best similarity to that description. Considering that the confidence after the virtual reality interaction picture records the pending picture interaction information needs to be corrected, and that this confidence and the confidence of each picture motion vector of the virtual reality interaction picture with respect to the pending picture interaction information can interfere with each other, the embodiment of the present invention can set the confidence difference of the picture motion vector with respect to the pending picture interaction information in the event distribution situation in advance. In this way, after the event distribution situation whose picture text matches the picture motion vector is obtained, the corresponding confidence difference can be looked up directly in the event distribution situation. This is equivalent to executing two steps at once, namely obtaining the pending picture interaction information and obtaining the confidence difference after the interaction picture carrying one of the picture motion vectors records the pending picture interaction information, and it guarantees the accuracy of the picture interaction information.
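As a minimal sketch of the lookup described above, the event distribution situations can be modeled as a mapping from picture text to frameworks, where each framework already stores a confidence difference alongside the picture interaction information. All names here are illustrative assumptions, not identifiers from the patent:

```python
def lookup_pending_info(event_distributions, motion_vector_text):
    """event_distributions: mapping from picture text to a list of
    frameworks, each holding interaction info and a confidence difference."""
    frameworks = event_distributions.get(motion_vector_text)
    if frameworks is None:
        return []
    # One lookup returns the pending interaction info AND its confidence
    # difference together, merging the two acquisition steps as described.
    return [(f["interaction_info"], f["confidence_difference"])
            for f in frameworks]
```

A single `get` on the mapping thus stands in for both the target-event-distribution search and the confidence-difference acquisition.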
Based on the above concept, the processed interaction theme further includes the prediction confidence after the example interaction picture records the picture interaction information.
Further, before the confidence after the virtual reality interaction picture records the pending picture interaction information is corrected, the method further comprises the following steps.
S201, generating, according to one of the example interactive picture descriptions, the prediction confidence after the picture interaction information is recorded, and the real-time optimization mode of the example interaction picture, the confidence difference after the example interaction picture carrying the one example interactive picture description records the picture interaction information.
In the embodiment of the present invention, determining the confidence difference after the example interaction picture carrying one of the example interactive picture descriptions records the picture interaction information may include the following steps.
S2011, establishing, according to the processed interaction theme of the example interaction picture and one of the example interactive picture descriptions, an association situation between the one example interactive picture description, the recorded picture interaction information, the record quantity, the optimization quantity and the prediction confidence of the example interaction picture.
S2012, for one piece of picture interaction information related to the processed interaction theme, generating, according to the association situation, the global record quantity, the global optimization quantity, and the depolarization variable of the prediction confidence after all the example interaction pictures carrying the one example interactive picture description record the picture interaction information.
By summarizing the association situations that carry the same interactive picture description and the same picture interaction information, the global record quantity, the global optimization quantity, and the depolarization variable of the prediction confidence after all the example interaction pictures carrying that interactive picture description record that picture interaction information can be obtained. The depolarization variable of the prediction confidence is the depolarization variable of the prediction confidences included in the association situations carrying the same interactive picture description and the same picture interaction information.
S2013, obtaining the real-time confidence after all the example interaction pictures carrying one of the example interactive picture descriptions record the picture interaction information, according to the global record quantity and the global optimization quantity when all the example interaction pictures carrying the one example interactive picture description record the picture interaction information.

S2014, obtaining the confidence difference after the example interaction picture carrying the example interactive picture description records the picture interaction information, according to the real-time confidence after all the example interaction pictures carrying the one example interactive picture description record the picture interaction information and the depolarization variable of the prediction confidence.
Further, the comparison result between the real-time confidence after all the example interaction pictures carrying one of the example interactive picture descriptions record the picture interaction information and the depolarization variable of the prediction confidence is determined as the confidence difference after the example interaction picture carrying the one example interactive picture description records the picture interaction information.
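Steps S2011–S2014 can be sketched as follows. This is a hedged reading of the text: the names are invented, the "depolarization variable" is interpreted as a mean, and the real-time confidence is assumed to be the optimized fraction of the recorded quantity:

```python
def confidence_difference(association_situations, description, interaction_info):
    """S2012-S2014: aggregate the association situations that share the same
    example picture description and picture interaction information, then
    compare the real-time confidence with the mean prediction confidence."""
    total_records = 0
    total_optimized = 0
    prediction_confidences = []
    for assoc in association_situations:  # association situations from S2011
        if (assoc["description"] == description
                and assoc["interaction_info"] == interaction_info):
            # S2012: global record quantity and global optimization quantity
            total_records += assoc["record_count"]
            total_optimized += assoc["optimized_count"]
            prediction_confidences.append(assoc["prediction_confidence"])
    if total_records == 0:
        return None
    # S2013: real-time confidence, assumed here to be the optimized fraction
    real_time_confidence = total_optimized / total_records
    # Depolarization variable, interpreted here as the mean prediction confidence
    mean_prediction = sum(prediction_confidences) / len(prediction_confidences)
    # S2014: the comparison result of the two values is the confidence difference
    return real_time_confidence - mean_prediction
```

The subtraction is one plausible "comparison result"; the patent does not pin down the exact operator.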
S202, determining the picture interaction information represented by the frameworks in the event distribution situation that takes one of the example interactive picture descriptions as its picture text, and loading the confidence difference after the example interaction picture carrying the one example interactive picture description records the framework-represented picture interaction information into the corresponding framework.
On the basis of the above embodiments, in an optional embodiment, obtaining the confidence difference after the interaction picture carrying one of the picture motion vectors records the pending picture interaction information comprises the following step: obtaining, from the frameworks in the target event distribution situation, the confidence difference after the interaction picture carrying the picture motion vector records the pending picture interaction information.
The confidence output by the confidence thread is obtained by loading the interactive picture description of the interaction picture and the comparison result characteristics in the event distribution situation into the confidence thread.

Finally, the confidence is cleaned with the confidence difference, so that the cleaned confidence is obtained.
On the basis of the above embodiments, in an optional embodiment, building the event distribution situation according to the processed interaction themes of at least one example interaction picture comprises the following steps.
S301, performing convolution processing on the processed interaction theme of the at least one example interaction picture to obtain the example interactive picture description of the at least one example interaction picture;

S302, loading the example interactive picture description into a previously configured optimization thread, and obtaining at least one piece of picture interaction information, output by the optimization thread, with the best similarity to the example interactive picture description.
S303, building an event distribution situation with the example interactive picture description as its picture text, and building a corresponding number of frameworks in the event distribution situation according to the number of the at least one piece of picture interaction information with the best similarity to the example interactive picture description, wherein each framework is used to represent one piece of picture interaction information with the best similarity to the example interactive picture description.
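A minimal sketch of S301–S303, under the assumption that the convolution of S301 and the optimization thread of S302 can be stood in for by caller-supplied functions (all names are illustrative):

```python
def build_event_distribution(processed_themes, extract_description,
                             rank_candidates, top_k=3):
    """Map each example picture description (the picture text) to the
    frameworks holding its best-similarity picture interaction information."""
    event_distribution = {}
    for theme in processed_themes:
        # S301: derive the picture description (the patent uses convolution)
        description = extract_description(theme)
        # S302: the optimization thread returns best-similarity candidates
        candidates = rank_candidates(description)[:top_k]
        # S303: one framework per candidate; a confidence difference can be
        # preloaded into each framework later, as in step S202
        event_distribution[description] = [
            {"interaction_info": c, "confidence_difference": None}
            for c in candidates
        ]
    return event_distribution
```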
On the basis of the above embodiments, in an optional embodiment, correcting the confidence after the virtual reality interaction picture records the pending picture interaction information according to the obtained confidence difference after the interaction picture carrying one of the picture activity vectors records the pending picture interaction information comprises the following steps.
S401, obtaining, for all picture activity vectors of the virtual reality interaction picture, the confidence differences after the interaction pictures carrying each of the picture activity vectors record the pending picture interaction information, and determining them as the confidence differences to be processed.

Because the virtual reality interaction picture carries at least one picture activity vector, and each picture activity vector has a confidence difference with each piece of pending picture interaction information, the embodiment of the present invention needs to obtain the confidence difference after the interaction picture carrying each of the picture activity vectors of the virtual reality interaction picture records the pending picture interaction information.

S402, performing depolarization variable weighting on the confidence differences to be processed, and determining the result as the confidence difference after the virtual reality interaction picture records the pending picture interaction information.
And S403, according to the confidence difference after the undetermined picture interaction information is recorded in the virtual reality interaction picture, correcting the confidence after the undetermined picture interaction information is recorded in the virtual reality interaction picture.
Further, the corrected confidence after the virtual reality interaction picture records the pending picture interaction information is obtained according to the fusion result of the confidence difference after the virtual reality interaction picture records the pending picture interaction information and the confidence after the virtual reality interaction picture records the pending picture interaction information.

On the basis of the above, please refer to fig. 2, which provides a virtual reality-based picture correction apparatus 200, applied to a virtual reality-based picture correction system, the apparatus comprising:
the information interaction module 210 is configured to obtain at least one frame motion vector of the virtual reality interaction frame; generating at least one piece of picture interaction information with the best similarity with one of the picture activity vectors for one of the picture activity vectors, and determining the picture interaction information to be determined corresponding to the one of the picture activity vectors;
a confidence coefficient obtaining module 220, configured to obtain a picture interaction decision basis of the to-be-determined picture interaction information, and obtain a confidence coefficient after the virtual reality interaction picture records the to-be-determined picture interaction information through the at least one picture motion vector;
an information correction module 230, configured to correct the confidence after the virtual reality interaction picture records the pending picture interaction information according to the obtained confidence difference after the interaction picture carrying the one picture activity vector records the pending picture interaction information, and to obtain the corrected confidence after the virtual reality interaction picture records the pending picture interaction information, wherein the confidence difference is obtained from the real-time confidence and the prediction confidence after the example interaction picture carrying the one picture activity vector records the pending picture interaction information;
and an information determining module 240, configured to perform clustering processing on the to-be-determined picture interaction information based on the corrected confidence level, and determine, according to a clustering processing result, at least one picture interaction information corresponding to the virtual reality interaction picture from the to-be-determined picture interaction information.
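The flow implemented by modules 220–240, together with steps S401–S403 above, can be sketched as follows. The fusion rule, the reading of "depolarization variable weight" as an average, and the reduction of the clustering in module 240 to a simple confidence cutoff are all assumptions for illustration, not the patent's definitive method:

```python
def correct_and_select(pending_items, threshold=0.5):
    """pending_items: dicts holding a base confidence and the per-activity-
    vector confidence differences; returns the retained interaction info."""
    selected = []
    for item in pending_items:
        diffs = item["confidence_differences"]          # S401: gather diffs
        avg_diff = sum(diffs) / len(diffs)              # S402: assumed average
        corrected = item["confidence"] + avg_diff       # S403: assumed fusion
        if corrected >= threshold:                      # module 240, with the
            selected.append(item["interaction_info"])   # clustering simplified
    return selected                                     # to a cutoff
```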
On the basis of the above, please refer to fig. 3, which shows a virtual reality-based picture correction system 300 comprising a processor 310 and a memory 320 that communicate with each other, wherein the processor 310 is configured to read a computer program from the memory 320 and execute the computer program to implement the above method.
On the basis of the above, a computer-readable storage medium is also provided, on which a computer program is stored that, when executed, implements the above method.
In summary, in the above solution, the picture motion vectors of the virtual reality interaction picture are obtained, the picture interaction information with the best similarity to each picture motion vector is generated and determined as the pending picture interaction information, and the confidence after the virtual reality interaction picture records the pending picture interaction information is then obtained from the picture interaction decision basis of the pending picture interaction information in combination with the picture motion vectors. This confidence is corrected with the obtained confidence difference after the interaction picture carrying one of the picture motion vectors records the pending picture interaction information, yielding the corrected confidence. Because the confidence difference must refer to both the interactive picture description and the comparison result, and the real-time confidence and the prediction confidence are likewise both associated with the interactive picture description and the comparison result, interference from various factors can be avoided as far as possible and the prediction accuracy of the confidence can be effectively improved, so that the at least one piece of picture interaction information corresponding to the virtual reality interaction picture can be obtained more reliably.
It should be appreciated that the system and its modules shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the broad application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages, and the like. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network format, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as a software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the foregoing description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the numbers allow for adaptive variation. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, documents, and the like, are hereby incorporated by reference into this application. Except where the application is filed in a manner inconsistent or contrary to the present disclosure, and except where the claim is filed in its broadest scope (whether present or later appended to the application) as well. It is noted that the descriptions, definitions and/or use of terms in this application shall control if they are inconsistent or contrary to the statements and/or uses of the present application in the material attached to this application.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the present application. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present application can be viewed as being consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to only those embodiments explicitly described and depicted herein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A picture correction method based on virtual reality is characterized by at least comprising the following steps:
obtaining at least one picture motion vector of the virtual reality interaction picture; generating at least one piece of picture interaction information with the best similarity with one of the picture activity vectors for one of the picture activity vectors, and determining the picture interaction information to be determined corresponding to the one of the picture activity vectors;
acquiring a picture interaction decision basis of the to-be-determined picture interaction information, and acquiring a confidence coefficient of the virtual reality interaction picture after the to-be-determined picture interaction information is recorded through the at least one picture activity vector;
according to the obtained confidence difference of the interactive picture carrying one picture activity vector and recording the interactive information of the to-be-determined picture, correcting the confidence of the virtual reality interactive picture after recording the interactive information of the to-be-determined picture, and obtaining the corrected confidence of the virtual reality interactive picture recording the interactive information of the to-be-determined picture, wherein the confidence difference is obtained by the real-time confidence and the prediction confidence of the example interactive picture carrying one picture activity vector after recording the interactive information of the to-be-determined picture;
and clustering the interaction information of the to-be-determined picture based on the corrected confidence coefficient, and determining at least one picture interaction information corresponding to the virtual reality interaction picture from the interaction information of the to-be-determined picture according to a clustering result.
2. The virtual reality-based picture correction method according to claim 1, wherein the determining that not less than one picture interaction information with the best similarity to the one picture motion vector is determined as the picture interaction information to be determined further comprises: obtaining an interaction theme which is not less than one example interaction picture and is processed completely, and constructing an event distribution situation according to the interaction theme which is not less than one example interaction picture and is processed completely, wherein the interaction theme which is processed completely comprises a real-time optimization mode of the example interaction picture after picture interaction information is recorded, the event distribution situation is characterized in that one interaction picture of the example interaction picture determined from the interaction theme which is processed completely is described as a picture text, and a structure in the event distribution situation is used for representing picture interaction information which is determined from the interaction theme which is processed completely and has the best similarity with the picture text of the event distribution situation;
the determining that at least one picture interaction information with the best similarity to the motion vector of one picture is determined as the interaction information of the picture to be determined comprises the following steps: searching the event distribution condition of one of the picture motion vectors in the picture text, and determining the event distribution condition as a target event distribution condition; and determining the picture interaction information represented by the framework in the target event distribution situation as the pending picture interaction information.
3. The virtual reality-based picture correction method of claim 2, wherein the processed interaction theme further comprises a prediction confidence level after the example interaction picture records the picture interaction information; the correcting the confidence coefficient after the virtual reality interactive picture records the interactive information of the picture to be determined further comprises the following steps:
generating a confidence difference after the example interactive picture recording picture interaction information carrying one of the example interactive picture descriptions based on one of the example interactive picture descriptions, the prediction confidence after the example interactive picture recording picture interaction information and a real-time optimization mode;
determining the frame interaction information represented by the framework in the event distribution situation taking one of the example interaction frame descriptions as the frame text, and loading the confidence difference of the frame interaction information represented by the framework recorded in the example interaction frame carrying one of the example interaction frame descriptions into the corresponding framework;
the correcting the confidence coefficient after the virtual reality interactive picture records the interactive information of the picture to be determined further comprises the following steps: and obtaining the confidence difference of the interaction picture carrying the picture activity vector after the interaction information of the picture to be determined is recorded in the framework in the distribution situation of the target event.
4. The virtual reality-based picture correction method of claim 2, wherein the building of the event distribution according to the processed interaction theme of not less than one example interaction picture comprises:
performing convolution processing on the processed interactive theme of the at least one example interactive picture to obtain an example interactive picture description of the at least one example interactive picture;
loading the example interactive picture description to a previously configured optimization thread, and acquiring information of not less than one picture interactive information which is output by the optimization thread and has the best similarity with the example interactive picture description;
and constructing an event distribution case with the example interactive picture description as a picture text, and constructing a corresponding number of structures in the event distribution case according to the number of not less than one picture interactive information with the best similarity to the example interactive picture description, wherein each structure is used for representing one picture interactive information with the best similarity to the example interactive picture description.
5. The method of claim 3, wherein the generating, based on the one sample interaction picture description, the prediction confidence of the sample interaction pictures after recording picture interaction information, and a real-time optimization method, of the confidence difference of the sample interaction pictures carrying the one sample interaction picture description after they record the picture interaction information comprises:
constructing, based on the processed interaction theme of the sample interaction pictures and the one sample interaction picture description, an association among the one sample interaction picture description, the recorded picture interaction information, the record quantity, the optimization quantity, and the prediction confidence of the sample interaction pictures;
generating, for one piece of picture interaction information related to the processed interaction theme and based on the association, the global record quantity, the global optimization quantity, and the depolarization variable of the prediction confidence of all sample interaction pictures carrying the one sample interaction picture description after they record that picture interaction information;
obtaining, based on the global record quantity and the global optimization quantity, the real-time confidence of all sample interaction pictures carrying the one sample interaction picture description after they record the picture interaction information; and obtaining, based on the real-time confidence and the depolarization variable of the prediction confidence, the confidence difference of the sample interaction pictures carrying the one sample interaction picture description after they record the picture interaction information.
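One plausible concrete reading of claim 5, with the comparison of claim 6 taken as subtraction, is sketched below. Every interpretation here is an assumption: the real-time confidence is read as the ratio of the global optimization quantity to the global record quantity, and the "depolarization variable" is read as the arithmetic mean of the prediction confidences — the patent defines neither.

```python
def confidence_difference(record_counts, optimize_counts, predicted_confidences):
    """Hypothetical sketch of claims 5-6. Assumptions (not stated in the
    patent): real-time confidence = global optimization quantity / global
    record quantity; depolarization variable = mean of prediction
    confidences; comparison result = subtraction."""
    global_records = sum(record_counts)      # global record quantity
    global_optimized = sum(optimize_counts)  # global optimization quantity
    real_time = global_optimized / global_records if global_records else 0.0

    # "Depolarization variable" interpreted as the arithmetic mean.
    predicted_mean = sum(predicted_confidences) / len(predicted_confidences)

    # Confidence difference as real-time confidence minus the prediction mean.
    return real_time - predicted_mean
```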
6. The method of claim 5, wherein the obtaining of the confidence difference of the sample interaction pictures carrying the one sample interaction picture description after they record the picture interaction information, based on the real-time confidence and the depolarization variable of the prediction confidence, comprises: determining the comparison result of the real-time confidence and the depolarization variable of the prediction confidence of all sample interaction pictures carrying the one sample interaction picture description after they record the picture interaction information as the confidence difference of the sample interaction pictures carrying the one sample interaction picture description after they record the picture interaction information.
7. The virtual reality-based picture correction method of claim 3, wherein the correcting of the confidence of the virtual reality interaction picture after it records the to-be-determined picture interaction information, according to the obtained confidence difference of the interaction pictures carrying the one picture activity vector after recording the to-be-determined picture interaction information, comprises:
obtaining, as the to-be-processed confidence differences, the confidence differences after recording the to-be-determined picture interaction information of the interaction pictures carrying each picture activity vector of the virtual reality interaction picture; performing depolarization-variable weighting on the to-be-processed confidence differences to determine the confidence difference of the virtual reality interaction picture after it records the to-be-determined picture interaction information; and correcting, based on that confidence difference, the confidence of the virtual reality interaction picture after it records the to-be-determined picture interaction information.
8. The virtual reality-based picture correction method of claim 7, wherein the correcting of the confidence of the virtual reality interaction picture after it records the to-be-determined picture interaction information, based on the confidence difference of the virtual reality interaction picture after it records the to-be-determined picture interaction information, comprises: obtaining the corrected confidence of the virtual reality interaction picture based on the fusion result of that confidence difference and the confidence of the virtual reality interaction picture after it records the to-be-determined picture interaction information.
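Claims 7 and 8 together amount to aggregating per-vector confidence differences and folding the aggregate back into the original confidence. The sketch below is again a hypothetical reading, not the patented method: the depolarization-variable weighting is assumed to be a plain mean, and the fusion of claim 8 is assumed to be addition clamped to the unit interval.

```python
def correct_confidence(confidence, per_vector_diffs):
    """Hypothetical sketch of claims 7-8. Assumptions (not stated in the
    patent): depolarization-variable weighting = arithmetic mean of the
    to-be-processed confidence differences; fusion = addition, with the
    corrected confidence clamped to [0, 1]."""
    # Aggregate the confidence differences gathered over all picture
    # activity vectors of the virtual reality interaction picture.
    overall_diff = sum(per_vector_diffs) / len(per_vector_diffs)

    # Fuse the aggregate difference with the original confidence.
    corrected = confidence + overall_diff
    return max(0.0, min(1.0, corrected))
```

The clamp is an added safety choice, not something the claims require; any monotone fusion (weighted sum, product, etc.) would fit the claim language equally well.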
9. A virtual reality based picture correction system comprising a processor and a memory in communication with each other, the processor being configured to read a computer program from the memory and execute the computer program to implement the method of any one of claims 1 to 8.
CN202210678903.XA 2022-06-16 2022-06-16 Virtual reality-based picture correction method and system Pending CN115079881A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210678903.XA CN115079881A (en) 2022-06-16 2022-06-16 Virtual reality-based picture correction method and system

Publications (1)

Publication Number Publication Date
CN115079881A true CN115079881A (en) 2022-09-20

Family

ID=83254100

Country Status (1)

Country Link
CN (1) CN115079881A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination