CN115291723A - Cloud film playing processing method and system based on pupil position identification - Google Patents


Info

Publication number
CN115291723A
CN115291723A (application CN202210918218.XA)
Authority
CN
China
Prior art keywords
playing data
cloud
film playing
cloud film
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210918218.XA
Other languages
Chinese (zh)
Inventor
杨尉
戴风鹏
王灏宏
Current Assignee
Guangzhou Yingqing Electronic Technology Co ltd
Original Assignee
Guangzhou Yingqing Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Yingqing Electronic Technology Co ltd filed Critical Guangzhou Yingqing Electronic Technology Co ltd
Priority to CN202210918218.XA priority Critical patent/CN115291723A/en
Publication of CN115291723A publication Critical patent/CN115291723A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 Querying
    • G06F 16/438 Presentation of query results
    • G06F 16/4387 Presentation of query results by the use of playlists
    • G06F 16/4393 Multimedia presentations, e.g. slide shows, multimedia albums
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/45 Clustering; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

According to the cloud film playing processing method and system based on pupil position identification, optimization of first cloud film playing data can be executed through at least one sample cloud film playing data. Because the sample cloud film playing data includes key description content of the first cloud film playing data, the obtained optimized cloud film playing data is more accurate and reliable than the first cloud film playing data. Even when the first cloud film playing data performs poorly, it can still be optimized to obtain accurate optimized cloud film playing data; that is, optimization can conveniently be executed based on the plurality of sample cloud film playing data. Differentiated film playing upgrade processing is then carried out according to the optimized cloud film playing data, improving film viewing quality.

Description

Cloud film playing processing method and system based on pupil position identification
Technical Field
The application relates to the technical field of data processing, in particular to a cloud film playing processing method and system based on pupil position identification.
Background
A cloud film can be understood as an "Internet +" film: it relies on an Internet platform and terminals, together with big-data analysis of users' viewing interests and requirements. In actual operation, when a cloud film is played, viewers sit in different positions, so the viewing effect may be uneven. A technical solution is therefore needed to improve on this problem.
Disclosure of Invention
In order to solve the technical problems in the related art, the application provides a cloud film playing processing method and system based on pupil position identification.
In a first aspect, a cloud film playing processing method based on pupil position identification is provided, where the method at least includes: determining first cloud film playing data; determining at least one sample cloud film playing data of the first cloud film playing data, wherein the sample cloud film playing data covers sample data of a reference pupil position in the first cloud film playing data; and performing targeted optimization on the first cloud film playing data through at least one sample cloud film playing data of the first cloud film playing data to obtain optimized cloud film playing data.
In an independently implemented embodiment, the determining at least one sample cloud film playing data of the first cloud film playing data includes: determining spatial positioning data of the first cloud film playing data, wherein the spatial positioning data covers X identification tags of reference pupil positions in the first cloud film playing data; and determining sample cloud film playing data associated with at least one reference indication of the reference pupil position through the spatial positioning data of the first cloud film playing data; wherein X is a positive integer.
In an independently implemented embodiment, the performing targeted optimization on the first cloud film playing data through at least one sample cloud film playing data of the first cloud film playing data to obtain optimized cloud film playing data includes: optimizing the at least one sample cloud film playing data according to the real-time watching condition of the reference pupil position in the first cloud film playing data to obtain optimized cloud film playing data bound with the sample cloud film playing data under the real-time watching condition; selecting local cloud film playing data with at least one reference indication from the optimized cloud film playing data bound with the sample cloud film playing data, through at least one reference indication in the at least one sample cloud film playing data that is associated with the reference pupil position; and obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the first cloud film playing data.
In an independently implemented embodiment, the obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the first cloud film playing data includes: iterating an instruction bound with a reference indication in the first cloud film playing data through the selected local cloud film playing data to obtain the optimized cloud film playing data, or performing a feature extraction operation on the local cloud film playing data and the first cloud film playing data to obtain the optimized cloud film playing data.
In an independently implemented embodiment, the performing targeted optimization on the first cloud film playing data through at least one sample cloud film playing data of the first cloud film playing data to obtain optimized cloud film playing data includes: performing cloud film playing data updating processing on the first cloud film playing data to obtain second cloud film playing data, wherein a quantitative analysis index of the second cloud film playing data exceeds that of the first cloud film playing data; optimizing the at least one sample cloud film playing data according to the real-time watching condition of the reference pupil position in the second cloud film playing data to obtain optimized cloud film playing data bound with the sample cloud film playing data under the real-time watching condition; selecting local cloud film playing data with at least one reference indication from the optimized cloud film playing data bound with the sample cloud film playing data, through at least one reference indication in the at least one sample cloud film playing data that is associated with the pupil position; and obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the second cloud film playing data.
In an independently implemented embodiment, the obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the second cloud film playing data includes: iterating an instruction bound with a reference indication in the second cloud film playing data through the selected local cloud film playing data to obtain the optimized cloud film playing data, or performing a feature extraction operation on the local cloud film playing data and the second cloud film playing data to obtain the optimized cloud film playing data.
In an independently implemented embodiment, the method further comprises: reading a label through the optimized cloud film playing data, and determining label data associated with the pupil position.
In an independently implemented embodiment, the cloud film playing data updating processing is performed on the first cloud film playing data through a first AI thread to obtain the second cloud film playing data, and the method further includes a step of training the first AI thread, including: determining a first training cloud film playing data cluster, wherein the first training cloud film playing data cluster includes a plurality of first training cloud film playing data and first cloud film description content bound with the first training cloud film playing data; loading at least one first training cloud film playing data in the first training cloud film playing data cluster to the first AI thread to execute the cloud film playing data updating processing, so as to obtain pre-training cloud film playing data bound with the first training cloud film playing data; loading the pre-training cloud film playing data to a first comparison thread, a first description screening thread, and a first cloud film playing data classification thread respectively, so as to obtain a distinguishing result, a description screening result, and a cloud film playing data classification result for the pre-training cloud film playing data; and obtaining a first thread quantitative evaluation result by combining the distinguishing result, the description screening result, and the cloud film playing data classification result of the pre-training cloud film playing data, and updating the calculation vector of the first AI thread according to the first thread quantitative evaluation result until a first training condition is met.
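The training procedure above can be sketched in code. This is a minimal illustrative loop only, under heavy assumptions: the patent does not specify the update step, the three auxiliary scoring threads, or the form of the calculation vector, so every function and the scalar "vector" here are hypothetical placeholders.

```python
# Hedged sketch of the described training loop: run each first training
# cloud film playing data through the update step, score the pre-training
# output with three auxiliary scorers (standing in for the comparison,
# description screening, and classification threads), combine the scores
# into a single thread quantitative evaluation, and update the calculation
# vector until the training condition is met. All details are assumptions.

def train_first_thread(cluster, update_fn, scorers, lr=0.1, target=0.05):
    vector = 1.0                        # stands in for the calculation vector
    for sample in cluster:
        pre_trained = update_fn(sample, vector)
        scores = [score(pre_trained) for score in scorers]
        evaluation = sum(scores) / len(scores)  # combined evaluation result
        vector -= lr * evaluation               # update the calculation vector
        if evaluation <= target:                # first training condition
            break
    return vector
```

In this toy form the "training condition" is simply a threshold on the combined evaluation; a real system would use whatever convergence criterion the thread defines.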
In an independently implemented embodiment, the obtaining a first thread quantitative evaluation result by combining the distinguishing result, the description screening result, and the cloud film playing data classification result of the pre-training cloud film playing data bound with the first training cloud film playing data includes: determining a first key semantic quantitative evaluation result through the pre-training cloud film playing data bound with the first training cloud film playing data and first example cloud film playing data bound with the first training cloud film playing data in the first cloud film description content; obtaining a first comparison quantitative evaluation result through the distinguishing result of the pre-training cloud film playing data and the distinguishing result of the first comparison thread on the first example cloud film playing data; determining a first mining quantitative evaluation result through artificial intelligence thread operations on the pre-training cloud film playing data and the first example cloud film playing data; obtaining a first significance quantitative evaluation result through the description screening result of the pre-training cloud film playing data and the first example description in the first cloud film description content; obtaining a first distinguishing quantitative evaluation result through the cloud film playing data classification result of the pre-training cloud film playing data and a first example classification result bound with a first training sample in the first cloud film description content; and obtaining the first thread quantitative evaluation result through integration processing of the first comparison quantitative evaluation result, the first key semantic quantitative evaluation result, the first mining quantitative evaluation result, the first significance quantitative evaluation result, and the first distinguishing quantitative evaluation result.
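The final "integration processing" step combines five component results into one thread-level result. The patent does not state the integration rule, so the weighted sum below is only one plausible assumption, and the function name is invented for illustration.

```python
# Assumed sketch: integrate the five quantitative evaluation results
# (comparison, key semantic, mining, significance, distinguishing) into
# the first thread quantitative evaluation result via a weighted sum.

def integrate_evaluations(results, weights=None):
    """Combine component evaluation results into one thread-level result."""
    if weights is None:
        weights = [1.0] * len(results)      # unweighted by default
    if len(weights) != len(results):
        raise ValueError("one weight per evaluation result")
    return sum(w * r for w, r in zip(weights, results))
```

Unequal weights would let a system emphasize, say, the comparison result over the mining result; nothing in the source dictates a particular weighting.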
In an independently implemented embodiment, the targeted optimization is performed by a second AI thread to obtain the optimized cloud film playing data, and the method further includes a step of training the second AI thread, including: determining a second training cloud film playing data cluster, wherein the second training cloud film playing data cluster includes second training cloud film playing data, sample training cloud film playing data bound with the second training cloud film playing data, and second cloud film description content; optimizing the sample training cloud film playing data through the second training cloud film playing data to obtain training optimized cloud film playing data, loading the training optimized cloud film playing data and the second training cloud film playing data to the second AI thread, and performing targeted optimization on the second training cloud film playing data to obtain optimized pre-training cloud film playing data of the second training cloud film playing data; loading the optimized pre-training cloud film playing data to a second comparison thread, a second description screening thread, and a second cloud film playing data classification thread respectively, so as to obtain a distinguishing result, a description screening result, and a cloud film playing data classification result for the optimized pre-training cloud film playing data; and obtaining a second thread quantitative evaluation result of the second AI thread by combining the distinguishing result, the description screening result, and the cloud film playing data classification result of the optimized pre-training cloud film playing data, and updating the calculation vector of the second AI thread according to the second thread quantitative evaluation result until a second training condition is met.
In an independently implemented embodiment, the obtaining a second thread quantitative evaluation result of the second AI thread by combining the distinguishing result, the description screening result, and the cloud film playing data classification result of the optimized pre-training cloud film playing data includes: combining the distinguishing result, the description screening result, and the cloud film playing data classification result of the optimized pre-training cloud film playing data bound with the second training cloud film playing data to obtain an integral quantitative evaluation result and a partial quantitative evaluation result; and obtaining the second thread quantitative evaluation result through integrated processing of the integral quantitative evaluation result and the partial quantitative evaluation result.
In an independently implemented embodiment, the combining the distinguishing result, the description screening result, and the cloud film playing data classification result of the optimized pre-training cloud film playing data bound with the second training cloud film playing data to obtain an integral quantitative evaluation result includes: determining a second key semantic quantitative evaluation result through the optimized pre-training cloud film playing data bound with the second training cloud film playing data and second example cloud film playing data bound with the second training cloud film playing data in the second cloud film description content; obtaining a second comparison quantitative evaluation result through the distinguishing result of the optimized pre-training cloud film playing data and the distinguishing result of the second comparison thread on the second example cloud film playing data; determining a second mining quantitative evaluation result through artificial intelligence thread operations on the optimized pre-training cloud film playing data and the second example cloud film playing data; obtaining a second significance quantitative evaluation result through the description screening result of the optimized pre-training cloud film playing data and the second example description in the second cloud film description content; obtaining a second distinguishing quantitative evaluation result through the cloud film playing data classification result of the optimized pre-training cloud film playing data and the second example classification result in the second cloud film description content; and obtaining the integral quantitative evaluation result through integration processing of the second comparison quantitative evaluation result, the second key semantic quantitative evaluation result, the second mining quantitative evaluation result, the second significance quantitative evaluation result, and the second distinguishing quantitative evaluation result.
In an independently implemented embodiment, the combining the distinguishing result, the description screening result, and the cloud film playing data classification result of the optimized pre-training cloud film playing data bound with the second training cloud film playing data to obtain a partial quantitative evaluation result includes: selecting at least one indicated local cloud film playing data in the optimized pre-training cloud film playing data, and loading the at least one indicated local cloud film playing data to a comparison thread, a description screening thread, and a cloud film playing data classification thread respectively, so as to obtain a distinguishing result, a description screening result, and a cloud film playing data classification result of the at least one indicated local cloud film playing data; determining a third comparison quantitative evaluation result of the at least one indication through the distinguishing result of the at least one indicated local cloud film playing data and the distinguishing result of the second comparison thread on the at least one indicated local cloud film playing data in the second example cloud film playing data bound with the second training cloud film playing data; obtaining a third significance quantitative evaluation result of the at least one indication through the description screening result of the at least one indicated local cloud film playing data and the example description of the at least one indication in the second cloud film description content; obtaining a third distinguishing quantitative evaluation result of the at least one indication through the cloud film playing data classification result of the at least one indicated local cloud film playing data and the example classification result of the at least one indication in the second cloud film description content; and obtaining the partial quantitative evaluation result of the thread through integration processing of the third comparison quantitative evaluation result, the third significance quantitative evaluation result, and the third distinguishing quantitative evaluation result of the at least one indication.
In a second aspect, a cloud film playing processing system based on pupil position identification is provided, which includes a processor and a memory, which are in communication with each other, and the processor is configured to retrieve a computer program from the memory and implement the above method by running the computer program.
According to the cloud film playing processing method and system based on pupil position identification, optimization of first cloud film playing data can be executed through at least one sample cloud film playing data. Because the sample cloud film playing data includes key description content of the first cloud film playing data, the obtained optimized cloud film playing data is more accurate and reliable than the first cloud film playing data. Even when the first cloud film playing data performs poorly, accurate optimized cloud film playing data can still be obtained by optimizing the sample cloud film playing data; that is, the cloud film playing data can conveniently be optimized based on the plurality of sample cloud film playing data. This guarantees that differentiated film playing upgrade processing is carried out according to the optimized cloud film playing data, improving film viewing quality.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a cloud film playing processing method based on pupil position identification according to an embodiment of the present application.
Fig. 2 is a block diagram of a cloud film playing processing device based on pupil position identification according to an embodiment of the present disclosure.
Fig. 3 is an architecture diagram of a cloud film playing processing system based on pupil position identification according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions, the technical solutions of the present application are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions, not limitations of them, and the technical features in the embodiments and examples may be combined with each other provided there is no conflict.
Referring to fig. 1, a cloud film playing processing method based on pupil position identification is shown, and the method may include the technical solutions described in step10-step30 below.
step10: and determining first cloud film playing data.
In a possible implementation embodiment, the pupil position of the cloud film playing data to be processed (that is, the first cloud film playing data) may be determined first. The first cloud film playing data in the embodiments of the present application may be understood as data whose quantitative analysis index has low accuracy.
step20: determining that at least one sample cloud film playing data of the first cloud film playing data is included, wherein the sample cloud film playing data covers sample data of a reference pupil position in the first cloud film playing data.
In one possible implementation, the first cloud film playing data may have at least one sample cloud film playing data. The sample cloud film playing data includes sample data of the reference pupil position in the first cloud film playing data, and may include, for example, sample data of at least one reference indication of the reference pupil position.
In a possible implementation example, sample cloud film playing data associated with the first cloud film playing data may already exist, or the sample cloud film playing data may be obtained according to determined spatial positioning data about the reference pupil position. The spatial positioning data may include X kinds of identification tags of the reference pupil position; for example, when the reference pupil position is a local pupil position, the spatial positioning data may include X identification tags of reference indications about the local pupil position. Alternatively, the spatial positioning data may directly include global spatial positioning data of the reference pupil position in the first cloud film playing data, for example, determined spatial positioning data used to locate the pupil position. Through the spatial positioning data, cloud film playing data identical to at least one reference indication of the reference pupil position of the first cloud film playing data, or cloud film playing data containing a pupil position consistent with the pupil position in the first cloud film playing data, can be determined. Each item of identical cloud film playing data, or of cloud film playing data containing a consistent pupil position, obtained in this way can be understood as sample cloud film playing data.
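The tag-matching idea above can be sketched as follows. The record layout ("tags" sets) and function name are hypothetical illustrations, since the patent specifies no concrete data structures.

```python
# Illustrative sketch: sample cloud film playing data is modeled as records
# whose identification tags overlap the reference pupil position tags of
# the first playing data. All names here are assumptions, not the patent's.

def select_sample_data(first_tags, candidates):
    """Return candidate records sharing at least one identification tag."""
    first = set(first_tags)
    return [c for c in candidates if first & set(c["tags"])]

candidates = [
    {"id": "a", "tags": {"pupil_left", "seat_row_3"}},
    {"id": "b", "tags": {"pupil_right"}},
    {"id": "c", "tags": {"pupil_left"}},
]
samples = select_sample_data({"pupil_left", "seat_row_3"}, candidates)
```

Here records "a" and "c" would be treated as sample cloud film playing data because each shares at least one tag with the first playing data.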
step30: and performing targeted optimization on the first cloud film playing data through at least one sample cloud film playing data of the first cloud film playing data to obtain optimized cloud film playing data.
After the at least one sample cloud film playing data bound to the first cloud film playing data is obtained, optimization of the first cloud film playing data can be executed according to the obtained data. Since the sample cloud film playing data includes sample data of at least one reference indication of the reference pupil position in the first cloud film playing data, the first cloud film playing data can be optimized in a targeted manner according to that sample data. Even when the first cloud film playing data is of severely insufficient quality, more accurate optimized cloud film playing data can still be obtained based on the sample data.
In a possible implementation embodiment, the sample cloud movie playing data indicated by the corresponding reference may be directly iterated to the first cloud movie playing data, so as to obtain optimized cloud movie playing data.
In a possible implementation embodiment, the optimized cloud movie playing data may also be obtained based on the sample cloud movie playing data and the feature extraction operation of the first cloud movie playing data.
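The two routes just described, direct iteration and feature extraction, can be sketched as below, under the simplifying assumption that playing data behaves like a numeric sequence. Neither function comes from the patent; both are placeholders.

```python
# Route 1: direct iteration overwrites the reference-indicated positions of
# the first playing data with the selected local sample data.
# Route 2: "feature extraction" is stood in for by a simple weighted blend.

def iterate_replace(first, local, indices):
    """Overwrite the reference-indicated positions of `first`."""
    out = list(first)
    for i, value in zip(indices, local):
        out[i] = value
    return out

def feature_blend(first, sample, weight=0.5):
    """Blend the two signals element-wise as a stand-in for extraction."""
    return [weight * a + (1 - weight) * b for a, b in zip(first, sample)]
```

The choice between the two routes mirrors the text: iteration keeps everything outside the indicated region untouched, while blending combines information from both sources everywhere.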
In a possible implementation embodiment, because the positioning of the pupil position in the obtained sample cloud film playing data may differ from the positioning of the reference pupil position in the first cloud film playing data, each sample cloud film playing data needs to be compared with the first cloud film playing data. The positioning of the pupil position in the sample cloud film playing data is trimmed to be the same as the positioning of the reference pupil position in the first cloud film playing data, and optimization processing is then executed on the first cloud film playing data through the trimmed and positioned sample cloud film playing data. These steps improve the reliability of the optimized cloud film playing data.
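The "trimming" (alignment) step can be sketched minimally, under the assumption that a pupil localization is a 2-D point: the sample's coordinates are shifted so its center coincides with the reference localization before optimization. The point model and names are illustrative only.

```python
# Minimal alignment sketch: translate the sample's pupil coordinates so
# sample_center maps onto the reference localization in the first data.

def align_sample(sample_center, reference_center, sample_points):
    """Translate sample_points so sample_center maps onto reference_center."""
    dx = reference_center[0] - sample_center[0]
    dy = reference_center[1] - sample_center[1]
    return [(x + dx, y + dy) for (x, y) in sample_points]
```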
In this embodiment, optimization of the first cloud film playing data is realized based on at least one sample cloud film playing data of the first cloud film playing data, and the obtained optimized cloud film playing data draws on the sample data of each sample cloud film playing data, so it has high accuracy and reliability.
With reference to the cloud film playing processing method based on pupil position identification in an embodiment of the present application, the step of determining at least one sample cloud film playing data of the first cloud film playing data may specifically include the following steps.
step21: and determining the spatial positioning data of the first cloud film playing data.
Further, the spatial positioning data of the first cloud film playing data may include an identification tag (or key spatial positioning data) of at least one reference indication of a reference pupil position in the first cloud film playing data.
step22: and determining sample cloud film playing data associated with at least one reference indication of the pupil position through the spatial positioning data of the first cloud film playing data.
After the spatial positioning data is obtained, sample cloud film playing data associated with the pupil position in the first cloud film playing data can be determined according to the spatial positioning data.
In a possible implementation example, the spatial positioning data may also include tag data about a first pupil position in the first cloud film playing data; in this case, cloud film playing data associated with the tag data may be screened out from the data center based on the tag data as sample cloud film playing data.
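Screening a data center by tag data can be modeled as an inverted-index lookup. Nothing below is specified by the patent; the index shape and identifiers are invented for illustration.

```python
# Hypothetical sketch: the data center is modeled as an inverted index from
# identification tag to playing-data identifiers, and screening returns the
# identifiers associated with the first pupil position's tag data.

def screen_by_tag(data_center_index, tag_data):
    """Return identifiers of candidate sample cloud film playing data."""
    return sorted(data_center_index.get(tag_data, set()))

index = {
    "pupil_left": {"clip_2", "clip_7"},
    "pupil_right": {"clip_4"},
}
```

A real data center would likely shard or pre-aggregate such an index, but the lookup semantics would be the same.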
Through the above steps, the cloud film playing data associated with at least one reference indication of the pupil position in the first cloud film playing data can be determined based on the spatial positioning data, and optimizing the first cloud film playing data through this cloud film playing data improves the accuracy of the result.
After the sample cloud film playing data is obtained, optimization of the cloud film playing data can be executed according to it. Besides directly iterating the sample cloud film playing data onto the corresponding reference indication of the first cloud film playing data, the embodiments of the present application may also first perform optimization processing on the sample cloud film playing data and then execute iteration or feature extraction to obtain the optimized cloud film playing data.
With reference to the cloud film playing processing method based on pupil position identification in the embodiments of the present application, the step of performing targeted optimization on the first cloud film playing data through at least one sample cloud film playing data to obtain optimized cloud film playing data may specifically include the following steps.
step31: and optimizing the at least one sample cloud film playing data according to the real-time watching condition of the reference pupil position in the first cloud film playing data to obtain optimized cloud film playing data bound with the sample cloud film playing data in the real-time watching condition.
In a possible implementation, since the positioning of the pupil position in the obtained sample cloud film playing data may differ from the positioning of the pupil position in the first cloud film playing data, each sample cloud film playing data needs to be compared with the first cloud film playing data, that is, the positioning of the pupil position in the sample cloud film playing data is made consistent with the positioning of the reference pupil position in the first cloud film playing data.
According to the embodiment of the application, the sample cloud film playing data can be optimized through this optimization processing step, so that the positioning of the pupil position in the optimized sample cloud film playing data (namely, the optimized cloud film playing data) is consistent with the positioning of the reference pupil position in the first cloud film playing data.
Through the above content, at least one optimized cloud film playing data consistent with the positioning in the first cloud film playing data can be obtained (each sample cloud film playing data is optimized into one optimized cloud film playing data), so that the optimized cloud film playing data can be compared with the first cloud film playing data.
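The positioning-consistency requirement described above resembles a registration step. The minimal sketch below assumes a pupil position can be modeled as a 2-D coordinate and computes the offset that maps a sample's pupil position onto the reference pupil position; the coordinate representation and function names are assumptions, not something the disclosure specifies.

```python
# Hypothetical alignment: shift a sample so its pupil position coincides
# with the reference pupil position in the first cloud film playing data.
def align_sample(sample_position, reference_position):
    """Return the (dx, dy) offset mapping the sample pupil onto the reference."""
    dx = reference_position[0] - sample_position[0]
    dy = reference_position[1] - sample_position[1]
    return dx, dy

def apply_offset(points, offset):
    """Translate every point of the sample by the computed offset."""
    dx, dy = offset
    return [(x + dx, y + dy) for x, y in points]

reference = (120, 80)          # reference pupil position (assumed coordinates)
sample_pupil = (100, 90)       # pupil position in one sample (assumed)
offset = align_sample(sample_pupil, reference)
aligned = apply_offset([sample_pupil], offset)
```

After applying the offset, the sample's pupil position coincides with the reference, which is the consistency condition the text requires.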
step32: and selecting local cloud film playing data with at least one reference indication from optimized cloud film playing data bound with the sample cloud film playing data through at least one reference indication in the at least one sample cloud film playing data, wherein the reference indication is associated with the reference pupil position.
Because the obtained sample cloud film playing data is the cloud film playing data associated with at least one reference indication in the first cloud film playing data, after the optimized cloud film playing data bound to each sample cloud film playing data is obtained through optimization processing, the local cloud film playing data of each sample indication can be screened from the optimized cloud film playing data based on the sample indication (the reference indication associated with the pupil position) bound to each sample cloud film playing data. That is, the local cloud film playing data indicated by the reference indication associated with the pupil position in the first cloud film playing data is distinguished from the optimized cloud film playing data.
step33: and obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the first cloud film playing data.
After the local cloud film playing data of at least one reference indication of the reference pupil position is obtained, cloud film playing data optimization can be performed through the obtained local cloud film playing data and the first cloud film playing data, and optimized cloud film playing data are obtained.
In one possible implementation, since each local cloud film playing data may be associated with not less than one reference indication in the pupil position of the first cloud film playing data, the cloud film playing data for which an associated indication exists in the local cloud film playing data may be iterated into the corresponding indication in the first cloud film playing data.
Alternatively, in a possible implementation, the optimized cloud film playing data may also be obtained through a feature extraction operation on the local cloud film playing data and the first cloud film playing data. The cloud film playing data can be loaded into the AI thread, which executes at least one feature extraction operation to achieve feature-level optimization of the cloud film playing data and finally outputs the optimized key content; the optimized cloud film playing data bound to that optimized key content can then be obtained.
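The simpler iteration path of step33 can be sketched as follows, under the assumption that cloud film playing data can be modeled as a mapping from indications to regions; all names and the data model are hypothetical, and the feature-extraction path through the AI thread is not shown.

```python
# Illustrative "iteration" of step33: indicated local data overwrites the
# corresponding indications of the first cloud film playing data.
def iterate_into_reference(first_data, local_data):
    """Replace each indicated region of the first data with the local data."""
    merged = dict(first_data)           # keep untouched indications
    for indication, region in local_data.items():
        merged[indication] = region     # associated indication is iterated in
    return merged

first_data = {"region_a": "blurred", "region_b": "blurred"}
local_data = {"region_a": "optimized"}   # only region_a has an association
optimized = iterate_into_reference(first_data, local_data)
```

Only the indication with an association (`region_a` here) is replaced; the rest of the first cloud film playing data passes through unchanged, matching the behaviour the text describes.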
For a feasible embodiment, in order to further improve the accuracy and reliability of the optimized cloud film playing data, the first cloud film playing data may also be processed to obtain second cloud film playing data whose quantitative analysis index exceeds that of the first cloud film playing data, and the optimization is then performed through the second cloud film playing data. In the cloud film playing processing method based on pupil position identification of the embodiment of the application, optimizing the first cloud film playing data through at least one sample cloud film playing data of the first cloud film playing data to obtain the optimized cloud film playing data may specifically include the following steps.
step301: and performing cloud film playing data updating processing on the first cloud film playing data to obtain second cloud film playing data, wherein the quantitative analysis index of the second cloud film playing data exceeds the quantitative analysis index of the first cloud film playing data.
In a possible implementation, on the basis of the obtained first cloud film playing data, a cloud film playing data update may be performed to obtain second cloud film playing data with an improved quantitative analysis index. The updating process may derive the high-index cloud film playing data from the low-index cloud film playing data or from a cloud film playing data sequence.
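A minimal sketch of the updating idea follows, assuming (purely for illustration) that the quantitative analysis index can be modeled as a simple sample count; the real updating process, which derives high-index data from low-index data or a data sequence, is not specified by the disclosure.

```python
# Stand-in for the cloud film playing data update of step301: produce second
# data whose quantitative analysis index (modeled here as element count)
# exceeds that of the first data. The repetition factor is an assumption.
def update_playing_data(first_data, factor=2):
    """Repeat each element 'factor' times as a placeholder for upgrading."""
    return [item for item in first_data for _ in range(factor)]

first = [0.1, 0.4, 0.9]
second = update_playing_data(first)
```

Any real updating scheme would replace the repetition with an actual quality-improving transform; the sketch only demonstrates the index relationship the step requires (second exceeds first).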
step302: and optimizing the at least one sample cloud film playing data according to the real-time watching condition of the reference pupil position in the second cloud film playing data to obtain optimized cloud film playing data bound with the sample cloud film playing data in the real-time watching condition.
Because the second cloud film playing data is cloud film playing data whose quantitative analysis index is improved relative to the first cloud film playing data, the positioning of the reference pupil position in the second cloud film playing data may differ from that in the sample cloud film playing data. Before the optimization is performed, the sample cloud film playing data may therefore be optimized and adjusted according to the positioning of the reference pupil position in the second cloud film playing data, so as to obtain optimized cloud film playing data consistent with that positioning.
step303: and selecting local cloud film playing data with at least one reference indication from optimized cloud film playing data bound with the sample cloud film playing data through at least one reference indication in the at least one sample cloud film playing data, wherein the reference indication is associated with the pupil position.
Because the obtained sample cloud film playing data is the cloud film playing data associated with at least one reference indication in the second cloud film playing data, after the optimized cloud film playing data bound to each sample cloud film playing data is obtained through optimization processing, the local cloud film playing data of each sample indication can be screened from the optimized cloud film playing data based on the sample indication (the reference indication associated with the pupil position) bound to each sample cloud film playing data. That is, the local cloud film playing data indicated by the reference indication associated with the pupil position in the second cloud film playing data is distinguished from the optimized cloud film playing data.
step304: and obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the second cloud film playing data.
After the local cloud film playing data of at least one reference indication of the reference pupil position is obtained, cloud film playing data optimization can be performed through the obtained local cloud film playing data and the second cloud film playing data, and optimized cloud film playing data are obtained.
In one possible implementation, since each local cloud film playing data may be associated with not less than one reference indication in the pupil position of the second cloud film playing data, the cloud film playing data for which an associated indication exists in the local cloud film playing data may be iterated into the corresponding indication in the second cloud film playing data. Or, in a possible implementation, the optimized cloud film playing data may also be obtained through a feature extraction operation on the local cloud film playing data and the second cloud film playing data.
Through the above steps, the accuracy of the quantitative analysis index of the first cloud film playing data can be further improved through the updating, and correspondingly more accurate optimized cloud film playing data can be obtained.
The embodiment of the present application also trains the first AI thread, and the process of training the first AI thread may specifically include the following steps.
step51: determining a first training cloud film playing data cluster, wherein the first training cloud film playing data cluster comprises a plurality of first training cloud film playing data and first cloud film description contents bound with the first training cloud film playing data.
In one possible implementation, the first training cloud film playing data cluster may include a plurality of first training cloud film playing data, which may be understood as cloud film playing data with relatively low quantitative analysis indexes.
step52: and loading at least one piece of first training cloud film playing data in the first training cloud film playing data cluster to the first AI thread to execute the cloud film playing data updating processing, so as to obtain the prior training cloud film playing data bound with the first training cloud film playing data.
When the first AI thread is trained, the cloud film playing data in the first training cloud film playing data cluster may be loaded to the first AI thread together, or may be loaded to the first AI thread for multiple times, so as to obtain the updated prior training cloud film playing data bound to each first training cloud film playing data one by one.
step53: and loading the prior training cloud film playing data to a first comparison thread, a first description screening thread and a first cloud film playing data classification thread one by one to obtain a distinguishing result, a description screening result and a cloud film playing data classification result of the prior training cloud film playing data bound to the first training cloud film playing data.
That is, by loading the prior training cloud film playing data into the comparison thread, the description screening thread and the cloud film playing data classification thread, the distinguishing result, the description screening result and the cloud film playing data classification result of the prior training cloud film playing data bound to each training cloud film playing data are obtained.
step54: and combining the distinguishing result, the description screening result and the cloud film playing data classification result of the previously trained cloud film playing data to obtain a first thread quantitative evaluation result, and updating the calculation vector of the first AI thread according to the first thread quantitative evaluation result until a first training condition is met.
In a possible implementation, a comparison quantitative evaluation result may be obtained from the distinguishing result of the prior training cloud film playing data; a distinguishing quantitative evaluation result may be obtained from the cloud film playing data classification result; a significance quantitative evaluation result may be obtained from the description screening result; and a corresponding key semantic quantitative evaluation result and mining quantitative evaluation result may be obtained from the prior training cloud film playing data itself.
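The combination of these sub-results into the first thread quantitative evaluation result might be sketched as a weighted sum, though the disclosure does not specify the combination rule; the function name, argument order and unit weights below are all illustrative assumptions.

```python
# Hypothetical combination of the five sub-results into one first-thread
# quantitative evaluation result; the weights are assumed, not disclosed.
def thread_evaluation(comparison, distinguishing, significance,
                      key_semantic, mining, weights=None):
    """Weighted sum of the sub-results (default: equal unit weights)."""
    parts = [comparison, distinguishing, significance, key_semantic, mining]
    weights = weights or [1.0] * len(parts)
    return sum(w * p for w, p in zip(weights, parts))

# Illustrative sub-result values only.
score = thread_evaluation(0.2, 0.1, 0.3, 0.25, 0.15)
```

In a training loop, a score of this kind would drive the update of the first AI thread's calculation vector until the first training condition is met.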
In the embodiment of the present application, the cloud film playing data optimization process of step30 may also be performed by a second AI thread, for example, the second AI thread may be understood as a convolution AI thread. In combination with the training of the second AI thread in the embodiment of the present application, the process of training the second AI thread may specifically include the following steps.
step61: and determining a second training cloud film playing data cluster, wherein the second training cloud film playing data cluster comprises a plurality of second training cloud film playing data, sample training cloud film playing data bound with the second training cloud film playing data, and second cloud film description content.
In a possible implementation, the second training cloud film playing data in the second training cloud film playing data cluster may be understood as the prior training cloud film playing data produced by the first AI thread through the foregoing training, or as cloud film playing data obtained through other steps whose quantitative analysis indexes have relatively low accuracy.
When the training of the second AI thread is executed, each second training cloud film playing data is bound to not less than one sample training cloud film playing data, and the sample training cloud film playing data includes sample data of the bound second training cloud film playing data, such as not less than one indicated cloud film playing data. The sample training cloud film playing data is accurate cloud film playing data with a high quantitative analysis index. Each second training cloud film playing data may be bound to a different number of sample training cloud film playing data, and the sample indications bound to each sample training cloud film playing data may also differ.
The second cloud film description content may also be determined according to the calculation vector of the quantitative evaluation result thread, and may include second example cloud film playing data (accurate cloud film playing data) bound to the second training cloud film playing data, a second example description of the second example cloud film playing data, a second example classification result (real-time classification result of each indication), a discrimination result (discrimination result output by the comparison thread) of each indication in the second example cloud film playing data, a description screening result, a classification result, and the like.
step62: optimizing the sample training cloud film playing data through second training cloud film playing data to obtain training optimization cloud film playing data, loading the training optimization cloud film playing data and the second training cloud film playing data to the second AI thread, and performing targeted optimization on the second training cloud film playing data to obtain optimized pre-training cloud film playing data of the second training cloud film playing data.
Further, each second training cloud film playing data may be bound to at least one sample training cloud film playing data, and the sample training cloud film playing data may be optimized according to the positioning of the pupil position in the second training cloud film playing data, so that at least one training optimized cloud film playing data is obtained. The at least one training optimized cloud film playing data and the bound second training cloud film playing data can then be loaded into the second AI thread to obtain the corresponding optimized pre-training cloud film playing data.
step63: and loading the optimized pre-trained cloud film playing data bound to the training cloud film playing data to a second comparison thread, a second description screening thread and a second cloud film playing data classification thread respectively to obtain a distinguishing result, a description screening result and a cloud film playing data classification result of the optimized pre-trained cloud film playing data bound to the second training cloud film playing data.
step64: and optimizing a discrimination result of the pre-trained cloud film playing data, describing a screening result and a cloud film playing data classification result in combination with the second trained cloud film playing data, so as to obtain a second thread quantitative evaluation result of the second AI thread, and updating a calculation vector of the second AI thread according to the second thread quantitative evaluation result until a second training condition is met.
In a possible implementation, the second thread quantitative evaluation result may be understood as an integration of an overall quantitative evaluation result and a partial quantitative evaluation result. That is, the overall quantitative evaluation result and the partial quantitative evaluation result may be obtained through the distinguishing result, the description screening result and the cloud film playing data classification result of the optimized pre-training cloud film playing data bound to the second training cloud film playing data, and the second thread quantitative evaluation result is obtained through the integration of the two.
Further, the overall quantitative evaluation result can be understood as an integration process of a comparison quantitative evaluation result, a key semantic quantitative evaluation result, a mining quantitative evaluation result, a distinguishing quantitative evaluation result and a significance quantitative evaluation result based on optimization of the cloud film playing data trained in advance.
Optionally, consistent with the determining step of the first comparison quantitative evaluation result and referring to the comparison quantitative evaluation result thread, a second comparison quantitative evaluation result may be obtained by the comparison thread from the distinguishing result of the optimized pre-training cloud film playing data and the distinguishing result of the second example cloud film playing data in the second cloud film description content; consistent with the determining step of the first key semantic quantitative evaluation result and referring to the key semantic quantitative evaluation result thread, a second key semantic quantitative evaluation result may be determined from the optimized pre-training cloud film playing data bound to the second training cloud film playing data and the second example cloud film playing data bound to the second training cloud film playing data; consistent with the determining step of the first mining quantitative evaluation result and referring to the mining quantitative evaluation result thread, a second mining quantitative evaluation result may be determined through artificial intelligence thread operation on the optimized pre-training cloud film playing data and the second example cloud film playing data bound to the second training cloud film playing data; consistent with the determining step of the first significance quantitative evaluation result and referring to the significance quantitative evaluation result thread, a second significance quantitative evaluation result may be obtained from the description screening result of the optimized pre-training cloud film playing data bound to the second training cloud film playing data and the second example description in the second cloud film description content; consistent with the determining step of the first distinguishing quantitative evaluation result and referring to the distinguishing quantitative evaluation result thread, a second distinguishing quantitative evaluation result may be obtained from the cloud film playing data classification result of the optimized pre-training cloud film playing data bound to the second training cloud film playing data and the second example classification result in the second cloud film description content; and the overall quantitative evaluation result is obtained through the integration of the second comparison quantitative evaluation result, the second key semantic quantitative evaluation result, the second mining quantitative evaluation result, the second significance quantitative evaluation result and the second distinguishing quantitative evaluation result.
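The integration of the overall and partial quantitative evaluation results into the second thread quantitative evaluation result could, for example, be a convex combination; the disclosure does not state the integration rule, so the following is only an assumed sketch with an illustrative mixing coefficient.

```python
# Hypothetical integration of the second AI thread's evaluation results.
# 'alpha' and the convex-combination form are assumptions for illustration.
def integrate(overall, partial, alpha=0.5):
    """Second-thread quantitative evaluation result as a convex combination
    of the overall result and the partial (per-indication) result."""
    return alpha * overall + (1 - alpha) * partial

# Illustrative values only.
result = integrate(0.8, 0.4)
```

Any monotone combination would serve the same role in the training loop; the sketch only fixes one concrete, checkable form.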
In an alternative embodiment, the manner of determining the partial quantitative evaluation result of the second AI thread may include: selecting at least one indicated local cloud film playing data bound to an indication in the optimized pre-training cloud film playing data, and loading the at least one indicated local cloud film playing data into the comparison thread, the description screening thread and the cloud film playing data classification thread respectively, to obtain a distinguishing result, a description screening result and a cloud film playing data classification result of the at least one indicated local cloud film playing data; determining, through the second comparison thread, a third comparison quantitative evaluation result of the at least one indication from the distinguishing result of the at least one indicated local cloud film playing data and the distinguishing result of the corresponding indicated local cloud film playing data in the second example cloud film playing data bound to the second training cloud film playing data; obtaining a third significance quantitative evaluation result of the at least one indication from the description screening result of the at least one indicated local cloud film playing data and the example description of the corresponding indication in the second cloud film description content; obtaining a third distinguishing quantitative evaluation result of the at least one indication from the cloud film playing data classification result of the at least one indicated local cloud film playing data and the example classification result of the corresponding indication in the second cloud film description content; and obtaining the partial quantitative evaluation result of the thread through the integration of the third comparison quantitative evaluation result, the third significance quantitative evaluation result and the third distinguishing quantitative evaluation result of the at least one indication.
Consistent with the above determining steps, the partial quantitative evaluation result of each indication can be determined through the third comparison quantitative evaluation result, the third significance quantitative evaluation result and the third distinguishing quantitative evaluation result of the indicated local cloud film playing data in the optimized pre-training cloud film playing data.
On the basis of the above, please refer to fig. 2 in combination: a cloud film playing processing apparatus 200 based on pupil position identification is provided, which is applied to a cloud film playing processing system based on pupil position identification, and the apparatus includes:
a play data determining module 210, configured to determine first cloud film play data;
a sample data determining module 220, configured to determine at least one sample cloud film playing data of the first cloud film playing data, where the sample cloud film playing data covers sample data of a reference pupil position in the first cloud film playing data;
the playing data optimizing module 230 is configured to perform targeted optimization on the first cloud film playing data through at least one sample cloud film playing data of the first cloud film playing data, so as to obtain optimized cloud film playing data.
On the basis of the above, please refer to fig. 3, which shows a cloud film playing processing system 300 based on pupil position identification, including a processor 310 and a memory 320 in communication with each other, where the processor 310 is configured to read a computer program from the memory 320 and execute the computer program to implement the above method.
On the basis of the above, there is also provided a computer-readable storage medium on which a computer program is stored, which when executed implements the above-described method.
In conclusion, based on the above scheme, the optimization processing of the first cloud film playing data can be executed through at least one sample cloud film playing data. Because the sample cloud film playing data comprises the key description contents of the first cloud film playing data, the accuracy and reliability of the obtained optimized cloud film playing data are improved relative to the first cloud film playing data.
It should be appreciated that the system and its modules shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
It is to be noted that different embodiments may produce different advantages, and in different embodiments, the advantages that may be produced may be any one or combination of the above, or any other advantages that may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered as illustrative only and not limiting of the application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, the present application uses specific words to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics may be combined as suitable in one or more embodiments of the application.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network form, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, unless explicitly recited in the claims, the order of processing elements and sequences, use of numbers and letters, or use of other designations in this application is not intended to limit the order of the processes and methods in this application. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single disclosed embodiment.
Where numerals describing quantities of components, attributes, or the like are used in some embodiments, it is to be understood that such numerals are in some instances modified by the terms "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the stated number allows for some degree of variation. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the application are approximations, in the specific examples such numerical values are set forth as precisely as practicable.
The entire contents of each patent, patent application, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, and documents, are hereby incorporated by reference into this application, except for any prosecution history inconsistent with or in conflict with the present disclosure, and except for any such material that would limit the broadest scope of the claims now or later associated with this application. It is noted that if the descriptions, definitions, and/or use of terms in the incorporated material are inconsistent with or contrary to those of this application, the descriptions, definitions, and/or use of terms in this application shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the present application. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present application may be viewed as being consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to only those explicitly described and illustrated herein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A cloud film playing processing method based on pupil position identification is characterized by at least comprising the following steps:
determining first cloud film playing data;
determining at least one sample cloud film playing data of the first cloud film playing data, wherein the sample cloud film playing data covers sample data of a reference pupil position in the first cloud film playing data;
and performing targeted optimization on the first cloud film playing data through at least one sample cloud film playing data of the first cloud film playing data to obtain optimized cloud film playing data.
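By way of a purely illustrative, non-limiting sketch (not part of the claimed method), the three steps of claim 1 might be organized as follows; all names (`PlaybackData`, `detect_pupil_samples`, `targeted_optimize`) and the placeholder logic are hypothetical stand-ins:

```python
from dataclasses import dataclass, field

@dataclass
class PlaybackData:
    """Hypothetical container for cloud film playing data."""
    frames: list
    pupil_samples: list = field(default_factory=list)

def detect_pupil_samples(data):
    """Step 2: collect at least one sample keyed to the reference pupil position.
    Placeholder rule: treat every even-indexed frame as containing the reference pupil."""
    return [f for i, f in enumerate(data.frames) if i % 2 == 0]

def targeted_optimize(data, samples):
    """Step 3: optimize the first playing data using the pupil-position samples.
    Placeholder optimization: move the sampled frames to the front of the stream."""
    optimized = PlaybackData(frames=list(data.frames), pupil_samples=samples)
    optimized.frames = samples + [f for f in data.frames if f not in samples]
    return optimized

# Step 1: determine the first cloud film playing data.
first = PlaybackData(frames=["f0", "f1", "f2", "f3"])
samples = detect_pupil_samples(first)          # at least one sample
optimized = targeted_optimize(first, samples)  # optimized playing data
```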
2. The method of claim 1, wherein the determining at least one sample cloud film playing data of the first cloud film playing data comprises: determining spatial positioning data of the first cloud film playing data, wherein the spatial positioning data covers X identification tags of the reference pupil position in the first cloud film playing data; and determining, through the spatial positioning data of the first cloud film playing data, sample cloud film playing data associated with at least one reference indication of the reference pupil position; wherein X is a positive integer not less than 1.
3. The method according to claim 2, wherein the performing targeted optimization on the first cloud film playing data through the at least one sample cloud film playing data of the first cloud film playing data to obtain optimized cloud film playing data comprises:
optimizing the at least one sample cloud film playing data according to the real-time watching condition of the reference pupil position in the first cloud film playing data to obtain optimized cloud film playing data bound with the sample cloud film playing data in the real-time watching condition;
selecting local cloud film playing data with at least one reference indication from optimized cloud film playing data bound with the sample cloud film playing data through at least one reference indication in the at least one sample cloud film playing data and associated with the reference pupil position; and obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the first cloud film playing data.
4. The method according to claim 3, wherein the obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the first cloud film playing data comprises:
and iterating an instruction bound with a reference instruction in the first cloud film playing data through the selected local cloud film playing data to obtain the optimized cloud film playing data, or performing feature extraction operation on the local cloud film playing data and the first cloud film playing data to obtain the optimized cloud film playing data.
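Claim 4 recites two alternatives: replacing the span of the first playing data bound to the reference indication with the selected local data, or fusing the two through a feature extraction operation. The following toy sketch illustrates the shape of each alternative; the function names, the use of plain lists in place of real playing data, and the element-wise blend standing in for "feature extraction" are all hypothetical:

```python
def replace_bound_region(first, local, region):
    """Alternative (a): iterate/replace the span of the first playing data
    bound to the reference indication with the selected local playing data."""
    r0, r1 = region
    return first[:r0] + local + first[r1:]

def fuse_features(first, local):
    """Alternative (b): a toy 'feature extraction operation' that blends the
    two signals element-wise; a real system would use a learned extractor."""
    padded = local + [0.0] * (len(first) - len(local))
    return [0.5 * f + 0.5 * p for f, p in zip(first, padded)]

first = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # stand-in for first playing data
local = [10.0, 20.0]                      # selected local playing data
a = replace_bound_region(first, local, (2, 4))
b = fuse_features(first, local)
```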
5. The method according to claim 2, wherein the performing targeted optimization on the first cloud film playing data through the at least one sample cloud film playing data of the first cloud film playing data to obtain optimized cloud film playing data comprises:
performing cloud film playing data updating processing on the first cloud film playing data to obtain second cloud film playing data, wherein the quantitative analysis index of the second cloud film playing data exceeds the quantitative analysis index of the first cloud film playing data;
optimizing the at least one sample cloud film playing data according to the real-time watching condition of the reference pupil position in the second cloud film playing data to obtain optimized cloud film playing data bound with the sample cloud film playing data in the real-time watching condition;
selecting local cloud film playing data with at least one reference indication from the optimized cloud film playing data bound with the sample cloud film playing data, through at least one reference indication in the at least one sample cloud film playing data and associated with the reference pupil position;
and obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the second cloud film playing data.
6. The method according to claim 5, wherein the obtaining the optimized cloud film playing data based on the selected local cloud film playing data and the second cloud film playing data comprises: iterating, through the selected local cloud film playing data, an instruction bound with a reference instruction in the second cloud film playing data to obtain the optimized cloud film playing data, or performing a feature extraction operation through the local cloud film playing data and the second cloud film playing data to obtain the optimized cloud film playing data.
7. The method of claim 2, wherein the method further comprises: reading a label through the optimized cloud film playing data, and determining label data associated with the reference pupil position.
8. The method according to claim 6, wherein the cloud film playing data updating processing is performed on the first cloud film playing data through a first AI thread to obtain the second cloud film playing data, and the method further comprises a step of training the first AI thread, comprising:
determining a first training cloud film playing data cluster, wherein the first training cloud film playing data cluster comprises a plurality of first training cloud film playing data and first cloud film description contents bound with the first training cloud film playing data;
loading at least one piece of first training cloud film playing data in the first training cloud film playing data cluster to the first AI thread to execute cloud film playing data updating processing, and obtaining prior training cloud film playing data bound by the first training cloud film playing data;
loading the prior training cloud film playing data to a first comparison thread, a first description screening thread and a first cloud film playing data classification thread respectively, to obtain a distinguishing result, a description screening result and a cloud film playing data classification result for the prior training cloud film playing data;
obtaining a first thread quantitative evaluation result by combining the distinguishing result, the description screening result and the cloud film playing data classification result of the prior training cloud film playing data, and updating the calculation vector of the first AI thread according to the first thread quantitative evaluation result until a first training condition is met;
wherein the obtaining a first thread quantitative evaluation result by combining the distinguishing result, the description screening result and the cloud film playing data classification result of the prior training cloud film playing data bound to the first training cloud film playing data comprises:
determining a first key semantic quantitative evaluation result through prior training cloud film playing data bound to the first training cloud film playing data and first example cloud film playing data bound to the first training cloud film playing data in the first cloud film description content; obtaining a first comparison quantitative evaluation result through the distinguishing result of the prior training cloud film playing data and the distinguishing result of the first comparison thread on the first example cloud film playing data; determining a first mining quantitative evaluation result through artificial intelligence thread operation of the prior training cloud film playing data and the first example cloud film playing data; obtaining a first significance quantitative evaluation result through the description screening result of the prior training cloud film playing data and the first example description in the first cloud film description content;
obtaining a first distinguishing quantitative evaluation result through the cloud film playing data classification result of the previously trained cloud film playing data and a first example classification result bound with a first training sample in the first cloud film description content;
and obtaining a first thread quantitative evaluation result through integrated processing of the first comparison quantitative evaluation result, the first key semantic quantitative evaluation result, the first mining quantitative evaluation result, the first significance quantitative evaluation result and the first distinguishing quantitative evaluation result.
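The integrated processing of the five quantitative evaluation results in claim 8 resembles a multi-term training objective. A minimal, hypothetical sketch follows; the weighted sum, the weight values, and the fixed-step "training condition" are illustrative assumptions, not the claimed training procedure:

```python
def integrate_evaluations(comparison, key_semantic, mining, significance,
                          distinguishing, weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Combine the five quantitative evaluation results into a single
    first-thread result (here a weighted sum, a common choice for
    multi-term objectives; the weights are hypothetical)."""
    terms = (comparison, key_semantic, mining, significance, distinguishing)
    return sum(w * t for w, t in zip(weights, terms))

def train_first_ai_thread(steps, lr=0.1):
    """Toy loop: update the thread's 'calculation vector' until a training
    condition is met (a fixed step budget in this sketch)."""
    vector = [1.0, 1.0]
    for _ in range(steps):
        loss = integrate_evaluations(0.2, 0.1, 0.3, 0.1, 0.3)
        vector = [v - lr * loss for v in vector]  # stand-in update step
    return vector
```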
9. The method of claim 2, wherein the targeted optimization is performed through a second AI thread to obtain the optimized cloud film playing data, and the method further comprises a step of training the second AI thread, comprising:
determining a second training cloud film playing data cluster, wherein the second training cloud film playing data cluster comprises second training cloud film playing data, sample training cloud film playing data bound with the second training cloud film playing data, and second cloud film description content; optimizing the sample training cloud film playing data through the second training cloud film playing data to obtain training optimized cloud film playing data, loading the training optimized cloud film playing data and the second training cloud film playing data to the second AI thread, and performing targeted optimization on the second training cloud film playing data to obtain optimized pre-training cloud film playing data of the second training cloud film playing data;
loading the optimized pre-trained cloud film playing data to a second comparison thread, a second description screening thread and a second cloud film playing data classification thread respectively, to obtain a distinguishing result, a description screening result and a cloud film playing data classification result for the optimized pre-trained cloud film playing data;
obtaining a second thread quantitative evaluation result of the second AI thread by combining the distinguishing result, the description screening result and the cloud film playing data classification result of the optimized pre-trained cloud film playing data, and updating the calculation vector of the second AI thread according to the second thread quantitative evaluation result until a second training condition is met;
the optimizing the discrimination result of the cloud film playing data trained in advance, describing the screening result, and classifying the cloud film playing data to obtain the second thread quantitative evaluation result of the second AI thread in combination with the training cloud film playing data includes: optimizing a distinguishing result of pre-trained cloud film playing data, describing a screening result and a cloud film playing data classification result by binding the second trained cloud film playing data to obtain an integral quantitative evaluation result and a partial quantitative evaluation result; obtaining a second thread quantitative evaluation result through integrated processing of the whole quantitative evaluation result and the partial quantitative evaluation result;
the method comprises the following steps of training a discrimination result of cloud film playing data, describing a screening result and classifying a cloud film playing data to obtain an integral quantitative evaluation result by optimizing and pre-training cloud film playing data binding, wherein the integral quantitative evaluation result comprises the following steps:
determining a second key semantic quantitative evaluation result through the optimized pre-trained cloud film playing data bound to the second training cloud film playing data and second example cloud film playing data bound with the second training cloud film playing data in the second cloud film description content; obtaining a second comparison quantitative evaluation result through the distinguishing result of the optimized pre-trained cloud film playing data and the distinguishing result of the second comparison thread on the second example cloud film playing data;
determining a second mining quantitative evaluation result through artificial intelligence thread operation of the optimized pre-trained cloud film playing data and the second example cloud film playing data; obtaining a second significance quantitative evaluation result through the description screening result of the optimized pre-trained cloud film playing data and the second example description in the second cloud film description content;
obtaining a second distinguishing quantitative evaluation result through the cloud film playing data classification result of the optimized pre-trained cloud film playing data and a second example classification result in the second cloud film description content;
obtaining the integral quantitative evaluation result through integrated processing of the second comparison quantitative evaluation result, the second key semantic quantitative evaluation result, the second mining quantitative evaluation result, the second significance quantitative evaluation result and the second distinguishing quantitative evaluation result;
the method for optimizing the binding of the training cloud film playing data and training the distinguishing result of the cloud film playing data in advance, describing and screening results and the cloud film playing data classification result to obtain partial quantitative evaluation results comprises the following steps: selecting at least one indicated local cloud film playing data in the optimized pre-trained cloud film playing data, and respectively loading the at least one indicated local cloud film playing data to a comparison thread, a description screening thread and a cloud film playing data classification thread to obtain a distinguishing result, a description screening result and a cloud film playing data classification result of the at least one indicated local cloud film playing data; determining a third comparison quantitative evaluation result of the at least one indication through the result of distinguishing the at least one indication of the local cloud film playing data and the result of distinguishing the at least one indication of the local cloud film playing data in the second example cloud film playing data bound to the second training cloud film playing data by the second comparison thread; obtaining a third significance quantitative evaluation result of the at least one indication through the description screening result of the at least one indication indicating local cloud film playing data and the example description of the at least one indication in the second cloud film description content; obtaining at least one indicated third segmentation evaluation result by the at least one indicated cloud film playing data classification result indicating local cloud film playing data and the at least one indicated example classification result in the second cloud film description content; and obtaining a partial quantitative evaluation result of the thread through optimization processing of the at least one indicated third comparison 
quantitative evaluation result, the third significance quantitative evaluation result and the third differentiation quantitative evaluation result.
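Claim 9's combination of an integral (whole-output) result with partial (per-indication, local) results can be pictured as blending a global metric with averaged per-region metrics. A toy sketch under hypothetical assumptions (absolute error as the per-region metric, an averaged blend with an assumed weight `alpha`):

```python
def region_evaluations(predictions, targets, regions):
    """Per-indication (local region) evaluation: compare predictions with
    targets within each selected region (absolute error as a toy metric)."""
    results = []
    for r0, r1 in regions:
        err = sum(abs(p - t) for p, t in zip(predictions[r0:r1], targets[r0:r1]))
        results.append(err)
    return results

def second_thread_evaluation(integral, partial_list, alpha=0.5):
    """Integrated processing of the integral (whole-output) result and the
    partial (per-region) results; the blend weight `alpha` is hypothetical."""
    partial = sum(partial_list) / len(partial_list)
    return alpha * integral + (1 - alpha) * partial

preds = [0.1, 0.4, 0.9, 0.2]
targs = [0.0, 0.5, 1.0, 0.0]
partials = region_evaluations(preds, targs, [(0, 2), (2, 4)])
total = second_thread_evaluation(0.3, partials)
```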
10. A cloud film playing processing system based on pupil position identification, comprising a processor and a memory in communication with each other, wherein the processor is configured to retrieve a computer program from the memory and implement the method according to any one of claims 1 to 9 by running the computer program.
CN202210918218.XA 2022-08-01 2022-08-01 Cloud film playing processing method and system based on pupil position identification Pending CN115291723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210918218.XA CN115291723A (en) 2022-08-01 2022-08-01 Cloud film playing processing method and system based on pupil position identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210918218.XA CN115291723A (en) 2022-08-01 2022-08-01 Cloud film playing processing method and system based on pupil position identification

Publications (1)

Publication Number Publication Date
CN115291723A true CN115291723A (en) 2022-11-04

Family

ID=83825875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210918218.XA Pending CN115291723A (en) 2022-08-01 2022-08-01 Cloud film playing processing method and system based on pupil position identification

Country Status (1)

Country Link
CN (1) CN115291723A (en)

Similar Documents

Publication Publication Date Title
CN113378554A (en) Medical information intelligent interaction method and system
CN115757370A (en) User information communication method and system based on Internet of things
CN116112746B (en) Online education live video compression method and system
CN115473822B (en) 5G intelligent gateway data transmission method, system and cloud platform
CN114580495A (en) Business service information analysis method based on artificial intelligence and server
CN115291723A (en) Cloud film playing processing method and system based on pupil position identification
CN113626538B (en) Medical information intelligent classification method and system based on big data
CN115687618A (en) User intention analysis method and system based on artificial intelligence
CN114779923A (en) VR simulation scene positioning method and system based on ultrasonic waves
CN113626688A (en) Intelligent medical data acquisition method and system based on software definition
CN113610373A (en) Information decision processing method and system based on intelligent manufacturing
CN115756576B (en) Translation method of software development kit and software development system
CN115563153B (en) Task batch processing method, system and server based on artificial intelligence
CN113626559B (en) Semantic-based intelligent network document retrieval method and system
CN114691830B (en) Network security analysis method and system based on big data
CN113609931B (en) Face recognition method and system based on neural network
CN115409510B (en) Online transaction security system and method
CN113627490B (en) Operation and maintenance multi-mode decision method and system based on multi-core heterogeneous processor
CN113596849B (en) Wireless communication channel dynamic allocation method and system for smart home
CN114201973B (en) Resource pool object data mining method and system based on artificial intelligence
CN113626429B (en) Metadata-based intelligent range emergency medical knowledge base construction method and system
CN116468534A (en) Credit information level analysis method and system for collective economic organization
CN113609931A (en) Face recognition method and system based on neural network
CN115564476A (en) Advertisement playing progress adjusting method and system and cloud platform
CN114911850A (en) Magnetic suspension weightlessness control method and system based on virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination