CN116095381A - Data processing method, device, computer equipment and readable storage medium - Google Patents

Data processing method, device, computer equipment and readable storage medium Download PDF

Info

Publication number
CN116095381A
Authority
CN
China
Prior art keywords
media data
commentary
target
media
comment
Prior art date
Legal status
Pending
Application number
CN202111303768.2A
Other languages
Chinese (zh)
Inventor
徐宁
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111303768.2A priority Critical patent/CN116095381A/en
Publication of CN116095381A publication Critical patent/CN116095381A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the present application provide a data processing method, an apparatus, a computer device, and a readable storage medium. The method can be applied in a variety of scenarios, such as cloud technology, artificial intelligence, intelligent transportation, and video, and includes the following steps: displaying target media data in a media playing interface; in response to a start operation on the commentary media data display function in the media playing interface, displaying a virtual object and a commentary media data presentation area associated with the virtual object in a target area of the media playing interface; and displaying, in the commentary media data presentation area, target commentary media data associated with the target media data. With this method and apparatus, commentary media data can be displayed in a simplified form, which improves the display effect of the commentary media data and reduces its influence on the display effect of the target media data.

Description

Data processing method, device, computer equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing method, apparatus, computer device, and readable storage medium.
Background
With the development of multimedia technology, video has become a major carrier of information and entertainment in daily life. To view comments on a video while watching it, the video's commentary media data function (e.g., a bullet-screen function) is typically turned on so that the video's commentary media data (i.e., bullet-screen data) is presented on the screen.
In existing bullet-screen display modes, multiple bullet-screen comments can be shown simultaneously over the current video frame, and these comments may cover the entire screen, which degrades the display effect of the video. In addition, when the bullet-screen data is dense, multiple pieces of bullet-screen data may be rendered stacked at the same position on the screen, which further degrades the display effect of the bullet-screen data itself.
Disclosure of Invention
The embodiments of the present application provide a data processing method, a data processing apparatus, a computer device, and a readable storage medium, which can improve the display effect of commentary media data and reduce the influence of the commentary media data on the display effect of the target media data.
In one aspect, an embodiment of the present application provides a data processing method, including:
displaying target media data in a media playing interface;
in response to a start operation on the commentary media data display function in the media playing interface, displaying a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface, where the virtual object and the commentary media data display area are overlaid on local media data, the local media data being the portion of the target media data within the target area of the media playing interface; and
displaying, in the commentary media data presentation region, target commentary media data associated with the target media data, where the region size of the commentary media data presentation region matches the media size of the target commentary media data.
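The method steps above can be illustrated with a minimal client-side sketch. The class, the "mascot" object, and the sizing constants are hypothetical illustrations chosen for the example, not the disclosed implementation.

```python
class MediaPlayingInterface:
    """Hypothetical sketch of the claimed flow: play the target media,
    then, on a start operation, overlay a virtual object and a
    commentary presentation area in a target area of the interface."""

    def __init__(self) -> None:
        self.playing_media = None
        self.virtual_object = None
        self.presentation_area = None  # (x, y, width, height) overlay rectangle

    def display_target_media(self, media_id: str) -> None:
        # Step 1: display the target media data in the playing interface.
        self.playing_media = media_id

    def enable_commentary_display(self, commentary_text: str) -> None:
        # Step 2: respond to the start operation by showing the virtual
        # object and its associated commentary presentation area.
        self.virtual_object = "mascot"
        # Step 3: size the region to match the commentary's media size,
        # so the overlay covers only the "local media data" it needs.
        width = 8 * len(commentary_text) + 16  # toy sizing rule
        self.presentation_area = (0, 0, width, 40)
```

Because the overlay rectangle is derived from the commentary itself, enabling the function never covers the whole frame, which is the effect the disclosure attributes to the region-size matching.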
An aspect of an embodiment of the present application provides a data processing apparatus, including:
the first display module is used for displaying target media data in the media playing interface;
the second display module is configured to display, in response to a start operation on the commentary media data display function in the media playing interface, the virtual object and the commentary media data display area associated with the virtual object in the target area of the media playing interface; the virtual object and the commentary media data display area are overlaid on local media data, the local media data being the portion of the target media data within the target area of the media playing interface;
and the third display module is configured to display, in the commentary media data presentation region, the target commentary media data associated with the target media data; the region size of the commentary media data presentation region matches the media size of the target commentary media data.
Wherein the second display module includes:
a first display unit, configured to display, in the media playing interface, a commentary media data control associated with the commentary media data display function;
a second display unit, configured to display a commentary media data start control in response to a trigger operation on the commentary media data control;
and a third display unit, configured to display, in response to a start operation on the commentary media data start control, the virtual object and the commentary media data display area associated with the virtual object in the target area of the media playing interface.
Wherein the apparatus further comprises:
the fourth display module is configured to display, in the media playing interface, a commentary media data update control associated with the commentary media data update function;
the fifth display module is configured to display N display duration controls in response to a trigger operation on the commentary media data update control, where N is a positive integer;
the duration determining module is configured to determine, in response to a trigger operation on a target display duration control among the N display duration controls, the commentary media data display duration indicated by the target display duration control as the commentary media data display duration corresponding to the target media data;
and the third display module is specifically configured to display, in the commentary media data display area, the target commentary media data associated with the target media data based on the commentary media data display duration corresponding to the target media data.
Wherein the third display unit includes:
a first obtaining subunit, configured to obtain a first commentary media data set associated with the target media data in response to a start operation on the commentary media data start control; the first commentary media data set includes commentary media data obtained, based on the first playing progress, from an initial commentary media data set associated with the target media data; the first playing progress is the current playing progress of the target media data in the media playing interface;
the first display subunit is configured to display, based on the first set of commentary media data, the virtual object and a commentary media data presentation area associated with the virtual object in a target area of the media playback interface.
Wherein the first display subunit comprises:
the search processing subunit is configured to search the first commentary media data set based on the first playing progress to obtain a search result;
the first search subunit is configured to, if the search result indicates that commentary media data corresponding to the first playing progress is found in the first commentary media data set, determine that commentary media data as the target commentary media data associated with the target media data;
the region display subunit is configured to determine a virtual object matching the target commentary media data and to determine the media size of the target commentary media data;
and the region display subunit is further configured to determine, according to the media size, the region size of the commentary media data display region associated with the virtual object, and to display, in the target area of the media playing interface, the virtual object and a commentary media data display region of that region size.
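The media-size-to-region-size step can be sketched as a simple function. The character-cell and padding constants are arbitrary assumptions; the disclosure only requires that the region size match the media size of the commentary.

```python
def presentation_region_size(text: str,
                             char_width: int = 14,
                             line_height: int = 28,
                             max_chars_per_line: int = 20,
                             padding: int = 12) -> tuple:
    """Derive the commentary display region's size from the commentary's
    media size (here, its text length): wrap the text into lines and
    size the region to fit that content exactly."""
    chars = len(text)
    lines = max(1, -(-chars // max_chars_per_line))  # ceiling division
    width = min(chars, max_chars_per_line) * char_width + 2 * padding
    height = lines * line_height + 2 * padding
    return width, height
```

A short comment thus yields a small one-line bubble, while a longer comment wraps onto more lines and grows the region vertically instead of spilling across the picture.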
the region display subunit is specifically configured to input the target media data into a media data analysis model, and to perform scene type analysis on the target media data through the media data analysis model to obtain the media scene types of the target media data at different playing progresses;
the region display subunit is specifically configured to obtain, from the media scene types of the target media data at different playing progresses, the media scene type of the target media data at the first playing progress;
the region display subunit is specifically configured to determine, according to the media scene type of the target media data at the first playing progress, a virtual object matching the target commentary media data; the object attribute type of the virtual object matches the media scene type at the first playing progress.
The region display subunit is specifically configured to input the target commentary media data into a commentary media data analysis model, and to perform semantic analysis on the target commentary media data through the commentary media data analysis model to obtain the semantic type of the target commentary media data;
the region display subunit is specifically configured to determine, according to the semantic type, a virtual object matching the target commentary media data; the object attribute type of the virtual object matches the semantic type.
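As a concrete illustration of the semantic-matching step, the sketch below substitutes a keyword lookup for the commentary media data analysis model; the semantic types, keyword lists, and virtual-object names are invented for the example, and a real system would use a trained model instead.

```python
# Hypothetical stand-in for the commentary media data analysis model:
# map a comment to a coarse semantic type, then to a virtual object
# whose object attribute type matches that semantic type.
SEMANTIC_KEYWORDS = {
    "funny": {"haha", "lol", "hilarious"},
    "sad": {"cry", "tears", "heartbreaking"},
}
OBJECT_FOR_TYPE = {
    "funny": "laughing_mascot",
    "sad": "teary_mascot",
    "neutral": "default_mascot",
}

def semantic_type(comment: str) -> str:
    """Classify the comment by keyword overlap (toy semantic analysis)."""
    words = set(comment.lower().split())
    for sem_type, keywords in SEMANTIC_KEYWORDS.items():
        if words & keywords:
            return sem_type
    return "neutral"

def match_virtual_object(comment: str) -> str:
    """Pick the virtual object whose attribute type matches the comment."""
    return OBJECT_FOR_TYPE[semantic_type(comment)]
```

The scene-type variant described just above works the same way, except the lookup key comes from analyzing the video frame at the current playing progress rather than from the comment text.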
Wherein the first display subunit further comprises:
the second search subunit is configured to, if the search result indicates that no commentary media data corresponding to the first playing progress is found in the first commentary media data set, continue searching the first commentary media data set based on a second playing progress when the playing progress of the target media data advances from the first playing progress to the second playing progress; the second playing progress is the playing progress immediately after the first playing progress.
Wherein the third display unit further includes:
the progress acquisition subunit is used for acquiring a third playing progress corresponding to the target media data; the third playing progress is the playing progress after the first playing progress;
a second obtaining subunit, configured to obtain, if there is no commentary media data associated with the target playing progress interval in the device memory, a second set of commentary media data associated with the target media data based on the third playing progress; the second set of commentary media data includes commentary media data obtained from an initial set of commentary media data associated with the target media data based on the target play progress interval; the minimum playing progress in the target playing progress interval is the third playing progress;
and the second display subunit is used for displaying the updated virtual object and the updated comment media data display area associated with the updated virtual object in the target area of the media playing interface based on the second comment media data set when the playing progress of the target media data played in the media playing interface is the third playing progress.
The interval length of the target playing progress interval is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data.
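The segmented-request scheme above (fetch commentary per fixed-length play-progress interval, shorter than the total play duration, and reuse whatever is already in device memory) can be sketched as follows; the class name and the `fetch` callback are hypothetical.

```python
class CommentaryPreloader:
    """Sketch of segmented preloading: commentary is fetched one
    fixed-length play-progress interval at a time and cached, so only
    the comments near the current position are ever requested."""

    def __init__(self, fetch, interval_s: int = 30):
        self.fetch = fetch          # fetch(start, end) -> {progress: comment}
        self.interval_s = interval_s
        self.cache = {}             # interval start -> {progress: comment}

    def comment_at(self, progress_s: int):
        # The interval's minimum playing progress is its aligned start.
        start = (progress_s // self.interval_s) * self.interval_s
        if start not in self.cache:  # no data for this interval in memory
            self.cache[start] = self.fetch(start, start + self.interval_s)
        return self.cache[start].get(progress_s)
```

Each interval is requested at most once; as playback crosses into a new interval, the next set is fetched, matching the "second commentary media data set" request described above.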
In one aspect, an embodiment of the present application provides a data processing method, including:
receiving a comment media data acquisition request for target media data in a media playing interface sent by an application client, and acquiring a first comment media data set associated with the target media data according to the comment media data acquisition request; the comment media data acquisition request is sent by the application client after responding to the starting operation of the comment media data display function in the media playing interface;
returning the first commentary media data set to the application client, so that the application client displays, based on the first commentary media data set, the virtual object and the commentary media data presentation area associated with the virtual object in the target area of the media playing interface, and displays the target commentary media data associated with the target media data in the commentary media data presentation area; the virtual object and the commentary media data presentation area are overlaid on local media data, the local media data being the portion of the target media data within the target area of the media playing interface; the first commentary media data set includes the target commentary media data; and the region size of the commentary media data presentation area matches the media size of the target commentary media data.
An aspect of an embodiment of the present application provides a data processing apparatus, including:
the request receiving module is used for receiving a comment media data acquisition request which is sent by the application client and aims at target media data in the media playing interface, and acquiring a first comment media data set associated with the target media data according to the comment media data acquisition request; the comment media data acquisition request is sent by the application client after responding to the starting operation of the comment media data display function in the media playing interface;
the set returning module is configured to return the first commentary media data set to the application client, so that the application client displays, based on the first commentary media data set, the virtual object and the commentary media data presentation area associated with the virtual object in the target area of the media playing interface, and displays the target commentary media data associated with the target media data in the commentary media data presentation area; the virtual object and the commentary media data presentation area are overlaid on local media data, the local media data being the portion of the target media data within the target area of the media playing interface; the first commentary media data set includes the target commentary media data; and the region size of the commentary media data presentation area matches the media size of the target commentary media data.
Wherein the apparatus further comprises:
the data acquisition module is used for acquiring a comprehensive commentary media data set associated with target media data and acquiring key text filtering data for filtering the comprehensive commentary media data set;
the filtering processing module is used for filtering the comment media data containing the key text filtering data in the comprehensive comment media data set, and determining the comprehensive comment media data set after the filtering processing as a filtered comment media data set;
the aggregation processing module is used for carrying out aggregation processing on the comment media data in the filtered comment media data set to obtain an initial comment media data set associated with the target media data;
the request receiving module is specifically configured to obtain, from the initial set of commentary media data, a first set of commentary media data associated with the target media data according to a first playing progress carried by the commentary media data obtaining request.
Wherein, the polymerization processing module includes:
the data dividing unit is configured to obtain the commentary media time indicated by each piece of commentary media data in the filtered commentary media data set, and to divide the commentary media data in the filtered set by commentary media time into divided commentary media data sets; every piece of commentary media data within one divided set indicates the same commentary media time;
the time mapping unit is configured to obtain the comprehensive semantic similarity corresponding to each piece of commentary media data in a divided commentary media data set, and to take the commentary media data with the largest comprehensive semantic similarity in that divided set as the commentary media data mapped to the corresponding commentary media time; the comprehensive semantic similarity of one piece of commentary media data is determined by its semantic similarity to each piece of commentary media data in the divided set to which it belongs;
and the first composing unit is configured to compose the commentary media data mapped to commentary media times into the initial commentary media data set associated with the target media data.
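The aggregation described above can be sketched as follows. Jaccard word overlap stands in for the semantic-similarity measure, which the disclosure leaves unspecified, and the comment with the largest summed ("comprehensive") similarity within its time bucket is kept as the most representative one.

```python
def jaccard(a: str, b: str) -> float:
    """Toy semantic similarity: word-set overlap between two comments."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def aggregate_commentary(filtered):
    """Group (time, comment) pairs by commentary media time, then keep,
    per time, the comment whose summed similarity to every comment in
    its group is largest -- one representative comment per time."""
    by_time = {}
    for t, text in filtered:
        by_time.setdefault(t, []).append(text)
    return {
        t: max(group, key=lambda c: sum(jaccard(c, other) for other in group))
        for t, group in by_time.items()
    }
```

Collapsing each time bucket to a single representative comment is what keeps the initial commentary media data set sparse enough for the one-bubble-at-a-time display used on the client.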
Wherein the request receiving module comprises:
the interval determining unit is used for determining a playing progress interval to be requested corresponding to the first playing progress according to the first playing progress carried by the commentary media data acquisition request; the minimum playing progress in the playing progress interval to be requested is the first playing progress; the interval length of the progress interval to be requested is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data;
The second construction unit is used for acquiring the commentary media data in the progress interval to be requested from the initial commentary media data set, and constructing the commentary media data in the progress interval to be requested into a first commentary media data set associated with the target media data.
Wherein the apparatus further comprises:
the set auditing module is used for sending the initial commentary media data set to the management equipment used for monitoring the application client so that the management equipment can audit the initial commentary media data set to obtain an auditing result;
the result receiving module is used for receiving an auditing result returned by the management equipment, and if the auditing result is an auditing passing result, executing the step of acquiring a first commentary media data set associated with target media data from the initial commentary media data set according to a first playing progress carried by the commentary media data acquisition request.
In one aspect, a computer device is provided, including: a processor and a memory;
the processor is connected to the memory, wherein the memory is configured to store a computer program, and when the computer program is executed by the processor, the computer device is caused to execute the method provided in the embodiment of the application.
In one aspect, the present application provides a computer readable storage medium storing a computer program adapted to be loaded and executed by a processor, so that a computer device having the processor performs the method provided in the embodiments of the present application.
In one aspect, the present application provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in the embodiments of the present application.
The computer device in the embodiments of the present application can display target media data in a media playing interface and, in response to a start operation on the commentary media data display function in the media playing interface, display a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface. The virtual object and the commentary media data display area are overlaid on local media data, the local media data being the portion of the target media data within the target area of the media playing interface. Further, the computer device can display, in the commentary media data presentation area, the target commentary media data associated with the target media data, where the region size of the commentary media data presentation area matches the media size of the target commentary media data. In this way, the commentary media data is confined to a compact, appropriately sized overlay in the target area rather than spread across the whole screen, which improves its display effect and reduces its influence on the display effect of the target media data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic structural diagram of a network architecture according to an embodiment of the present application;
FIG. 2 is a schematic view of a data interaction scenario according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 4 is a schematic view of a scenario for enabling the commentary media data display function according to an embodiment of the present application;
FIG. 5 is a schematic view of a scenario in which a commentary media data display duration is determined according to an embodiment of the present application;
FIG. 6 is a schematic view of a scenario for turning off the commentary media data display function according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 8 is a schematic flow chart of a method for displaying commentary media data according to an embodiment of the present application;
FIG. 9 is a flow diagram of segmented preloading of a set of commentary media data according to an embodiment of the present application;
FIG. 10a is a schematic view of a scenario for switching commentary media data according to an embodiment of the present application;
FIG. 10b is a schematic view of a scenario for switching commentary media data according to an embodiment of the present application;
FIG. 11 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 12 is a flow chart of a data processing method according to an embodiment of the present application;
FIG. 13 is a flow chart of determining an initial set of commentary media data according to an embodiment of the present application;
FIG. 14 is a flow chart of generating an initial set of commentary media data according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 18 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 19 is a schematic diagram of a computer device according to an embodiment of the present application;
FIG. 20 is a schematic diagram of a data processing system according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be appreciated that artificial intelligence (AI) is a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence is thus the study of the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline that spans a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include directions such as computer vision, speech processing, natural language processing, machine learning/deep learning, autonomous driving, and intelligent transportation.
Specifically, referring to fig. 1, fig. 1 is a schematic structural diagram of a network architecture according to an embodiment of the present application. As shown in fig. 1, the network architecture may include a service server 2000 and a cluster of terminal devices. The cluster of terminal devices may comprise one or more terminal devices; the number of terminal devices in the cluster is not limited here. As shown in fig. 1, the plurality of terminal devices may specifically include terminal device 3000a, terminal device 3000b, terminal device 3000c, …, and terminal device 3000n. The terminal devices 3000a, 3000b, 3000c, …, 3000n may each be directly or indirectly connected to the service server 2000 through wired or wireless communication, so that each terminal device may interact with the service server 2000 through the network connection.
Each terminal device in the terminal device cluster may include intelligent terminals with data processing functions, such as smart televisions, smartphones, tablet computers, notebook computers, desktop computers, smart home devices, wearable devices, and vehicle-mounted terminals. It should be understood that each terminal device in the terminal device cluster shown in fig. 1 may be integrated with an application client; when the application client runs on a terminal device, it may perform data interaction with the service server 2000 shown in fig. 1. The application client may be an independent client or an embedded sub-client integrated in another client, which is not limited in this application.
The application client may specifically include a browser, a vehicle-mounted client, an intelligent home client, an entertainment client, a multimedia client (e.g., a video client), a social client, an information client, and other clients with data processing functions. The vehicle-mounted terminal can be an intelligent terminal in an intelligent traffic scene, and the application client on the vehicle-mounted terminal can be the vehicle-mounted client.
The service server 2000 may be a server corresponding to the application client. The service server 2000 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms.
For ease of understanding, the embodiment of the present application may select one terminal device from the plurality of terminal devices shown in fig. 1 as the target terminal device. For example, the embodiment of the present application may use the terminal device 3000b shown in fig. 1 as the target terminal device, and an application client having a data processing function may be integrated in the target terminal device. The target terminal device may then perform data interaction with the service server 2000 through the application client.
The target media data in the embodiments of the present application may be video data (e.g., movie data), audio data (e.g., music data), or text data (e.g., novel data), which is not limited herein. For ease of understanding, the embodiments of the present application are described by taking video data as the target media data. The bullet screens (danmaku) in the video data may be referred to as commentary media data, where a bullet screen is a short comment sent by a user watching the video, providing a way to present comments on the video while it is being watched. In addition, the embodiments of the present application may also refer to the audio comments in audio data and the text comments in text data as commentary media data.
For ease of understanding, in the embodiments of the present application, media (e.g., a movie) selected by a user (e.g., user Y) in a media recommendation interface of an application client (e.g., a video recommendation interface, an audio recommendation interface, or a text recommendation interface) and matching the user's own interests may be collectively referred to as target media, and the media data corresponding to the target media is the target media data.
It should be understood that the service scenarios applicable to this network framework may specifically include entertainment program on-demand scenarios, online music listening scenarios, online novel reading scenarios, and the like, in which the network framework can realize simplified display of commentary media data; the applicable service scenarios are not listed one by one here. For example, in an entertainment on-demand scenario, the target media may be an entertainment program selected by user Y in a video recommendation interface (e.g., an entertainment recommendation list) that matches the user's own interests. For another example, in an online music listening scenario, the target media may be popular music selected by user Y in an audio recommendation interface (e.g., a popular song list) that fits the user's own interests. For another example, in an online novel reading scenario, the target media may be a popular novel selected by user Y in a text recommendation interface (e.g., a novel ranking list) that fits the user's own interests.
It should be appreciated that when a user (e.g., user Y) views target media data (e.g., video data S) in the application client of the target terminal device, the application client may display the video data S in a media playing interface. Further, when user Y needs to turn on the commentary media data display function (e.g., the bullet screen function) of the application client, a start operation may be performed for the bullet screen function in the media playing interface, and the application client may send a commentary media data acquisition request (e.g., a bullet screen acquisition request) to the service server 2000 in response to that start operation. After receiving the bullet screen acquisition request sent by the application client in the target terminal device, the service server 2000 may acquire the first set of commentary media data associated with the video data S and return it to the application client. The application client then acquires the target commentary media data from the first set of commentary media data, displays, in a target area of the media playing interface, a virtual object associated with the target commentary media data and a commentary media data display area associated with the virtual object, and displays the target commentary media data in the commentary media data display area.
For ease of understanding, further, please refer to fig. 2, fig. 2 is a schematic diagram of a scenario for data interaction according to an embodiment of the present application. The server 20a shown in fig. 2 may be the service server 2000 in the embodiment corresponding to fig. 1, and the terminal device 20b shown in fig. 2 may be the target terminal device in the embodiment corresponding to fig. 1. Wherein the terminal device 20b has installed thereon an application client that can be used to present the target media data and the commentary media data associated with the target media data. The media playing interface 21a and the media playing interface 21c shown in fig. 2 may be media playing interfaces of application clients in the terminal device 20b at different moments. When the terminal device 20b is a smart television, the application client on the terminal device 20b may be a video client on the smart television.
As shown in fig. 2, the application client may display the target media data in the media playing interface 21a through the media identifier (e.g., the video identifier) of the target media data. When the user corresponding to the application client performs a start operation for the commentary media data display function in the media playing interface 21a, the application client may, in response, determine the playing progress (e.g., playing progress J) of the target media data in the application client and send a commentary media data acquisition request for the target media data to the server 20a based on playing progress J. Upon receiving the commentary media data acquisition request, the server 20a may acquire an initial set of commentary media data associated with the target media data from the set of commentary media data 21b based on the media identifier of the target media data.
The set of commentary media data 21b may include a plurality of initial sets of commentary media data, which may specifically include: initial set of commentary media data 30a, initial set of commentary media data 30b, …, initial set of commentary media data 30n. Different initial sets of commentary media data may be associated with different media data; for example, the initial set of commentary media data 30a may be associated with media data S1, the initial set of commentary media data 30b may be associated with media data S2, …, and the initial set of commentary media data 30n may be associated with media data Sn. Accordingly, when the target media data is media data S1, the server 20a may obtain the initial set of commentary media data 30a from the set of commentary media data 21b; the initial set of commentary media data 30a is then the initial set of commentary media data associated with the target media data.
As shown in fig. 2, after the server 20a obtains the initial set of commentary media data 30a, it may obtain a first set of commentary media data from the initial set of commentary media data 30a based on the playing progress J carried by the commentary media data acquisition request, and then return the first set of commentary media data to the application client in the terminal device 20b. The first set of commentary media data may include the commentary media data in a to-be-requested playing progress interval after playing progress J, where the request interval duration of the to-be-requested playing progress interval may be less than the total remaining duration after playing progress J. Optionally, the first set of commentary media data may instead include the commentary media data in all playing progress intervals after playing progress J. Optionally, the first set of commentary media data may also include all commentary media data in the initial set of commentary media data associated with the target media data.
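A minimal sketch of the server-side filtering described above — returning only the comments that fall in a requested playing-progress interval after progress J — might look as follows. The function name, the `(progress, text)` tuple layout, and the interval semantics are illustrative assumptions, not details taken from the application:

```python
def select_first_set(initial_set, progress_j, request_interval=None):
    """Build the first set of commentary media data from an initial set.

    initial_set: list of (progress, text) tuples, where progress is the
    playback position (in seconds) a comment is attached to.
    progress_j: playing progress carried by the acquisition request.
    request_interval: duration of the to-be-requested progress interval
    after progress_j; None means "all intervals after progress_j".
    """
    if request_interval is None:
        # Return every comment at or after the current playing progress.
        return [c for c in initial_set if c[0] >= progress_j]
    upper = progress_j + request_interval
    # Keep only comments inside [progress_j, progress_j + request_interval).
    return [c for c in initial_set if progress_j <= c[0] < upper]
```

For example, with comments attached at 5s, 12s, and 30s, a request at progress 10 with a 15-second interval would return only the comment at 12s; omitting the interval returns the comments at 12s and 30s.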
As shown in fig. 2, after receiving the first set of commentary media data, the terminal device 20b may obtain, from the first set of commentary media data, the commentary media data corresponding to playing progress J of the target media data; if the target media data has corresponding commentary media data at playing progress J, that commentary media data is determined to be the target commentary media data. Optionally, if the target media data has no corresponding commentary media data at playing progress J, the next playing progress (e.g., playing progress (J+1)) is obtained, and the commentary media data corresponding to playing progress (J+1) is looked up in the first set of commentary media data; if the target media data has corresponding commentary media data at playing progress (J+1), that commentary media data is determined to be the target commentary media data.
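The client-side fallback just described — use the comment at progress J if one exists, otherwise try the next progress — can be sketched as follows. The dict-based lookup keyed by playing progress is an assumed data layout for illustration:

```python
def pick_target_comment(first_set, progress_j):
    """Pick the target commentary media data for the current playing progress.

    first_set: mapping from playing progress to comment text.
    Falls back to the next playing progress (progress_j + 1) when no
    comment exists at progress_j; returns None if neither has one.
    """
    if progress_j in first_set:
        return first_set[progress_j]
    return first_set.get(progress_j + 1)
```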
Further, the terminal device 20b may delimit a target area 21d in the media playing interface 21a, and display, in the target area 21d, the virtual object 22b associated with the target commentary media data and the commentary media data presentation area 22a associated with the virtual object 22b. At this time, the media playing interface is switched from media playing interface 21a to media playing interface 21c. The media data located in the target area 21d within the target media data may be referred to as local media data, and the virtual object 22b and the commentary media data presentation area 22a are overlaid on the local media data. That is, the virtual object 22b is an object independent of the local media data, and the commentary media data presentation area 22a is an area independent of the local media data; in other words, both hover over the local media data. The virtual object 22b and the commentary media data presentation area 22a may be displayed so as to occlude the local media data; alternatively, they may be displayed with adjusted transparency, so that the virtual object 22b, the commentary media data presentation area 22a, and the local media data are displayed together in the target area.
The target area 21d may be located at any position in the media playing interface 21c. To reduce the influence on the display effect of the target media data, the target area 21d is generally located at an edge of the media playing interface 21c. For ease of understanding, this embodiment illustrates the target area 21d at the lower-right position of the media playing interface 21c; accordingly, the virtual object 22b and the commentary media data display area 22a are located at the lower-right position of the media playing interface 21c.
Therefore, when the target media data is displayed in the media playing interface, the present application can receive the user's start operation on the commentary media data display function and then display the virtual object and the commentary media data display area in the target area of the media playing interface, where the commentary media data display area is used to display the target commentary media data associated with the target media data. In this way, once the commentary media data display function is turned on, the display effect of the commentary media data is improved through the virtual object and the commentary media data display area, and the influence of the commentary media data on the display effect of the target media data is reduced by confining the virtual object and the commentary media data display area to the target area.
Further, referring to fig. 3, fig. 3 is a flow chart of a data processing method according to an embodiment of the present application. The method may be executed by a server, or may be executed by an application client, or may be executed by a server and an application client together, where the server may be the server 20a in the embodiment corresponding to fig. 2, and the application client may be the application client in the embodiment corresponding to fig. 2. For ease of understanding, embodiments of the present application are described in terms of this method being performed by an application client. The data processing method may include the following steps S101 to S103:
step S101, displaying target media data in a media playing interface;
it may be appreciated that when a user (e.g., user Y) corresponding to the application client needs to watch a certain media item (e.g., the target media), a play operation may be performed for the target media (e.g., a target video) in a media recommendation interface (e.g., a video recommendation interface) of the smart television. In response to the play operation performed by user Y for the target media, the application client may send a media play request (e.g., a video play request) to the server corresponding to the application client based on the media identifier (e.g., the video identifier) of the target media, and, after receiving the target media data corresponding to the target media, display the target media data in a media playing interface (e.g., a video playing interface) of the smart television.
step S102, displaying, in response to a start operation on the commentary media data display function in the media playing interface, a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface;
specifically, the application client in the smart television may display, in the media playing interface, a commentary media data control associated with the commentary media data display function. Further, through data interaction between the smart television and the smart television remote control, the application client may display the commentary media data start control in response to a trigger operation on the commentary media data control. Further, the application client may display the virtual object and the commentary media data presentation area associated with the virtual object in a target area of the media playing interface in response to a start operation on the commentary media data start control. The virtual object and the commentary media data display area are overlaid on the local media data, where the local media data is the media data, within the target media data, located in the target area of the media playing interface.
In response to a trigger operation on the media playing interface, the application client may display a media control interface independent of the media playing interface, and display, in the media control interface, the commentary media data control associated with the commentary media data display function.
The triggering operation and the starting operation may include touch operations such as clicking, long pressing, sliding, and the like, and may also include non-touch operations such as voice and gestures, which are not limited herein. When the terminal device is an intelligent television, the triggering operation can be a clicking operation for a player menu on the intelligent television remote controller or a clicking operation for a television button on the intelligent television.
For ease of understanding, please refer to fig. 4, which is a schematic view of a scenario for starting the commentary media data display function according to an embodiment of the present application. The media playing interface in the application client may be the media playing interface 40a shown in fig. 4, and the target media data may be displayed in the media playing interface 40a. As shown in fig. 4, the application client may display a media control interface 40b independent of the media playing interface 40a in response to a trigger operation on the media playing interface 40a; the media control interface 40b may include a commentary media data control 41a associated with the commentary media data display function.
As shown in fig. 4, the application client may display the commentary media data start control 41b in response to a trigger operation on the commentary media data control 41a. In addition, the application client may display a commentary media data close control, which defaults to an open state; the commentary media data display function can be turned off through the commentary media data close control.
It can be appreciated that the user corresponding to the application client may perform a start operation on the commentary media data start control 41b, so that the application client may display a virtual object (not shown in the figures) and a commentary media data presentation area (not shown in the figures) associated with the virtual object in the target area of the media playing interface 40a in response to the start operation. The virtual object may be the virtual object 22b in the embodiment corresponding to fig. 2, and the commentary media data presentation area may be the commentary media data presentation area 22a in the embodiment corresponding to fig. 2.
step S103, displaying, in the commentary media data presentation region, the target commentary media data associated with the target media data.
The region size of the commentary media data display region matches the media size of the target commentary media data, and the media size of the target commentary media data is determined by the text length of the target commentary media data. For example, when the target commentary media data is "This person is so introverted!", it comprises the units "This", "person", "is", "so", "introverted", and "!", so its text length is 6; that is, the media size of the target commentary media data is 6, and the region size of the commentary media data display region is 6. For another example, when the target commentary media data is "Java, sounds good!", it comprises the units "Java", ",", "sounds", "good", and "!", so its text length is 5; that is, the media size is 5, and the region size of the display region is 5. Thus, a commentary media data display region of region size 6 is larger than a commentary media data display region of region size 5.
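Under the counting rule suggested by these examples — a run of Latin letters or digits such as "Java" counts as one unit, and every other non-space character counts as one unit — the media size could be computed as below. This tokenization is an assumption inferred from the examples, not a rule stated in the application:

```python
import re

def media_size(text):
    """Count display units: a run of ASCII letters/digits is one unit
    (e.g. "Java"); every other non-space character is one unit."""
    return len(re.findall(r"[A-Za-z0-9]+|[^\sA-Za-z0-9]", text))

def region_size(text):
    # The display region size simply matches the media size of the comment.
    return media_size(text)
```

A six-unit comment thus gets a larger display region (bubble) than a five-unit one.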
It should be appreciated that the application client may display, in the media playing interface, a commentary media data update control associated with the commentary media data update function. Further, the application client may display N presentation duration controls in response to a trigger operation on the commentary media data update control, where N may be a positive integer. Further, the application client may, in response to a trigger operation on a target presentation duration control among the N presentation duration controls, determine the commentary media data presentation duration indicated by the target presentation duration control as the commentary media data presentation duration corresponding to the target media data.
In response to a trigger operation on the media playing interface, the application client may display a media control interface independent of the media playing interface, and display, in the media control interface, the commentary media data update control associated with the commentary media data update function.
The triggering operation may include a touch operation such as clicking, long pressing, sliding, or a non-touch operation such as voice or gesture, which is not limited herein.
Accordingly, the application client may display the target commentary media data associated with the target media data in the commentary media data display region based on the commentary media data display duration corresponding to the target media data. In other words, as the target media data plays, the application client continuously displays the virtual object and the commentary media data display region in the target region, with their display time determined by the commentary media data display duration, and displays the target commentary media data in the display region for the same duration. Compared with the related art, this can reduce the display speed and quantity of commentary media data, which also makes auditing and moderation easier.
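One way a client might pace the display — each comment and its bubble held on screen for the configured duration, with the next comment deferred until the current window ends — is sketched below. The non-overlapping-window policy is an assumption for illustration; the application only specifies that the display time is governed by the presentation duration:

```python
def schedule_display(comment_progresses, display_duration):
    """Compute (start, end) display windows so that each comment stays on
    screen for display_duration seconds and windows never overlap, which
    naturally reduces the display speed and quantity of comments."""
    windows = []
    next_free = 0
    for progress in sorted(comment_progresses):
        start = max(progress, next_free)  # wait until the bubble is free
        end = start + display_duration
        windows.append((start, end))
        next_free = end
    return windows
```

With comments at 0s, 3s, and 20s and an 8-second duration, the comment at 3s is deferred to 8s, when the first bubble expires.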
For ease of understanding, please refer to fig. 5, fig. 5 is a schematic diagram of a scenario for determining a comment media data presentation duration according to an embodiment of the present application. The media playing interface in the application client may be a media playing interface 50a as shown in fig. 5, and the media playing interface 50a may have target media data displayed therein. As shown in fig. 5, the application client may display a media control interface 50b independent of the media play interface 50a in response to a triggering operation for the media play interface 50a, the media control interface 50b may include a commentary media data update control 51a associated with a commentary media data update function.
As shown in fig. 5, the application client may display N presentation duration controls in response to a trigger operation on the commentary media data update control 51a, where N may be a positive integer; N equal to 5 is used for illustration. The 5 presentation duration controls may specifically include: presentation duration control 52b, presentation duration control 52c, presentation duration control 52d, presentation duration control 52e, and presentation duration control 52f. The 5 presentation duration controls indicate different commentary media data presentation durations (presentation durations for short): the presentation duration indicated by presentation duration control 52b is 1 second, by presentation duration control 52c is 2 seconds, by presentation duration control 52d is 3 seconds, by presentation duration control 52e is 5 seconds, and by presentation duration control 52f is 8 seconds.
It can be appreciated that, when the presentation duration control 52f is taken as the target presentation duration control, the user corresponding to the application client may perform a triggering operation on the presentation duration control 52f, so that the application client may determine, in response to the triggering operation on the presentation duration control 52f, the comment media data presentation duration (i.e. 8 seconds) indicated by the presentation duration control 52f as the comment media data presentation duration corresponding to the target media data. Thus, the virtual object and the commentary media data presentation region will be displayed simultaneously in the target region for 8 seconds.
It should be appreciated that as the virtual object and the commentary media data presentation region continue to be displayed, the playing progress of the target media data in the media playing interface keeps changing, until an auxiliary virtual object (i.e., a new virtual object) and an auxiliary commentary media data presentation region associated with it (i.e., a new commentary media data presentation region) are displayed in the target region. In other words, the virtual object may automatically transform into the auxiliary virtual object, and the commentary media data presentation region (i.e., the bubble) may automatically grow or shrink into the auxiliary commentary media data presentation region. For example, with the continued display of virtual objects and commentary media data presentation regions, playback proceeds from the media playing interface 21c shown in fig. 2 to the media playing interface 60a shown in fig. 6; that is, the virtual object in the media playing interface 21c is switched to the auxiliary virtual object in the media playing interface 60a, and the commentary media data presentation region in the media playing interface 21c is switched to the auxiliary commentary media data presentation region in the media playing interface 60a.
It should be appreciated that when a user desires to turn off the commentary media data display function (e.g., the bullet screen function) of the application client, a turn-off operation may be performed for the commentary media data display function in the media playback interface. In this way, the application client may display the commentary media data control associated with the commentary media data display function in the media playback interface. Further, the application client may display the commentary media data closing control in response to a triggering operation of the commentary media data control. Further, the application client may cancel display of the auxiliary virtual object and the auxiliary commentary media data presentation region associated with the auxiliary virtual object in the target region of the media playback interface in response to a closing operation for the commentary media data closing control.
The closing operation may include a touch operation such as clicking, long pressing, sliding, or a non-touch operation such as voice or gesture, which is not limited herein.
For ease of understanding, please refer to fig. 6, which is a schematic diagram of a scenario in which the commentary media data display function is turned off according to an embodiment of the present application. The media playing interface in the application client may be the media playing interface 60a shown in fig. 6, and the target media data may be displayed in the media playing interface 60a. As shown in fig. 6, the application client may display a media control interface 60b independent of the media playing interface 60a in response to a trigger operation on the media playing interface 60a; the media control interface 60b may include a commentary media data control 61a associated with the commentary media data display function.
As shown in fig. 6, the application client may display a commentary media data close control 61b in response to a trigger operation on the commentary media data control 61a. In addition, the application client may also display a commentary media data start control, which defaults to an open state; the commentary media data display function can be turned on through the commentary media data start control.
It may be appreciated that the user corresponding to the application client may perform a close operation on the commentary media data close control 61b, so that, in response to that close operation, the application client cancels the display of the auxiliary virtual object and the auxiliary commentary media data display area associated with it in the target area of the media playing interface 60a, thereby obtaining the media playing interface 60c. The auxiliary virtual object may be the virtual object displayed in the media playing interface 60a, and the auxiliary commentary media data display area may be the commentary media data display area displayed in the media playing interface 60a.
Therefore, according to the embodiment of the application, the target media data can be displayed in the media playing interface, and the commentary media data display function is started for the target media data so as to display the virtual object and the commentary media data display area associated with the virtual object on the local media data in the target area of the media playing interface.
Further, referring to fig. 7, fig. 7 is a flow chart of a data processing method according to an embodiment of the present application. The method may be executed by a server, or may be executed by an application client, or may be executed by a server and an application client together, where the server may be the server 20a in the embodiment corresponding to fig. 2, and the application client may be the application client in the embodiment corresponding to fig. 2. For ease of understanding, embodiments of the present application are described in terms of this method being performed by an application client. Step S201 to step S204 are a specific embodiment of step S102 in the embodiment corresponding to fig. 3. The data processing method may include the following steps S201 to S207:
step S201, displaying comment media data control associated with comment media data display function in media play interface;
for a specific process of displaying the comment media data control by the application client, refer to the description of step S102 in the embodiment corresponding to fig. 3, which will not be described herein.
Step S202, responding to triggering operation of a comment media data control, and displaying a comment media data start control;
For a specific process of displaying the commentary media data start control by the application client, refer to the description of step S102 in the embodiment corresponding to fig. 3, which will not be described herein.
Step S203, responding to the starting operation of a starting control for commentary media data, and acquiring a first commentary media data set associated with target media data;
wherein the first set of commentary media data includes commentary media data obtained from an initial set of commentary media data associated with the target media data based on the first playback schedule; the first playing progress is the current playing progress of the target media data in the media playing interface.
In other words, the application client may acquire the current playing progress of the target media data in the media playing interface and determine the current playing progress as the first playing progress, and may then send a commentary media data acquisition request to the server corresponding to the application client based on the first playing progress, so that the server acquires commentary media data from the initial commentary media data set associated with the target media data based on the first playing progress, forms the acquired commentary media data into the first commentary media data set, and returns the first commentary media data set to the application client.
The first commentary media data set comprises the commentary media data in a to-be-requested playing progress interval. The minimum playing progress in the to-be-requested playing progress interval is the first playing progress, the interval length of the to-be-requested playing progress interval is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data. In other words, the application client may acquire the commentary media data associated with the target media data in segments, and the commentary media data acquired by the application client at the first playing progress is the commentary media data in the first commentary media data set.
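The segmented acquisition described above can be sketched as follows. This is a minimal illustration only; the function name, parameter names, and the use of seconds as the progress unit are assumptions, not taken from the patent text:

```python
def to_be_requested_interval(first_progress: int,
                             request_interval: int,
                             total_duration: int):
    """Return the [start, end] playing-progress interval (in seconds) whose
    commentary the client requests next.

    The minimum progress of the interval is the first playing progress, the
    interval length is the preset request interval duration, and the interval
    never extends past the total playing duration of the target media data.
    """
    start = first_progress
    end = min(first_progress + request_interval - 1, total_duration)
    return (start, end)
```

For example, with a request interval duration of 10 seconds, starting at the 1st second of a 7200-second movie the client would request the interval covering seconds 1 through 10, and near the end of the movie the interval is clipped to the total playing duration.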
Optionally, when the commentary media data display function defaults to the start state, the application client may directly obtain the first commentary media data set based on the first playing progress without executing a step of responding to a start operation of the commentary media data display function in the media playing interface. The first playing progress may be an initial playing progress of the target media data, a playing progress when the application client exits from playing of the target media data last time, or a playing progress after skipping of the playing progress of the target media data.
Optionally, before sending the commentary media data acquisition request to the server based on the first playing progress, the application client may search a local memory of the terminal device (i.e., a device memory, which may be used to store commentary media data associated with the application client) to determine whether the local memory holds the first commentary media data set, and if so, determine the storage time at which the first commentary media data set was stored. Further, when the storage time is within the time validity period, the application client may directly use the first commentary media data set; when the storage time is not within the time validity period, the application client may send the commentary media data acquisition request to the server. Therefore, when the application client plays the same target media data again within a short time, it does not need to repeatedly request the server for the commentary media data, but can directly use the commentary media data stored in the local memory.
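The local-memory check above can be sketched as follows. The cache key, the validity period value, and all names are hypothetical; the patent does not fix these details:

```python
import time

# hypothetical in-memory cache: (media_id, first_progress) -> (stored_at, data)
local_memory = {}
TIME_VALIDITY = 300.0  # assumed time validity period, in seconds

def get_first_commentary_set(media_id, first_progress, request_from_server):
    """Use the locally stored commentary set while its storage time is still
    within the validity period; otherwise send an acquisition request."""
    key = (media_id, first_progress)
    entry = local_memory.get(key)
    if entry is not None and time.time() - entry[0] <= TIME_VALIDITY:
        return entry[1]                           # storage time within validity
    data = request_from_server(media_id, first_progress)
    local_memory[key] = (time.time(), data)       # store with storage time
    return data
```

Calling the function twice in quick succession for the same media and progress issues only one server request; the second call is served from the local memory.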
Step S204, based on the first commentary media data set, displaying the virtual object and the commentary media data display area associated with the virtual object in the target area of the media playing interface;
Specifically, the application client may perform a search process on the first set of commentary media data based on the first playing progress, to obtain a search result. Further, if the search result indicates that the commentary media data corresponding to the first playing progress is found in the first commentary media data set, the application client may determine the commentary media data corresponding to the first playing progress as the target commentary media data associated with the target media data. Further, the application client may determine a virtual object that matches the target commentary media data and determine a media size of the target commentary media data. Further, the application client may determine, according to the media size, a region size of the commentary media data presentation region associated with the virtual object, and display the virtual object and the commentary media data presentation region having the region size in the target region of the media playback interface.
For a specific process of determining the media size of the target comment media data by the application client, refer to the description of step S103 in the embodiment corresponding to fig. 3, which will not be described herein.
In practice, there is a time delay in the application client's acquisition of the first commentary media data set based on the first playing progress. After receiving the first commentary media data set, the application client searches it for the commentary media data corresponding to the current playing progress of the target media data in the media playing interface (generally, a playing progress after the first playing progress). For ease of understanding, the embodiment of the present application is described taking the current playing progress as the first playing progress.
Optionally, if the search result indicates that the commentary media data corresponding to the first playing progress is not found in the first commentary media data set, the application client may continue to search the first commentary media data set based on the second playing progress when the playing progress of the target media data advances from the first playing progress to the second playing progress. Because the second playing progress is the next playing progress after the first playing progress, the application client can directly search the first commentary media data set for the commentary media data corresponding to the second playing progress. For the specific search process, refer to the above description of searching the first commentary media data set for the commentary media data corresponding to the first playing progress, which will not be repeated here.
The playing progress may be measured in seconds or in frames; the unit of the playing progress is not limited in this application. For example, when the playing progress is measured in seconds, the first playing progress may be the 1st second and the second playing progress may be the 2nd second. For ease of understanding, the embodiments of the present application are described with the playing progress measured in seconds.
Since the application client acquires the commentary media data (e.g., bullet screen data) in segments, the application client always retains the bullet screen data for a period of time starting from the current playing progress while playing the target media data (e.g., target video data). The user can trigger the bullet screen display at any time while the application client plays the target media data; the client then regularly refreshes the currently displayed bullet screen data according to the set display duration (i.e., the commentary media data display duration set by the user, or the default commentary media data display duration).
For ease of understanding, fig. 8 is a schematic flow chart of a commentary media data presentation provided in an embodiment of the present application. As shown in fig. 8, when a video starts playing (i.e., the video starts playing), assuming that the application client has already turned on the comment media data display function, the application client may read the current playing time (e.g., the first playing progress) and further determine whether a bullet screen exists at the time (i.e., read the current time bullet screen), if the bullet screen exists at the time, read the bullet screen display time (i.e., the comment media data display duration), and display the bullet screen corresponding to the first playing progress in the application client for a certain time.
Alternatively, as shown in fig. 8, if no bullet screen exists at that time, the application client waits for the next second (e.g., the second playing progress), reads the playing time of the next second, and again determines whether a bullet screen exists at that time, and so on, until no commentary media data associated with the playing progress interval whose minimum playing progress is the next second exists in the first commentary media data set.
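The per-second loop of fig. 8 can be sketched as follows. The representation of the commentary set as a second-indexed mapping, the fixed display duration, and all names are assumptions for illustration:

```python
def play_with_bullets(commentary_set, total_seconds, display):
    """Walk the playing progress second by second; whenever a bullet screen
    exists for the current second, hand it to `display` together with the
    configured display duration; otherwise wait for the next second."""
    DISPLAY_DURATION = 3  # assumed commentary media data display duration (s)
    for second in range(1, total_seconds + 1):
        bullet = commentary_set.get(second)   # read the current-time bullet
        if bullet is not None:
            display(second, bullet, DISPLAY_DURATION)
```

Seconds with no associated bullet screen are simply skipped, matching the "wait for the next second" branch of the flowchart.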
It may be appreciated that the application client may determine, through an artificial intelligence model, a virtual object that matches the target commentary media data, where the artificial intelligence model may be a media data analysis model or a commentary media data analysis model. It should be understood that embodiments of the present application do not limit the model types of the media data analysis model and the commentary media data analysis model.
The application client can input the target media data into the media data analysis model, and the scene type analysis is carried out on the target media data through the media data analysis model to obtain the media scene types of the target media data in different playing progress. Further, the application client may obtain the media scene type of the target media data at the first playing progress from the media scene types of the target media data at different playing progress. Further, the application client may determine a virtual object that matches the target commentary media data according to the media scene type of the target media data at the first play schedule. The object attribute type of the virtual object is matched with the media scene type of the first playing progress.
Optionally, the application client may input the target comment media data to a comment media data analysis model, and perform semantic analysis on the target comment media data through the comment media data analysis model to obtain a semantic type of the target comment media data. Further, the application client may determine a virtual object matching the target commentary media data according to the semantic type. Wherein the object attribute type of the virtual object matches the semantic type.
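The matching of a virtual object to the semantic type of the target commentary can be sketched as follows. The patent does not fix the model, so the classifier here is a hypothetical stand-in keyword rule, and the catalogue of object attribute types is invented for illustration:

```python
# assumed catalogue mapping object attribute types to virtual objects
SEMANTIC_TO_OBJECT = {
    "exclamation": "excited_avatar",
    "neutral": "calm_avatar",
}

def semantic_type(commentary: str) -> str:
    """Stand-in for the commentary media data analysis model."""
    return "exclamation" if commentary.endswith("!") else "neutral"

def match_virtual_object(commentary: str) -> str:
    """Pick the virtual object whose attribute type matches the semantic
    type of the target commentary media data."""
    return SEMANTIC_TO_OBJECT[semantic_type(commentary)]
```

A real deployment would replace `semantic_type` with the trained analysis model; the lookup structure stays the same.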
Step S205, a third playing progress corresponding to the target media data is obtained;
the third playing progress is the playing progress after the first playing progress.
It may be appreciated that, if the third playing progress belongs to the playing progress interval to be requested, the application client may perform the search processing on the first set of commentary media data based on the third playing progress. Optionally, if the third playing progress does not belong to the playing progress interval to be requested, the application client may execute step S206 described below. For ease of understanding, the embodiment of the present application is described by taking an example in which the third playing progress does not belong to the playing progress interval to be requested.
The third playing progress may be a playing progress obtained by sequentially playing the target media data, and optionally, the third playing progress may also be a playing progress obtained by skip playing the target media data.
Step S206, if the commentary media data associated with the target playing progress interval does not exist in the device memory, acquiring a second commentary media data set associated with the target media data based on the third playing progress;
wherein the second set of commentary media data comprises commentary media data obtained from an initial set of commentary media data associated with the target media data based on the target play progress interval; and the minimum playing progress in the target playing progress interval is the third playing progress.
The specific process of the application client obtaining the second set of commentary media data associated with the target media data based on the third playing progress may refer to the above description of obtaining the first set of commentary media data associated with the target media data based on the first playing progress, which will not be described herein.
The interval length of the target playing progress interval is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data. The interval length of the target playing progress interval is equal to the interval length of the to-be-requested playing progress interval; the embodiment of the present application does not limit the interval length here.
It can be appreciated that, because the target media data is played sequentially in the media playing interface, if the third playing progress does not belong to the playing progress interval to be requested, the third playing progress can be determined to be the minimum playing progress of the target playing progress interval. The target playing progress interval is the next playing progress interval of the playing progress interval to be requested.
In step S207, when the playing progress of the target media data played in the media playing interface is the third playing progress, based on the second set of commentary media data, the updated virtual object and the updated commentary media data display area associated with the updated virtual object are displayed in the target area of the media playing interface.
The specific process of displaying the updated virtual object and the updated comment media data display area associated with the updated virtual object in the target area of the media playing interface by the application client based on the second comment media data set may be referred to above based on the first comment media data set, and description of displaying the virtual object and the comment media data display area associated with the virtual object in the target area of the media playing interface will not be repeated herein. Wherein updating the virtual object may be used to update the virtual object and updating the commentary media data presentation area may be used to update the commentary media data presentation area.
The specific process of determining to update the virtual object by the application client may refer to the description of determining the virtual object, the specific process of determining to update the commentary media data display area by the application client may refer to the description of determining the commentary media data display area, and will not be described in detail herein.
It will be appreciated that, given that one piece of target media data (e.g., a movie) may last 2-3 hours, even one piece of bullet screen data per second amounts to roughly ten thousand pieces for the whole movie, and it is impractical to return all of them to the client at once. Therefore, the client loads the bullet screen data in segments, and preloads the next segment so that no bullet screen data is missed.
In other words, the application client does not wait until the playing progress of the target media data reaches the third playing progress to acquire the second commentary media data set based on the third playing progress, but acquires the second commentary media data set before the third playing progress. The application client may acquire the second commentary media data set based on the third playing progress when the playing progress of the target media data is a fourth playing progress, where the fourth playing progress is a playing progress before the third playing progress. For example, the application client may acquire the commentary media data of the 6th second to the 15th second at the 5th second; here, the 5th second may be the fourth playing progress, the 6th second may be the third playing progress, and the commentary media data of the 6th second to the 15th second may be the commentary media data in the second commentary media data set.
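The preloading rule in the example above (at the 5th second, fetch commentary for seconds 6-15) can be sketched as follows; the names and the "preload on the last covered second" trigger are illustrative assumptions:

```python
def should_preload(current_second: int, last_covered_second: int) -> bool:
    """Request the next segment while still inside the current one, so the
    commentary data is already present when playback reaches it."""
    return current_second == last_covered_second

def next_segment(last_covered_second: int, request_interval: int):
    """Return the [start, end] seconds of the next segment to request."""
    start = last_covered_second + 1
    return (start, start + request_interval - 1)
```

With a 10-second request interval and the current segment ending at second 5, the client preloads at second 5 and requests the segment covering seconds 6 through 15, matching the worked example in the text.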
For ease of understanding, fig. 9 is a schematic flow chart of a method for preloading a set of commentary media data in segments according to an embodiment of the present application. As shown in fig. 9, when a video starts playing (i.e., the video starts playing), the application client may read the current playing time (e.g., the third playing progress) and further determine whether a bullet screen exists at the time (i.e., determine whether there is comment media data associated with the target playing progress interval in the device memory), and if the bullet screen exists at the time, display the bullet screen through the flowchart shown in fig. 8. In fact, the current playing time is not equal to the third playing progress, but may be the previous playing progress of the third playing progress.
Alternatively, as shown in fig. 9, if the bullet screen does not exist at the time, the application client may request the background (i.e., the server) to load bullet screen data (i.e., the second set of commentary media data) of a segment time (e.g., 10 minutes), further continue to read the current latest playing time (e.g., the next playing progress of the third playing progress), determine whether the bullet screen exists at the time, and so on until the read playing time is the last playing progress of the target media data. In fact, the current latest play time is not equal to the next play progress of the third play progress, but may be the third play progress.
For ease of understanding, please refer to fig. 10a and 10b, fig. 10a and 10b are schematic diagrams of a scenario for switching comment media data according to an embodiment of the present application. As shown in fig. 10a, the media playing interface 10a may be a media playing interface of the application client at a first playing progress (e.g., time 1), the media playing interface 10b may be a media playing interface of the application client at a second playing progress (e.g., time 2), and the media playing interface 10c may be a media playing interface of the application client at a fifth playing progress (e.g., time n). Here, the nth time may be any time after the 2 nd time.
As shown in fig. 10a, the application client may search for the commentary media data corresponding to the 1 st moment, and when the commentary media data corresponding to the 1 st moment is not found, play the playing progress of the target media data from the 1 st moment to the 2 nd moment in the media playing interface 10 a. Further, the application client may search for the commentary media data corresponding to the 2 nd moment, and when the commentary media data corresponding to the 2 nd moment is not found, play the playing progress of the target media data from the 2 nd moment to the 3 rd moment (not shown in the figure) in the media playing interface 10 b.
And so on, until the application client searches for the commentary media data corresponding to the nth moment; when the commentary media data corresponding to the nth moment is found, the virtual object 11b and the commentary media data display area 11c are displayed in the target area 11a of the media playing interface 10c. The commentary media data display area 11c includes the target commentary media data (i.e., corresponding to the nth moment), which may be "The person is inland!". The target commentary media data may be associated with the virtual object 11b, and the commentary media data display area 11c may be associated with the virtual object 11b.
Alternatively, as shown in fig. 10b, the media playing interface 10d may be a media playing interface of the application client at a first playing progress (e.g., time 1), the media playing interface 10e may be a media playing interface of the application client at a second playing progress (e.g., time 2), and the media playing interface 10c may be a media playing interface of the application client at a fifth playing progress (e.g., time n). The media playing interface 10c shown in fig. 10b may be the media playing interface 10c shown in fig. 10 a. Here, the nth time may be any time after the 2 nd time.
As shown in fig. 10b, the application client may search for the commentary media data corresponding to the 1st moment, and when the commentary media data corresponding to the 1st moment is found, display the virtual object 12b and the commentary media data display area 12c in the target area 12a of the media playing interface 10d. The commentary media data display area 12c includes the target commentary media data, which may be "Java, good hearing!". The target commentary media data may be associated with the virtual object 12b, and the commentary media data display area 12c may be associated with the virtual object 12b. Further, the application client may continue to display the virtual object 12b and the commentary media data display area 12c in the media playing interface 10e at the 2nd moment according to the commentary media data display duration.
And so on, until the application client searches for the commentary media data corresponding to the nth time, when the commentary media data corresponding to the nth time is found, the virtual object 11b (see fig. 10 a) and the commentary media data display area 11c (see fig. 10 a) are displayed in the target area 11a of the media playing interface 10 c. Optionally, the application client may also continue to display the virtual object 11b (see fig. 10 a) and the commentary media data presentation region 11c (see fig. 10 a) at the media playing interface 10c at the nth moment in accordance with the commentary media data presentation duration at any moment (e.g., the (n-1) th moment (not shown in the figure)) before the nth moment.
Therefore, the embodiment of the present application can start the commentary media data display function in the media playing interface, acquire the commentary media data set associated with the target media data based on the playing progress of the target media data in the media playing interface, and then display the virtual object and the commentary media data display area associated with the virtual object in the target area of the media playing interface based on the acquired commentary media data set. Based on this, the present application proposes an atypical commentary media data display scheme: when the target media data is watched, an avatar can be constructed in the target area, and the commentary media data is displayed according to the current playing progress through a cartoon-like speech bubble next to the avatar, giving the feeling that the avatar is commenting on the plot. The commentary media data is thereby displayed indirectly in the media playing interface, which improves the display effect of the commentary media data and reduces its influence on the display effect of the target media data.
Further, referring to fig. 11, fig. 11 is a flow chart of a data processing method according to an embodiment of the present application. The method may be executed by a server, or may be executed by an application client, or may be executed by a server and an application client together, where the server may be the server 20a in the embodiment corresponding to fig. 2, and the application client may be the application client in the embodiment corresponding to fig. 2. For ease of understanding, embodiments of the present application will be described in terms of this method being performed by a server. The data processing method may include the following steps S301 to S302:
Step S301, receiving a comment media data acquisition request for target media data in a media playing interface sent by an application client, and acquiring a first comment media data set associated with the target media data according to the comment media data acquisition request;
the commentary media data acquisition request is sent by the application client after responding to the starting operation of the commentary media data display function in the media playing interface.
It may be appreciated that the server may receive a request for obtaining commentary media data for target media data in the media playing interface sent by the application client, and obtain, according to a first playing progress carried by the commentary media data obtaining request, a first commentary media data set associated with the target media data from the initial commentary media data set. The first playing progress is the current playing progress of the target media data in a media playing interface of the application client, and the first commentary media data set comprises commentary media data starting from the first playing progress.
Step S302, the first set of commentary media data is returned to the application client.
The application client may, after receiving the first commentary media data set, display the virtual object and the commentary media data display area associated with the virtual object in the target area of the media playing interface based on the first commentary media data set, and display the target commentary media data associated with the target media data in the commentary media data display area. The virtual object and the commentary media data display area are displayed overlaid on the local media data, where the local media data is the media data, in the target media data, located in the target area of the media playing interface; the first commentary media data set includes the target commentary media data; and the region size of the commentary media data display area matches the media size of the target commentary media data.
It may be appreciated that, after receiving the first commentary media data set, the application client may perform a search process on the first commentary media data set based on the first playing progress, so as to determine, when the commentary media data corresponding to the first playing progress is found in the first commentary media data set, the commentary media data corresponding to the first playing progress as the target commentary media data. Further, the application client may determine a virtual object that matches the target commentary media data and determine a media size of the target commentary media data, and may then determine, based on the media size, a region size of the commentary media data display area associated with the virtual object and display the virtual object and the commentary media data display area having the region size in the target area of the media playing interface.
Optionally, when acquiring the first commentary media data set associated with the target media data, the server may determine the virtual object matching each piece of commentary media data in the first commentary media data set and determine the media size of each piece of commentary media data, and may then return the virtual objects and media sizes to the application client together with the first commentary media data set. In this way, the application client can determine the region size of the commentary media data display area associated with the virtual object directly from the media size, and display the virtual object and the commentary media data display area having the region size in the target area of the media playing interface.
Therefore, when receiving the commentary media data acquisition request for the target media data in the media playing interface, the server in the embodiment of the application can acquire the commentary media data set associated with the target media data according to the commentary media data acquisition request, and then return the acquired commentary media data set to the application client. Therefore, according to the embodiment of the application, the commentary media data set associated with the playing progress can be obtained in a segmented mode according to the playing progress of the target media data in the media playing interface, so that the obtaining efficiency of the commentary media data can be improved, and the displaying efficiency of the application client for displaying the commentary media data is improved.
Further, referring to fig. 12, fig. 12 is a flowchart of a data processing method according to an embodiment of the present application. The method may be executed by a server, or may be executed by an application client, or may be executed by a server and an application client together, where the server may be the server 20a in the embodiment corresponding to fig. 2, and the application client may be the application client in the embodiment corresponding to fig. 2. For ease of understanding, embodiments of the present application will be described in terms of this method being performed by a server. The data processing method may include the following steps S401 to S407:
step S401, acquiring a comprehensive commentary media data set associated with target media data, and acquiring key text filtering data for filtering the comprehensive commentary media data set;
the comprehensive comment media data set may include all comment media data associated with the target media data, and the key text filtering data may be used to filter all comment media data to obtain filtered comment media data.
It should be appreciated that the comprehensive commentary media data set associated with the target media data is continuously changing, i.e., the target media data has different comprehensive commentary media data sets at different times, and the server may periodically obtain the latest comprehensive commentary media data set and then filter it. For example, at time T1, the comprehensive commentary media data set associated with the target media data may include L1 (e.g., 2) pieces of commentary media data; at time T2, it may include L2 (e.g., 5) pieces of commentary media data, where L1 and L2 may each be a positive integer.
It will be appreciated that the key text filtering data may be sensitive words (i.e., keywords), that is, words that are not suitable for display in the application client. Sensitive words may include vulgar words as well as politically sensitive words, and are not listed one by one here.
Step S402, filtering the comment media data containing the key text filtering data in the comprehensive comment media data set, and determining the filtered comprehensive comment media data set as a filtered comment media data set;
When any commentary media data in the comprehensive commentary media data set contains the key text filtering data, the server removes that commentary media data, so that none of the filtered commentary media data contains the key text filtering data. Further, the server may form the filtered commentary media data into a filtered commentary media data set.
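The filtering in steps S401-S402 can be sketched as a simple containment check. This is a minimal illustration, not the patent's implementation: the word list and function name are hypothetical placeholders.

```python
# Hypothetical sketch of steps S401-S402: drop any commentary item that
# contains key text filtering data (sensitive words). The word list and
# names below are illustrative, not from the patent.
SENSITIVE_WORDS = {"vulgarword", "bannedword"}

def filter_commentary(comments):
    """Keep only commentary items that contain no sensitive word."""
    return [c for c in comments
            if not any(w in c for w in SENSITIVE_WORDS)]

print(filter_commentary(["nice scene", "a vulgarword here", "great acting"]))
# → ['nice scene', 'great acting']
```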
Step S403, aggregation processing is carried out on the comment media data in the filtered comment media data set to obtain an initial comment media data set associated with the target media data;
specifically, the server may obtain the comment media time indicated by the comment media data in the filtered comment media data set, and divide the comment media data in the filtered comment media data set according to the comment media time, so as to obtain a divided comment media data set. Wherein the comment media time indicated by each comment media data in one of the divided comment media data sets is the same. Further, the server may obtain the comprehensive semantic similarity corresponding to each piece of commentary media data in the set of divided commentary media data, and use the commentary media data with the largest comprehensive semantic similarity in the set of divided commentary media data as the commentary media data with a mapping relationship with the commentary media time. Wherein, the comprehensive semantic similarity corresponding to one piece of commentary media data is determined by the semantic similarity between the commentary media data and each piece of commentary media data in the affiliated divided commentary media data set. Further, the server may construct the commentary media data having a mapping relationship with the commentary media time into an initial set of commentary media data associated with the target media data.
It may be appreciated that, for each set of partitioned commentary media data, the server may analyze the commentary media data in each set of partitioned commentary media data to obtain a semantic similarity between each commentary media data (e.g., commentary media data K) and the remaining commentary media data in the set of partitioned commentary media data, thereby determining a respective corresponding comprehensive semantic similarity for each commentary media data. The remaining commentary media data may represent commentary media data except for the commentary media data K in the divided commentary media data set where the commentary media data K is located.
It will be appreciated that the server may construct the initial commentary media data set from the commentary media data having a mapping relationship with each commentary media time. The commentary media data having a mapping relationship with a commentary media time may represent the most representative commentary media data at that time. For example, the target media data may include 3 moments, specifically time T11, time T12 and time T13; if the commentary media times indicated by the commentary media data in the filtered commentary media data set are time T12 and time T13, the server may form the initial commentary media data set from the commentary media data in the filtered commentary media data set that have a mapping relationship with time T12 and time T13, respectively.
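The aggregation of step S403 — group by commentary media time, then keep the item with the greatest comprehensive semantic similarity per group — can be sketched as follows. The patent does not fix a concrete similarity measure, so a token-set Jaccard overlap is used here purely as a stand-in; all names are illustrative.

```python
from collections import defaultdict

def jaccard(a, b):
    """Stand-in semantic similarity (assumption): token-set Jaccard overlap.
    The patent only requires *some* pairwise semantic similarity."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def aggregate(comments):
    """comments: list of (commentary_time, text) pairs from the filtered set.
    Returns {commentary_time: most typical text}, where the most typical
    item maximizes the comprehensive (summed) similarity to the other
    items sharing the same commentary media time."""
    by_time = defaultdict(list)
    for t, text in comments:
        by_time[t].append(text)
    result = {}
    for t, texts in by_time.items():
        result[t] = max(texts, key=lambda x: sum(jaccard(x, y)
                                                 for y in texts if y is not x))
    return result
```

A singleton group trivially keeps its only item (its comprehensive similarity sum is zero but it is still the maximum).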
Since the commentary media data (e.g., bullet screen data) is strongly correlated with the media data playing time (e.g., video playing time), the server typically employs the following structure as shown in table 1 when storing the commentary media data:
TABLE 1

Commentary media data identification | Sender  | Media data identification | Time point | Commentary media data
DM0001                               | UID0001 | VID0002                   | 600        | Java, good hearing!
DM0002                               | UID0002 | VID0002                   | 608        | The person is fierce!
As shown in Table 1, DM represents the commentary media data identification (i.e., the bullet screen ID), UID represents the sender, VID represents the media data identification (i.e., the movie ID), and the time point may represent the commentary media time. Optionally, the commentary media data may also include other information, such as speed and color, where speed may determine the display duration of the commentary media data in the application client (i.e., different commentary media data may have different commentary media data presentation durations), and color may represent the display color of the commentary media data in the application client.
Wherein, DM0001 indicates that when the video VID0002 has been played for 600 seconds (i.e., 600 s), the commentary media data "Java, good hearing!" needs to be displayed; DM0002 indicates that when the video VID0002 has been played for 608 seconds (i.e., 608 s), the commentary media data "The person is fierce!" needs to be displayed.
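The Table 1 storage layout and the lookup it supports (all commentary for one movie at one playback second) can be sketched as a record type plus a filter. The field names are illustrative; the two sample rows mirror Table 1.

```python
from dataclasses import dataclass

@dataclass
class BarrageRecord:
    """One row in the Table 1 layout; field names are assumptions."""
    dm_id: str    # commentary media data identification (bullet screen ID)
    sender: str   # sender UID
    vid: str      # media data identification (movie ID)
    second: int   # time point: playback second at which to display
    text: str     # the commentary media data itself

RECORDS = [
    BarrageRecord("DM0001", "UID0001", "VID0002", 600, "Java, good hearing!"),
    BarrageRecord("DM0002", "UID0002", "VID0002", 608, "The person is fierce!"),
]

def commentary_at(vid, second):
    """All commentary media data to show for one video at one playback second."""
    return [r for r in RECORDS if r.vid == vid and r.second == second]
```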
Optionally, the server may further obtain a comprehensive commentary media data set associated with the target media data, and divide commentary media data in the comprehensive commentary media data set to obtain an aggregate commentary media data set associated with the target media data. Further, the server may obtain key text filtering data for filtering the aggregate comment media data set, filter comment media data including the key text filtering data in the aggregate comment media data set, and determine the aggregate comment media data set after the filtering process as a divided comment media data set.
For ease of understanding, fig. 13 is a schematic flow chart of determining an initial set of commentary media data provided in an embodiment of the present application. As shown in fig. 13, the server may obtain a comprehensive commentary media data set 130a associated with the target media data, where the comprehensive commentary media data set 130a may include a plurality of commentary media data, and here, taking the example that the number of commentary media data in the comprehensive commentary media data set 130a is 5, the 5 commentary media data may specifically include: commentary media data 13a, commentary media data 13b, commentary media data 13c, commentary media data 13d and commentary media data 13e.
It can be appreciated that, according to the time point and the movie ID, all the commentary media data (for example, all the bullet screens) that can be displayed by the same movie at a specific time can be found, and a plurality of aggregate commentary media data sets can be obtained by dividing the bullet screens. Wherein, the commentary media time corresponding to each of the commentary media data 13a and the commentary media data 13b may be time T1, and the commentary media time corresponding to each of the commentary media data 13c, the commentary media data 13d and the commentary media data 13e may be time T2. By dividing the 5 pieces of commentary media data by time, the commentary media data 13a and the commentary media data 13b can be formed into an aggregate commentary media data set 130b, and the commentary media data 13c, the commentary media data 13d and the commentary media data 13e can be formed into an aggregate commentary media data set 130c.
As shown in fig. 13, the server may perform filtering processing on the comment media data including the key text filtering data in the aggregate comment media data set 130b and the aggregate comment media data set 130c, to obtain a divided comment media data set 130d corresponding to the aggregate comment media data set 130b, and a divided comment media data set 130e corresponding to the aggregate comment media data set 130c. Wherein, the set of divided comment media data 130d includes comment media data 13a, and the set of divided comment media data 130e includes comment media data 13c and comment media data 13e.
It will be appreciated that the server may directly use the commentary media data 13a in the divided commentary media data set 130d as the commentary media data having a mapping relationship with time T1; when the comprehensive semantic similarity corresponding to the commentary media data 13c is greater than the comprehensive semantic similarity corresponding to the commentary media data 13e, the server may use the commentary media data 13c in the divided commentary media data set 130e as the commentary media data having a mapping relationship with time T2. Further, the server may construct the commentary media data 13a and the commentary media data 13c into an initial commentary media data set associated with the target media data. Optionally, the server may also randomly obtain commentary media data (e.g., commentary media data 13c) from the divided commentary media data set 130e, and determine the obtained commentary media data as the commentary media data having a mapping relationship with time T2.
Optionally, the server may directly perform filtering processing on the comment media data including the key text filtering data in the integrated comment media data set 130a to obtain a filtered comment media data set (not shown in the figure), where the filtered comment media data set (not shown in the figure) may include the comment media data 13a, the comment media data 13c, and the comment media data 13e. Further, the server may divide the commentary media data 13a, the commentary media data 13c, and the commentary media data 13e, resulting in a divided commentary media data set 130d and a divided commentary media data set 130e.
Step S404, the initial commentary media data set is sent to a management device for monitoring an application client;
it should be appreciated that, in order to ensure that the application client loads the commentary media data (e.g., bullet screen data) from the background more quickly, and to be able to provide the initial commentary media data set to the management device for auditing in advance, it is necessary to periodically generate and store the initial commentary media data set, and to periodically send the newly generated initial commentary media data set to the management device for auditing. When the terminal device is a smart television on which a video client is installed, the management device may be the license holder corresponding to the TV (i.e., television) end (i.e., the smart television); the license holder may be an integrated playback control platform holding an internet television license, which performs playback control over internet content accessed by the smart television. It should be appreciated that the commentary media data presented in the atypical manner provided by the embodiments of the present application can pass the license holder's audit.
It can be appreciated that after receiving the initial set of commentary media data, the management device may audit the initial set of commentary media data to obtain an audit result, and then return the audit result to the server.
Step S405, receiving an audit result returned by the management equipment;
it may be appreciated that, if the auditing result is an auditing passing result (i.e. the auditing result indicates that the auditing is successful), the server may execute the step of acquiring the first set of commentary media data associated with the target media data from the initial set of commentary media data according to the first playing progress carried by the commentary media data acquisition request in step S406.
Optionally, if the auditing result is an auditing failed result (i.e., the auditing result indicates that the auditing failed), the server may re-execute steps S401-S403 described above, thereby determining an initial set of commentary media data associated with the target media data.
Terminal devices are typically classified into TV ends (i.e., smart televisions) and non-TV ends (e.g., notebooks), but the server actually keeps only one copy of the playing data (i.e., the target media data, e.g., target video data), and the same holds for the commentary media data (e.g., bullet screen data). Obviously, the general bullet screen data (i.e., all bullet screen data) may contain hundreds of items or more at the same moment, whereas for the atypical bullet screen provided by the present application, only one item needs to be displayed every few seconds, which is suitable for display at the TV end. In view of the foregoing, there is a need for a scheme that can generate atypical bullet screen data (i.e., the commentary media data in the initial commentary media data set) from the general bullet screen data.
For ease of understanding, fig. 14 is a schematic flowchart of generating an initial commentary media data set provided in an embodiment of the present application. As shown in fig. 14, the server may load the general bullet screen data (i.e., the commentary media data in the comprehensive commentary media data set) and perform filtering processing on it to obtain the filtered general bullet screen data. Further, the server may read the first second of the filtered general bullet screen data and determine whether bullet screen data exists in that second; if so, the server reads that second of data, counts and classifies it, and obtains the most typical bullet screen data for that second (i.e., the bullet screen data with the greatest comprehensive semantic similarity). Further, the server may read the next second of data in the filtered general bullet screen data, and execute the above steps in turn until the whole film has been processed.
Optionally, if no bullet screen data exists in a given second, the server determines whether the whole film has been processed; if so, the flow ends. To handle the case where the whole film has not yet been processed but a certain second contains no bullet screen data, the server may continue to read the bullet screen data of the next second, and execute the above steps in turn until the whole film has been processed.
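The fig. 14 loop can be sketched as a second-by-second walk over the filtered data. The data layout and the "most typical" selector are assumptions passed in from outside; the loop itself just skips empty seconds and stops after the last second of the film.

```python
def generate_initial_set(total_seconds, comments_by_second, pick_most_typical):
    """Sketch of the fig. 14 flow (names assumed): walk the film second by
    second; whenever a second has filtered bullet screen data, keep only
    the most typical item for that second.
    comments_by_second: {second: [text, ...]} after filtering."""
    initial = {}
    for sec in range(total_seconds):
        items = comments_by_second.get(sec)
        if items:  # seconds without bullet screen data are simply skipped
            initial[sec] = pick_most_typical(items)
    return initial

# Example with a trivial stand-in selector (longest comment wins):
print(generate_initial_set(4, {1: ["ok", "really great"], 3: ["wow"]},
                           lambda xs: max(xs, key=len)))
# → {1: 'really great', 3: 'wow'}
```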
Step S406, receiving a comment media data acquisition request for target media data in a media playing interface sent by an application client, and acquiring a first comment media data set associated with the target media data from an initial comment media data set according to a first playing progress carried by the comment media data acquisition request;
specifically, the server may receive a request for commentary media data acquisition for target media data in the media playing interface sent by the application client. Further, the server may determine a playing progress interval to be requested corresponding to the first playing progress according to the first playing progress carried by the commentary media data obtaining request. The minimum playing progress in the to-be-requested playing progress interval is the first playing progress; the interval length of the progress interval to be requested is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data. Further, the server may obtain, from the initial set of commentary media data, commentary media data within a progress interval to be requested to be played, and form a first set of commentary media data associated with the target media data.
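The interval selection in step S406 — the requested interval begins at the first playing progress and spans a preset request interval duration shorter than the total playing duration — can be sketched as follows. The dictionary representation of the initial set is an assumption for illustration.

```python
def first_commentary_set(initial_set, first_progress, request_interval):
    """Sketch of step S406 (representation assumed): the progress interval
    to be requested has the first playing progress as its minimum and the
    preset request interval duration (in seconds) as its length."""
    lo, hi = first_progress, first_progress + request_interval
    return {sec: text for sec, text in initial_set.items() if lo <= sec < hi}
```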
Step S407, the first set of commentary media data is returned to the application client.
In this way, the embodiment of the application can acquire the comprehensive commentary media data set associated with the target media data, generate the initial commentary media data set associated with the target media data based on the comprehensive commentary media data set, and then send the initial commentary media data set to the management device for auditing. In this way, after receiving the request for acquiring the commentary media data sent by the application client, the server may acquire a first commentary media data set associated with the target media data from the initial commentary media data set, and then return the first commentary media data set to the application client. Based on the method, the embodiment of the application can realize segmented acquisition of the commentary media data so as to realize simplified display of the commentary media data in the application client.
Further, referring to fig. 15, fig. 15 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, where the data processing apparatus 10 may include: a first display module 101, a second display module 102, a third display module 103;
A first display module 101, configured to display target media data in a media playing interface;
a second display module 102, configured to display, in response to an activation operation for a commentary media data display function in the media playing interface, a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface; the virtual object and the commentary media data display area are displayed on the local media data in a covering mode, wherein the local media data are media data in a target area of a media playing interface in target media data;
a third display module 103 for displaying, in the commentary media data presentation region, target commentary media data associated with the target media data; the region size of the commentary media data presentation region matches the media size of the target commentary media data.
The specific implementation manner of the first display module 101, the second display module 102, and the third display module 103 may be referred to the description of step S101 to step S103 in the embodiment corresponding to fig. 3, and will not be described herein. In addition, the description of the beneficial effects of the same method is omitted.
Further, referring to fig. 16, fig. 16 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, where the data processing apparatus 1 may include: a first display module 11, a second display module 12, a third display module 13; further, the data processing apparatus 1 may further include: a fourth display module 14, a fifth display module 15, and a duration determination module 16;
a first display module 11 for displaying target media data in a media playing interface;
a second display module 12, configured to display, in response to an activation operation for a commentary media data display function in the media playing interface, a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface; the virtual object and the commentary media data display area are displayed on the local media data in a covering mode, wherein the local media data are media data in a target area of a media playing interface in target media data;
wherein the second display module 12 comprises: a first display unit 121, a second display unit 122, and a third display unit 123;
a first display unit 121 for displaying a commentary media data control associated with a commentary media data display function in a media playback interface;
A second display unit 122, configured to respond to a triggering operation of the commentary media data control, and display a commentary media data start control;
the third display unit 123 is configured to display, in response to a start operation of a start control for the commentary media data, a virtual object and a commentary media data presentation area associated with the virtual object in a target area of the media playing interface.
Wherein the third display unit 123 includes: a first acquisition subunit 1231, a first display subunit 1232; optionally, the third display unit 123 may further include: a progress acquisition subunit 1233, a second acquisition subunit 1234, a second display subunit 1235;
a first obtaining subunit 1231, configured to obtain a first set of commentary media data associated with the target media data in response to a start operation of the commentary media data start control; the first set of commentary media data includes commentary media data obtained from an initial set of commentary media data associated with the target media data based on the first playback schedule; the first playing progress is the current playing progress of the target media data in the media playing interface;
The first display subunit 1232 is configured to display, based on the first set of commentary media data, the virtual object and a commentary media data presentation area associated with the virtual object in the target area of the media playback interface.
Wherein the first display subunit 1232 comprises: a search processing subunit 12321, a first search subunit 12322, a region display subunit 12323; optionally, the first display sub-unit 1232 may further include: a second lookup subunit 12324;
a search processing subunit 12321, configured to perform a search process on the first set of commentary media data based on the first playing progress, to obtain a search result;
the first searching subunit 12322 is configured to determine, if the searching result indicates that the comment media data corresponding to the first playing progress is found in the first comment media data set, the comment media data corresponding to the first playing progress as target comment media data associated with the target media data;
a region display subunit 12323, configured to determine a virtual object that matches the target commentary media data, and determine a media size of the target commentary media data;
the region display subunit 12323 is configured to determine, according to the media size, a region size of the commentary media data display region associated with the virtual object, and display, in the target region of the media playing interface, the virtual object and the commentary media data display region having the region size.
The area display subunit 12323 is specifically configured to input target media data into the media data analysis model, and perform scene type analysis on the target media data through the media data analysis model to obtain media scene types of the target media data at different playing schedules;
the area display subunit 12323 is specifically configured to obtain, from media scene types of the target media data at different playing schedules, a media scene type of the target media data at the first playing schedule;
the area display subunit 12323 is specifically configured to determine, according to a media scene type of the target media data on the first playing progress, a virtual object that matches the target commentary media data; the object attribute type of the virtual object is matched with the media scene type of the first playing progress.
The area display subunit 12323 is specifically configured to input the target comment media data to a comment media data analysis model, perform semantic analysis on the target comment media data through the comment media data analysis model, and obtain a semantic type of the target comment media data;
the area display subunit 12323 is specifically configured to determine, according to the semantic type, a virtual object that matches the target comment media data; the object attribute type of the virtual object matches the semantic type.
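The matching rule described here — the object attribute type of the chosen virtual object matches the semantic type produced by the commentary media data analysis model — can be illustrated with a lookup table. None of the semantic types or object names below appear in the patent; they are hypothetical.

```python
# Hypothetical attribute-type table: semantic types and object names are
# illustrative only; they demonstrate the rule that the virtual object's
# object attribute type matches the commentary's semantic type.
SEMANTIC_TO_OBJECT = {
    "praise": "applauding character",
    "question": "puzzled character",
    "humor": "laughing character",
}

def match_virtual_object(semantic_type, default="neutral character"):
    """Pick the virtual object whose object attribute type matches the
    semantic type of the target commentary media data."""
    return SEMANTIC_TO_OBJECT.get(semantic_type, default)
```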
Optionally, the second searching subunit 12324 is configured to, if the searching result indicates that the commentary media data corresponding to the first playing progress is not found in the first commentary media data set, continue to search the first commentary media data set based on the second playing progress when the playing progress of the target media data is played from the first playing progress to the second playing progress; the second playing progress is the next playing progress after the first playing progress.
For specific implementation manners of the search processing subunit 12321, the first search subunit 12322, the area display subunit 12323, and the second search subunit 12324, reference may be made to the description of step S204 in the embodiment corresponding to fig. 7, and the details will not be repeated here.
Optionally, the progress obtaining subunit 1233 is configured to obtain a third playing progress corresponding to the target media data; the third playing progress is the playing progress after the first playing progress;
a second obtaining subunit 1234, configured to obtain, if there is no commentary media data associated with the target playing progress interval in the device memory, a second set of commentary media data associated with the target media data based on the third playing progress; the second set of commentary media data includes commentary media data obtained from an initial set of commentary media data associated with the target media data based on the target play progress interval; the minimum playing progress in the target playing progress interval is the third playing progress;
The interval length of the target playing progress interval is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data.
The second display subunit 1235 is configured to display, when the playing progress of the target media data played in the media playing interface is the third playing progress, the updated virtual object and the updated comment media data display area associated with the updated virtual object in the target area of the media playing interface based on the second comment media data set.
For specific implementation manners of the first obtaining subunit 1231, the first display subunit 1232, the progress obtaining subunit 1233, the second obtaining subunit 1234 and the second display subunit 1235, reference may be made to the description of step S203 to step S207 in the embodiment corresponding to fig. 7, which will not be repeated here.
For the specific implementation manner of the first display unit 121, the second display unit 122 and the third display unit 123, refer to the description of step S102 in the embodiment corresponding to fig. 3 and the description of step S201 to step S207 in the embodiment corresponding to fig. 7, which will not be repeated here.
A third display module 13 for displaying, in the commentary media data presentation region, target commentary media data associated with the target media data; the region size of the commentary media data presentation region matches the media size of the target commentary media data.
Optionally, the fourth display module 14 is configured to display, in the media playing interface, a commentary media data update control associated with the commentary media data update function;
the fifth display module 15 is configured to respond to a triggering operation for the comment media data update control, and display N display duration controls; n is a positive integer;
the duration determining module 16 is configured to determine, in response to a trigger operation for a target presentation duration control in the N presentation duration controls, a comment media data presentation duration indicated by the target presentation duration control as a comment media data presentation duration corresponding to the target media data;
the third display module 13 is specifically configured to display, in the comment media data display area, the target comment media data associated with the target media data based on the comment media data display duration corresponding to the target media data.
For specific implementation manners of the first display module 11, the second display module 12, the third display module 13, the fourth display module 14, the fifth display module 15 and the duration determination module 16, reference may be made to the description of the step S101 to the step S103 in the embodiment corresponding to fig. 3 and the description of the step S201 to the step S207 in the embodiment corresponding to fig. 7, which will not be repeated here. In addition, the description of the beneficial effects of the same method is omitted. The first display module 11 corresponds to the first display module 101 in the embodiment corresponding to fig. 15, the second display module 12 corresponds to the second display module 102 in the embodiment corresponding to fig. 15, and the third display module 13 corresponds to the third display module 103 in the embodiment corresponding to fig. 15.
Further, referring to fig. 17, fig. 17 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, where the data processing apparatus 20 may include: a request receiving module 201, a set returning module 202;
a request receiving module 201, configured to receive a comment media data acquisition request for target media data in a media playing interface sent by an application client, and acquire a first comment media data set associated with the target media data according to the comment media data acquisition request; the comment media data acquisition request is sent by the application client after responding to the starting operation of the comment media data display function in the media playing interface;
a set return module 202, configured to return the first set of commentary media data to the application client, so that the application client displays, based on the first set of commentary media data, the virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface, and displays, in the commentary media data display area, target commentary media data associated with the target media data; the virtual object and the commentary media data display area are displayed on the local media data in a covering mode, wherein the local media data are media data in a target area of a media playing interface in target media data; the first set of commentary media data includes target commentary media data; the region size of the commentary media data presentation region matches the media size of the target commentary media data.
The specific implementation manner of the request receiving module 201 and the set returning module 202 may refer to the description of step S301 to step S302 in the embodiment corresponding to fig. 11, which will not be described herein again. In addition, the description of the beneficial effects of the same method is omitted.
Further, referring to fig. 18, fig. 18 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, where the data processing apparatus 2 may include: a request receiving module 21, a set returning module 22; further, the data processing apparatus 2 may further include: the system comprises a data acquisition module 23, a filtering processing module 24, an aggregation processing module 25, a set auditing module 26 and a result receiving module 27;
a request receiving module 21, configured to receive a request for obtaining commentary media data for target media data in a media playing interface sent by an application client, and obtain a first set of commentary media data associated with the target media data according to the request for obtaining commentary media data; the comment media data acquisition request is sent by the application client after responding to the starting operation of the comment media data display function in the media playing interface;
A set returning module 22, configured to return the first set of commentary media data to the application client, so that the application client displays, based on the first set of commentary media data, the virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface, and displays, in the commentary media data display area, target commentary media data associated with the target media data; the virtual object and the commentary media data display area are displayed on the local media data in a covering mode, wherein the local media data are media data in a target area of a media playing interface in target media data; the first set of commentary media data includes target commentary media data; the region size of the commentary media data presentation region matches the media size of the target commentary media data.
Optionally, the data obtaining module 23 is configured to obtain a comprehensive commentary media data set associated with the target media data, and obtain key text filtering data for filtering the comprehensive commentary media data set;
the filtering processing module 24 is configured to perform filtering processing on the comment media data including the key text filtering data in the integrated comment media data set, and determine the integrated comment media data set after the filtering processing as a filtered comment media data set;
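By way of illustration only, and not as part of the disclosed apparatus, the keyword-based filtering performed by the filtering processing module 24 can be sketched in Python as follows; the function name, the dictionary fields and the substring-matching rule are all hypothetical simplifications:

```python
def filter_commentary_set(comprehensive_set, key_text_filter_data):
    """Drop every commentary item whose text contains any key text
    filtering datum, yielding the filtered commentary media data set.

    comprehensive_set: list of dicts such as {"text": str, "time": int}
    key_text_filter_data: iterable of keyword strings to filter out
    """
    keywords = list(key_text_filter_data)
    return [
        item for item in comprehensive_set
        if not any(kw in item["text"] for kw in keywords)
    ]
```

A production system would more likely match against normalized or segmented text rather than raw substrings, but the effect is the same: items containing filtered key text never reach the aggregation step.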
The aggregation processing module 25 is configured to aggregate the comment media data in the filtered comment media data set to obtain an initial comment media data set associated with the target media data;
wherein the aggregation processing module 25 includes: a data dividing unit 251, a time mapping unit 252, a first constructing unit 253;
the data dividing unit 251 is configured to obtain a comment media time indicated by comment media data in the filtered comment media data set, and divide the comment media data in the filtered comment media data set according to the comment media time, so as to obtain a divided comment media data set; the comment media time indicated by each comment media data in one divided comment media data set is the same;
the time mapping unit 252 is configured to obtain the comprehensive semantic similarity corresponding to each piece of commentary media data in a divided commentary media data set, and use the commentary media data with the largest comprehensive semantic similarity in the divided commentary media data set as the commentary media data having a mapping relationship with the corresponding commentary media time; the comprehensive semantic similarity corresponding to one piece of commentary media data is determined by the semantic similarities between that commentary media data and each piece of commentary media data in the divided commentary media data set to which it belongs;
The first constructing unit 253 is configured to construct the commentary media data having a mapping relationship with the commentary media time into an initial commentary media data set associated with the target media data.
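As a non-limiting sketch of the aggregation carried out by the data dividing unit 251, the time mapping unit 252 and the first constructing unit 253, the following Python code divides commentary by comment media time and keeps, per time, the item with the largest comprehensive semantic similarity. The Jaccard token overlap is a toy stand-in for whatever semantic similarity model the embodiment actually uses, and all names are hypothetical:

```python
from collections import defaultdict


def token_overlap(a, b):
    """Toy stand-in for a semantic similarity model: Jaccard token overlap."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def aggregate_commentary(filtered_set, similarity=token_overlap):
    """For each comment media time, keep the single commentary item whose
    comprehensive semantic similarity (sum of similarities to every item
    sharing that time) is largest; return the initial set ordered by time."""
    groups = defaultdict(list)
    for item in filtered_set:
        groups[item["time"]].append(item)

    initial_set = []
    for _time, items in sorted(groups.items()):
        best = max(
            items,
            key=lambda x: sum(similarity(x["text"], y["text"]) for y in items),
        )
        initial_set.append(best)
    return initial_set
```

The sum over the group implements the "comprehensive" similarity: the retained item is the one most representative of everything posted at that comment media time, so each time maps to exactly one piece of commentary.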
For specific implementation manners of the data dividing unit 251, the time mapping unit 252 and the first constructing unit 253, reference may be made to the description of step S403 in the embodiment corresponding to fig. 12, and the description thereof will not be repeated here.
The request receiving module 21 is specifically configured to obtain, from the initial set of commentary media data, a first set of commentary media data associated with the target media data according to a first playing progress carried by the commentary media data obtaining request.
Wherein the request receiving module 21 includes: a section determining unit 211, a second constructing unit 212;
the interval determining unit 211 is configured to determine a playing progress interval to be requested corresponding to the first playing progress according to the first playing progress carried by the commentary media data obtaining request; the minimum playing progress in the playing progress interval to be requested is the first playing progress; the interval length of the playing progress interval to be requested is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data;
The second constructing unit 212 is configured to acquire, from the initial set of commentary media data, the commentary media data in the playing progress interval to be requested, and construct the commentary media data in the playing progress interval to be requested into a first set of commentary media data associated with the target media data.
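The interval selection performed by the interval determining unit 211 and the second constructing unit 212 can be sketched as follows (illustrative only; the function and field names are hypothetical). The playing progress interval to be requested starts at the first playing progress and spans the preset request interval duration:

```python
def build_first_set(initial_set, first_progress, request_interval):
    """Return commentary whose comment media time lies in the playing
    progress interval to be requested: [first_progress,
    first_progress + request_interval)."""
    lo, hi = first_progress, first_progress + request_interval
    return [item for item in initial_set if lo <= item["time"] < hi]
```

Fetching one bounded interval at a time, rather than the whole initial set, is what lets the request interval duration stay smaller than the total playing duration of the target media data.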
For a specific implementation manner of the interval determining unit 211 and the second configuring unit 212, reference may be made to the description of step S406 in the embodiment corresponding to fig. 12, which will not be repeated here.
Optionally, the set auditing module 26 is configured to send the initial set of commentary media data to a management device for monitoring the application client, so that the management device audits the initial set of commentary media data to obtain an audit result;
the result receiving module 27 is configured to receive an audit result returned by the management device, and if the audit result is an audit passing result, execute a step of acquiring a first set of commentary media data associated with the target media data from the initial set of commentary media data according to a first playing progress carried by the commentary media data acquisition request.
The specific implementation manners of the request receiving module 21, the set returning module 22, the data obtaining module 23, the filtering processing module 24, the aggregation processing module 25, the set auditing module 26 and the result receiving module 27 may refer to the description of step S301 to step S302 in the embodiment corresponding to fig. 11 and step S401 to step S407 in the embodiment corresponding to fig. 12, which will not be repeated here. In addition, the description of the beneficial effects of the same method is omitted. The request receiving module 21 corresponds to the request receiving module 201 in the embodiment corresponding to fig. 17, and the set returning module 22 corresponds to the set return module 202 in the embodiment corresponding to fig. 17.
Further, referring to fig. 19, fig. 19 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 19, the computer device 1000 may be the terminal device 20b shown in fig. 2 or the server 20a shown in fig. 2, and the computer device 1000 may include: a processor 1001, a network interface 1004 and a memory 1005; in addition, the computer device 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to enable connection communication between these components. In some embodiments, the user interface 1003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may further include a standard wired interface and a wireless interface. Optionally, the network interface 1004 may include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory 1005 may also be at least one storage device located remotely from the aforementioned processor 1001. As shown in fig. 19, the memory 1005, which is one type of computer-readable storage medium, may include an operating system, a network communication module, a user interface module and a device control application program.
In the computer device 1000 shown in fig. 19, the network interface 1004 may provide a network communication function; the user interface 1003 is mainly used for providing an input interface for a user; and the processor 1001 may be used to invoke the device control application program stored in the memory 1005.
It should be understood that the computer device 1000 described in the embodiments of the present application may perform the description of the data processing method in the embodiments corresponding to fig. 3, 7, 11 or 12, and may also perform the description of the data processing device 10 in the embodiments corresponding to fig. 15, the data processing device 1 in the embodiments corresponding to fig. 16, the data processing device 20 in the embodiments corresponding to fig. 17 or the data processing device 2 in the embodiments corresponding to fig. 18, which are not described herein. In addition, the description of the beneficial effects of the same method is omitted.
Furthermore, it should be noted here that: the embodiments of the present application further provide a computer readable storage medium, in which the aforementioned computer program executed by the data processing apparatus 10, the data processing apparatus 1, the data processing apparatus 20, or the data processing apparatus 2 is stored, and the computer program includes program instructions, when executed by a processor, can execute the description of the data processing method in the embodiment corresponding to fig. 3, fig. 7, fig. 11, or fig. 12, and therefore, will not be described herein in detail. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium according to the present application, please refer to the description of the method embodiments of the present application.
In addition, it should be noted that: embodiments of the present application also provide a computer program product or computer program that may include computer instructions that may be stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor may execute the computer instructions, so that the computer device performs the description of the data processing method in the embodiment corresponding to fig. 3, fig. 7, fig. 11, or fig. 12, which will not be described herein. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the computer program product or the computer program embodiments related to the present application, please refer to the description of the method embodiments of the present application.
Further, referring to fig. 20, fig. 20 is a schematic structural diagram of a data processing system according to an embodiment of the present application. Data processing system 300 may include a data processing device 300a and a data processing device 300b. The data processing apparatus 300a may be the data processing apparatus 10 in the embodiment corresponding to fig. 15 or the data processing apparatus 1 in the embodiment corresponding to fig. 16, and it is understood that the data processing apparatus 300a may be integrated with the terminal device 20b in the embodiment corresponding to fig. 2, and therefore, a detailed description thereof will not be provided here. The data processing device 300b may be the data processing device 20 in the embodiment corresponding to fig. 17 or the data processing device 2 in the embodiment corresponding to fig. 18, and it is understood that the data processing device 300b may be integrated with the server 20a in the embodiment corresponding to fig. 2, and therefore, a detailed description thereof will not be provided here. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the data processing system according to the present application, please refer to the description of the method embodiments of the present application.
Those skilled in the art will appreciate that all or part of the flows in the above-described methods may be implemented by a computer program stored in a computer-readable storage medium; when executed, the program may include the flows of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The foregoing disclosure is merely illustrative of preferred embodiments of the present application and is not intended to limit the scope of the claims of the present application; equivalent variations made according to the claims of the present application shall still fall within the scope of the present application.

Claims (20)

1. A method of data processing, comprising:
displaying target media data in a media playing interface;
responding to the starting operation of the commentary media data display function in the media playing interface, and displaying a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface; the virtual object and the commentary media data display area are displayed on local media data in an overlaying mode, wherein the local media data are media data in a target area of the media playing interface in the target media data;
Displaying, in the commentary media data presentation region, target commentary media data associated with the target media data; the region size of the commentary media data display region matches the media size of the target commentary media data.
2. The method of claim 1, wherein the displaying a virtual object and a commentary media data presentation area associated with the virtual object in a target area of the media playback interface in response to an initiation of a commentary media data display function in the media playback interface, comprises:
displaying a commentary media data control associated with a commentary media data display function in the media play interface;
responding to the triggering operation of the comment media data control, and displaying a comment media data starting control;
and responding to the starting operation of the starting control for the commentary media data, and displaying a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface.
3. The method according to claim 1, wherein the method further comprises:
Displaying a commentary media data update control associated with a commentary media data update function in the media play interface;
responding to the triggering operation of the commentary media data update control, and displaying N display duration controls; N is a positive integer;
responding to the triggering operation of a target display duration control in the N display duration controls, and determining the display duration of the commentary media data indicated by the target display duration control as the commentary media data display duration corresponding to the target media data;
displaying, in the commentary media data presentation region, target commentary media data associated with the target media data, comprising:
and displaying target comment media data associated with the target media data in the comment media data display area based on the comment media data display duration corresponding to the target media data.
4. The method of claim 2, wherein the responding to the actuation of the actuation control for the commentary media data displays a virtual object and a commentary media data presentation region associated with the virtual object in a target region of the media playback interface, comprising:
Responsive to an actuation operation for the commentary media data actuation control, obtaining a first set of commentary media data associated with the target media data; the first set of commentary media data includes commentary media data obtained from an initial set of commentary media data associated with the target media data based on a first playback schedule; the first playing progress is the current playing progress of the target media data in the media playing interface;
based on the first set of commentary media data, a virtual object and a commentary media data presentation area associated with the virtual object are displayed in a target area of the media playback interface.
5. The method of claim 4, wherein the displaying a virtual object and a commentary media data presentation area associated with the virtual object in a target area of the media playback interface based on the first set of commentary media data comprises:
searching the first comment media data set based on the first playing progress to obtain a searching result;
if the searching result indicates that the commentary media data corresponding to the first playing progress is searched in the first commentary media data set, determining the commentary media data corresponding to the first playing progress as target commentary media data associated with the target media data;
Determining a virtual object matched with the target commentary media data, and determining the media size of the target commentary media data;
and determining the area size of the commentary media data display area associated with the virtual object according to the media size, and displaying the virtual object and the commentary media data display area with the area size in the target area of the media playing interface.
6. The method of claim 5, wherein the determining a virtual object that matches the target commentary media data comprises:
inputting the target media data into a media data analysis model, and analyzing scene types of the target media data through the media data analysis model to obtain media scene types of the target media data at different playing progress;
acquiring the media scene type of the target media data in the first playing progress from the media scene types of the target media data in different playing progress;
determining a virtual object matched with the target comment media data according to the media scene type of the target media data in the first playing progress; the object attribute type of the virtual object is matched with the media scene type of the first playing progress.
7. The method of claim 5, wherein the determining a virtual object that matches the target commentary media data comprises:
inputting the target comment media data into a comment media data analysis model, and carrying out semantic analysis on the target comment media data through the comment media data analysis model to obtain the semantic type of the target comment media data;
determining a virtual object matched with the target commentary media data according to the semantic type; the object attribute type of the virtual object is matched with the semantic type.
8. The method of claim 5, wherein the method further comprises:
if the search result indicates that the commentary media data corresponding to the first playing progress is not found in the first commentary media data set, continuing to search the first commentary media data set based on a second playing progress when the target media data is played from the first playing progress to the second playing progress; the second playing progress is the next playing progress after the first playing progress.
9. The method according to claim 4, wherein the method further comprises:
acquiring a third playing progress corresponding to the target media data; the third playing progress is the playing progress after the first playing progress;
if commentary media data associated with a target playing progress interval does not exist in the device memory, acquiring a second commentary media data set associated with the target media data based on the third playing progress; the second set of commentary media data includes commentary media data obtained from the initial set of commentary media data associated with the target media data based on the target playing progress interval; the minimum playing progress in the target playing progress interval is the third playing progress;
and when the playing progress of the target media data played in the media playing interface is the third playing progress, displaying an updated virtual object and an updated comment media data display area associated with the updated virtual object in a target area of the media playing interface based on the second comment media data set.
10. The method of claim 9, wherein the interval length of the target playing progress interval is a preconfigured request interval duration, and the request interval duration is less than a total playing duration of the target media data.
11. A method of data processing, comprising:
receiving a comment media data acquisition request for target media data in a media playing interface sent by an application client, and acquiring a first comment media data set associated with the target media data according to the comment media data acquisition request; the comment media data acquisition request is sent by the application client after responding to the starting operation of the comment media data display function in the media playing interface;
returning the first set of commentary media data to the application client to cause the application client to display a virtual object and a commentary media data presentation area associated with the virtual object in a target area of the media playing interface based on the first set of commentary media data, in which commentary media data presentation area target commentary media data associated with the target media data is displayed; the virtual object and the commentary media data display area are displayed on local media data in an overlaying mode, wherein the local media data are media data in a target area of the media playing interface in the target media data; the first set of commentary media data includes the target commentary media data; the region size of the commentary media data display region matches the media size of the target commentary media data.
12. The method of claim 11, wherein the method further comprises:
acquiring a comprehensive commentary media data set associated with target media data, and acquiring key text filtering data for filtering the comprehensive commentary media data set;
filtering the comment media data containing the key text filtering data in the comprehensive comment media data set, and determining the comprehensive comment media data set after the filtering process as a filtered comment media data set;
aggregating the comment media data in the filtered comment media data set to obtain the initial comment media data set associated with the target media data;
the acquiring, according to the commentary media data acquisition request, a first commentary media data set associated with the target media data, including:
and acquiring a first commentary media data set associated with the target media data from the initial commentary media data set according to a first playing progress carried by the commentary media data acquisition request.
13. The method of claim 12, wherein aggregating the commentary media data in the filtered set of commentary media data to obtain an initial set of commentary media data associated with the target media data, comprises:
obtaining comment media time indicated by the comment media data in the filtered comment media data set, and dividing the comment media data in the filtered comment media data set according to the comment media time to obtain divided comment media data sets; the comment media time indicated by each piece of comment media data in one divided comment media data set is the same;
obtaining comprehensive semantic similarity corresponding to each piece of comment media data in the divided comment media data set, and taking the comment media data with the largest comprehensive semantic similarity in the divided comment media data set as the comment media data having a mapping relationship with the comment media time; the comprehensive semantic similarity corresponding to one piece of commentary media data is determined by the semantic similarities between that commentary media data and each piece of commentary media data in the divided commentary media data set to which it belongs;
and constructing the commentary media data with the mapping relation with the commentary media time into an initial commentary media data set associated with the target media data.
14. The method of claim 12, wherein the obtaining the first set of commentary media data associated with the target media data from the initial set of commentary media data in accordance with the first progress of playback carried by the commentary media data obtaining request comprises:
determining a playing progress interval to be requested corresponding to the first playing progress according to the first playing progress carried by the commentary media data acquisition request; the minimum playing progress in the playing progress interval to be requested is the first playing progress; the interval length of the playing progress interval to be requested is a preset request interval duration, and the request interval duration is smaller than the total playing duration of the target media data;
and acquiring the commentary media data in the progress interval to be requested from the initial commentary media data set, and forming a first commentary media data set associated with the target media data by the commentary media data in the progress interval to be requested.
15. The method according to claim 12, wherein the method further comprises:
Transmitting the initial commentary media data set to a management device for monitoring the application client so that the management device can audit the initial commentary media data set to obtain an audit result;
and receiving an auditing result returned by the management equipment, and executing the step of acquiring a first commentary media data set associated with the target media data from the initial commentary media data set according to the first playing progress carried by the commentary media data acquisition request if the auditing result is an auditing passing result.
16. A data processing apparatus, comprising:
the first display module is used for displaying target media data in the media playing interface;
the second display module is used for responding to the starting operation of the commentary media data display function in the media playing interface, and displaying a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface; the virtual object and the commentary media data display area are displayed on local media data in an overlaying mode, wherein the local media data are media data in a target area of the media playing interface in the target media data;
A third display module for displaying, in the commentary media data presentation region, target commentary media data associated with the target media data; the region size of the commentary media data display region matches the media size of the target commentary media data.
17. A data processing apparatus, comprising:
the request receiving module is used for receiving a comment media data acquisition request which is sent by an application client and aims at target media data in a media playing interface, and acquiring a first comment media data set associated with the target media data according to the comment media data acquisition request; the comment media data acquisition request is sent by the application client after responding to the starting operation of the comment media data display function in the media playing interface;
a set return module, configured to return the first set of commentary media data to the application client, so that the application client displays, based on the first set of commentary media data, a virtual object and a commentary media data display area associated with the virtual object in a target area of the media playing interface, and displays, in the commentary media data display area, target commentary media data associated with the target media data; the virtual object and the commentary media data display area are displayed on local media data in an overlaying mode, wherein the local media data are media data in a target area of the media playing interface in the target media data; the first set of commentary media data includes the target commentary media data; the region size of the commentary media data display region matches the media size of the target commentary media data.
18. A computer device, comprising: a processor and a memory;
the processor is connected to the memory, wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program to cause the computer device to perform the method of any of claims 1-15.
19. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the method of any of claims 1-15.
20. A computer program product, characterized in that it comprises computer instructions stored in a computer-readable storage medium and adapted to be read and executed by a processor to cause a computer device with the processor to perform the method of any of claims 1-15.
CN202111303768.2A 2021-11-05 2021-11-05 Data processing method, device, computer equipment and readable storage medium Pending CN116095381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111303768.2A CN116095381A (en) 2021-11-05 2021-11-05 Data processing method, device, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111303768.2A CN116095381A (en) 2021-11-05 2021-11-05 Data processing method, device, computer equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116095381A true CN116095381A (en) 2023-05-09

Family

ID=86199658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111303768.2A Pending CN116095381A (en) 2021-11-05 2021-11-05 Data processing method, device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116095381A (en)

Similar Documents

Publication Publication Date Title
KR102299379B1 (en) Determining search queries to obtain information during the user experience of an event
CN109118290B (en) Method, system, and computer-readable non-transitory storage medium
CN110149558A (en) A kind of video playing real-time recommendation method and system based on content recognition
US11023100B2 (en) Methods, systems, and media for creating and updating a group of media content items
US11899907B2 (en) Method, apparatus and device for displaying followed user information, and storage medium
CN110168541B (en) System and method for eliminating word ambiguity based on static and time knowledge graph
CN110475140B (en) Bullet screen data processing method and device, computer readable storage medium and computer equipment
US10958973B2 (en) Deriving and identifying view preferences of a user consuming streaming content
US8744240B2 (en) Video distribution system, information providing device, and video information providing method for distributing video to a plurality of receiving terminals
CN111966909B (en) Video recommendation method, device, electronic equipment and computer readable storage medium
WO2022257683A1 (en) Method and apparatus for searching for content, device, and medium
US10674183B2 (en) System and method for perspective switching during video access
CN111615002B (en) Video background playing control method, device and system and electronic equipment
CN113253880B (en) Method and device for processing pages of interaction scene and storage medium
CN111279709A (en) Providing video recommendations
US20230285854A1 (en) Live video-based interaction method and apparatus, device and storage medium
CN109245989A (en) A kind of processing method, device and computer readable storage medium shared based on information
CN113852767B (en) Video editing method, device, equipment and medium
US10701164B2 (en) Engaged micro-interactions on digital devices
KR102316822B1 (en) Method, apparatus, and computer program for providing content based on user reaction related to video
CN116095381A (en) Data processing method, device, computer equipment and readable storage medium
JP2022082453A (en) Method, computer system and computer program for media consumption gap filling (gap filling using personalized injectable media)
CN112533032B (en) Video data processing method and device and storage medium
CN116647735A (en) Barrage data processing method, barrage data processing device, electronic equipment, medium and program product
CN116192788A (en) Information processing method, device, equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination