CN116225580A - Data processing method, apparatus, device, storage medium, and program product - Google Patents

Data processing method, apparatus, device, storage medium, and program product

Info

Publication number
CN116225580A
Authority
CN
China
Prior art keywords
subtitle
font data
character
style
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111467023.XA
Other languages
Chinese (zh)
Inventor
田驰
郑吉剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111467023.XA priority Critical patent/CN116225580A/en
Publication of CN116225580A publication Critical patent/CN116225580A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the present application provide a data processing method, apparatus, device, storage medium, and program product. The method includes the following steps: a client acquires a target subtitle style and sends a font data request to a server, where the font data request is determined according to the target subtitle style and a subtitle file of a target video, the subtitle file includes a subtitle character set, and the font data request is used to request the server to return a font data set matching the target subtitle style and the subtitle character set; the server receives the font data request, generates the font data set in response to the request, and sends the font data set to the client; the client receives the font data set and, while playing the target video, displays the subtitles corresponding to the subtitle file according to the font data set. According to the embodiments of the present application, the subtitle style can be adjusted as needed, and the processing efficiency is high.

Description

Data processing method, apparatus, device, storage medium, and program product
Technical Field
The present application relates to the field of computer technology, and in particular, to a data processing method, a data processing apparatus, a computer device, a computer readable storage medium, and a computer program product.
Background
A subtitle (subtitles of motion picture) refers to non-audio content, such as the dialogue in television programs, films, and stage works, presented in text form, and also to text added to film and television works in post-production. Explanatory text and other text appearing on the movie screen or in the video playing interface, such as the film title, the credits, lyrics, dialogue, commentary, and introductions of characters, place names, and dates, are collectively called subtitles.
At present, for a video playing terminal or a video playing application program, the style of the subtitles displayed in the video playing interface is fixed and cannot be adjusted or set according to the needs of the user.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device, computer equipment and a storage medium, which can realize the adjustment of the style of subtitles according to the needs and have high processing efficiency.
In one aspect, an embodiment of the present application provides a data processing method, where the method includes:
acquiring a target subtitle style;
sending a font data request to a server; the font data request is determined according to the target subtitle style and a subtitle file of a target video, the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return a font data set matched with the target subtitle style and the subtitle character set;
And receiving the font data set returned by the server, and displaying the subtitle corresponding to the subtitle file according to the font data set in the process of playing the target video.
In one aspect, an embodiment of the present application provides another data processing method, where the method includes:
receiving a font data request sent by a client; the font data request is determined according to a target subtitle style and a subtitle file of a target video, wherein the subtitle file comprises a subtitle character set;
responding to the font data request, and acquiring a font data set matched with the target subtitle style and the subtitle character set;
and sending the font data set to the client so that the client displays the subtitles corresponding to the subtitle files according to the font data set in the process of playing the target video.
In one aspect, an embodiment of the present application provides a subtitle processing apparatus, including:
the processing unit is used for acquiring a target subtitle style;
the receiving and transmitting unit is used for transmitting a font data request to the server; the font data request is determined according to the target subtitle style and a subtitle file of a target video, the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return a font data set matched with the target subtitle style and the subtitle character set;
The receiving and transmitting unit is also used for receiving the font data set returned by the server;
and the display unit is used for displaying the subtitles corresponding to the subtitle files according to the font data set in the process of playing the target video.
In an embodiment, the processing unit is specifically configured to: displaying a video playing interface, and displaying one or more subtitle style options in the video playing interface;
and responding to a selection operation for the one or more subtitle style options, and determining a subtitle style corresponding to the subtitle style option selected by the selection operation as the target subtitle style.
In an embodiment, the processing unit is further configured to: displaying a subtitle style setting control in the video playing interface;
and displaying the one or more subtitle style options in the video playing interface when the triggering operation of the subtitle style setting control is detected.
In an embodiment, the processing unit is further configured to: determining a request character set according to the subtitle character set;
determining the font data request according to the target subtitle style and the request character set;
Wherein the glyph data request is for requesting the server to return a glyph data set that matches the target subtitle style and the requested character set.
In an embodiment, the processing unit is further configured to: if redundant characters exist in the caption character set, performing redundancy elimination processing on the caption character set;
and determining the request character set according to the subtitle character set subjected to redundancy removal processing.
In an embodiment, the subtitle character set includes one or more segments of subtitle text, and the subtitle file further includes a display rule for each segment of subtitle text;
wherein the displaying the subtitle of the target video according to the glyph data set and the subtitle file includes:
for a first caption character string, acquiring the font data of each character in the first caption character string from the font data set, and generating a second caption character string according to the font data of each character; the style of the characters in the second caption character string is the target caption style, and the first caption character string is any caption character in the one or more sections of caption characters;
And displaying the second caption character string in a video playing interface according to a display rule corresponding to the first caption character string included in the caption file.
In an embodiment, the processing unit is further configured to: if a historical font data set corresponding to the target subtitle style is recorded, determining a request character set according to the historical font data set and the subtitle character set;
wherein, the request characters in the request character set are contained in the caption character set, and the historical font data set does not contain font data corresponding to the request characters;
wherein, in the process of playing the target video, displaying the caption of the target video according to the font data set and the caption file comprises:
and displaying the subtitles of the target video according to the font data set returned by the server, the history font data set and the subtitle file in the process of playing the target video.
In an embodiment, the processing unit is further configured to: after the font data set matched with the target subtitle style and the request character set is acquired, updating the historical font data set according to the font data set matched with the target subtitle style and the request character set.
In one aspect, an embodiment of the present application provides another subtitle processing apparatus, including:
a receiving and transmitting unit: the method comprises the steps of receiving a font data request sent by a client; the font data request is determined according to a target subtitle style and a subtitle file of a target video, wherein the subtitle file comprises a subtitle character set;
and a processing unit: the font data processing unit is used for responding to the font data request and acquiring a font data set matched with the target subtitle style and the subtitle character set;
the receiving and transmitting unit: and the method is also used for sending the font data set to the client so that the client displays the subtitles corresponding to the subtitle files according to the font data set in the process of playing the target video.
In an embodiment, the processing unit is specifically configured to: inquiring a font database corresponding to the target subtitle style, and acquiring font data of each subtitle character in the subtitle character set;
and generating the font data set matched with the target subtitle style and the subtitle character set according to the font data of each subtitle character.
In an embodiment, the processing unit is further configured to: before acquiring the font data set matched with the target subtitle style and the subtitle character set, if redundant characters exist in the subtitle character set, performing redundancy elimination processing on the subtitle character set to acquire a new subtitle character set, and responding to the font data request, acquiring the font data set matched with the target subtitle style and the new subtitle character set.
In one aspect, embodiments of the present application provide a computer device, including a processor, a communication interface, and a memory that are connected to each other, where the memory stores executable program code, and the processor is configured to call the executable program code to implement the data processing method provided by the embodiments of the present application.
Accordingly, the embodiments of the present application also provide a computer readable storage medium storing instructions which, when run on a computer, cause the computer to implement the data processing method provided by the embodiments of the present application.
Accordingly, embodiments of the present application also provide a computer program product comprising a computer program or computer instructions which, when executed by a processor, implement the steps of the data processing method provided by the embodiments of the present application.
Accordingly, the embodiment of the application further provides a computer program, the computer program includes computer instructions, the computer instructions are stored in a computer readable storage medium, a processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device realizes the data processing method provided by the embodiment of the application.
In the embodiments of the present application, a client acquires a target subtitle style and sends a font data request to a server, where the font data request is determined according to the target subtitle style and a subtitle file of a target video, the subtitle file includes a subtitle character set, and the font data request is used to request the server to return a font data set matching the target subtitle style and the subtitle character set; the server receives the font data request, generates the font data set in response to the request, and sends it to the client; the client receives the font data set and, while playing the target video, displays the subtitles corresponding to the subtitle file according to the font data set. In this way, the subtitles of the target video can be displayed in the target subtitle style, so the subtitle style can be adjusted as needed. In addition, only the font data corresponding to the subtitle characters needs to be downloaded, and because the subtitle characters of the target video are usually only a part of the characters in a word stock, the whole word stock does not need to be downloaded. This avoids the long download time and high traffic consumption caused by downloading a large word stock and improves the efficiency of subtitle processing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a diagram of a quadratic Bézier curve provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a data processing scheme provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of another data processing scheme provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a library download guide interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of yet another data processing scheme provided by an embodiment of the present application;
fig. 6 is a schematic display diagram of a character of a dot matrix picture and a character of a vector picture according to an embodiment of the present application;
FIG. 7 is a network architecture to which the data processing method according to the embodiment of the present application is applicable;
FIG. 8 is a schematic diagram of yet another data processing scheme provided by an embodiment of the present application;
FIG. 9 is a schematic flow chart of a data processing method according to an embodiment of the present application;
fig. 10a is a schematic diagram of a video playing interface according to an embodiment of the present application;
FIG. 10b is a schematic diagram of another video playback interface according to an embodiment of the present disclosure;
FIG. 11a is a schematic diagram of yet another video playback interface according to an embodiment of the present application;
FIG. 11b is a schematic diagram of yet another video playback interface according to an embodiment of the present application;
FIG. 11c is a schematic diagram of yet another video playback interface according to an embodiment of the present application;
FIG. 11d is a schematic diagram of yet another video playback interface according to an embodiment of the present application;
FIG. 11e is a schematic diagram of yet another video playback interface according to an embodiment of the present application;
FIG. 12 is a schematic diagram of features of a vector character format in the x-axis direction;
FIG. 13 is a schematic view of a feature of a vector character format in the y-axis direction;
FIG. 14 is a flowchart of another data processing method according to an embodiment of the present disclosure;
FIG. 15 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 16 is a schematic diagram of another data processing apparatus according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a computer device according to an embodiment of the present application;
Fig. 18 is a schematic structural diagram of another computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of protection of the present application.
For a better understanding of embodiments of the present application, some terms related to embodiments of the present application are described below:
character data: the data used for rendering the text is usually a description of vector graphics, such as drawing a transverse line from a point A to a point B, drawing a Bezier curve from a point C to a point D, and the parameters are X, Y, Z and the like. It is understood that each glyph is described by a series of points on the grid. Although two points on the curve are sufficient to describe a straight line, adding a third off-curve point between the two points on the curve may describe a parabola. In this case, the point on each curve represents the end point of the curve, and the point outside the curve is the control point. Altering the position of any one of the three points alters the shape of the defined curve. Fig. 1 is a bezier conic diagram provided in the embodiment of the present application, where the definition of the conic curve is shown in fig. 1: given three points p0, p1, p2, they define a curve from point p0 to point p2, where p1 is a point offset from the curve. The control point p1 is located at the intersection of the tangents to the curves at points p0 and p 2. Thus p0, p1 are tangent to the curve at point p 0. Similarly, p2, p1 is tangent to the curve at point p 2. The curves specified by these three points are defined by parametric equations. For t, p (t) positions in the range of 0 to 1 are as follows:
p(t) = (1 − t)²·p0 + 2t(1 − t)·p1 + t²·p2, where 0 ≤ t ≤ 1
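As an illustration only (not part of the embodiment), the following minimal Java sketch evaluates this quadratic Bézier equation for a single coordinate; the class name and the sample point values are hypothetical:

public final class QuadraticBezier {
    // Evaluates one coordinate (x or y) of p(t) = (1-t)^2*p0 + 2t(1-t)*p1 + t^2*p2.
    static double evaluate(double p0, double p1, double p2, double t) {
        double u = 1.0 - t;
        return u * u * p0 + 2.0 * t * u * p1 + t * t * p2;
    }

    public static void main(String[] args) {
        // Hypothetical on-curve points (0,0) and (100,0) with control point (50,100).
        for (double t = 0.0; t <= 1.0; t += 0.25) {
            double x = evaluate(0, 50, 100, t);
            double y = evaluate(0, 100, 0, t);
            System.out.printf("t=%.2f -> (%.1f, %.1f)%n", t, x, y);
        }
    }
}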
Word stock: the character data set containing a certain character set can be divided into a dot matrix character library and a vector character library, wherein the dot matrix character library is used for dividing each Chinese character into 16 multiplied by 16 or 24 multiplied by 24 points, and then the outline of the Chinese character is represented by the virtual reality of each point, and the dot matrix character library is commonly used as a display character library, and the biggest defect of the Chinese character in the dot matrix character library is that the dot matrix character library cannot be amplified, and saw teeth at the edges of the characters can be found once the dot matrix character library is amplified. The vector character library is to decompose the strokes of each character into various straight lines and curves, record the parameters of the straight lines and curves, and draw the lines according to the specific size when the parameters are displayed, so that the original character is restored. Its advantages are random enlargement and reduction without distortion, and no relation between required memory and character size. There are many types of vector word libraries, differing in that they employ different mathematical models to describe the lines that make up a character. As described above, glyph data is often stored by vector word libraries, such as the common True Type Font (TTF) format word library.
In order to achieve the adjustment of subtitle style according to needs, the embodiments of the present application provide several ways, including:
Mode one: FIG. 2 is a schematic diagram of a data processing scheme provided in an embodiment of the present application. As shown in FIG. 2, after the system (Android/iOS, etc.) starts up, the device performs a self-check on its existing word stocks, or the system word stocks are checked when the playback APP starts, to determine whether a new word stock needs to be configured. If a new word stock is needed (for example, a word stock required for the APP to start, which is downloaded as a new word stock file, or a word stock already packaged into the APP at installation time), the new word stock is configured to the system or to a location the APP can call; if no new word stock is needed, the existing word stock is configured directly to the system or to a location the APP can call. For example, fonts are installed in the Android system as follows: font pre-installation must be done at the compile stage of the Android system, and the pre-installed fonts are provided to apps after the system starts; if an app needs fonts other than the system fonts, it has to go through its own font download and parsing logic.
Mode two: FIG. 3 is a schematic diagram of another data processing scheme provided in an embodiment of the present application. As shown in FIG. 3, when a video is played, the subtitle file is parsed first and the required word stock is checked to determine whether the current system already has the required word stock file. If it does, the system interface (Android/iOS) is called directly to display the subtitle file; if it does not, the user is guided to download the word stock file, or guided to download the required word stock file from a designated server by himself or herself, and after the download and configuration are completed, the subtitle file is displayed through the system interface of the device.
When the user is guided to download the word stock file, as shown in FIG. 4, the font "Microsoft YaHei" is displayed in the video playing interface, and the file size of the "Microsoft YaHei" word stock is 3.7 MB. The user can select the "skip" or "download" button to proceed to the next operation.
Mode three: FIG. 5 is a schematic diagram of yet another data processing scheme provided in an embodiment of the present application. As shown in FIG. 5, when a video is played, the subtitle file is parsed first and the required word stock is checked to determine whether the current system has the required word stock file. If it does, the system interface is called directly to display the subtitles; if it does not, the resources for the current subtitles are requested from the server. These resources may be packaged font files or pictures of the required characters; they are downloaded to the local device, and the subtitles are displayed by calling the system interface.
Picture-type characters come in two forms, dot-matrix pictures and vector pictures, as shown in FIG. 6: the left side of FIG. 6 shows a character as a dot-matrix picture and the right side shows a character as a vector picture. If a dot-matrix picture character is adjusted, for example scaled, distortion occurs; if a vector picture character is adjusted, for example scaled, there is no distortion.
The problems with the above several approaches are:
Mode one requires guiding the user to install the required fonts in the operating system; the flow is too complex and the user experience is poor.
Mode two downloads the word stock on demand at playback time, which delays the viewing of the feature and involves a large download.
In mode three the server bears a heavy computing load and the download is large; the word size and color cannot be adjusted in real time, and display effects such as hollow characters and inclination cannot be adjusted in real time.
Based on this, an embodiment of the present application provides a data processing method that can be applied to the network architecture shown in FIG. 7. The server 70 shown in FIG. 7 may be a server with data (such as font data and text data) processing functions; it may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDNs), big data, and artificial intelligence platforms. The client 71 shown in FIG. 7 may be, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a desktop computer, or a vehicle-mounted terminal.
Referring to fig. 8, fig. 8 is a schematic diagram of yet another data processing scheme provided in the embodiment of the present application, and in combination with the network architecture shown in fig. 7, the specific operation of the data processing scheme shown in fig. 8 is as follows:
The client 71 parses the subtitles to obtain a target subtitle style, where the subtitle style may include one or more of a subtitle font, a subtitle word size, a subtitle character inclination, a subtitle character color, a subtitle character effect, and the like. The client 71 determines a font data request according to the target subtitle style and the subtitle file of the target video, where the subtitle file includes a subtitle character set and the font data request is used to request the server to return a font data set matching the target subtitle style and the subtitle character set. The client 71 sends the font data request to the server 70; the server 70 receives the font data request, generates the font data set according to the target subtitle style and the subtitle character set, and returns the font data set to the client. The client 71 downloads the font data set returned by the server 70 to the local device and, while playing the target video, displays the subtitles corresponding to the subtitle file of the target video according to the font data set. In this way, the subtitles of the target video can be displayed in the target subtitle style, and the subtitle style can be adjusted as needed; in addition, only the font data corresponding to the subtitle characters needs to be downloaded, and since the subtitle characters of the target video are usually only a part of the characters included in a word stock, the whole word stock does not need to be downloaded, which avoids the long download time and high traffic consumption caused by downloading a large word stock and improves the efficiency of subtitle processing.
In a possible embodiment, the data processing method provided in the embodiments of the present application may be implemented based on cloud technology. In particular, it may involve one or more of cloud storage, cloud databases, and big data in cloud technology, for example acquiring the data required to execute the data processing method (such as font data) from a cloud database.
The data processing method provided by the embodiment of the application is briefly described above, and a specific implementation manner of the data processing method is described in detail below.
Referring to fig. 9, fig. 9 is a flowchart of a data processing method according to an embodiment of the present application. The data processing method described in the embodiments of the present application may be applied to the network architecture shown in fig. 7, where the data processing method includes, but is not limited to, the following steps:
s901, the client acquires a target subtitle style. The caption style may include one or more of caption font, caption font size, caption character inclination, caption character color, caption character effect, and the like.
In this embodiment of the present application, a client may display a video playing interface in which one or more subtitle style options are displayed, where the one or more subtitle style options may correspond to a single style attribute (such as the subtitle font) or to multiple style attributes (such as the subtitle font and the subtitle word size); in response to a selection operation on the one or more subtitle style options, the subtitle style corresponding to the option selected by the selection operation is determined as the target subtitle style. For example, taking the subtitle font as the subtitle style, as shown in FIG. 10a, the subtitle font options displayed in the video playing interface include a "regular script" option, a "Song Ti" option, an "imitation Song" option, and a "bold" option; when the "regular script" option is selected by the selection operation, "regular script" is determined as the target subtitle font, that is, the target subtitle style. As another example, the subtitle style includes the subtitle font, the subtitle word size, and the subtitle character inclination, and the subtitle style options displayed in the video playing interface include a subtitle font option, a subtitle word size option, and a subtitle character inclination option, as shown in FIG. 10b; the target subtitle style is determined according to the selection operation, and the finally determined target subtitle style shown in FIG. 10b includes "Song Ti (the selected subtitle font)", "small four (the selected subtitle word size)", and "0° (the selected subtitle character inclination)".
In an embodiment, the client may display a subtitle style setting control in the video playing interface; and displaying one or more subtitle style options in the video playing interface when the triggering operation of the subtitle style setting control is detected. The triggering operation may be before the target video is played by the video playing interface, or may be during the process of playing the target video by the video playing interface.
Optionally, the client may display the subtitle style setting control in the video playing interface before the target video is played on the video playing interface, or may display the subtitle style setting control in the video playing interface during the process of playing the target video on the video playing interface.
In another embodiment, the one or more subtitle style options may be displayed in the video playing interface when a target gesture operation is detected; the target gesture operation may be, for example, a "swipe up and to the right with one finger" gesture. Alternatively, the one or more subtitle style options may be displayed in the video playing interface when a first voice command is detected; the first voice command may be, for example, "set subtitle style". A second voice command may then be input to determine the target subtitle style; the second voice command may be, for example, "set the subtitle style to regular script", or "set the subtitle style to regular script, small four, inclined by 5°". The reason for requiring two voice commands is that the user may not know in advance which subtitle styles are available for selection; the selectable subtitle styles can be displayed to the user after the first voice command.
In other possible embodiments, when the third voice command is detected, the style category included in the third voice command may be determined as the target subtitle style. For example, the third voice command may be a voice command of "set subtitle style as regular script", and at this time, the style category "regular script" included in the third voice command is directly determined as the target subtitle style. For another example, the third voice command may be "set caption style as regular script, small four, and inclined 5 °", and at this time, the caption style categories "regular script, small four, 5 °" included in the third voice command are directly determined as the target caption style.
In one embodiment, the subtitle style includes the subtitle font, the subtitle word size, the subtitle character inclination, the subtitle character color, and the subtitle character effect (e.g., hollow characters). The client may display a subtitle style setting control in the video playing interface, and when a trigger operation on the subtitle style setting control is detected, display a subtitle style option area in the video playing interface, where the option area includes a font option, a word size option, an inclination option, and the like. For example, as shown in FIG. 11a, the video playing interface includes a video playing area and a subtitle style setting control located at the lower right corner of the video playing area; if a user operation on the subtitle style setting control is detected during video playback, an option area containing a font option, a word size option, and an inclination option is displayed in the video playing interface. When a trigger operation on the font option is detected, as shown in FIG. 11b, a plurality of fonts are displayed for the user to select, such as Song Ti, regular script, bold, and imitation Song; when a trigger operation on the word size option is detected, as shown in FIG. 11c, a plurality of word sizes are displayed for the user to select, such as small three, four, small four, and five; when a trigger operation on the inclination option is detected, as shown in FIG. 11d, a plurality of inclinations are displayed for the user to select, such as 0°, 5°, and 10°. The font, word size, and inclination are independent of one another and can be combined arbitrarily. If the user needs to set more subtitle styles, the ellipsis control at the far right of the option area can be triggered; when a trigger operation on the ellipsis control is detected, as shown in FIG. 11e, further subtitle style options are displayed, such as Chinese font, Western font, word size, font style, character color, character inclination, and character effect. The options set here can also be used as default values, so that videos subsequently played by the client follow these defaults, and the user can simply click the confirm or cancel button to finish the operation.
In a possible implementation, the client displays a subtitle style setting control in the video playing interface; when no trigger operation on the subtitle style setting control is detected, the client uses a default target subtitle style. The default target subtitle style may be the client's own default style, the style carried by the subtitle file of the target video itself, or a style set in advance by the user for the client.
For ease of understanding, the subtitle style selection described above is similar to the way fonts are selected in office software, for example the font selection in Word: regular script, bold, imitation Song, and so on.
The format of the subtitle file may include both graphic data format and text data format.
The graphic data format means that the characters in the subtitle file are in an image format, and the text data format means that the characters in the subtitle file are in a text format.
For example, take the text-format SubRip (SRT) subtitle, whose file extension is .srt. Each entry consists of a line with the subtitle sequence number, a line with the time code, and a line with the subtitle data. For example, subtitle number "1" corresponds to the time code "00:00:00,500 --> 00:00:03,700" and its subtitle data is "today's theme, the ever-popular 'dancing'"; subtitle number "2" corresponds to "00:00:04,000 --> 00:00:07,300" with the subtitle data "that's great"; subtitle number "3" corresponds to "00:00:08,600 --> 00:00:12,000" with the subtitle data "I want to dance every day"; subtitle number "4" corresponds to "00:00:12,700 --> 00:00:13,500" with the subtitle data "Teng Gu is"; subtitle number "5" corresponds to "00:00:13,700 --> 00:00:16,500" with the subtitle data "do you still remember the tune of your first dance since you joined jinesis". This means that the first subtitle is displayed from 0.5 seconds to 3.7 seconds with the subtitle data "today's theme, the ever-popular 'dancing'", and the display rules of the second to fifth subtitles follow the same pattern.
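As a rough illustration of how a client might parse such a text-format subtitle file, the following Java sketch reads SRT-style entries (sequence number line, time-code line, text lines); it assumes well-formed input with standard SRT time codes, and the class and field names are illustrative rather than taken from the embodiment:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SrtParser {
    public static class Cue {
        int index;
        String start; // e.g. "00:00:00,500"
        String end;   // e.g. "00:00:03,700"
        String text;  // the subtitle data to display
    }

    public static List<Cue> parse(String srt) {
        List<Cue> cues = new ArrayList<>();
        for (String block : srt.trim().split("\\r?\\n\\r?\\n")) { // cues are separated by blank lines
            String[] lines = block.split("\\r?\\n");
            if (lines.length < 3) continue;
            Cue cue = new Cue();
            cue.index = Integer.parseInt(lines[0].trim());
            String[] times = lines[1].split("-->");
            cue.start = times[0].trim();
            cue.end = times[1].trim();
            cue.text = String.join("\n", Arrays.copyOfRange(lines, 2, lines.length));
            cues.add(cue);
        }
        return cues;
    }
}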
In one embodiment, when the subtitle file of the target video is acquired, the address information of the storage location of the related data of the target video (including the subtitle file, the video image file, and the audio file) is determined first, and the subtitle file of the target video is acquired according to the address information. The address information may be a uniform resource locator (URL), for example http://127.0.0.1:8080, or a local disk storage location, for example test.mp4 or /sdcard/test.flv.
S902, the client sends a font data request to the server.
In the embodiment of the application, a client sends a font data request determined according to a target subtitle style and a subtitle file of a target video to a server, wherein the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return the font data set matched with the target subtitle style and the subtitle character set.
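The exact wire format of the font data request is not specified here; purely as an assumption, a client could serialize the target subtitle style and the subtitle character set into a request body along the following lines (the field names and the use of JSON are illustrative only):

import java.util.LinkedHashSet;
import java.util.Set;

public class FontDataRequestBuilder {
    static String build(String font, String wordSize, double inclinationDeg,
                        Set<Character> subtitleCharacterSet) {
        StringBuilder characters = new StringBuilder();
        for (char c : subtitleCharacterSet) {
            characters.append(c); // every character whose font data is requested
        }
        // A real client would escape the payload properly before sending it.
        return "{\"font\":\"" + font + "\","
                + "\"wordSize\":\"" + wordSize + "\","
                + "\"inclination\":" + inclinationDeg + ","
                + "\"characters\":\"" + characters + "\"}";
    }

    public static void main(String[] args) {
        Set<Character> chars = new LinkedHashSet<>();
        for (char c : "please go out".toCharArray()) chars.add(c);
        System.out.println(build("regular script", "small four", 5.0, chars));
    }
}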
S903, the server receives the font data request sent by the client.
In this embodiment of the present application, after receiving the font data request sent by the client, the server determines whether redundant characters exist in the subtitle character set. If redundant characters exist in the subtitle character set, redundancy elimination is performed on the subtitle character set and a new subtitle character set is determined from the result. If no redundant characters exist in the subtitle character set, step S904 is performed.
S904, the server responds to the font data request to acquire a font data set matched with the target subtitle style and the subtitle character set.
In this embodiment of the present application, in response to the font data request, the server queries the font database corresponding to the target subtitle style and acquires, from the font database, the font data of each subtitle character in the subtitle character set; a font data set matching the target subtitle style and the subtitle character set (hereinafter simply referred to as the matching font data set) is generated from the font data of each subtitle character. Alternatively, in response to the font data request, the server queries the font database corresponding to the target subtitle style and acquires, from the font database, the font data of each subtitle character in the new subtitle character set; a font data set matching the target subtitle style and the new subtitle character set is generated from the font data of each subtitle character (this font data set is the matching font data set).
In one embodiment, the font database includes the standard font data of the characters under the target font. From this standard font data, characters of the target font can be generated that satisfy one or more of the following conditions: the word size is a preset word size, the inclination is a preset angle, the color is a preset color, the character effect is a preset character effect, and so on. For example, standard regular script characters with a word size of small four, an inclination of 0°, a black color, and a hollow character effect can be generated from the standard font data. The target font is consistent with the subtitle font included in the target subtitle style.
When the target subtitle style includes only a subtitle font, the server acquires, from the font database, the standard font data of each subtitle character under that subtitle font, thereby obtaining the font data set matching the target subtitle style and the subtitle character set. When the target subtitle style includes a reference subtitle style in addition to the subtitle font, where the reference subtitle style includes one or more of a subtitle word size, a subtitle character inclination, a subtitle character color, and a subtitle character effect, the server acquires, from the font database, the standard font data of each subtitle character in the subtitle character set under the subtitle font and then adjusts this standard font data based on the reference subtitle style, thereby obtaining the font data set matching the target subtitle style and the subtitle character set. For example, the font data in the matching font data set may generate standard regular script characters with a word size of small four, an inclination of 5°, a red color, and a hollow character effect. The target font is consistent with the subtitle font included in the target subtitle style. The font data in the matching font data set is vector data, that is, the matching font data set is a vector data set, so regardless of whether the client runs an Android system or an iOS system, the subtitles required by the subtitle file can be generated from the vector data set. Vector data stores every character as mathematical formulas and point data, so different systems share the same processing flow and the character size is flexible. For example, the capital letter O requires seven features for vector control in total. FIG. 12 is a schematic view of the features of a vector character format in the x-axis direction; as shown in FIG. 12, the features to be controlled are the space on the left side of the glyph (1), the round stem (2), the advance width (3), and the body width (4). FIG. 13 is a schematic view of the features of a vector character format in the y-axis direction; as shown in FIG. 13, they are the cap-height overshoot (5), the baseline overshoot (6), and the horizontal stem (7). The character shape of the vector data is controlled by controlling these seven features.
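A possible server-side shape for this step is sketched below; the GlyphDatabase interface, the byte[] glyph representation, and the names are assumptions made only for illustration, not an implementation defined by the embodiment:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class MatchingFontDataSetBuilder {
    interface GlyphDatabase {
        byte[] standardGlyph(String subtitleFont, char character); // standard font data per character
    }

    static Map<Character, byte[]> build(GlyphDatabase db, String subtitleFont,
                                        Set<Character> subtitleCharacterSet,
                                        String wordSize, double inclinationDeg, int colorArgb) {
        Map<Character, byte[]> matchingSet = new LinkedHashMap<>();
        for (char c : subtitleCharacterSet) {
            byte[] standard = db.standardGlyph(subtitleFont, c);
            // Adjust the standard font data for the reference subtitle style here,
            // e.g. scale the outlines for the word size and skew them for the
            // inclination; the concrete transform depends on the glyph format.
            matchingSet.put(c, standard);
        }
        return matchingSet;
    }
}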
In one embodiment, the font data in the word stock is ordered according to the order of the corresponding characters in a dictionary. For Chinese, the characters may be arranged in the alphabetical order of the first letters of their pinyin syllables, that is, in Chinese pinyin order; for English, the words in a dictionary are arranged in alphabetical order. After the request character set is obtained, the request characters in it can first be sorted according to their order in the dictionary, and the corresponding ordered matching font data set is then obtained according to the sorted request character set. In this way, the word stock can be queried sequentially, which helps improve query efficiency and hence the efficiency of acquiring the font data.
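As a sketch of this ordering step (an assumption, since the embodiment does not fix a collation algorithm), the request characters could be sorted with a locale-aware collator before the sequential word-stock query:

import java.text.Collator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class RequestCharacterOrdering {
    static List<String> sortForLookup(Set<String> requestCharacters) {
        List<String> ordered = new ArrayList<>(requestCharacters);
        // The JDK's Chinese collation approximates dictionary (pinyin) order for common
        // characters; the exact order depends on the collation data of the runtime.
        ordered.sort(Collator.getInstance(Locale.CHINA));
        return ordered;
    }
}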
S905, the server sends the font data set to the client.
In one embodiment, the server may compress the glyph data set and send it to the client, which facilitates rapid transmission of the glyph data.
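The compression format is not specified by the embodiment; as one assumption, the serialized font data set could simply be gzip-compressed before transmission, as in the following sketch (serialization itself is out of scope here):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class FontDataSetCompressor {
    static byte[] gzip(byte[] serializedFontDataSet) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(serializedFontDataSet); // the client decompresses the package after download
        }
        return bos.toByteArray();
    }
}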
S906, the client receives the returned font data set, and in the process of playing the target video, the subtitle corresponding to the subtitle file is displayed according to the font data set.
In one embodiment, the returned matching font data set is received as a compressed package, which the client can download more quickly. The compressed package of the matching font data set is not placed in the system storage, which helps improve the operating efficiency of the system.
In one embodiment, the matching font data set returned by the server includes the font data corresponding to each character in the subtitle character set. The subtitle character set includes one or more segments of subtitle text, and the subtitle file further includes a display rule for each segment of subtitle text, where the display rule includes the display time, display position, display size, display color, bold display, highlight display, and other character display modes. For a first subtitle character string, the client acquires the font data of each character in the first subtitle character string from the matching font data set and generates a second subtitle character string according to the font data of each character; the style of the characters in the second subtitle character string is the target subtitle style, and the first subtitle character string is any one of the one or more segments of subtitle text. The client then displays the second subtitle character string in the video playing interface according to the display rule corresponding to the first subtitle character string included in the subtitle file.
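The per-character lookup described above can be sketched as follows; byte[] stands in for whatever per-character font data the server actually returns, and the class and method names are illustrative assumptions:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SubtitleComposer {
    static List<byte[]> composeSecondString(String firstSubtitleString,
                                            Map<Character, byte[]> matchingFontDataSet) {
        List<byte[]> styledGlyphs = new ArrayList<>();
        for (char c : firstSubtitleString.toCharArray()) {
            byte[] glyph = matchingFontDataSet.get(c); // font data already carries the target style
            if (glyph != null) {
                styledGlyphs.add(glyph);
            }
            // A production client would fall back to a default glyph for missing characters.
        }
        return styledGlyphs;
    }
}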
In an embodiment, during the process of playing the target video, the client analyzes the characters to be displayed at present, takes each character as an index one by one, searches the matching font data set, retrieves the corresponding font data and displays the corresponding font data. The display method can be that a system in the client calls a drawing interface to display. For example, in Android, drawing and displaying can be performed through onDraw of the View interface, and the drawing procedure is as follows:
@Override
protected void onDraw(Canvas canvas) {
    canvas.drawLine(0, 0, 0, 0, new Paint()); // draw a straight line
    canvas.drawArc(…);                        // draw an arc
    canvas.drawOval(…);                       // draw an ellipse
    …
}
With the data processing method provided by the embodiments of the present application, a client acquires a target subtitle style and sends a font data request to a server, where the font data request is determined according to the target subtitle style and the subtitle file of the target video, the subtitle file includes a subtitle character set, and the font data request is used to request the server to return a font data set matching the target subtitle style and the subtitle character set; the server, in response to the font data request, generates the font data set matching the target subtitle style and the subtitle character set and sends the matching font data set to the client; the client receives the matching font data set and, while playing the target video, displays the subtitles corresponding to the subtitle file according to the matching font data set. In this way, only the font data corresponding to the subtitle characters in the word stock corresponding to the target subtitle style needs to be downloaded, rather than the whole word stock, which avoids the long download time and high traffic consumption caused by downloading a large word stock and improves the efficiency of subtitle processing. At the same time, because the characters required by the subtitles are preprocessed and the set of required characters is counted, the font data is acquired from the server on demand and rendered to the screen, so data is fetched only when it is needed, the subtitle style can be adjusted at any time, and the user experience is improved.
Referring to fig. 14, fig. 14 is a flowchart of another data processing method according to an embodiment of the present application. The data processing method includes, but is not limited to, the steps of:
s1401, the client acquires a target subtitle style and a subtitle file of the target video. The subtitle style may include one or more of a subtitle font, a subtitle font size, a subtitle character inclination, a subtitle character color, a subtitle character effect, and the like, and the subtitle file includes a subtitle character set of the target video.
The relevant description of step S1401 can be referred to the relevant content of step S901 in the data processing method shown in fig. 9, which is not described in detail herein.
S1402, the client determines a request character set according to the subtitle character set included in the subtitle file, and determines a font data request according to the target subtitle style and the request character set.
In the embodiment of the application, after acquiring the target subtitle style and the subtitle file of the target video, the client acquires the subtitle character set of the target video from the subtitle file, and then determines the request character set according to the subtitle character set. After obtaining the request character set, determining a font data request according to the target subtitle style and the request character set, wherein the font data request is used for requesting a server to return the font data set matched with the target subtitle style and the request character set.
In a possible embodiment, the client determining the request character set according to the subtitle character set may include the following two ways:
Mode one: after acquiring the subtitle character set, the client determines whether redundant characters exist in it; if not, the subtitle character set is used as the request character set. If redundant characters exist in the subtitle character set, redundancy elimination is performed on the subtitle character set, where redundancy elimination means removing repeated characters. For example, if one string in the subtitle character set is "please go out" and another string is "please go out quickly", characters such as "please", "go", and "out" are repeated, and the duplicates are removed. The client may then directly determine the subtitle character set after redundancy elimination as the request character set.
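A minimal sketch of this redundancy-elimination step, assuming the subtitle characters are handled as individual Java chars (an implementation detail not fixed by the embodiment), is shown below:

import java.util.LinkedHashSet;
import java.util.Set;

public class RedundancyRemoval {
    static Set<Character> removeRedundancy(String subtitleCharacters) {
        Set<Character> requestSet = new LinkedHashSet<>();
        for (char c : subtitleCharacters.toCharArray()) {
            requestSet.add(c); // repeated characters such as "please" or "go" are kept only once
        }
        return requestSet;
    }
}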
Mode two: in another embodiment, the client first checks whether a historical font data set corresponding to the target subtitle style has been recorded. If so, target characters are determined according to the historical font data set and the subtitle character set, where a target character is contained in the subtitle character set and the historical font data set already contains the font data corresponding to it; the target characters are deleted from the subtitle character set to obtain a new subtitle character set, and the request character set is determined according to the new subtitle character set. When the request character set is not empty, the request characters in it are contained in the subtitle character set, and the historical font data set does not contain the font data corresponding to the request characters.
With the request character set determined in either of the two modes, the font data request is determined according to the request character set and the target subtitle style and is used to request the server to return the font data set matching the target subtitle style and the request character set; the font data in this font data set corresponds only to characters contained in the subtitle character set, which reduces the amount of font data the client needs to acquire from the server and improves data processing efficiency.
In one embodiment, before determining the target character according to the history font data set and the caption character set, the caption character set may be subjected to redundancy elimination processing, and at this time, the new caption character set is directly determined to be the request character set.
In another embodiment, before the target character is determined according to the history font data set and the caption character set, the redundancy removing process is not performed on the caption character set, the redundancy removing process is performed on the new caption character set, and the new caption character set after the redundancy removing process is determined as the request character set.
For example, suppose the subtitle character set includes 5000 characters and the historical font data set recorded for the "regular script" style (i.e., the target subtitle style) already contains the "regular script" font data of 3000 of these characters. The client may delete these 3000 characters from the subtitle character set to obtain a new subtitle character set; if redundant characters exist in the new subtitle character set, redundancy elimination is performed on it to obtain the request character set, and if not, the new subtitle character set is used as the request character set. In this case, at most the "regular script" font data of the remaining 2000 characters needs to be acquired from the server, which effectively reduces the time needed to acquire the font data and is more efficient than acquiring the "regular script" font data of all 5000 characters from the server.
For another example, when the historical font data set corresponding to the target subtitle style contains enough font data, the font data corresponding to all characters in the subtitle character set may already be contained in the historical font data set. In that case the client determines that the request character set is an empty set, and the required font data can be obtained directly from the historical font data set without requesting font data from the server. This avoids retrieving font data from the server and is more efficient.
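Mode two amounts to a set difference between the subtitle character set and the characters already covered by the historical font data set; the following sketch illustrates this under the same assumptions as above (illustrative names, per-char handling):

import java.util.LinkedHashSet;
import java.util.Set;

public class RequestCharacterSetBuilder {
    static Set<Character> determine(Set<Character> subtitleCharacterSet,
                                    Set<Character> charactersInHistoricalFontDataSet) {
        Set<Character> requestSet = new LinkedHashSet<>(subtitleCharacterSet);
        requestSet.removeAll(charactersInHistoricalFontDataSet);
        return requestSet; // empty when the historical font data set already covers everything
    }
}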
S1403, the client sends a font data request carrying the target subtitle style and the request character set to the server.
S1404, the server receives the font data request sent by the client.
S1405, the server acquires a font data set (hereinafter simply referred to as the matching font data set) matching the target subtitle style and the request character set in response to the font data request. In the embodiment of the application, the server responds to the font data request by querying a font database corresponding to the target subtitle style and acquiring, according to the font database, the font data of each request character in the request character set; it then generates the font data set matching the target subtitle style and the request character set according to the font data of each request character.
In one embodiment, the glyph database includes standard font data for characters in the target font. From the standard font data, a character of the target font can be generated that satisfies one or more of the following conditions: the character size is a preset size, the inclination is a preset angle, the color is a preset color, the character effect is a preset effect, and the like. For example, from the standard font data, a standard regular-script character can be generated with a character size of Small Four, an inclination of 0 degrees, black color, and a hollow character effect. The target font is consistent with the subtitle font included in the target subtitle style.
When the target subtitle style includes only a subtitle font, the server acquires the standard font data of each request character in that subtitle font from the glyph database, thereby obtaining the font data set matching the target subtitle style and the request character set. When the target subtitle style further includes a reference subtitle style (one or more of a subtitle character size, a subtitle character inclination, a subtitle character color, and a subtitle character effect) in addition to the subtitle font, the server first acquires the standard font data of each request character in the subtitle font from the glyph database, and then adjusts the acquired standard font data of each character according to the reference subtitle style, thereby obtaining the font data set matching the target subtitle style and the request character set.
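The server-side generation in S1405 can be sketched, purely for illustration and under the assumption that standard font data is stored per font and then adjusted per reference subtitle style; glyph_databases, adjust_glyph, and the style field names below are hypothetical:

# Hypothetical sketch of building the matching font data set on the server.
def adjust_glyph(glyph, **style_attrs):
    # Placeholder adjustment: attach the reference-style attributes to the standard
    # glyph record; a real implementation would rescale, slant, recolor, etc.
    return {**glyph, **{k: v for k, v in style_attrs.items() if v is not None}}

def build_matching_glyph_set(request_chars, target_style, glyph_databases):
    # target_style is assumed to look like:
    # {"font": "regular script", "size": 14, "inclination": 0,
    #  "color": "#FFFFFF", "effect": "hollow"}
    database = glyph_databases[target_style["font"]]   # standard font data for the font
    matching_set = {}
    for ch in request_chars:
        standard = database[ch]                        # standard glyph for this character
        matching_set[ch] = adjust_glyph(
            standard,
            size=target_style.get("size"),
            inclination=target_style.get("inclination"),
            color=target_style.get("color"),
            effect=target_style.get("effect"),
        )
    return matching_set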
S1406, the server sends the font data set to the client.
In one embodiment, the server may compress the glyph data set and send it to the client, which facilitates rapid transmission of the glyph data.
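A minimal sketch of that optional compression step, assuming the font data set can be serialized to JSON (the wire format is not specified by this application):

import gzip, json

def compress_glyph_set(glyph_set):
    return gzip.compress(json.dumps(glyph_set).encode("utf-8"))

def decompress_glyph_set(payload):
    return json.loads(gzip.decompress(payload).decode("utf-8"))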
S1407, the client receives the returned font data set, and displays the subtitles corresponding to the subtitle files according to the font data set in the process of playing the target video.
In an embodiment, when the request character set in step S1402 is determined in the first mode described above, the matching font data set returned by the server includes font data corresponding to every character in the subtitle character set. The subtitle character set includes one or more sections of subtitle characters, and the subtitle file further includes a display rule for each section, where a display rule specifies character display attributes such as display time, display position, display size, display color, bold display, and highlight display. For a first subtitle character string, the client acquires the font data of each character in the string from the matching font data set, and generates a second subtitle character string according to that font data; the style of the characters in the second subtitle character string is the target subtitle style, and the first subtitle character string is any one of the one or more sections of subtitle characters. The client then displays the second subtitle character string in the video playing interface according to the display rule that the subtitle file associates with the first subtitle character string.
In another embodiment, when the request character set in step S1402 is determined in the second mode and target characters exist (a target character is contained in the subtitle character set and the historical font data set contains the font data corresponding to it), the matching font data set returned by the server includes font data for only part of the characters in the subtitle character set. In this case, for each target character in the subtitle character set, the font data corresponding to the target subtitle style needs to be obtained from the historical font data set. The subtitle character set includes one or more sections of subtitle characters, and the subtitle file further includes a display rule for each section, where a display rule specifies character display attributes such as display time, display position, display size, display color, bold display, and highlight display. For a first subtitle character string, the client acquires the font data of each character in the string from the matching font data set returned by the server and from the historical font data set, and generates a second subtitle character string according to that font data; the style of the characters in the second subtitle character string is the target subtitle style, and the first subtitle character string is any one of the one or more sections of subtitle characters. The client then displays the second subtitle character string in the video playing interface according to the display rule that the subtitle file associates with the first subtitle character string.
In a feasible implementation, the font data of each target character corresponding to the target subtitle style may first be obtained from the historical font data set, yielding a font data set of the target characters. For the first subtitle character string, the client may then acquire the font data of each character in the string from the matching font data set returned by the server and from the font data set of the target characters. When the first subtitle character string includes a target character, the font data of that character can be fetched directly from the font data set of the target characters, which is more efficient than fetching it from the full historical font data set.
In an embodiment, during playback of the target video, the client parses the characters that currently need to be displayed, looks up each character one by one as an index in the matching font data set and the historical font data set, retrieves the corresponding font data, and displays it.

In another embodiment, during playback of the target video, the client parses the characters that currently need to be displayed, looks up each character one by one as an index in the matching font data set and the font data set of the target characters, retrieves the corresponding font data, and displays it.
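As a hypothetical client-side sketch of the per-character lookups in the two embodiments above (render_caption, display, and the rule fields are illustrative and not defined by this application):

# Illustrative rendering of one caption string from the available font data sets.
def render_caption(first_string, matching_set, local_set, rule, display):
    """first_string: one section of caption characters from the subtitle file.
    matching_set:    font data set returned by the server.
    local_set:       historical font data set, or the font data set of the target characters.
    rule:            display rule, e.g. {"time": ..., "position": ...}.
    display:         callback that draws the styled string in the playback interface."""
    styled_chars = []
    for ch in first_string:                 # use each character as an index
        glyph = matching_set.get(ch)
        if glyph is None:
            glyph = local_set.get(ch)
        if glyph is None:
            raise KeyError("no font data for character %r in the target style" % ch)
        styled_chars.append(glyph)
    # styled_chars corresponds to the "second caption character string"
    display(styled_chars, when=rule["time"], at=rule["position"])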
In an embodiment, after acquiring the font data set matching the target subtitle style and the request character set, the client updates the historical font data set corresponding to the target subtitle style if such a set exists, and stores the matching font data set as a new historical font data set if it does not. When font data corresponding to the target subtitle style is needed later, the overlapping part can be obtained directly from the historical font data set, so the client only has to download the remaining part of the font data, which improves data processing efficiency and user experience.
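A short illustrative sketch of that cache update, reusing the hypothetical glyph_cache structure from the earlier sketch:

def update_history(glyph_cache, style, matching_set):
    if style in glyph_cache:
        glyph_cache[style].update(matching_set)   # merge the newly fetched font data
    else:
        glyph_cache[style] = dict(matching_set)   # first download recorded for this style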
For the subtitle display in step S1407, reference may be made to the description of the subtitle display in step S906 of the data processing method shown in fig. 9, which is not repeated here.
According to the data processing method provided by the embodiment of the application, on the one hand, the client requests the server, via the font data request determined by the target subtitle style and the request character set obtained after redundancy elimination of the subtitle character set, to return the font data set matching the target subtitle style and the request character set; the font data in that set is not repeated, the amount of font data the client needs to acquire from the server is reduced, and data processing efficiency is improved. On the other hand, the font data request may be determined by the target subtitle style and the new subtitle character set (obtained by determining the target characters according to the historical font data set and the subtitle character set, where a target character is contained in the subtitle character set and the historical font data set contains its font data, and deleting the target characters from the subtitle character set); the font data set returned by the server for such a request contains even less font data, which further reduces the amount of font data the client acquires from the server and improves data processing efficiency.
Referring to fig. 15, fig. 15 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing apparatus described in the embodiments of the present application, corresponding to the client described above, includes:
a processing unit 1502, configured to obtain a target subtitle style;
a transceiver 1501 for transmitting a font data request to a server; the font data request is determined according to the target subtitle style and a subtitle file of a target video, the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return a font data set matched with the target subtitle style and the subtitle character set;
the transceiver 1501 is further configured to receive the glyph data set returned by the server;
and a display unit 1503, configured to display, according to the font data set, a subtitle corresponding to the subtitle file during the playing of the target video.
In an embodiment, the processing unit 1502 is specifically configured to, when acquiring the target subtitle style: displaying a video playing interface, and displaying one or more subtitle style options in the video playing interface;
And when a selection operation for one or more subtitle style options is detected, determining a subtitle style corresponding to the subtitle style option selected by the selection operation as the target subtitle style.
In an embodiment, when the processing unit 1502 displays one or more subtitle style options in the video playing interface, the processing unit is specifically configured to: displaying a subtitle style setting control in the video playing interface;
and in response to a triggering operation of the subtitle style setting control, displaying the one or more subtitle style options in the video playing interface.
In an embodiment, the processing unit 1502 is further configured to: determining a request character set according to the subtitle character set;
determining the font data request according to the target subtitle style and the request character set;
wherein the glyph data request is for requesting the server to return a glyph data set that matches the target subtitle style and the requested character set.
In an embodiment, when the processing unit 1502 determines the request character set according to the subtitle character set, the processing unit is specifically configured to: if redundant characters exist in the caption character set, performing redundancy elimination processing on the caption character set;
And determining the request character set according to the subtitle character set subjected to redundancy removal processing.
In an embodiment, the set of caption characters includes one or more sections of caption characters, and the caption file further includes a display rule for each section of caption characters. The processing unit 1502 is specifically configured to, when displaying the subtitle corresponding to the subtitle file according to the font data set:
for a first caption character string, acquiring the font data of each character in the first caption character string from the font data set, and generating a second caption character string according to the font data of each character; the style of the characters in the second caption character string is the target caption style, and the first caption character string is any caption character in the one or more sections of caption characters;
and displaying the second caption character string in a video playing interface according to a display rule corresponding to the first caption character string included in the caption file.
In an embodiment, when the processing unit 1502 determines the request character set according to the subtitle character set, it is further configured to: if a historical font data set corresponding to the target subtitle style is recorded, determining a request character set according to the historical font data set and the subtitle character set;
Wherein, the request characters in the request character set are contained in the caption character set, and the historical font data set does not contain font data corresponding to the request characters;
wherein, in the process of playing the target video, displaying the subtitle corresponding to the subtitle file according to the font data set includes:
and displaying the subtitles corresponding to the subtitle files according to the font data set returned by the server and the historical font data set in the process of playing the target video.
In an embodiment, the processing unit 1502 is further configured to: after the font data set matched with the target subtitle style and the request character set is acquired, updating the historical font data set according to the font data set matched with the target subtitle style and the request character set.
It may be understood that the functions of each functional unit of the data processing apparatus provided in the embodiments of the present application may be specifically implemented according to the method in the embodiments of the foregoing method, and the specific implementation process may refer to the description related to the client in the embodiments of the foregoing method, which is not repeated herein.
According to the data processing apparatus provided by the embodiment of the application, in response to a triggering operation on the subtitle style control during or before playback of the target video, one or more subtitle style options are displayed in the video playing interface, the target subtitle style is obtained according to the subtitle style options, and a font data request is sent to the server; the font data request is determined according to the target subtitle style and the subtitle file of the target video, the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return a font data set matched with the target subtitle style and the subtitle character set. The server responds to the font data request, generates the font data set matched with the target subtitle style and the subtitle character set, and returns it to the client, and the client displays the subtitle corresponding to the subtitle file according to the font data set. In this way, the style options of the subtitle can be flexibly adjusted according to the user's intention during or before playback of the video, the subtitle is rendered accordingly, and the user experience is improved.
Referring to fig. 16, fig. 16 is a schematic structural diagram of another data processing apparatus according to an embodiment of the present application. The data processing apparatus described in the embodiments of the present application corresponds to the server described above, and includes:
a transceiver unit 1601, configured to receive a font data request sent by a client; the font data request is determined according to a target subtitle style and a subtitle file of a target video, wherein the subtitle file comprises a subtitle character set;

a processing unit 1602, configured to acquire, in response to the font data request, a font data set matched with the target subtitle style and the subtitle character set;

the transceiver unit 1601 is further configured to send the font data set to the client, so that the client displays the subtitles corresponding to the subtitle file according to the font data set in the process of playing the target video.
In an embodiment, the processing unit 1602 is specifically configured to: inquiring a font database corresponding to the target subtitle style, and acquiring font data of each subtitle character in the subtitle character set;
and generating the font data set matched with the target subtitle style and the subtitle character set according to the font data of each subtitle character.
In an embodiment, the processing unit 1602 is further configured to:
before acquiring the font data set matched with the target subtitle style and the subtitle character set, if redundant characters exist in the subtitle character set, performing redundancy elimination processing on the subtitle character set to acquire a new subtitle character set, and responding to the font data request, acquiring the font data set matched with the target subtitle style and the new subtitle character set.
It may be understood that the functions of each functional unit of the data processing apparatus provided in the embodiments of the present application may be specifically implemented according to the method in the embodiments of the foregoing method, and the specific implementation process may refer to the relevant description of the server in the embodiments of the foregoing method, which is not repeated herein.
In other possible embodiments, the data processing apparatus provided in the embodiments of the present application may also be implemented by a combination of software and hardware. As an example, the data processing apparatus provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the data processing method provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
Referring to fig. 17, fig. 17 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 100 described in the embodiments of the present application includes: a processor 1701, a user interface 1702, a communication interface 1703, and a memory 1704. The processor 1701, the user interface 1702, the communication interface 1703, and the memory 1704 may be connected by a bus or in other ways; in the embodiment of the present application, a bus connection is taken as an example. The processor 1701 (Central Processing Unit, CPU) is the computing core and control core of the computer device; it can parse various instructions in the computer device and process various data of the computer device. For example, the CPU can be used to parse a power-on/off instruction sent by a user to the computer device and control the computer device to perform the power-on/off operation; for another example, the CPU may transmit various types of interaction data between internal structures of the computer device, and so on. The user interface 1702 is a medium for implementing interaction and information exchange between the user and the computer device, and may specifically include a display screen (Display) and a speaker for output, and a keyboard (Keyboard), a touch screen, and a sound pickup device for input, where the keyboard may be a physical keyboard, a virtual keyboard on a touch screen, or a keyboard combining a physical keyboard with a virtual touch-screen keyboard. The communication interface 1703 may optionally include a standard wired interface or a wireless interface (e.g., Wi-Fi or a mobile communication interface), and is controlled by the processor 1701 to transmit and receive data; the communication interface 1703 may also be used for internal communication of the computer device. The memory 1704 (Memory) is a memory device in the computer device for storing programs and data. It is understood that the memory 1704 herein may include a built-in memory of the computer device, or may include an expansion memory supported by the computer device. The memory 1704 provides storage space that stores the operating system of the computer device, which may include, but is not limited to: an Android system, an iOS system, a Windows Phone system, etc., which are not limited in this application.
In the present embodiment, the processor 1701 executes the following operations by executing the executable program code in the memory 1704:
acquiring a target subtitle style; sending a font data request to a server; the font data request is determined according to the target subtitle style and a subtitle file of a target video, the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return a font data set matched with the target subtitle style and the subtitle character set; and receiving the font data set returned by the server, and displaying the subtitle corresponding to the subtitle file according to the font data set in the process of playing the target video.
In one embodiment, the processor 1701 displays a video playback interface through the user interface 1702 in which one or more subtitle style options are displayed; and when a selection operation for one or more subtitle style options is detected, determining a subtitle style corresponding to the subtitle style option selected by the selection operation as the target subtitle style.
In one embodiment, the processor 1701 is configured to, when displaying one or more subtitle style options in the video playback interface via the user interface 1702: displaying a subtitle style setting control in the video playing interface; and displaying the one or more subtitle style options in the video playing interface when the triggering operation of the subtitle style setting control is detected.
In one embodiment, the processor 1701 is configured to: determining a request character set according to the subtitle character set; determining the font data request according to the target subtitle style and the request character set; wherein the glyph data request is for requesting the server to return a glyph data set that matches the target subtitle style and the requested character set.
In an embodiment, the processor 1701 is further configured to: if redundant characters exist in the caption character set, performing redundancy elimination processing on the caption character set; and determining the request character set according to the subtitle character set subjected to redundancy removal processing.
In an embodiment, the set of caption characters includes one or more sections of caption characters, and the caption file further includes a display rule for each section of caption characters. The processor 1701 is specifically configured to, when displaying, according to the font data set, a subtitle corresponding to the subtitle file through the user interface 1702: for a first caption character string, acquiring the font data of each character in the first caption character string from the font data set, and generating a second caption character string according to the font data of each character; the style of the characters in the second caption character string is the target caption style, and the first caption character string is any caption character in the one or more sections of caption characters;
And displaying the second caption character string in a video playing interface according to a display rule corresponding to the first caption character string included in the caption file.
In one embodiment, the processor 1701 is specifically configured to, when determining a request character set according to the subtitle character set: if a historical font data set corresponding to the target subtitle style is recorded, determining a request character set according to the historical font data set and the subtitle character set; wherein, the request characters in the request character set are contained in the caption character set, and the historical font data set does not contain font data corresponding to the request characters; wherein, in the process of playing the target video, displaying the subtitle corresponding to the subtitle file according to the font data set includes:
and displaying the subtitles corresponding to the subtitle files according to the font data set returned by the server and the historical font data set in the process of playing the target video.
In an embodiment, the processor 1701 is further configured to: after the font data set matched with the target subtitle style and the request character set is acquired, updating the historical font data set according to the font data set matched with the target subtitle style and the request character set.
In a specific implementation, the processor 1701, the user interface 1702, the communication interface 1703 and the memory 1704 described in the embodiments of the present application may execute an implementation of the client described in the data processing method provided in the embodiments of the present application, or may execute an implementation described in the data processing apparatus 100 shown in fig. 15, which is not described herein again.
According to the computer device provided by the embodiment of the application, in response to a triggering operation on the subtitle style control during or before playback of the target video, one or more subtitle style options are displayed in the video playing interface, the target subtitle style is obtained according to the subtitle style options, and a font data request is sent to the server; the font data request is determined according to the target subtitle style and the subtitle file of the target video, the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return a font data set matched with the target subtitle style and the subtitle character set. The server responds to the font data request, generates the font data set matched with the target subtitle style and the subtitle character set, and returns it to the client, and the client displays the subtitle corresponding to the subtitle file according to the font data set. In this way, the style options of the subtitle can be flexibly adjusted according to the user's intention during or before playback of the video, the subtitle is rendered accordingly, and the user experience is improved.
Referring to fig. 18, fig. 18 is a schematic structural diagram of another computer device according to an embodiment of the present application. The computer device 200 described in the embodiment of the present application includes: a processor 1801, a communication interface 1802, and a memory 1803. The processor 1801, the communication interface 1802, and the memory 1803 may be connected by a bus or other means, which is exemplified in the present embodiment.
The processor 1801 (Central Processing Unit, CPU) is the computing core and control core of the computer device; it can parse various instructions in the computer device and process various data of the computer device. For example, the CPU can be used to parse a power-on/off instruction sent by a user to the computer device and control the computer device to perform the power-on/off operation; for another example, the CPU may transmit various types of interaction data between internal structures of the computer device, and so on. The communication interface 1802 may optionally include a standard wired interface or a wireless interface (e.g., Wi-Fi or a mobile communication interface), and is controlled by the processor 1801 to transmit and receive data. The memory 1803 (Memory) is a memory device in the computer device for storing programs and data. It will be appreciated that the memory 1803 herein may include both the built-in memory of the computer device and extended memory supported by the computer device. The memory 1803 provides storage space that stores the operating system of the computer device, which may include, but is not limited to: an Android system, an iOS system, a Windows Phone system, etc., which are not limited in this application. In the present embodiment, the processor 1801 performs the following operations by executing executable program codes in the memory 1803:
Receiving a font data request sent by a client; the font data request is determined according to a target subtitle style and a subtitle file of a target video, wherein the subtitle file comprises a subtitle character set; responding to the font data request, and acquiring a font data set matched with the target subtitle style and the subtitle character set; and sending the font data set to the client so that the client displays the subtitles corresponding to the subtitle files according to the font data set in the process of playing the target video.
In an embodiment, the processor 1801 is further configured to: inquiring a font database corresponding to the target subtitle style, and acquiring font data of each subtitle character in the subtitle character set; and generating the font data set matched with the target subtitle style and the subtitle character set according to the font data of each subtitle character.
In an embodiment, the processor 1801 is further configured to: before acquiring the font data set matched with the target subtitle style and the subtitle character set, if redundant characters exist in the subtitle character set, perform redundancy elimination processing on the subtitle character set to obtain a new subtitle character set, and acquire, in response to the font data request, the font data set matched with the target subtitle style and the new subtitle character set.
In a specific implementation, the processor 1801, the communication interface 1802, and the memory 1803 described in the embodiments of the present application may execute the implementation of the server described in the data processing method provided in the embodiments of the present application, or may execute the implementation described for the data processing apparatus 200 shown in fig. 16, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, which when run on a computer, causes the computer to implement the data processing method according to the embodiments of the present application. The specific implementation manner may refer to the foregoing description, and will not be repeated here.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device implements the data processing method according to the embodiments of the present application. The specific implementation manner may refer to the foregoing description, and will not be repeated here.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program to instruct related hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing disclosure is only illustrative of some of the embodiments of the present application and is not, of course, to be construed as limiting the scope of the appended claims; therefore, all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.

Claims (13)

1. A method of data processing, the method comprising:
acquiring a target subtitle style;
sending a font data request to a server; the font data request is determined according to the target subtitle style and a subtitle file of a target video, the subtitle file comprises a subtitle character set, and the font data request is used for requesting the server to return a font data set matched with the target subtitle style and the subtitle character set;
and receiving the font data set returned by the server, and displaying the subtitle corresponding to the subtitle file according to the font data set in the process of playing the target video.
2. The method of claim 1, wherein the acquiring the target subtitle style comprises:
displaying a video playing interface, and displaying one or more subtitle style options in the video playing interface;
and responding to a selection operation for the one or more subtitle style options, and determining a subtitle style corresponding to the subtitle style option selected by the selection operation as the target subtitle style.
3. The method of claim 2, wherein displaying one or more subtitle style options in the video playback interface comprises:
Displaying a subtitle style setting control in the video playing interface;
and displaying the one or more subtitle style options in the video playing interface when the triggering operation of the subtitle style setting control is detected.
4. A method according to any one of claims 1-3, wherein the method further comprises:
determining a request character set according to the subtitle character set;
determining the font data request according to the target subtitle style and the request character set;
wherein the glyph data request is for requesting the server to return a glyph data set that matches the target subtitle style and the requested character set.
5. The method of claim 4, wherein said determining a set of request characters from said set of caption characters comprises:
if redundant characters exist in the caption character set, performing redundancy elimination processing on the caption character set;
and determining the request character set according to the subtitle character set subjected to redundancy removal processing.
6. The method of any of claims 1-3, wherein the set of caption characters includes one or more sections of caption characters, the caption file further including a display rule for each section of caption characters;
The displaying the subtitle corresponding to the subtitle file according to the font data set includes:
for a first caption character string, acquiring the font data of each character in the first caption character string from the font data set, and generating a second caption character string according to the font data of each character; the style of the characters in the second caption character string is the target caption style, and the first caption character string is any caption character in the one or more sections of caption characters;
and displaying the second caption character string in a video playing interface according to a display rule corresponding to the first caption character string included in the caption file.
7. The method of claim 4, wherein said determining a set of request characters from said set of caption characters comprises:
if a historical font data set corresponding to the target subtitle style is recorded, determining the request character set according to the historical font data set and the subtitle character set; wherein, the request characters in the request character set are contained in the caption character set, and the historical font data set does not contain font data corresponding to the request characters;
Wherein, in the process of playing the target video, displaying the subtitle corresponding to the subtitle file according to the font data set includes:
and displaying the subtitles corresponding to the subtitle files according to the font data set returned by the server and the historical font data set in the process of playing the target video.
8. The method of claim 7, wherein the method further comprises:
after the font data set matched with the target subtitle style and the request character set is acquired, updating the historical font data set according to the font data set matched with the target subtitle style and the request character set.
9. A method of data processing, the method comprising:
receiving a font data request sent by a client; the font data request is determined according to a target subtitle style and a subtitle file of a target video, wherein the subtitle file comprises a subtitle character set;
responding to the font data request, and acquiring a font data set matched with the target subtitle style and the subtitle character set;
And sending the font data set to the client so that the client displays the subtitles corresponding to the subtitle files according to the font data set in the process of playing the target video.
10. A subtitle processing apparatus, characterized in that the apparatus comprises means for implementing the data processing method according to any one of claims 1-8, or means for implementing the data processing method according to claim 9.
11. A computer device, comprising: a processor, a communication interface and a memory, the processor, the communication interface and the memory being interconnected, wherein the memory stores executable program code, the processor being adapted to invoke the executable program code to implement the data processing method according to any of claims 1-8 or to implement the data processing method according to claim 9.
12. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to implement the data processing method of any one of claims 1-8 or the data processing method of claim 9.
13. A computer program product, characterized in that it comprises a computer program or computer instructions which, when executed by a processor, implement the data processing method according to any of claims 1-8 or the data processing method according to claim 9.

