CN111491184B - Method and device for generating situational subtitles, electronic equipment and storage medium - Google Patents
- Publication number
- CN111491184B (application number CN201910078246.3A)
- Authority
- CN
- China
- Prior art keywords
- situational
- subtitles
- subtitle
- currently played
- played video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The embodiment of the invention discloses a method and a device for generating situational subtitles, an electronic device, and a storage medium. The method comprises the following steps: receiving generation requests for situational subtitles sent by different requesters through a client during the playing of a currently played video; determining the situational subtitles corresponding to the generation requests; and adding the situational subtitles to the currently played video. The situational subtitles are thereby more diversified, which is convenient for the user and improves the viewing experience of the user.
Description
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method and an apparatus for generating a situational subtitle, an electronic device, and a storage medium.
Background
With the popularization of the internet, users can acquire videos to watch through convenient channels. Video-carrying media such as video tapes and discs have gradually faded from public view, while transmission modes such as network downloading and online watching are accepted by more and more users. During generation, situational subtitles can be added to the playing content to explain and annotate it; the added subtitles are fused with the original video file to become part of the video, which can enrich the watching experience and improve the user experience.
At present, the following two methods for generating situational subtitles are generally used. The first is manual generation, which is too subjective, has poor accuracy, and easily produces improper matching. The second is server-side generation: the server first determines the situational subtitle corresponding to each piece of playing content in a target video, and then fixedly adds that subtitle to the content. Because the server fixedly adds the situational subtitles to each piece of playing content, the subtitles displayed to different viewing users are fixed and unchangeable, and situational subtitles cannot be dynamically added to each piece of playing content.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
in the first prior-art method for generating situational subtitles, the subjectivity is too strong, the accuracy is poor, and improper matching is easily produced; in the second prior-art method, the server fixedly adds the situational subtitles to each piece of playing content, so the subtitles displayed to different viewing users are fixed and unchangeable, the viewing effect is poor, and the viewing experience of the user is degraded.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide a method and an apparatus for generating situational subtitles, an electronic device, and a storage medium, whereby the situational subtitles are more diversified, convenient for the user to use, and improve the viewing experience of the user.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for generating situational subtitles, where the method includes:
receiving a generation request of a situational subtitle sent by different requesters through a client in the playing process of a currently played video;
determining the situational subtitles corresponding to the generation request of the situational subtitles;
and adding the situational subtitles to the currently played video.
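The three steps above can be sketched as follows; all names (`SubtitleRequest`, `VideoSession`, `handle_generation_request`) and the fallback subtitle text are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SubtitleRequest:
    requester: str          # e.g. "editing_user", "viewing_user" (assumed labels)
    preset_text: str = ""   # subtitle text carried in the request, if any

@dataclass
class VideoSession:
    video_id: str
    subtitles: list = field(default_factory=list)

def determine_subtitle(request: SubtitleRequest) -> str:
    # Simplest branch: use the text preset by the requester when present.
    return request.preset_text or "(auto-generated subtitle)"

def handle_generation_request(session: VideoSession, request: SubtitleRequest) -> str:
    # Step 1: the request is received during playback (modeled as this call).
    # Step 2: determine the situational subtitle for the request.
    text = determine_subtitle(request)
    # Step 3: add the subtitle to the currently played video.
    session.subtitles.append(text)
    return text
```

The session object stands in for the currently played video; a real implementation would attach the subtitle to a rendering pipeline rather than a list.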
In the above embodiment, the determining the situational subtitles corresponding to the generation request of the situational subtitles includes:
extracting the situational subtitles preset by the current user from the generation request; determining the situational subtitles preset by the current user as the situational subtitles corresponding to the generation request; or,
performing image recognition on the currently played video in response to the generation request of the situational subtitles to acquire an image recognition result corresponding to the currently played video, and determining the situational subtitles corresponding to the generation request according to the image recognition result; or converting a user comment selected by a policy into the situational subtitles corresponding to the generation request; or converting a feature of the page the current user is browsing into the situational subtitles corresponding to the generation request; or determining the subtitle selected by the current user in the currently played video through consumption data as the situational subtitles corresponding to the generation request.
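The five alternative sources above can be modeled as a priority-ordered dispatch; the dictionary keys and the ordering are assumptions for illustration only:

```python
def determine_situational_subtitle(request: dict, video: dict) -> str:
    # Try the five alternatives in an assumed fixed priority order.
    if request.get("preset_subtitle"):         # subtitle preset by the current user
        return request["preset_subtitle"]
    if video.get("image_recognition_result"):  # image recognition on the played video
        return "Scene: " + video["image_recognition_result"]
    if request.get("policy_comment"):          # user comment selected by a policy
        return request["policy_comment"]
    if request.get("page_feature"):            # feature of the page the user browses
        return request["page_feature"]
    if request.get("consumption_choice"):      # subtitle chosen via consumption data
        return request["consumption_choice"]
    return ""                                  # no applicable source
```

A real implementation might instead choose the source based on the requester type rather than a fixed priority; the patent does not specify how the alternatives are selected.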
In the above embodiment, the performing image recognition on the currently played video in response to the generation request of the situational subtitles and acquiring an image recognition result corresponding to the currently played video includes:
performing object recognition on the currently played video in response to the generation request of the situational subtitles, and acquiring an object recognition result corresponding to the currently played video; or,
performing face recognition on the currently played video in response to the generation request of the situational subtitles, and acquiring a face recognition result corresponding to the currently played video; or,
performing scene recognition on the currently played video in response to the generation request of the situational subtitles, and acquiring a scene recognition result corresponding to the currently played video.
In the above embodiment, the adding the situational subtitles to the currently played video includes:
converting the situational subtitles into target situational subtitles in a preset format, where the preset format comprises a preset font, a preset size, and a preset color;
and adding the target situational subtitles to the currently played video.
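The format conversion above can be sketched as follows; the concrete preset values (font `SimHei`, size 24, white) are assumptions, since the patent only states that a preset font, size, and color exist:

```python
from dataclasses import dataclass

@dataclass
class TargetSubtitle:
    text: str
    font: str
    size: int
    color: str

# Assumed preset values for illustration only.
PRESET_FORMAT = {"font": "SimHei", "size": 24, "color": "#FFFFFF"}

def to_target_format(text: str, preset: dict = PRESET_FORMAT) -> TargetSubtitle:
    # Convert a plain situational subtitle into the preset target format.
    return TargetSubtitle(text, preset["font"], preset["size"], preset["color"])
```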
In a second aspect, an embodiment of the present invention provides an apparatus for generating situational subtitles, where the apparatus includes: a receiving module, a determining module, and an adding module; wherein:
the receiving module is used for receiving the generation request of the situational subtitles sent by different requesters through the client in the playing process of the current playing video;
the determining module is used for determining the situational subtitles corresponding to the generation request of the situational subtitles;
the adding module is used for adding the situational subtitles to the currently played video.
In the above embodiment, the determining module is specifically configured to extract the situational subtitles preset by the current user from the generation request of the situational subtitles and determine them as the situational subtitles corresponding to the generation request; or perform image recognition on the currently played video in response to the generation request to obtain an image recognition result corresponding to the currently played video, and determine the situational subtitles corresponding to the generation request according to the image recognition result; or convert a user comment selected by a policy into the situational subtitles corresponding to the generation request; or convert a feature of the page the current user is browsing into the situational subtitles corresponding to the generation request; or determine the subtitle selected by the current user in the currently played video through consumption data as the situational subtitles corresponding to the generation request.
In the above embodiment, the determining module is specifically configured to perform object recognition on the currently played video in response to the generation request of the situational subtitles and obtain an object recognition result corresponding to the currently played video; or perform face recognition on the currently played video in response to the generation request and obtain a face recognition result; or perform scene recognition on the currently played video in response to the generation request and obtain a scene recognition result corresponding to the currently played video.
In the above embodiment, the adding module is specifically configured to convert the situational subtitles into target situational subtitles in a preset format, where the preset format comprises a preset font, a preset size, and a preset color, and to add the target situational subtitles to the currently played video.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for generating situational subtitles according to any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention provides a storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method for generating situational subtitles according to any embodiment of the present invention.
The embodiments of the present invention provide a method and an apparatus for generating situational subtitles, an electronic device, and a storage medium, which can receive generation requests for situational subtitles sent by different requesters through a client during the playing of a currently played video, then determine the situational subtitles corresponding to the generation requests, and add the situational subtitles to the currently played video. That is to say, in the technical solution of the present invention, generation requests sent by different requesters can be received during playback, and the situational subtitles corresponding to these different requests are then determined. By contrast, in the first prior-art method, the subjectivity is too strong, the accuracy is poor, and improper matching is easily produced; in the second prior-art method, the server fixedly adds the situational subtitles to each piece of playing content, so the subtitles displayed to different viewing users are fixed and unchangeable, the viewing effect is poor, and the viewing experience is degraded. Therefore, compared with the prior art, the method, the apparatus, the electronic device, and the storage medium provided by the embodiments of the present invention make the situational subtitles more diversified, are convenient for the user to use, and improve the viewing experience of the user; moreover, the technical solution of the embodiments of the present invention is simple and convenient to implement, easy to popularize, and applicable to a wide range of scenarios.
Drawings
Fig. 1 is a schematic flowchart of a method for generating a situational subtitle according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for generating a situational subtitle according to a second embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for generating a situational subtitle according to a third embodiment of the present invention;
fig. 4 is a schematic flowchart of a method for generating a situational subtitle according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a situational subtitle generating apparatus according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
Example one
Fig. 1 is a flowchart illustrating a method for generating situational subtitles according to an embodiment of the present invention. As shown in fig. 1, the method may include the following steps:
Step 101, receiving a generation request of situational subtitles sent by different requesters through a client in the playing process of a currently played video.
In a specific embodiment of the present invention, the electronic device may receive, during the playing of the currently played video, generation requests for situational subtitles sent by different requesters through a client. Specifically, the electronic device may receive a generation request sent by an editing user of the currently played video through a client; or a generation request sent by a viewing user of the currently played video through a client; or a generation request sent by a machine understanding module through a client; or a generation request sent by a video understanding module through a client.
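The four requester types above can be modeled as a simple whitelist check; the string labels are illustrative assumptions, not identifiers from the patent:

```python
# Assumed labels for the four requester types named in the embodiment.
VALID_REQUESTERS = {
    "editing_user",
    "viewing_user",
    "machine_understanding_module",
    "video_understanding_module",
}

def accept_generation_request(requester: str) -> bool:
    # A generation request is accepted only from one of the four requesters.
    return requester in VALID_REQUESTERS
```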
And step 102, determining the situational subtitles corresponding to the generation request of the situational subtitles.
In a specific embodiment of the present invention, the electronic device may extract the situational subtitles preset by the current user from the generation request and determine them as the situational subtitles corresponding to the generation request; or perform image recognition on the currently played video in response to the generation request, acquire an image recognition result corresponding to the currently played video, and determine the situational subtitles corresponding to the generation request according to the image recognition result; or convert a user comment selected by a policy into the situational subtitles corresponding to the generation request; or convert a feature of the page the current user is browsing into the situational subtitles corresponding to the generation request; or determine the subtitle selected by the current user in the currently played video through consumption data as the situational subtitles corresponding to the generation request.
And 103, adding the scene subtitles to the currently played video.
In a specific embodiment of the present invention, the electronic device may add the situational subtitles to the currently played video. Specifically, the electronic device may convert the situational subtitles into target situational subtitles in a preset format, where the preset format comprises a preset font, a preset size, and a preset color, and then add the target situational subtitles to the currently played video.
The method for generating situational subtitles provided by this embodiment can receive generation requests for situational subtitles sent by different requesters through a client during the playing of a currently played video, determine the situational subtitles corresponding to the generation requests, and add them to the currently played video. Compared with the prior art, in which manual generation is too subjective and inaccurate and server-side generation displays fixed, unchangeable subtitles to every viewing user, the situational subtitles of this embodiment are more diversified, convenient for the user to use, and improve the viewing experience of the user; moreover, the technical solution is simple and convenient to implement, easy to popularize, and widely applicable.
Example two
Fig. 2 is a flowchart illustrating a method for generating situational subtitles according to a second embodiment of the present invention. As shown in fig. 2, the method may include the following steps:
Step 201, receiving a generation request of situational subtitles sent by different requesters through a client in the playing process of a currently played video.
In a specific embodiment of the present invention, the electronic device may receive, during the playing of the currently played video, generation requests for situational subtitles sent by different requesters through a client. Specifically, the electronic device may receive a generation request sent by an editing user of the currently played video through a client; or a generation request sent by a viewing user of the currently played video through a client; or a generation request sent by a machine understanding module through a client; or a generation request sent by a video understanding module through a client.
Step 202, extracting the situational subtitles preset by the current user from the generation request, and determining them as the situational subtitles corresponding to the generation request.
In a specific embodiment of the present invention, the electronic device may extract the situational subtitles preset by the current user from the generation request of the situational subtitles and determine them as the situational subtitles corresponding to the generation request. Specifically, the current user may upload a video on an editing page, click the subtitle button below the text, and enter the subtitle-adding page; after clicking a subtitle on that page, a subtitle input box appears in the video preview box, and clicking the subtitle pops up a keyboard for entering the situational subtitle. Multiple subtitle styles and colors are supported, and clicking another subtitle style changes the style and color of the edited subtitle; multiple subtitles can be added to one video. The appearance time of a subtitle can be controlled manually: the timeline below can be operated to select the starting time point and the duration of the subtitle, and the appearance times of two subtitles may overlap completely or partially.
In addition, while browsing a video, the current user may click the add-subtitle button for content of interest to enter the subtitle-adding page; after clicking a subtitle on that page, a subtitle input box appears in the video preview box, and clicking the subtitle pops up a keyboard for entering the situational subtitle. Multiple subtitle styles and colors are supported, and clicking another subtitle style changes the style and color of the edited subtitle; multiple subtitles can be added to one video, the timeline below can be operated to select the starting time point and the duration of each subtitle, and the appearance times of two subtitles may overlap completely or partially. Therefore, in the playing process of the currently played video, the electronic device can extract the situational subtitles preset by the current user from the generation request and determine them as the situational subtitles corresponding to the generation request.
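The manually controlled appearance times described above, including fully or partially overlapping subtitles, can be modeled with a start time and a duration per subtitle; the names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TimedSubtitle:
    text: str
    start: float     # seconds into the video when the subtitle appears
    duration: float  # how long the subtitle stays on screen

def visible_at(subtitles: list, t: float) -> list:
    # Return the text of every subtitle whose display window covers time t;
    # windows are allowed to overlap fully or partially.
    return [s.text for s in subtitles if s.start <= t < s.start + s.duration]
```

Because visibility is checked per subtitle, any number of windows may coexist at the same playback time, matching the overlap behavior described above.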
And step 203, adding the situational subtitles to the currently played video.
In a specific embodiment of the present invention, the electronic device may add the situational subtitles to the currently played video. Specifically, the electronic device may convert the situational subtitles into target situational subtitles in a preset format, where the preset format comprises a preset font, a preset size, and a preset color, and then add the target situational subtitles to the currently played video.
The method for generating situational subtitles provided by this embodiment can receive generation requests for situational subtitles sent by different requesters through a client during the playing of a currently played video, determine the situational subtitles corresponding to the generation requests, and add them to the currently played video. Compared with the prior art, in which manual generation is too subjective and inaccurate and server-side generation displays fixed, unchangeable subtitles to every viewing user, the situational subtitles of this embodiment are more diversified, convenient for the user to use, and improve the viewing experience of the user; moreover, the technical solution is simple and convenient to implement, easy to popularize, and widely applicable.
EXAMPLE III
Fig. 3 is a flowchart illustrating a method for generating situational subtitles according to a third embodiment of the present invention. As shown in fig. 3, the method may include the following steps:
Step 301, receiving a generation request of situational subtitles sent by different requesters through a client in the playing process of a currently played video.
In a specific embodiment of the present invention, the electronic device may receive, during the playing of the currently played video, generation requests for situational subtitles sent by different requesters through a client. Specifically, the electronic device may receive a generation request sent by an editing user of the currently played video through a client; or a generation request sent by a viewing user of the currently played video through a client; or a generation request sent by a machine understanding module through a client; or a generation request sent by a video understanding module through a client.
Step 302, performing image recognition on the currently played video in response to the generation request of the situational subtitles, acquiring an image recognition result corresponding to the currently played video, and determining the situational subtitles corresponding to the generation request according to the image recognition result.
In a specific embodiment of the present invention, the electronic device may perform image recognition on the currently played video in response to the generation request of the situational subtitles and acquire an image recognition result corresponding to the currently played video. Specifically, the electronic device may perform object recognition on the currently played video and acquire an object recognition result; or perform face recognition on the currently played video and acquire a face recognition result; or perform scene recognition on the currently played video and acquire a scene recognition result. The situational subtitles corresponding to the generation request are then determined according to the image recognition result corresponding to the currently played video.
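The three recognition alternatives above can be sketched as a dispatch table; the frame is modeled as a dict of precomputed labels, since running real detection models is outside the scope of this sketch:

```python
def recognize(frame_labels: dict, mode: str) -> str:
    # Dispatch to object, face, or scene recognition. In a real system each
    # handler would run a detection model on the video frames; here the
    # labels are assumed to be precomputed.
    handlers = {
        "object": lambda f: f.get("objects", ""),
        "face":   lambda f: f.get("faces", ""),
        "scene":  lambda f: f.get("scene", ""),
    }
    if mode not in handlers:
        raise ValueError("unknown recognition mode: " + mode)
    return handlers[mode](frame_labels)
```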
Step 303, converting the situational subtitles into target situational subtitles in a preset format.
In a specific embodiment of the present invention, the electronic device may convert the situational subtitles into target situational subtitles in a preset format, where the preset format comprises a preset font, a preset size, and a preset color. Specifically, the user may convert the situational subtitle from its current format into the target format, for example by converting its font, its size, or its color.
And step 304, adding the target situational subtitles to the currently played video.
In a specific embodiment of the present invention, the electronic device may add the target situational subtitles to the currently played video. In addition, the electronic device may also receive a subtitle selection instruction sent by a user and, in response to that instruction, select the corresponding situational subtitle in the currently played video.
The method for generating situational subtitles provided by this embodiment can receive generation requests for situational subtitles sent by different requesters through a client during the playing of a currently played video, determine the situational subtitles corresponding to the generation requests, and add them to the currently played video. Compared with the prior art, in which manual generation is too subjective and inaccurate and server-side generation displays fixed, unchangeable subtitles to every viewing user, the situational subtitles of this embodiment are more diversified, convenient for the user to use, and improve the viewing experience of the user; moreover, the technical solution is simple and convenient to implement, easy to popularize, and widely applicable.
Example four
Fig. 4 is a flowchart illustrating a method for generating situational subtitles according to a fourth embodiment of the present invention. As shown in fig. 4, the method for generating situational subtitles may include the following steps:
In a specific embodiment of the present invention, the electronic device may receive, while the currently played video is playing, a generation request for situational subtitles sent by different requesters through a client. Specifically, the electronic device may receive a generation request sent through the client by an editing user of the currently played video; or by a viewing user of the currently played video; or by a machine understanding module; or by a video understanding module.
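The four requester types enumerated above can be sketched as a simple request-acceptance step; this is a minimal illustration only, and the class, field and function names are assumptions rather than anything specified by the embodiment:

```python
from dataclasses import dataclass, field

# Requester types named in this embodiment; everything else here is illustrative.
REQUESTER_TYPES = {
    "editing_user", "viewing_user",
    "machine_understanding", "video_understanding",
}

@dataclass
class CaptionRequest:
    requester: str        # one of REQUESTER_TYPES
    video_id: str         # identifier of the currently played video
    payload: dict = field(default_factory=dict)  # requester-specific data

def accept_request(request: CaptionRequest) -> CaptionRequest:
    """Accept a generation request sent through the client during playback,
    rejecting requesters this embodiment does not enumerate."""
    if request.requester not in REQUESTER_TYPES:
        raise ValueError(f"unknown requester: {request.requester}")
    return request
```

A real implementation would receive such requests over the client's network channel; the sketch only captures the validation of the requester type.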
In a specific embodiment of the present invention, the electronic device may convert a user comment determined by a policy into the situational subtitles corresponding to the generation request; or convert characteristics of the page the current user is browsing into the situational subtitles corresponding to the generation request; or determine subtitles that the current user has selected in the currently played video through consumption data as the situational subtitles corresponding to the generation request. Specifically, the electronic device may first rank all user comments and then convert the several top-ranked comments into the situational subtitles corresponding to the generation request.
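The comment-ranking branch described above can be sketched as follows. The embodiment only says the comments are ranked, so ranking by like count is an assumed policy, not one the text specifies:

```python
def comments_to_captions(comments, top_n=3):
    """Rank all user comments and convert the top-ranked few into situational
    subtitles. Ranking by like count is an assumed policy; the embodiment only
    states that the comments are ranked, without naming the metric."""
    ranked = sorted(comments, key=lambda c: c["likes"], reverse=True)
    return [c["text"] for c in ranked[:top_n]]
```

For example, given three comments with 12, 40 and 25 likes, `comments_to_captions(comments, top_n=2)` keeps the texts of the two most-liked comments as subtitle candidates.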
In a specific embodiment of the present invention, the electronic device may convert the situational subtitles into target situational subtitles in a preset format, where the preset format comprises a preset font, a preset size and a preset color. Specifically, the situational subtitles may be format-converted from their current format into the target format: for example, their font may be converted, their size may be converted, or their color may be converted.
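The format conversion above amounts to rewriting the font, size and color fields while preserving the subtitle text. A minimal sketch, in which the concrete preset values and all names are placeholders rather than values given by the embodiment:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SituationalSubtitle:
    text: str
    font: str = "default"
    size: int = 16
    color: str = "#FFFFFF"

# The preset format comprises a preset font, size and color; these concrete
# values are illustrative placeholders, not values specified by the embodiment.
PRESET_FORMAT = {"font": "SimHei", "size": 24, "color": "#FFD700"}

def to_target_subtitle(subtitle: SituationalSubtitle) -> SituationalSubtitle:
    """Convert a situational subtitle from its current format into the
    target (preset) format, leaving the subtitle text untouched."""
    return replace(subtitle, **PRESET_FORMAT)
```

Using a frozen dataclass with `dataclasses.replace` keeps the original subtitle unchanged, which mirrors how the conversion produces a distinct "target" subtitle.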
Step 404: adding the target situational subtitles to the currently played video.
In a specific embodiment of the present invention, the electronic device may add the target situational subtitles to the currently played video. In addition, the electronic device may also receive a subtitle selection instruction sent by a user and, in response to that instruction, select the corresponding situational subtitles in the currently played video.
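Step 404 and the subtitle selection instruction can be modeled together as a small playback object; the class and method names below are illustrative assumptions, not terms from the patent:

```python
class PlayingVideo:
    """Minimal model of a currently played video that carries situational
    subtitles; method names are illustrative, not from the patent."""

    def __init__(self):
        self.captions = []    # candidate situational subtitles
        self.selected = None  # subtitle chosen via a selection instruction

    def add_caption(self, text):
        """Add a target situational subtitle to the currently played video."""
        self.captions.append(text)

    def on_selection_instruction(self, index):
        """Respond to a user's subtitle selection instruction by selecting
        the corresponding situational subtitle, if one exists at that index."""
        if 0 <= index < len(self.captions):
            self.selected = self.captions[index]
        return self.selected
```

Here the selection instruction is modeled as an index into the candidate list; an out-of-range instruction simply leaves the previous selection in place.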
With the method for generating situational subtitles provided by this embodiment of the present invention, generation requests for situational subtitles sent by different requesters through a client can be received while the currently played video is playing; the situational subtitles corresponding to those generation requests are then determined and added to the currently played video. In the first prior-art method for generating situational subtitles, the result is overly subjective and inaccurate, and mismatches are easily produced; in the second prior-art method, the server adds fixed situational subtitles to each piece of playing content, so the subtitles displayed to different viewing users never change, which degrades the viewing effect and harms the user's viewing experience. Compared with the prior art, the method provided by this embodiment therefore yields more diversified situational subtitles, is easier for users to use, and improves the user's viewing experience; moreover, the technical solution of this embodiment is simple to implement, easy to popularize, and widely applicable.
Example five
Fig. 5 is a schematic structural diagram of a situational subtitle generating apparatus according to a fifth embodiment of the present invention. As shown in fig. 5, the apparatus for generating situational subtitles according to the embodiment of the present invention may include: a receiving module 501, a determining module 502 and an adding module 503; wherein:
the receiving module 501 is configured to receive a generation request for situational subtitles sent by different requesters through a client while the currently played video is playing;
the determining module 502 is configured to determine the situational subtitles corresponding to the generation request for situational subtitles;
the adding module 503 is configured to add the situational subtitles to the currently played video.
Further, the determining module 502 is specifically configured to: extract situational subtitles preset by the current user from the generation request for situational subtitles, and determine those preset situational subtitles as the situational subtitles corresponding to the generation request; or, in response to the generation request, perform image recognition on the currently played video, acquire an image recognition result corresponding to the currently played video, and determine the situational subtitles corresponding to the generation request according to that result; or convert a user comment determined by a policy into the situational subtitles corresponding to the generation request; or convert characteristics of the page the current user is browsing into the situational subtitles corresponding to the generation request; or determine subtitles selected by the current user in the currently played video through consumption data as the situational subtitles corresponding to the generation request.
Further, the determining module 502 is specifically configured to, in response to the generation request for situational subtitles: perform object recognition on the currently played video and acquire an object recognition result corresponding to the currently played video; or perform face recognition on the currently played video and acquire a face recognition result corresponding to the currently played video; or perform scene recognition on the currently played video and acquire a scene recognition result corresponding to the currently played video.
Further, the adding module 503 is specifically configured to convert the situational subtitles into target situational subtitles in a preset format, wherein the preset format comprises a preset font, a preset size and a preset color, and to add the target situational subtitles to the currently played video.
The apparatus for generating situational subtitles can execute the method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in this embodiment, reference may be made to the method for generating situational subtitles provided by any embodiment of the present invention.
Example six
Fig. 6 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention, showing a block diagram of an exemplary electronic device suitable for implementing embodiments of the present invention. The electronic device 12 shown in fig. 6 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 6, electronic device 12 is in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including but not limited to an operating system, one or more application programs, other program modules, and program data, each of which or some combination of which may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 16 executes various functional applications and performs data processing by running programs stored in the system memory 28, for example to implement the method for generating situational subtitles provided by the embodiments of the present invention.
Example seven
The seventh embodiment of the invention provides a computer storage medium.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
Claims (8)
1. A method for generating situational subtitles, applied to an electronic device, characterized by comprising:
receiving a generation request of situational subtitles sent by different requesters through a client in the playing process of a currently played video; wherein the requesters comprise a machine understanding module or a video understanding module;
determining the situational subtitles corresponding to the generation request of the situational subtitles;
adding the situational subtitles to the currently played video;
receiving a subtitle selection instruction sent by a user, and responding to the subtitle selection instruction to select corresponding situational subtitles in the currently played video;
wherein the determining the situational subtitles corresponding to the generation request of the situational subtitles comprises:
extracting the situational subtitles preset by the current user from the generation request of the situational subtitles; determining the situational subtitles preset by the current user as the situational subtitles corresponding to the generation request of the situational subtitles; or,
performing image recognition on the currently played video in response to the generation request of the situational subtitles to acquire an image recognition result corresponding to the currently played video; determining the situational subtitles corresponding to the generation request of the situational subtitles according to the image recognition result corresponding to the currently played video; or converting the user comment determined by the policy into the situational subtitles corresponding to the generation request of the situational subtitles; or converting the characteristics of the page browsed by the current user into the situational subtitles corresponding to the generation request of the situational subtitles; or determining the subtitles selected by the current user in the currently played video through consumption data as the situational subtitles corresponding to the generation request of the situational subtitles.
2. The method of claim 1, wherein the performing image recognition on the currently played video in response to the generation request of the situational subtitles and acquiring an image recognition result corresponding to the currently played video comprises:
performing object recognition on the currently played video in response to the generation request of the situational subtitles, and acquiring an object recognition result corresponding to the currently played video; or,
performing face recognition on the currently played video in response to the generation request of the situational subtitles, and acquiring a face recognition result corresponding to the currently played video; or,
performing scene recognition on the currently played video in response to the generation request of the situational subtitles, and acquiring a scene recognition result corresponding to the currently played video.
3. The method of claim 1, wherein the adding the situational subtitles to the currently played video comprises:
converting the situational subtitles into target situational subtitles in a preset format; wherein the preset format comprises: presetting fonts, sizes and colors;
and adding the target situational subtitles to the currently played video.
4. An apparatus for generating situational subtitles, applied to an electronic device, the apparatus comprising: a receiving module, a determining module and an adding module; wherein:
the receiving module is configured to receive a generation request of situational subtitles sent by different requesters through a client in the playing process of the currently played video; wherein the requesters comprise a machine understanding module or a video understanding module;
the determining module is used for determining the situational subtitles corresponding to the generation request of the situational subtitles;
the adding module is used for adding the situational subtitles to the currently played video; receiving a subtitle selection instruction sent by a user, and responding to the subtitle selection instruction to select corresponding situational subtitles in the currently played video;
the determining module is specifically configured to extract a contextual subtitle preset by a current user from the request for generating the contextual subtitle; determining the situational subtitles preset by the current user as the situational subtitles corresponding to the generation request of the situational subtitles; or, performing image recognition on the currently played video in response to the generation request of the situational subtitles to obtain an image recognition result corresponding to the currently played video; determining the situational subtitles corresponding to the generation request of the situational subtitles according to the image recognition result corresponding to the currently played video; or converting the user comment determined by the strategy into a situational subtitle corresponding to the generation request of the situational subtitle; or, converting the characteristics of the current user browsing page into the situational subtitles corresponding to the generation request of the situational subtitles; or, determining the subtitle selected by the current user in the current playing video through the consumption data as the situational subtitle corresponding to the generation request of the situational subtitle.
5. The apparatus of claim 4, wherein:
the determining module is specifically configured to perform object identification on the currently played video in response to the request for generating the situational subtitles, and obtain an object identification result corresponding to the currently played video; or performing face recognition on the currently played video in response to the generation request of the situational subtitles to obtain a face recognition result corresponding to the currently played video; or performing scene recognition on the currently played video in response to the generation request of the situational subtitles, and acquiring a scene recognition result corresponding to the currently played video.
6. The apparatus of claim 4, wherein:
the adding module is specifically used for converting the situational subtitles into target situational subtitles in a preset format; wherein the preset format comprises: presetting fonts, sizes and colors; and adding the target scene subtitle to the currently played video.
7. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for generating a contextualized caption as defined in any one of claims 1 to 3.
8. A computer-readable storage medium on which a computer program is stored, the program realizing the generation method of the contextualized caption as defined in any one of claims 1 to 3 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910078246.3A CN111491184B (en) | 2019-01-25 | 2019-01-25 | Method and device for generating situational subtitles, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111491184A CN111491184A (en) | 2020-08-04 |
CN111491184B true CN111491184B (en) | 2022-11-01 |
Family
ID=71813537
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910078246.3A Active CN111491184B (en) | 2019-01-25 | 2019-01-25 | Method and device for generating situational subtitles, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111491184B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112653919B (en) * | 2020-12-22 | 2023-03-14 | 维沃移动通信有限公司 | Subtitle adding method and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106412681B (en) * | 2015-07-31 | 2019-12-24 | 腾讯科技(深圳)有限公司 | Live bullet screen video broadcasting method and device |
CN105847999A (en) * | 2016-03-29 | 2016-08-10 | 广州华多网络科技有限公司 | Bullet screen display method and display device |
CN107071506A (en) * | 2017-03-17 | 2017-08-18 | 武汉斗鱼网络科技有限公司 | A kind of method and system for pushing barrage |
2019-01-25: application CN201910078246.3A filed; granted as patent CN111491184B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106303723B (en) | Video processing method and device | |
US11265614B2 (en) | Information sharing method and device, storage medium and electronic device | |
CN111447489A (en) | Video processing method and device, readable medium and electronic equipment | |
CN109255037B (en) | Method and apparatus for outputting information | |
WO2017181597A1 (en) | Method and device for video playback | |
CN111800668B (en) | Barrage processing method, barrage processing device, barrage processing equipment and storage medium | |
EP3905663A1 (en) | Multi-subtitle display method, intelligent terminal and storage medium | |
US20230169278A1 (en) | Video processing method, video processing apparatus, and computer-readable storage medium | |
US20230169275A1 (en) | Video processing method, video processing apparatus, and computer-readable storage medium | |
CN110062287B (en) | Target object control method and device, storage medium and electronic equipment | |
US20220375460A1 (en) | Method and apparatus for generating interaction record, and device and medium | |
JP2023539815A (en) | Minutes interaction methods, devices, equipment and media | |
CN113965809A (en) | Method and device for simultaneous interactive live broadcast based on single terminal and multiple platforms | |
CN111491184B (en) | Method and device for generating situational subtitles, electronic equipment and storage medium | |
CN106878773B (en) | Electronic device, video processing method and apparatus, and storage medium | |
WO2024002072A1 (en) | Information collection method and apparatus, and electronic device | |
CN110673886A (en) | Method and device for generating thermodynamic diagram | |
US20230300429A1 (en) | Multimedia content sharing method and apparatus, device, and medium | |
CN112492399B (en) | Information display method and device and electronic equipment | |
US11750876B2 (en) | Method and apparatus for determining object adding mode, electronic device and medium | |
CN116095388A (en) | Video generation method, video playing method and related equipment | |
CN117319736A (en) | Video processing method, device, electronic equipment and storage medium | |
CN115269920A (en) | Interaction method, interaction device, electronic equipment and storage medium | |
CN114780187A (en) | Prompting method and device | |
CN113891108A (en) | Subtitle optimization method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||