CN108419113B - Subtitle display method and device

Info

Publication number
CN108419113B
Authority
CN
China
Prior art keywords
subtitle
multimedia resource
caption
playing
word
Prior art date
Legal status
Active
Application number
CN201810509160.7A
Other languages
Chinese (zh)
Other versions
CN108419113A (en)
Inventor
何家俊
Current Assignee
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201810509160.7A priority Critical patent/CN108419113B/en
Publication of CN108419113A publication Critical patent/CN108419113A/en
Application granted granted Critical
Publication of CN108419113B publication Critical patent/CN108419113B/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 - Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 - Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 - Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4314 - Generation of visual interfaces involving specific graphical features for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47202 - End-user interface for requesting content on demand, e.g. video on demand
    • H04N 21/488 - Data services, e.g. news ticker
    • H04N 21/4884 - Data services for displaying subtitles
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 - Assembly of content; Generation of multimedia applications
    • H04N 21/854 - Content authoring
    • H04N 21/8543 - Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
    • H04N 21/8547 - Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a subtitle display method and device, and belongs to the technical field of networks. The method comprises the following steps: receiving a media stream of a second terminal in a time period in which a user of the second terminal performs live broadcasting; when a subtitle message of a multimedia resource is acquired from the media stream, determining the playing state of the multimedia resource according to the subtitle message, wherein the subtitle message carries the playing state of the multimedia resource, a target subtitle and a display timestamp of each word in the target subtitle; and when the playing state of the multimedia resource is in playing, displaying the target subtitle word by word according to the display timestamp of each word in the target subtitle. The invention enables the first terminal to display the subtitle word by word according to the subtitle message, so that the user can intuitively know which word in the subtitle corresponds to the current playing time of the multimedia resource, which improves the precision of subtitle display.

Description

Subtitle display method and device
Technical Field
The present invention relates to the field of network technologies, and in particular, to a method and an apparatus for displaying subtitles.
Background
With the development of network technology, a terminal can play a live video of an anchor user on a web page through a browser, and the terminal can display corresponding subtitles while playing the live video. For example, when the terminal plays a live video of the anchor user singing a song, it displays the lyrics of the song, so that viewer users can better enjoy the song sung by the anchor user.
Currently, a terminal generally displays subtitles sentence by sentence. Taking the display of lyrics corresponding to a song as an example, when a viewer user wants to watch a live video of the anchor user singing a song, the viewer user can perform a corresponding operation on the terminal to enter the live broadcast room of the anchor user. The terminal of the viewer user can obtain the lyric file corresponding to the song according to the ID of the song sung by the anchor user. The lyric file includes the total playing duration of the song and a display timestamp of each sentence of lyrics, and the display timestamp of each sentence of lyrics indicates the time information of that sentence within the song. Within the total playing duration of the song, the terminal can display the lyrics sentence by sentence through the browser according to the timestamp of each sentence of lyrics.
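A minimal sketch of this related-art, sentence-by-sentence approach is given below for contrast; the lyric-file field names and the helper function are hypothetical illustrations, not taken from any actual lyric-file format:

```typescript
// Hypothetical shape of a sentence-level lyric file (field names are assumptions).
interface LyricLine {
  startMs: number; // display timestamp of this sentence within the song
  text: string;    // the whole sentence of lyrics
}

interface LyricFile {
  totalDurationMs: number;
  lines: LyricLine[];
}

// Related-art behaviour: show whichever sentence has most recently started.
// No per-word timing exists, so nothing finer than a sentence can be highlighted.
function currentLine(file: LyricFile, playTimeMs: number): string | undefined {
  let shown: string | undefined;
  for (const line of file.lines) {
    if (line.startMs <= playTimeMs) {
      shown = line.text;
    }
  }
  return shown;
}
```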
In the process of implementing the present invention, the inventor found that the related art has at least the following problem:
the above subtitle display method can only display subtitles sentence by sentence, for example, display lyrics sentence by sentence, so that the user cannot intuitively know which word in the lyrics corresponds to the current playing time of the song. Therefore, the method suffers from low subtitle display precision.
Disclosure of Invention
The embodiment of the invention provides a subtitle display method and device, which can solve the problem of low subtitle display precision in the related art. The technical solution is as follows:
in a first aspect, a subtitle display method is provided, which is applied to a first terminal, and includes:
receiving a media stream of a second terminal in a time period of live broadcasting of a user of the second terminal;
when a subtitle message of a multimedia resource is acquired from the media stream, determining the playing state of the multimedia resource according to the subtitle message, wherein the subtitle message carries the playing state of the multimedia resource, a target subtitle and a display timestamp of each word in the target subtitle;
when the playing state of the multimedia resource is in playing, performing word-by-word display on the target caption according to the display timestamp of each word in the target caption;
the playing state of the multimedia resource comprises playing and stopping playing, and the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource.
In one possible implementation manner, the displaying the target subtitles word by word according to the display timestamp of each word in the target subtitles includes:
and displaying the target caption word by word by adopting a preset animation according to the display timestamp of each word in the target caption.
In one possible implementation manner, the displaying the target subtitles word by word by using a preset animation includes:
and drawing the preset animation for displaying the target caption word by word based on a canvas drawing board of a hypertext markup language HTML5 of the multimedia browsing application.
In a possible implementation manner, after determining the play state of the multimedia resource according to the subtitle packet, the method further includes:
and when the playing state of the multimedia resource is the playing stop state, stopping displaying the subtitles.
In one possible implementation manner, the displaying the target subtitles word by word includes:
displaying the target captions word by word in an interface of a multimedia browsing application;
correspondingly, after the target subtitles are displayed word by word in the interface of the multimedia browsing application, the method further comprises:
and when the interface window of the multimedia browsing application is closed, stopping displaying the subtitles.
In a second aspect, a subtitle display method is provided, which is applied to a second terminal, and includes:
when a user of the second terminal starts live broadcasting, collecting video frames and audio frames during the user's live broadcast;
generating a media stream based on the collected video frame and audio frame and sending the media stream to a server, wherein the server is used for forwarding the media stream to a first terminal;
in the time period of live broadcasting of the user of the second terminal, when multimedia resources are played, carrying the caption message of the multimedia resources in the media stream and sending the caption message to a server;
the caption message carries a playing state of the multimedia resource, a target caption and a display timestamp of each word in the target caption, the playing state of the multimedia resource comprises playing and stopping playing, and the target caption refers to each sentence of caption corresponding to the current playing time of the multimedia resource.
In a possible implementation manner, when a multimedia resource is played, the sending a caption message of the multimedia resource carried in the media stream to a server includes:
when the multimedia resource is played or continuously played, carrying a first caption message of the multimedia resource in the media stream and sending the first caption message to the server, wherein the playing state of the multimedia resource carried by the first caption message is playing;
and when the multimedia resource is stopped being played, carrying a second caption message of the multimedia resource in the media stream and sending the second caption message to the server, wherein the playing state of the multimedia resource carried by the second caption message is the playing stop.
In a possible implementation manner, when a multimedia resource is played, the sending a caption message of the multimedia resource carried in the media stream to a server includes:
when the multimedia resource is played through a designated application, acquiring a subtitle file corresponding to the multimedia resource, wherein the subtitle file is an encrypted subtitle file and comprises each sentence of subtitle of the multimedia resource, the total playing duration, a display timestamp of each sentence of subtitle and a display timestamp of each word in each sentence of subtitle;
after the subtitle file is decrypted through the designated application, acquiring a target subtitle corresponding to the current playing time according to the current playing time of the multimedia resource and the display timestamp of each sentence of subtitle in the subtitle file;
and generating the caption message according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, and executing the steps of carrying the caption message of the multimedia resource in the media stream and sending the caption message to a server.
In a third aspect, a caption display system is provided, which comprises a first terminal, a second terminal and a server,
the second terminal is used for collecting video frames and audio frames when a user broadcasts directly, generating media streams based on the collected video frames and audio frames and sending the media streams to the server;
the server is used for sending the media stream to the first terminal;
the first terminal is used for determining the playing state of the multimedia resource according to the subtitle message when the subtitle message of the multimedia resource is acquired from the media stream, and displaying the target subtitle word by word according to the display timestamp of each word in the target subtitle when the playing state of the multimedia resource is in playing;
the caption message carries a playing state of the multimedia resource, a target caption and a display timestamp of each word in the target caption, the playing state of the multimedia resource comprises playing and stopping playing, and the target caption refers to each sentence of caption corresponding to the current playing time of the multimedia resource.
In a possible implementation manner, the first terminal is configured to perform word-by-word display on the target subtitle by using a preset animation according to a display timestamp of each word in the target subtitle.
In one possible implementation, the second terminal is configured to:
when the multimedia resource is played or continuously played, carrying a first caption message of the multimedia resource in the media stream and sending the first caption message to the server, wherein the playing state of the multimedia resource carried by the first caption message is playing;
and when the multimedia resource is stopped being played, carrying a second caption message of the multimedia resource in the media stream and sending the second caption message to the server, wherein the playing state of the multimedia resource carried by the second caption message is the playing stop.
In one possible implementation, the second terminal is configured to:
when the multimedia resource is played through a designated application, acquiring a subtitle file corresponding to the multimedia resource, wherein the subtitle file is an encrypted subtitle file and comprises each sentence of subtitle of the multimedia resource, the total playing duration, a display timestamp of each sentence of subtitle and a display timestamp of each word in each sentence of subtitle;
after the subtitle file is decrypted through the designated application, acquiring a target subtitle corresponding to the current playing time according to the current playing time of the multimedia resource and the display timestamp of each sentence of subtitle in the subtitle file;
and generating the caption message according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, and executing the steps of carrying the caption message of the multimedia resource in the media stream and sending the caption message to a server.
In a fourth aspect, there is provided a subtitle display apparatus, the apparatus comprising:
the receiving module is used for receiving the media stream of the second terminal in the time period of live broadcasting of the user of the second terminal;
the determining module is used for determining the playing state of the multimedia resource according to the subtitle message when the subtitle message of the multimedia resource is acquired from the media stream, wherein the subtitle message carries the playing state of the multimedia resource, the target subtitle and the display timestamp of each word in the target subtitle;
the display module is used for displaying the target subtitles word by word according to the display timestamp of each word in the target subtitles when the playing state of the multimedia resource is in playing;
the playing state of the multimedia resource comprises playing and stopping playing, and the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource.
In a possible implementation manner, the display module is configured to display the target subtitles word by word by adopting a preset animation according to a display timestamp of each word in the target subtitles.
In one possible implementation manner, the display module is configured to draw the preset animation for displaying the target subtitle word by word based on a canvas drawing board of a hypertext markup language HTML5 of a multimedia browsing application.
In a possible implementation manner, the display module is further configured to stop displaying the subtitles when the playing status of the multimedia resource is stop playing.
In one possible implementation manner, the display module is configured to display the target subtitles word by word in an interface of a multimedia browsing application;
correspondingly, the display module is further configured to stop displaying the subtitles when the interface window of the multimedia browsing application is closed.
In a fifth aspect, a subtitle display apparatus is provided, which is applied to a second terminal, and includes:
the acquisition module is used for acquiring video frames and audio frames during live broadcasting of the user when the user of the second terminal starts live broadcasting;
a sending module, configured to generate a media stream based on the collected video frames and audio frames and send the media stream to a server, wherein the server is configured to forward the media stream to the first terminal;
the sending module is further configured to carry a subtitle message of the multimedia resource in the media stream and send the subtitle message to a server when the multimedia resource is played in a time period in which a user of the second terminal performs live broadcasting;
the caption message carries a playing state of the multimedia resource, a target caption and a display timestamp of each word in the target caption, the playing state of the multimedia resource comprises playing and stopping playing, and the target caption refers to each sentence of caption corresponding to the current playing time of the multimedia resource.
In one possible implementation, the sending module is configured to:
when the multimedia resource is played or continuously played, carrying a first caption message of the multimedia resource in the media stream and sending the first caption message to the server, wherein the playing state of the multimedia resource carried by the first caption message is playing;
and when the multimedia resource is stopped being played, carrying a second caption message of the multimedia resource in the media stream and sending the second caption message to the server, wherein the playing state of the multimedia resource carried by the second caption message is the playing stop.
In one possible implementation, the sending module is configured to:
when the multimedia resource is played through a designated application, acquiring a subtitle file corresponding to the multimedia resource, wherein the subtitle file is an encrypted subtitle file and comprises each sentence of subtitle of the multimedia resource, the total playing duration, a display timestamp of each sentence of subtitle and a display timestamp of each word in each sentence of subtitle;
after the subtitle file is decrypted through the designated application, acquiring a target subtitle corresponding to the current playing time according to the current playing time of the multimedia resource and the display timestamp of each sentence of subtitle in the subtitle file;
and generating the caption message according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, and executing the steps of carrying the caption message of the multimedia resource in the media stream and sending the caption message to a server.
In a sixth aspect, a terminal is provided that includes a processor and a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory to implement the method steps of any one of the above aspects or any one of the implementation manners of any one of the above aspects.
In a seventh aspect, a computer-readable storage medium is provided, in which a computer program is stored, which, when being executed by a processor, implements the method steps of any one of the above aspects or any one of the implementation manners of the above aspects.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
the subtitle message of the multimedia resource currently played by the second terminal is provided to the first terminal through the media stream, and the subtitle message carries each sentence of subtitle corresponding to the current playing time of the multimedia resource and the precise timestamp of each word in each sentence of subtitle, so that the first terminal can display the subtitle word by word according to the subtitle message, the user can intuitively know which word in the subtitle corresponds to the current playing time of the multimedia resource, and the precision of subtitle display is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a subtitle display method according to an embodiment of the present invention.
Fig. 2 is a flowchart of a subtitle display method according to an embodiment of the present invention.
Fig. 3 is a flowchart of a subtitle display method according to an embodiment of the present invention.
Fig. 4 is a flowchart of a subtitle display method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a preset animation for displaying a subtitle word by word according to an embodiment of the present invention.
Fig. 6 is a flowchart of a lyric display method according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a subtitle display apparatus according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a subtitle display apparatus according to an embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a terminal 900 according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a subtitle display method according to an embodiment of the present invention. Referring to fig. 1, the implementation environment includes: a first terminal 101, a server 102 and a second terminal 103.
The second terminal 103 is the terminal on which an anchor user performs live broadcasting, and serves as the provider of the media stream. During the anchor user's live broadcast, the second terminal 103 may collect video frames and audio frames of the anchor user, generate a media stream of the live broadcast based on the collected video frames and audio frames, and send the media stream to the server 102.
The server 102 is configured to receive the media stream sent by the second terminal 103, and forward the received media stream to the first terminal 101. For example, the server 102 may be a streaming media server.
The first terminal 101 is the receiver of the media stream, that is, the terminal on which a viewer user plays the media stream. The first terminal 101 may obtain the video frames and audio frames from the media stream for playing.
It should be noted that the embodiment of the present invention is described by taking one server as an example; in practice, the implementation environment may further include a plurality of servers, which serve as relay servers responsible for forwarding the media stream sent by the second terminal 103 to the first terminal 101.
The second terminal 103 and the server 102, and the server 102 and the first terminal 101 may communicate with each other through a wireless network or a wired network.
Fig. 2 is a flowchart of a subtitle display method according to an embodiment of the present invention. Referring to fig. 2, the method includes:
201. Receiving the media stream of the second terminal in the time period in which the user of the second terminal performs live broadcasting.
202. When a subtitle message of a multimedia resource is acquired from the media stream, determining the playing state of the multimedia resource according to the subtitle message, wherein the subtitle message carries the playing state of the multimedia resource, a target subtitle and a display timestamp of each word in the target subtitle.
203. When the playing state of the multimedia resource is in playing, displaying the target subtitle word by word according to the display timestamp of each word in the target subtitle.
The playing state of the multimedia resource comprises playing and stopping playing, and the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource.
According to the method provided by the embodiment of the invention, the subtitle message of the multimedia resource currently played by the second terminal is provided to the first terminal through the media stream, and the subtitle message carries each sentence of subtitle corresponding to the current playing time of the multimedia resource and the precise timestamp of each word in each sentence of subtitle, so that the first terminal can display the subtitle word by word according to the subtitle message, the user can intuitively know which word in the subtitle corresponds to the current playing time of the multimedia resource, and the precision of subtitle display is improved.
In one possible implementation, the displaying the target caption word by word according to the display time stamp of each word in the target caption includes:
and displaying the target caption word by word by adopting a preset animation according to the display timestamp of each word in the target caption.
In one possible implementation manner, the displaying the target caption word by word by using a preset animation includes:
and drawing the preset animation for displaying the target caption word by word based on a canvas drawing board of a hypertext markup language HTML5 of the multimedia browsing application.
In a possible implementation manner, after determining the play status of the multimedia resource according to the subtitle packet, the method further includes:
and when the playing state of the multimedia resource is the playing stop state, stopping displaying the subtitles.
In one possible implementation, the displaying the target caption word by word includes:
displaying the target caption word by word in an interface of a multimedia browsing application;
correspondingly, after the target subtitles are displayed word by word in the interface of the multimedia browsing application, the method further comprises:
and when the interface window of the multimedia browsing application is closed, stopping displaying the subtitles.
Fig. 3 is a flowchart of a subtitle display method according to an embodiment of the present invention. Referring to fig. 3, the method includes:
301. When the user of the second terminal starts live broadcasting, collecting video frames and audio frames during the user's live broadcast.
302. Generating a media stream based on the collected video frames and audio frames and sending the media stream to a server, wherein the server is used for forwarding the media stream to the first terminal.
303. In the time period of live broadcasting of the user of the second terminal, when a multimedia resource is played, carrying the subtitle message of the multimedia resource in the media stream and sending it to the server.
The caption message carries a playing state of the multimedia resource, a target caption and a display timestamp of each word in the target caption, the playing state of the multimedia resource comprises playing and stopping playing, and the target caption refers to each sentence of caption corresponding to the current playing time of the multimedia resource.
According to the method provided by the embodiment of the invention, the subtitle message of the multimedia resource currently played by the second terminal is provided to the first terminal through the media stream, and the subtitle message carries each sentence of subtitle corresponding to the current playing time of the multimedia resource and the precise timestamp of each word in each sentence of subtitle, so that the first terminal can display the subtitle word by word according to the subtitle message, the user can intuitively know which word in the subtitle corresponds to the current playing time of the multimedia resource, and the precision of subtitle display is improved.
In a possible implementation manner, when playing a multimedia resource, the sending a caption message of the multimedia resource carried in the media stream to a server includes:
when the multimedia resource is played or continuously played, a first caption message of the multimedia resource is carried in the media stream and is sent to the server, and the playing state of the multimedia resource carried by the first caption message is playing;
when the playing of the multimedia resource is stopped, a second caption message of the multimedia resource is carried in the media stream and sent to the server, and the playing state of the multimedia resource carried by the second caption message is stop playing.
In a possible implementation manner, when playing a multimedia resource, the sending a caption message of the multimedia resource carried in the media stream to a server includes:
when the multimedia resource is played through a designated application, acquiring a subtitle file corresponding to the multimedia resource, wherein the subtitle file is an encrypted subtitle file and comprises each sentence of subtitle of the multimedia resource, the total playing duration, a display timestamp of each sentence of subtitle and a display timestamp of each word in each sentence of subtitle;
after the subtitle file is decrypted through the designated application, a target subtitle corresponding to the current playing time is obtained according to the current playing time of the multimedia resource and the display timestamp of each sentence of subtitle in the subtitle file;
and generating the caption message according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, executing the steps of carrying the caption message of the multimedia resource in the media stream and sending the caption message to a server.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
The subtitle display method provided by the embodiment of the present invention can be applied to a scenario in which subtitles are displayed while a live video is played on a web page through a multimedia browsing application. The multimedia browsing application is used for displaying multimedia data, such as live videos and subtitles of multimedia resources, in a web page. For example, the multimedia browsing application may be a browser, and the browser can play the video and dynamically display the subtitles in the web page without installing any video plug-in (e.g., a flash plug-in).
Fig. 4 is a flowchart of a subtitle display method according to an embodiment of the present invention. Based on the interaction between the first terminal, the server and the second terminal, referring to fig. 4, the method includes:
401. When the anchor user starts live broadcasting, the second terminal sends the media stream to the server.
The media stream may include a video stream and an audio stream.
In the embodiment of the invention, when the anchor user of the second terminal starts live broadcasting, the second terminal can collect video frames and audio frames during the anchor user's live broadcast, generate a media stream based on the collected video frames and audio frames, and send the media stream to a server, where the server is used for forwarding the media stream to the first terminal. The first terminal may be a terminal used by any viewer user in the live broadcast room of the anchor user.
For example, when the anchor user starts live broadcasting, the second terminal may collect the video frames and audio frames of the anchor user's live broadcast through its built-in camera or an external camera, encode and encapsulate the collected video frames into a video stream, and encode and encapsulate the collected audio frames into an audio stream. In addition, the second terminal can also timestamp each captured video frame and audio frame; the timestamp of a video frame indicates the time information of that video frame in the media stream, and the timestamp of an audio frame indicates the time information of that audio frame in the media stream. The timestamps can be used to determine the order in which the video frames and audio frames were captured, so that when the first terminal plays the video frames and audio frames, it can determine the playing order of each frame according to the timestamps.
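As a small illustration of how those capture timestamps can drive playback order on the receiving side (the frame structure below is a hypothetical simplification, not an actual container format):

```typescript
interface MediaFrame {
  kind: "video" | "audio";
  captureMs: number;   // timestamp stamped by the second terminal at capture time
  payload: Uint8Array; // encoded frame data
}

// On the first terminal, decoded frames are played back in capture order,
// which the timestamps make recoverable even if frames arrive interleaved.
function playbackOrder(frames: MediaFrame[]): MediaFrame[] {
  return [...frames].sort((a, b) => a.captureMs - b.captureMs);
}
```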
402. In the time period of live broadcasting of the anchor user, when a multimedia resource is played, the second terminal carries the subtitle message of the multimedia resource in the media stream and sends it to the server.
The caption message carries a playing state of the multimedia resource, a target caption and a display timestamp of each word in the target caption, the playing state of the multimedia resource comprises playing and stopping playing, and the target caption refers to each sentence of caption corresponding to the current playing time of the multimedia resource. For example, the multimedia asset may be a song and the subtitle may be lyrics.
In one possible implementation manner, a designated application may be installed on the second terminal, and the second terminal may play the multimedia resource through the designated application; for example, the designated application may be an accompaniment application for playing the accompaniment of a song. Correspondingly, when the multimedia resource is played through the designated application, the second terminal can acquire a subtitle file corresponding to the multimedia resource, where the subtitle file is an encrypted subtitle file and includes each sentence of subtitle of the multimedia resource, the total playing duration, a display timestamp of each sentence of subtitle, and a display timestamp of each word in each sentence of subtitle. The display timestamp of each sentence of subtitle indicates the time information of that sentence in the multimedia resource, and the display timestamp of each word indicates the time information of that word in the multimedia resource; the time information refers to when the item is displayed while the multimedia resource is played, and includes a starting display time and a display duration.
For example, if the anchor user wants to sing a song during the live broadcast, the anchor user may select a song (accompaniment) on the live broadcast interface displayed by the second terminal and then perform a play operation on the song, such as clicking a play button; when the play operation is detected, the second terminal can play the song. In addition, when the play operation on the song is detected, the second terminal can acquire, through the designated application, the lyric file corresponding to the song from the server that provides lyric files. In order to display the lyrics word by word, the second terminal may obtain a lyric file that includes each sentence of lyrics of the song, the total playing duration, a display timestamp of each sentence of lyrics, and a display timestamp of each word in each sentence of lyrics.
A subtitle file that can provide a display timestamp for each word is generally encrypted with a proprietary algorithm. The designated application has the capability of decrypting the subtitle file with the corresponding decryption algorithm, and because the designated application is implemented in C++ code, the decryption algorithm can be kept secret. Correspondingly, the second terminal can decrypt the subtitle file through the designated application to obtain each sentence of subtitle of the multimedia resource, the total playing duration, the timestamp of each sentence of subtitle and the display timestamp of each word in each sentence of subtitle, and can then obtain the target subtitle corresponding to the current playing time according to the current playing time of the multimedia resource and the timestamp of each sentence of subtitle in the subtitle file. For example, when the second terminal plays the multimedia resource through the designated application, it records the current playing time of the multimedia resource in real time; after obtaining the current playing time, the second terminal may compare the current playing time with the timestamp of each sentence of subtitle in the subtitle file, determine the sentence whose display period (defined by its starting display time and display duration) contains the current playing time, and take that sentence as the current target subtitle. Because the decryption of the subtitle file is performed by the designated application on the second terminal, the decryption algorithm does not need to be exposed to the first terminal and is thus protected.
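A minimal sketch of this selection step is given below; the data shapes and the helper function are assumptions for illustration only, not the designated application's actual (C++) interface:

```typescript
// Assumed shape of one decrypted sentence of subtitle with per-word timing.
interface WordTiming {
  word: string;
  startMs: number;     // starting display time of the word
  durationMs: number;  // display duration of the word
}

interface SubtitleSentence {
  text: string;
  startMs: number;     // starting display time of the sentence
  durationMs: number;  // display duration of the sentence
  words: WordTiming[];
}

// Pick the sentence whose display period contains the current playing time.
function findTargetSubtitle(
  sentences: SubtitleSentence[],
  currentPlayTimeMs: number
): SubtitleSentence | undefined {
  return sentences.find(
    (s) => currentPlayTimeMs >= s.startMs && currentPlayTimeMs < s.startMs + s.durationMs
  );
}
```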
In the process of playing the multimedia resource, when a new target caption is obtained, the second terminal may generate a caption message of the multimedia resource according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, carry the caption message in a media stream, and send the caption message to the server. For example, the second terminal may represent the playing status of the multimedia asset in the form of a status number, with different status numbers corresponding to different playing statuses.
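Building on that, one possible layout of the subtitle message itself is sketched below; the field names and the concrete status numbers are assumptions, since the patent only states that the message carries the playing state, the target subtitle and the per-word display timestamps:

```typescript
// Assumed status numbers; different numbers correspond to different playing states.
enum PlayState {
  Playing = 1, // "in playing"   -> carried by the first subtitle message
  Stopped = 2, // "stop playing" -> carried by the second subtitle message
}

interface WordTimestamp {
  word: string;
  startMs: number;    // starting display time of the word
  durationMs: number; // display duration of the word
}

// One subtitle message carried in the media stream.
interface SubtitleMessage {
  state: PlayState;       // playing state of the multimedia resource
  targetSubtitle: string; // the sentence corresponding to the current playing time
  wordTimestamps: WordTimestamp[];
}

// Built whenever a new target subtitle is obtained during playback.
function buildSubtitleMessage(
  state: PlayState,
  targetSubtitle: string,
  wordTimestamps: WordTimestamp[]
): SubtitleMessage {
  return { state, targetSubtitle, wordTimestamps };
}
```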
In a possible implementation manner, when the second terminal starts playing or continues playing the multimedia resource, it carries a first subtitle message of the multimedia resource in the media stream and sends it to the server, and the playing state of the multimedia resource carried by the first subtitle message is in playing. By carrying the first subtitle message in the media stream, the first terminal, after receiving the media stream, can learn by parsing the first subtitle message that the multimedia resource is in the playing state, that is, the anchor user is performing live broadcast based on the multimedia resource, for example, singing along with it.
In a possible implementation manner, when the second terminal stops playing the multimedia resource, it carries a second subtitle message of the multimedia resource in the media stream and sends it to the server, and the playing state of the multimedia resource carried by the second subtitle message is stop playing. By carrying the second subtitle message in the media stream, the first terminal, after receiving the media stream, can learn by parsing the second subtitle message that the multimedia resource is in the stopped state, that is, the anchor user is not currently performing live broadcast based on the multimedia resource, for example, the anchor user has paused or finished singing the song, or has switched to another song.
403. When receiving the media stream of the second terminal, the server forwards the media stream to the first terminal.
In the embodiment of the present invention, the server, acting as a relay between the first terminal and the second terminal, may receive the media stream of the second terminal and forward the media stream to the first terminal in real time.
It should be noted that the embodiment of the present invention is described by taking, as an example, one forwarding server provided between the first terminal and the second terminal, which directly forwards the media stream of the second terminal to the first terminal. In a possible implementation manner, a plurality of servers may exist between the first terminal and the second terminal, and the media stream of the second terminal may accordingly be forwarded through the plurality of servers until it reaches the first terminal.
404. In the time period of live broadcasting of the anchor user of the second terminal, the first terminal receives the media stream of the second terminal.
In the embodiment of the invention, during the anchor user's live broadcast, the first terminal can continuously receive the media stream of the second terminal and acquire the video frames and audio frames from the media stream for playing. For example, the first terminal may obtain the encoded video frames and audio frames from the media stream, decode the video frames and audio frames, and play them according to the timestamps of the respective video frames and audio frames.
405. When the caption message of the multimedia resource is acquired from the media stream, the first terminal determines the playing state of the multimedia resource according to the caption message, wherein the multimedia resource is the multimedia resource played by the second terminal.
For the case in which the second terminal carries the subtitle message in the media stream in step 402, the first terminal, when receiving the media stream, can obtain the subtitle message from the media stream. Further, the first terminal can read the playing state of the multimedia resource from the subtitle message, for example, read the status number in the subtitle message and thus obtain the playing state corresponding to that status number.
In a possible implementation manner, when the first terminal acquires data from the media stream, it may verify the data format, that is, verify whether the data format is correct and therefore whether the data is a subtitle message. If the data format is correct, that is, the data is a subtitle message, the first terminal executes the step of determining the playing state; if the data format is wrong, that is, the data is not a subtitle message, the first terminal ignores the acquired data. By verifying the data format, it can be ensured that the first terminal processes only valid subtitle messages.
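A minimal sketch of such a format check follows; whether the message is serialized as JSON and which fields it carries are assumptions here (matching the message sketch in step 402), since the patent does not specify an encoding:

```typescript
// Treat data extracted from the media stream as a subtitle message only if it
// parses and carries the expected fields; otherwise the data is ignored.
function isSubtitleMessage(raw: string): boolean {
  try {
    const msg = JSON.parse(raw);
    return (
      typeof msg === "object" &&
      msg !== null &&
      typeof msg.state === "number" &&
      typeof msg.targetSubtitle === "string" &&
      Array.isArray(msg.wordTimestamps)
    );
  } catch {
    return false; // not a subtitle message, so the acquired data is ignored
  }
}
```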
When the playing status of the multimedia asset is in playing, the first terminal may perform the following step 406, and when the playing status of the multimedia asset is stop, the first terminal may perform the following step 407.
406. When the playing state of the multimedia resource is in playing, the first terminal displays the target subtitle word by word according to the display timestamp of each word in the target subtitle.
In the embodiment of the present invention, when the playing state of the multimedia resource is in playing, it indicates that the anchor user of the second terminal is performing live broadcast based on the multimedia resource, for example, the anchor user is singing a song to the accompaniment played by the second terminal. At this time, the first terminal may first display the sentence of target subtitle carried by the currently received subtitle message; further, to let the user intuitively know which word in the target subtitle corresponds to the current playing time of the multimedia resource, the first terminal may display the target subtitle word by word. In one possible implementation manner, the first terminal may display the target subtitle word by word by adopting a preset animation according to the display timestamp of each word in the target subtitle.
The first terminal can determine the display order of each word according to the starting display time indicated by its timestamp: the earlier the starting display time, the earlier the word is displayed. The first terminal can also determine the display duration of each word according to the duration indicated by its timestamp. By providing viewer users with an animation that displays the subtitle word by word, viewer users can intuitively know which word in the subtitle corresponds to the current playing time of the multimedia resource, which improves the precision of subtitle display.
Specifically, the first terminal may draw the preset animation for displaying the target subtitle word by word based on the canvas drawing board of HTML (HyperText Markup Language) 5 of the multimedia browsing application. The canvas drawing board is a component newly added in HTML5 and supported by the multimedia browsing application; like a canvas, images, animations and the like can be drawn on it using JavaScript. Drawing the word-by-word subtitle animation on the canvas drawing board with JavaScript requires no video plug-in; compared with a flash plug-in, this approach needs fewer permissions, makes it harder to access the local file system of the first terminal, and reduces the risk of security vulnerabilities.
Referring to fig. 5, a schematic diagram of a preset animation for displaying a subtitle word by word is provided. In the preset animation, each time a word is to be displayed, it is distinguished from the not-yet-displayed words by a different color (indicated by black in the figure), and the words are gradually displayed from left to right, word by word, according to their display durations, so that the viewer user can visually follow the playing position of the multimedia resource. When the currently displayed word is distinguished by a different color, for each word, part of the word may be recolored first and the remaining part recolored gradually. Of course, the first terminal may also distinguish the currently displayed word from the not-yet-displayed words in other manners when playing the word-by-word subtitle animation; the embodiment of the present invention does not limit the specific form of the preset animation.
In the process of displaying the subtitles, the first terminal may perform picture rendering, for example, rendering the picture using the canvas technology of HTML5 of the multimedia browsing application; draw the word-by-word progress animation, for example, using the drawing-board clearing tool of the canvas drawing board to save performance and obtain a good drawing effect; perform word-by-word rendering, for example, controlling the animation to be played word by word through the display timestamp of each word in each sentence of subtitle provided by the subtitle message; and perform frame-by-frame rendering, for example, using the requestAnimationFrame technique to efficiently utilize the rendering opportunities of the multimedia browsing application. The actual rendering opportunity of the multimedia browsing application is the time when the multimedia browsing application is in use by the user, for example, when the interface (or window) of the multimedia browsing application is open; if the multimedia browsing application is not in use by the user, the first terminal may pause the rendering of the subtitle animation.
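The sketch below illustrates, under stated assumptions, how this rendering could be wired together: the canvas element id, colors, font, and layout are hypothetical, and the word timing structure follows the earlier message sketch rather than any interface defined in the patent.

```typescript
interface WordStamp {
  word: string;
  startMs: number;    // starting display time of the word, relative to the sentence
  durationMs: number; // display duration of the word
}

// Draw one sentence of subtitle on a canvas, recoloring each word progressively
// according to its display timestamp; driven frame by frame by requestAnimationFrame.
// Assumes a <canvas id="lyrics"> element exists in the page.
function playWordByWord(words: WordStamp[], sentenceStart: number): void {
  if (words.length === 0) return;
  const canvas = document.getElementById("lyrics") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;
  ctx.font = "24px sans-serif";

  const render = (now: number) => {
    const elapsed = now - sentenceStart;              // time since the sentence began
    ctx.clearRect(0, 0, canvas.width, canvas.height); // clear the drawing board each frame
    let x = 10;
    for (const w of words) {
      // Fraction of this word that should already be highlighted.
      const p = Math.min(Math.max((elapsed - w.startMs) / w.durationMs, 0), 1);
      const width = ctx.measureText(w.word).width;
      ctx.fillStyle = "#888";                         // not-yet-displayed color
      ctx.fillText(w.word, x, 40);
      if (p > 0) {
        ctx.save();
        ctx.beginPath();
        ctx.rect(x, 0, width * p, canvas.height);     // reveal only the elapsed part
        ctx.clip();
        ctx.fillStyle = "#000";                       // displayed color (black in fig. 5)
        ctx.fillText(w.word, x, 40);
        ctx.restore();
      }
      x += width + 8;
    }
    const last = words[words.length - 1];
    if (elapsed < last.startMs + last.durationMs) {
      requestAnimationFrame(render);                  // frame-by-frame rendering
    }
  };
  requestAnimationFrame(render);
}
```

Calling playWordByWord(message.wordTimestamps, performance.now()) when a message whose state is in playing arrives would start the animation for that sentence; a real implementation would additionally align the anchor-side timestamps with the viewer-side clock and pause rendering when the application is not in use.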
In the above process, the subtitle message is sent through the designated application of the second terminal, which is the tool the anchor user uses to play the multimedia resource; the subtitle message can be carried in the media stream and sent to the first terminal together with it. Since the subtitle message provides a precise timestamp for each sentence of subtitle and for each word, the first terminal can accurately determine the display position of the subtitle from the subtitle message, faithfully reproducing the playing progress of the multimedia resource at the anchor side. The subtitle animation rendered in the web page is realized through the multimedia browsing application: the display position of the subtitle is obtained based on the actual rendering opportunity of the multimedia browsing application, that is, the subtitle corresponding to the current playing time of the multimedia resource is determined, so that the subtitle display progress of the first terminal is accurately synchronized with the playing progress of the multimedia resource on the second terminal.
It should be noted that the embodiment of the present invention takes the canvas drawing board as an example of implementing the animation. In a possible implementation manner, the first terminal may also implement the preset animation for displaying the subtitles word by word by combining the DOM (Document Object Model) of the multimedia browsing application with CSS3, which is not limited in the embodiment of the present invention.
In the above steps 401 to 407, in the process of playing the multimedia resource, the second terminal continuously provides the first terminal, through the media stream, with the sentence of subtitle corresponding to the current playing time of the multimedia resource. Providing the subtitles sentence by sentence in this way limits the risk of the interface being misappropriated, and the decryption of the subtitles is performed by the designated application on the second terminal without exposing the decryption algorithm to the first terminal, so that the first terminal obtains accurate subtitles while the decryption algorithm is protected and the risk of misappropriation is reduced. On this basis, accurate subtitle playing on the web side is achieved without depending on any plug-in (such as a flash plug-in).
407. When the playing state of the multimedia resource is stop playing, the first terminal stops displaying the subtitles.
In the embodiment of the invention, when the playing state of the multimedia resource is stop playing, it indicates that the anchor user of the second terminal has paused or ended the live broadcast based on the multimedia resource, for example, the anchor user stops singing the song or switches to another song. At this time, the first terminal may no longer display the subtitles, that is, it no longer uses the preset animation to display the subtitles word by word.
Of course, the first terminal may stop displaying the subtitles at other times. For example, in step 406, the first terminal may display the target subtitles word by word in an interface of the multimedia browsing application, where the interface is an interface for displaying a live video of the anchor user. Accordingly, the first terminal may stop displaying the subtitles when the interface window of the multimedia browsing application is closed. By stopping the display of the subtitles at a proper time, the progress of displaying the subtitles at the first terminal and the progress of playing the multimedia resources at the second terminal can be accurately synchronized, and unnecessary resource consumption is avoided.
It should be noted that the step 407 is an optional step, that is, the subtitle display method provided in the embodiment of the present invention may only include the above steps 401 to 406, and the subtitle may be displayed word by word.
In the following, the above technical solution is described by taking as an example the case in which the anchor user of the second terminal sings a song and the first terminal displays the lyrics of the song. Referring to fig. 6, a flowchart of a lyric display method is provided. As shown in fig. 6, the anchor user uses the designated application on the second terminal to start a live broadcast of singing a song; during the live broadcast, the anchor user can perform corresponding operations in the designated application, such as playing a song, switching songs, or stopping playback. Through the designated application, the second terminal carries a lyric message in the media stream and sends it to the server, and the server forwards the media stream carrying the lyric message to the first terminal used by the viewer user. The first terminal obtains the lyric message from the received media stream and verifies its data format: if the format is wrong, the data is ignored; if the format is correct, the playing state of the song is determined. If the playing state is stop playing, lyric display is terminated; if the playing state is in playing, the lyric animation is executed, including picture rendering, the word-by-word progress animation, word-by-word rendering and frame-by-frame rendering, until the animation finishes playing.
According to the method provided by the embodiment of the present invention, the subtitle message of the multimedia resource currently played by the second terminal is provided to the first terminal through the media stream, and the subtitle message carries each subtitle sentence corresponding to the current playing time of the multimedia resource and the precise timestamp of each word in that sentence. The first terminal can therefore display the subtitles word by word according to the subtitle message, the user can intuitively see which word of the subtitle the current playing time of the multimedia resource corresponds to, and the accuracy of subtitle display is improved.
Fig. 7 is a schematic structural diagram of a subtitle display apparatus according to an embodiment of the present invention. Referring to fig. 7, the apparatus includes:
a receiving module 701, configured to receive a media stream of a second terminal in a time period in which a user of the second terminal performs live broadcast;
a determining module 702, configured to determine, according to a subtitle message of a multimedia resource obtained from the media stream, a playing state of the multimedia resource, where the subtitle message carries the playing state of the multimedia resource, a target subtitle, and a display timestamp of each word in the target subtitle;
a display module 703, configured to display the target subtitle word by word according to a display timestamp of each word in the target subtitle when the playing state of the multimedia resource is in playing;
the playing state of the multimedia resource comprises playing and stopping playing, and the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource.
In a possible implementation manner, the display module 703 is configured to display the target subtitle word by using a preset animation according to a display timestamp of each word in the target subtitle.
In one possible implementation, the display module 703 is configured to draw the preset animation for displaying the target caption word by word based on the canvas drawing board of hypertext markup language (HTML5) in a multimedia browsing application.
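A hedged sketch of such canvas-based word-by-word rendering is shown below; the colour scheme, font, and field names are assumptions for illustration only.

```typescript
// Hedged sketch of word-by-word rendering on an HTML5 canvas: words whose
// display timestamp has passed are highlighted, the rest stay plain, and the
// frame loop runs until the last word's timestamp is reached.
interface TimedWord {
  word: string;
  displayTimeMs: number; // when this word is reached, relative to the sentence start
}

function animateSentence(canvas: HTMLCanvasElement, words: TimedWord[]): void {
  const ctx = canvas.getContext("2d");
  if (ctx === null) return;
  ctx.font = "28px sans-serif";
  const start = performance.now();

  const frame = (now: number): void => {
    const elapsed = now - start;
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    let x = 10;
    for (const w of words) {
      // assumed colours: words already reached in yellow, pending words in white
      ctx.fillStyle = elapsed >= w.displayTimeMs ? "#ffcc00" : "#ffffff";
      ctx.fillText(w.word, x, 40);
      x += ctx.measureText(w.word).width + 8;
    }
    const last = words[words.length - 1];
    if (last !== undefined && elapsed < last.displayTimeMs) {
      requestAnimationFrame(frame); // render frame by frame until the animation finishes
    }
  };
  requestAnimationFrame(frame);
}
```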
In a possible implementation manner, the display module 703 is further configured to stop displaying the subtitles when the playing state of the multimedia resource is stop playing.
In a possible implementation manner, the display module 703 is configured to display the target subtitles word by word in an interface of a multimedia browsing application;
correspondingly, the display module 703 is further configured to stop displaying the subtitles when the interface window of the multimedia browsing application is closed.
In the embodiment of the invention, the caption message of the multimedia resource currently played by the second terminal is provided to the first terminal through the media stream, and the caption message carries each caption sentence corresponding to the current playing time of the multimedia resource and the accurate timestamp of each word in that sentence, so that the first terminal can display the caption word by word according to the caption message, the user can intuitively see which word in the caption the current playing time of the multimedia resource corresponds to, and the accuracy of caption display is improved.
Fig. 8 is a schematic structural diagram of a subtitle display apparatus according to an embodiment of the present invention. Referring to fig. 8, the apparatus includes:
an acquisition module 801, configured to collect video frames and audio frames during the live broadcast when a user of the second terminal starts live broadcasting;
a sending module 802, configured to generate a media stream based on the collected video frame and audio frame and send the media stream to a server, where the server is configured to forward the media stream to a first terminal;
the sending module 802 is further configured to, in a time period when the user of the second terminal performs live broadcasting, carry a subtitle message of the multimedia resource in the media stream and send the subtitle message to the server when the multimedia resource is played;
the caption message carries a playing state of the multimedia resource, a target caption and a display timestamp of each word in the target caption, the playing state of the multimedia resource comprises playing and stopping playing, and the target caption refers to each sentence of caption corresponding to the current playing time of the multimedia resource.
In one possible implementation, the sending module 802 is configured to:
when the multimedia resource starts to be played or resumes playing, carry a first caption message of the multimedia resource in the media stream and send it to the server, where the playing state of the multimedia resource carried by the first caption message is playing;
when the playing of the multimedia resource is stopped, carry a second caption message of the multimedia resource in the media stream and send it to the server, where the playing state of the multimedia resource carried by the second caption message is stop playing.
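A hedged sketch of this branching on the broadcaster side follows; how the message is actually embedded in the media stream is not specified by the patent, so sendToServer and the field names are hypothetical.

```typescript
// Hedged sketch of the broadcaster-side branching: a "first" caption message is
// sent while the resource is playing (carrying the current sentence), a "second"
// caption message is sent when playback stops (carrying only the stop state).
type PlayState = "playing" | "stopped";

interface CaptionMessage {
  playState: PlayState;
  targetSubtitle?: { word: string; displayTimeMs: number }[];
}

// Hypothetical transport: how the message rides along with the media stream
// is left open by the patent.
declare function sendToServer(mediaChunk: Uint8Array, caption: CaptionMessage): void;

function onPlayerStateChange(
  state: PlayState,
  mediaChunk: Uint8Array,
  currentSentence: { word: string; displayTimeMs: number }[],
): void {
  if (state === "playing") {
    // first caption message: resource started or resumed playing
    sendToServer(mediaChunk, { playState: "playing", targetSubtitle: currentSentence });
  } else {
    // second caption message: resource stopped playing
    sendToServer(mediaChunk, { playState: "stopped" });
  }
}
```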
In one possible implementation, the sending module 802 is configured to:
when the multimedia resource is played through a designated application, acquire a subtitle file corresponding to the multimedia resource, where the subtitle file is an encrypted subtitle file and includes the total playing duration of the multimedia resource, each sentence of subtitle, and the display timestamp of each sentence and of each word in each sentence of subtitle;
after the subtitle file is decrypted by the designated application, obtain the target caption corresponding to the current playing time according to the current playing time of the multimedia resource and the display timestamp of each sentence of caption in the caption file;
and generate the caption message according to the playing state of the multimedia resource, the target caption, and the display timestamp of each word in the target caption, and then perform the step of carrying the caption message of the multimedia resource in the media stream and sending it to the server.
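The sketch below illustrates one way the designated application could select the target caption from the decrypted file by the current playing time; the SubtitleFile shape and decryptSubtitleFile are assumptions, since the patent does not fix the file layout or the encryption scheme.

```typescript
// Hedged sketch: after the designated application decrypts the subtitle file,
// pick the sentence whose display timestamp covers the current playing time.
interface SentenceEntry {
  startMs: number;                                  // display timestamp of this sentence
  words: { word: string; displayTimeMs: number }[]; // per-word display timestamps
}

interface SubtitleFile {
  totalDurationMs: number;    // total playing duration of the multimedia resource
  sentences: SentenceEntry[]; // assumed to be ordered by startMs
}

// Hypothetical: the encryption scheme is left to the designated application.
declare function decryptSubtitleFile(encrypted: ArrayBuffer): SubtitleFile;

function pickTargetSubtitle(file: SubtitleFile, currentTimeMs: number): SentenceEntry | null {
  let target: SentenceEntry | null = null;
  for (const s of file.sentences) {
    if (s.startMs <= currentTimeMs) {
      target = s;  // latest sentence that has already started
    } else {
      break;       // sentences are ordered, so later ones cannot match
    }
  }
  return target;
}
```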
In the embodiment of the invention, the caption message of the multimedia resource currently played by the second terminal is provided to the first terminal through the media stream, and the caption message carries each caption sentence corresponding to the current playing time of the multimedia resource and the accurate timestamp of each word in that sentence, so that the first terminal can display the caption word by word according to the caption message, the user can intuitively see which word in the caption the current playing time of the multimedia resource corresponds to, and the accuracy of caption display is improved.
It should be noted that: in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the subtitle display apparatus and the subtitle display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 9 is a schematic structural diagram of a terminal 900 according to an embodiment of the present invention. The terminal 900 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 900 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in the awake state, also called a central processing unit (CPU), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that needs to be displayed on the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one instruction for execution by processor 901 to implement a subtitle display method provided by a method embodiment of the present invention.
In some embodiments, terminal 900 can also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 904, display screen 905, camera 906, audio circuitry 907, positioning component 908, and power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, the display screen 905 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 901 as a control signal for processing. At this time, the display screen 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, provided on the front panel of the terminal 900; in other embodiments, there may be at least two display screens 905, each disposed on a different surface of the terminal 900 or in a foldable design; in still other embodiments, the display screen 905 may be a flexible display disposed on a curved surface or a folded surface of the terminal 900. The display screen 905 may even be arranged in a non-rectangular irregular shape, that is, a shaped screen. The display screen 905 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, camera assembly 906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuit 907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 901 for processing, or inputting the electric signals to the radio frequency circuit 904 for realizing voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and disposed at different locations of the terminal 900. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to locate the current geographic location of the terminal 900 for navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 909 is used to supply power to the various components in the terminal 900. The power supply 909 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 909 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, terminal 900 can also include one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
The acceleration sensor 911 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 900. For example, the acceleration sensor 911 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 901 can control the touch display 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 911. The acceleration sensor 911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 912 may detect a body direction and a rotation angle of the terminal 900, and the gyro sensor 912 may cooperate with the acceleration sensor 911 to acquire a 3D motion of the user on the terminal 900. The processor 901 can implement the following functions according to the data collected by the gyro sensor 912: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 913 may be disposed on the side bezel of terminal 900 and/or underneath touch display 905. When the pressure sensor 913 is disposed on the side frame of the terminal 900, the user's holding signal of the terminal 900 may be detected, and the processor 901 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed at a lower layer of the touch display 905, the processor 901 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 914 is used for collecting a fingerprint of the user, and the processor 901 identifies the user according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 901 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 914 may be disposed on the front, back, or side of the terminal 900. When a physical key or vendor Logo is provided on the terminal 900, the fingerprint sensor 914 may be integrated with the physical key or vendor Logo.
The optical sensor 915 is used to collect ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the touch display 905 based on the ambient light intensity collected by the optical sensor 915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 905 is turned down. In another embodiment, the processor 901 can also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
Proximity sensor 916, also known as a distance sensor, is typically disposed on the front panel of the terminal 900. The proximity sensor 916 is used to collect the distance between the user and the front face of the terminal 900. In one embodiment, when the proximity sensor 916 detects that the distance between the user and the front face of the terminal 900 gradually decreases, the processor 901 controls the touch display 905 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 916 detects that the distance between the user and the front face of the terminal 900 gradually increases, the processor 901 controls the touch display 905 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 9 does not constitute a limitation of terminal 900, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, storing a computer program which, when executed by a processor, implements the subtitle display method in the above-described embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (22)

1. A subtitle display method applied to a first terminal is characterized by comprising the following steps:
receiving a media stream of a second terminal in a time period of live broadcasting of a user of the second terminal;
when a subtitle message of a multimedia resource is acquired from the media stream, determining a playing state of the multimedia resource according to the subtitle message, wherein the subtitle message carries the playing state of the multimedia resource, a target subtitle and a display timestamp of each word in the target subtitle, and the target subtitle is acquired according to the current playing time of the multimedia resource and the display timestamp of each sentence of subtitle in a subtitle file corresponding to the multimedia resource;
when the playing state of the multimedia resource is in playing, performing word-by-word display on the target caption according to the display timestamp of each word in the target caption;
the playing state of the multimedia resource comprises playing and stopping playing, and the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource.
2. The method of claim 1, wherein the displaying the target caption word by word according to the display time stamp of each word in the target caption comprises:
and displaying the target caption word by adopting a preset animation according to the display timestamp of each word in the target caption.
3. The method of claim 2, wherein the displaying the target subtitles word by using a preset animation comprises:
and drawing the preset animation for displaying the target caption word by word based on a canvas drawing board of a hypertext markup language HTML5 of the multimedia browsing application.
4. The method according to claim 1, wherein after determining the playing status of the multimedia resource according to the caption message, the method further comprises:
and when the playing state of the multimedia resource is the playing stop state, stopping displaying the subtitles.
5. The method of claim 1, wherein the displaying the target subtitles word by word comprises:
displaying the target captions word by word in an interface of a multimedia browsing application;
correspondingly, after the target subtitles are displayed word by word in the interface of the multimedia browsing application, the method further comprises:
and when the interface window of the multimedia browsing application is closed, stopping displaying the subtitles.
6. A subtitle display method applied to a second terminal, the method comprising:
when a user of the second terminal starts live broadcasting, collecting video frames and audio frames during the live broadcast of the user;
generating a media stream based on the collected video frame and audio frame and sending the media stream to a server, wherein the server is used for forwarding the media stream to a first terminal;
in the time period of live broadcasting of the user of the second terminal, when multimedia resources are played, carrying subtitle messages of the multimedia resources in the media stream and sending the subtitle messages to the server;
the subtitle message carries a playing state of the multimedia resource, a target subtitle and a display timestamp of each word in the target subtitle, the playing state of the multimedia resource comprises playing and playing stopping, the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource, and the target subtitle is obtained according to the current playing time of the multimedia resource and the display timestamp of each subtitle in a subtitle file corresponding to the multimedia resource.
7. The method according to claim 6, wherein when playing a multimedia resource, carrying a subtitle message of the multimedia resource in the media stream and sending the subtitle message to the server comprises:
when the multimedia resource is played or continuously played, carrying a first caption message of the multimedia resource in the media stream and sending the first caption message to the server, wherein the playing state of the multimedia resource carried by the first caption message is playing;
and when the playing of the multimedia resource is stopped, carrying a second caption message of the multimedia resource in the media stream and sending the second caption message to the server, wherein the playing state of the multimedia resource carried by the second caption message is stop playing.
8. The method according to claim 6, wherein when playing a multimedia resource, carrying a subtitle message of the multimedia resource in the media stream and sending the subtitle message to the server comprises:
when the multimedia resource is played through a designated application, acquiring a subtitle file corresponding to the multimedia resource, wherein the subtitle file is an encrypted subtitle file and comprises each sentence of subtitle of the multimedia resource, the total playing time length, each sentence of subtitle and a display timestamp of each word in each sentence of subtitle;
after the caption file is decrypted through the appointed application, acquiring a target caption corresponding to the current playing time according to the current playing time of the multimedia resource and the display timestamp of each caption in the caption file;
and generating the caption message according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, executing the steps of carrying the caption message of the multimedia resource in the media stream and sending the caption message to the server.
9. A caption display system, characterized in that the system comprises a first terminal, a second terminal and a server,
the second terminal is used for collecting video frames and audio frames during live broadcasting of a user, generating a media stream based on the collected video frames and audio frames, and sending the media stream to the server;
the server is used for sending the media stream to the first terminal;
the first terminal is used for determining the playing state of the multimedia resource according to the subtitle message when the subtitle message of the multimedia resource is obtained from the media stream, and performing word-by-word display on the target subtitle according to the display timestamp of each word in the target subtitle when the playing state of the multimedia resource is in playing, wherein the target subtitle is obtained according to the current playing time of the multimedia resource and the display timestamp of each sentence of subtitle in the subtitle file corresponding to the multimedia resource;
the caption message carries a playing state of the multimedia resource, the target caption and a display timestamp of each word in the target caption, the playing state of the multimedia resource comprises playing and stopping playing, and the target caption refers to each sentence of caption corresponding to the current playing time of the multimedia resource.
10. The system of claim 9, wherein the first terminal is configured to display the target subtitle word by using a preset animation according to a display timestamp of each word in the target subtitle.
11. The system of claim 9, wherein the second terminal is configured to:
when the multimedia resource is played or continuously played, carrying a first caption message of the multimedia resource in the media stream and sending the first caption message to the server, wherein the playing state of the multimedia resource carried by the first caption message is playing;
and when the playing of the multimedia resource is stopped, carrying a second caption message of the multimedia resource in the media stream and sending the second caption message to the server, wherein the playing state of the multimedia resource carried by the second caption message is stop playing.
12. The system of claim 9, wherein the second terminal is configured to:
when the multimedia resource is played through a designated application, acquiring a subtitle file corresponding to the multimedia resource, wherein the subtitle file is an encrypted subtitle file and comprises each sentence of subtitle of the multimedia resource, the total playing time length, each sentence of subtitle and a display timestamp of each word in each sentence of subtitle;
after the caption file is decrypted through the appointed application, acquiring a target caption corresponding to the current playing time according to the current playing time of the multimedia resource and the display timestamp of each caption in the caption file;
and generating the caption message according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, executing the steps of carrying the caption message of the multimedia resource in the media stream and sending the caption message to the server.
13. A subtitle display apparatus, the apparatus comprising:
the receiving module is used for receiving the media stream of the second terminal in the time period of live broadcasting of the user of the second terminal;
the determining module is configured to determine, when a subtitle message of a multimedia resource is obtained from the media stream, a playing state of the multimedia resource according to the subtitle message, where the subtitle message carries the playing state of the multimedia resource, a target subtitle, and a display timestamp of each word in the target subtitle, and the target subtitle is obtained according to a current playing time of the multimedia resource and the display timestamp of each sentence of subtitle in a subtitle file corresponding to the multimedia resource;
the display module is used for displaying the target subtitles word by word according to the display timestamp of each word in the target subtitles when the playing state of the multimedia resource is in playing;
the playing state of the multimedia resource comprises playing and stopping playing, and the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource.
14. The apparatus of claim 13, wherein the display module is configured to display the target subtitle word by using a preset animation according to a display timestamp of each word in the target subtitle.
15. The apparatus of claim 14, wherein the display module is configured to draw the preset animation for displaying the target caption word by word based on a canvas drawing board of a hypertext markup language HTML5 of a multimedia browsing application.
16. The apparatus of claim 13, wherein the display module is further configured to stop displaying subtitles when the playing state of the multimedia resource is stop playing.
17. The apparatus of claim 13, wherein the display module is configured to display the target subtitles verbatim in an interface of a multimedia browsing application;
correspondingly, the display module is further configured to stop displaying the subtitles when the interface window of the multimedia browsing application is closed.
18. A subtitle display apparatus applied to a second terminal, the apparatus comprising:
the acquisition module is used for acquiring video frames and audio frames during live broadcasting of the user when the user of the second terminal starts live broadcasting;
the system comprises a sending module, a first terminal and a second terminal, wherein the sending module is used for generating a media stream based on collected video frames and audio frames and sending the media stream to the server;
the sending module is further configured to, in a time period when the user of the second terminal performs live broadcasting, carry a subtitle message of the multimedia resource in the media stream and send the subtitle message to the server when the multimedia resource is played;
the subtitle message carries a playing state of the multimedia resource, a target subtitle and a display timestamp of each word in the target subtitle, the playing state of the multimedia resource comprises playing and playing stopping, the target subtitle refers to each subtitle corresponding to the current playing time of the multimedia resource, and the target subtitle is obtained according to the current playing time of the multimedia resource and the display timestamp of each subtitle in a subtitle file corresponding to the multimedia resource.
19. The apparatus of claim 18, wherein the sending module is configured to:
when the multimedia resource is played or continuously played, carrying a first caption message of the multimedia resource in the media stream and sending the first caption message to the server, wherein the playing state of the multimedia resource carried by the first caption message is playing;
and when the playing of the multimedia resource is stopped, carrying a second caption message of the multimedia resource in the media stream and sending the second caption message to the server, wherein the playing state of the multimedia resource carried by the second caption message is stop playing.
20. The apparatus of claim 18, wherein the sending module is configured to:
when the multimedia resource is played through a designated application, acquiring a subtitle file corresponding to the multimedia resource, wherein the subtitle file is an encrypted subtitle file and comprises each sentence of subtitle of the multimedia resource, the total playing time length, each sentence of subtitle and a display timestamp of each word in each sentence of subtitle;
after the caption file is decrypted through the appointed application, acquiring a target caption corresponding to the current playing time according to the current playing time of the multimedia resource and the display timestamp of each caption in the caption file;
and generating the caption message according to the playing state of the multimedia resource, the target caption and the display timestamp of each word in the target caption, executing the steps of carrying the caption message of the multimedia resource in the media stream and sending the caption message to the server.
21. A terminal comprising a processor and a memory; the memory is used for storing a computer program; the processor, configured to execute the computer program stored in the memory, implements the method steps of any of claims 1-8.
22. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 8.
CN201810509160.7A 2018-05-24 2018-05-24 Subtitle display method and device Active CN108419113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810509160.7A CN108419113B (en) 2018-05-24 2018-05-24 Subtitle display method and device

Publications (2)

Publication Number Publication Date
CN108419113A CN108419113A (en) 2018-08-17
CN108419113B true CN108419113B (en) 2021-01-08

Family

ID=63140513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810509160.7A Active CN108419113B (en) 2018-05-24 2018-05-24 Subtitle display method and device

Country Status (1)

Country Link
CN (1) CN108419113B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672919A (en) * 2018-12-26 2019-04-23 新华三技术有限公司合肥分公司 Caption presentation method, device and user terminal
CN109788335B (en) * 2019-03-06 2021-08-17 珠海天燕科技有限公司 Video subtitle generating method and device
CN111835988B (en) * 2019-04-23 2023-03-07 阿里巴巴集团控股有限公司 Subtitle generation method, server, terminal equipment and system
CN111092991B (en) * 2019-12-20 2021-09-21 广州酷狗计算机科技有限公司 Lyric display method and device and computer storage medium
CN110996167A (en) * 2019-12-20 2020-04-10 广州酷狗计算机科技有限公司 Method and device for adding subtitles in video
CN112256176B (en) * 2020-10-23 2022-04-05 北京字节跳动网络技术有限公司 Character display method and device, electronic equipment and computer readable storage medium
CN112347298B (en) * 2020-11-13 2024-07-30 广州酷狗计算机科技有限公司 Text information display method, text information display device, terminal and storage medium
CN115474066A (en) * 2021-06-11 2022-12-13 北京有竹居网络技术有限公司 Subtitle processing method and device, electronic equipment and storage medium
CN113658594A (en) * 2021-08-16 2021-11-16 北京百度网讯科技有限公司 Lyric recognition method, device, equipment, storage medium and product
CN117749965A (en) * 2022-09-14 2024-03-22 北京字跳网络技术有限公司 Subtitle processing method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008199117A (en) * 2007-02-08 2008-08-28 Sharp Corp Digital broadcast receiver
CN102820027A (en) * 2012-06-21 2012-12-12 福建星网视易信息系统有限公司 Accompaniment subtitle display system and method
CN106128440A (en) * 2016-06-22 2016-11-16 北京小米移动软件有限公司 A kind of lyrics display processing method, device, terminal unit and system
CN106488264A (en) * 2016-11-24 2017-03-08 福建星网视易信息系统有限公司 Singing the live middle method, system and device for showing the lyrics
CN106598996A (en) * 2015-10-19 2017-04-26 广州酷狗计算机科技有限公司 Multi-media poster generation method and device
US9749504B2 (en) * 2011-09-27 2017-08-29 Cisco Technology, Inc. Optimizing timed text generation for live closed captions and subtitles
CN107220339A (en) * 2017-05-26 2017-09-29 北京酷我科技有限公司 A kind of lyrics word for word display methods
CN107948715A (en) * 2017-11-28 2018-04-20 北京潘达互娱科技有限公司 Live network broadcast method and device
CN108063970A (en) * 2017-11-22 2018-05-22 北京奇艺世纪科技有限公司 A kind of method and apparatus for handling live TV stream

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008011397A (en) * 2006-06-30 2008-01-17 Toshiba Corp Data broadcast playback apparatus and method
JP5903924B2 (en) * 2012-02-17 2016-04-13 ソニー株式会社 Receiving apparatus and subtitle processing method
CN106098088B (en) * 2016-06-01 2018-09-04 广州酷狗计算机科技有限公司 A kind of method and apparatus of the display lyrics
CN106792071A (en) * 2016-12-19 2017-05-31 北京小米移动软件有限公司 Method for processing caption and device
CN106653071B (en) * 2016-12-30 2019-11-22 腾讯音乐娱乐(深圳)有限公司 A kind of lyric display method and device
CN106993239B (en) * 2017-03-29 2019-12-10 广州酷狗计算机科技有限公司 Information display method in live broadcast process
CN107786887B (en) * 2017-10-10 2020-07-31 北京奇艺世纪科技有限公司 Method and device for displaying display information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on the PhoneGap Platform Based on Android and the Extension of Its Cross-Mobile-Platform Media Framework"; Li Baohan; China Master's Theses Full-text Database; 2012-05-15; full text *

Also Published As

Publication number Publication date
CN108419113A (en) 2018-08-17

Similar Documents

Publication Publication Date Title
CN108419113B (en) Subtitle display method and device
CN110267067B (en) Live broadcast room recommendation method, device, equipment and storage medium
CN110572722B (en) Video clipping method, device, equipment and readable storage medium
CN109348247B (en) Method and device for determining audio and video playing time stamp and storage medium
CN111147878B (en) Stream pushing method and device in live broadcast and computer storage medium
CN107908929B (en) Method and device for playing audio data
WO2019114514A1 (en) Method and apparatus for displaying pitch information in live broadcast room, and storage medium
CN109451343A (en) Video sharing method, apparatus, terminal and storage medium
CN110324689B (en) Audio and video synchronous playing method, device, terminal and storage medium
CN109729372B (en) Live broadcast room switching method, device, terminal, server and storage medium
CN112929687A (en) Interaction method, device and equipment based on live video and storage medium
CN109874043B (en) Video stream sending method, video stream playing method and video stream playing device
CN110290392B (en) Live broadcast information display method, device, equipment and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN109144346B (en) Song sharing method and device and storage medium
CN110418152B (en) Method and device for carrying out live broadcast prompt
CN111901658A (en) Comment information display method and device, terminal and storage medium
CN113204672B (en) Resource display method, device, computer equipment and medium
CN107896337B (en) Information popularization method and device and storage medium
CN110996167A (en) Method and device for adding subtitles in video
CN112118477A (en) Virtual gift display method, device, equipment and storage medium
CN112104648A (en) Data processing method, device, terminal, server and storage medium
CN110750734A (en) Weather display method and device, computer equipment and computer-readable storage medium
CN109451248B (en) Video data processing method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant