CN112235521B - Method, device and equipment for realizing subtitle conversion - Google Patents


Info

Publication number
CN112235521B
CN112235521B · CN202011052845.7A
Authority
CN
China
Prior art keywords
data
subtitle
caption
bitmap
information
Prior art date
Legal status
Active
Application number
CN202011052845.7A
Other languages
Chinese (zh)
Other versions
CN112235521A (en)
Inventor
黄永
白俊东
刘永家
Current Assignee
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN202011052845.7A priority Critical patent/CN112235521B/en
Publication of CN112235521A publication Critical patent/CN112235521A/en
Application granted granted Critical
Publication of CN112235521B publication Critical patent/CN112235521B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278Subtitling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles

Abstract

The embodiments of the application disclose a method, a device, and equipment for implementing subtitle conversion. Subtitle control field data is determined from acquired subtitle data, and subtitle attribute information is determined from the subtitle control field data. According to the offset address of the compressed bitmap data in the subtitle attribute information, the compressed bitmap data can be located in the subtitle data and decoded to obtain the original subtitle bitmap data. The original subtitle bitmap data is then converted, using the color mapping information and the alpha channel information, into subtitle bitmap data suitable for the vehicle-mounted multimedia system. Finally, the subtitle starting time, the subtitle ending time, the subtitle bitmap position information, and the subtitle bitmap data are assembled into vehicle-mounted multimedia system subtitle data, and subtitles are displayed according to that data. The design cost of the vehicle-mounted multimedia system is thereby reduced, and its overall performance is improved.

Description

Method, device and equipment for realizing subtitle conversion
Technical Field
The present application relates to the field of data processing, and in particular, to a method, an apparatus, and a device for implementing subtitle conversion.
Background
An in-vehicle multimedia system is installed in a vehicle so that its occupants can enjoy entertainment. Such a system has a video playing function; when a video is played, the subtitle data acquired from the multimedia file must be processed so that the subtitles are displayed along with the video.
However, when the format of the subtitle data differs from the format required by the vehicle-mounted multimedia system, the system cannot read the subtitle data and cannot display the subtitles normally.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, and a device for implementing subtitle conversion, which can implement subtitle conversion, so that a vehicle-mounted multimedia system can normally read subtitle data to perform subtitle display.
In order to solve the above problem, the technical solution provided by the embodiment of the present application is as follows:
a method for realizing caption conversion, which is applied to a vehicle-mounted multimedia system, comprises the following steps:
acquiring subtitle data;
determining caption control domain data from the caption data;
determining subtitle attribute information according to the subtitle control domain data, wherein the subtitle attribute information comprises subtitle starting time, subtitle ending time, color mapping information, alpha channel information, subtitle bitmap position information and compressed bitmap data offset addresses;
determining compressed bitmap data from the subtitle data according to the offset address of the compressed bitmap data, and decoding original subtitle bitmap data from the compressed bitmap data;
converting the original caption bitmap data into caption bitmap data suitable for the vehicle-mounted multimedia system by utilizing the color mapping information and the alpha channel information;
and constructing the caption starting time, the caption ending time, the caption bitmap position information and the caption bitmap data into vehicle-mounted multimedia system caption data, and displaying the caption according to the vehicle-mounted multimedia system caption data.
In one possible implementation, the determining subtitle control field data from the subtitle data includes:
determining an offset address of a caption control field from the caption data;
and determining caption control domain data from the caption data according to the offset address of the caption control domain.
In a possible implementation manner, the determining subtitle attribute information according to the subtitle control field data includes:
determining the subtitle starting time according to the data of the 0th byte and the 1st byte in the subtitle control field data;
determining an offset address of an ending sequence according to the data of the 2nd byte and the 3rd byte in the subtitle control field data;
determining ending sequence data from the subtitle control field data according to the offset address of the ending sequence;
determining the subtitle ending time according to the data of the 0th byte and the 1st byte in the ending sequence data;
respectively determining color mapping information sequence data, alpha channel information sequence data, caption bitmap position sequence data and compressed bitmap data offset address sequence data from the caption control domain data according to each sequence type identifier in the caption control domain data;
determining color mapping information from the color mapping information sequence data, determining an alpha channel information sequence from the alpha channel information sequence data, determining subtitle bitmap position information from the subtitle bitmap position sequence data, and determining a compressed bitmap data offset address from the compressed bitmap data offset address sequence data.
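The byte-offset parsing in the steps above can be sketched in C. The 16-bit field width and big-endian byte order used here are assumptions — the patent only names the byte positions, not their encoding:

```c
#include <stdint.h>
#include <assert.h>

/* Read a 16-bit big-endian value (byte order is an assumption). */
static uint16_t read_u16_be(const uint8_t *p) {
    return (uint16_t)((p[0] << 8) | p[1]);
}

/* Parse the subtitle starting time (bytes 0-1 of the control field) and the
   ending-sequence offset (bytes 2-3); then parse the subtitle ending time
   from bytes 0-1 of the ending sequence located at that offset. */
static void parse_times(const uint8_t *ctrl, uint16_t *start_time,
                        uint16_t *end_time) {
    *start_time = read_u16_be(&ctrl[0]);
    uint16_t end_seq_offset = read_u16_be(&ctrl[2]);
    const uint8_t *end_seq = ctrl + end_seq_offset;
    *end_time = read_u16_be(&end_seq[0]);
}
```

The remaining attribute sequences (color map, alpha, bitmap position, offsets) would be located the same way, by walking the sequence type identifiers in the control field.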
In one possible implementation manner, the determining the compressed bitmap data from the subtitle data according to the compressed bitmap data offset address and decoding the original subtitle bitmap data from the compressed bitmap data includes:
determining compressed bitmap even line data from the subtitle data according to the compressed bitmap even line data offset address;
carrying out data decompression on the compressed bitmap even line data by using a run-length encoding (RLE) decompression algorithm to obtain original subtitle bitmap even line data;
determining odd line data of the compressed bitmap from the subtitle data according to the odd line data offset address of the compressed bitmap;
performing data decompression on the compressed bitmap odd line data by using the RLE decompression algorithm to obtain the original subtitle bitmap odd line data;
and according to the subtitle bitmap position information, interleaving the original subtitle bitmap even line data and the original subtitle bitmap odd line data to generate the original subtitle bitmap data.
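The even/odd-line decompression and cross-storage steps above can be sketched as follows. The `rle_decode` shown uses a simple (count, value) byte-pair encoding as an illustrative stand-in, since the patent names RLE but does not specify the exact variant:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Illustrative (count, value) pair RLE decoder; returns decoded byte count.
   The real scheme (e.g. a nibble-based RLE) may differ. */
static size_t rle_decode(const uint8_t *src, size_t src_len,
                         uint8_t *dst, size_t cap) {
    size_t out = 0;
    for (size_t i = 0; i + 1 < src_len; i += 2) {
        uint8_t count = src[i], value = src[i + 1];
        for (uint8_t k = 0; k < count && out < cap; k++)
            dst[out++] = value;
    }
    return out;
}

/* Interleave the separately decoded even-line and odd-line pixel rows into
   a single bitmap, i.e. the cross-storage step. */
static void interleave_rows(const uint8_t *even, const uint8_t *odd,
                            uint8_t *bitmap, int width, int height) {
    for (int y = 0; y < height; y++) {
        const uint8_t *row = (y % 2 == 0) ? &even[(y / 2) * width]
                                          : &odd[(y / 2) * width];
        memcpy(&bitmap[y * width], row, (size_t)width);
    }
}
```

Decoding even and odd fields separately and interleaving on output matches the interlaced layout the two offset addresses (`rleOffset1`, `rleOffset2`) imply.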
In a possible implementation manner, before converting the original subtitle bitmap data into subtitle bitmap data suitable for the vehicle multimedia system by using the color mapping information and the alpha channel information, the method further includes:
searching a non-transparent color value in each pixel point color value corresponding to the original caption bitmap data to determine a pixel point with the non-transparent color value;
determining the position of an effective caption area according to the position of a first line of the pixel point with the non-transparent color value, the position of a last line of the pixel point with the non-transparent color value, the position of a first column of the pixel point with the non-transparent color value and the position of a last column of the pixel point with the non-transparent color value;
and updating the subtitle bitmap position information according to the effective subtitle area position, and updating original subtitle bitmap data in the effective subtitle area position into subtitle bitmap data.
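A minimal sketch of the effective-subtitle-area search described above, assuming color index 0 denotes a transparent pixel (the patent only speaks of "non-transparent color values"; the concrete transparent value is an assumption):

```c
#include <stdint.h>
#include <assert.h>

typedef struct { int top, bottom, left, right; } Region;

/* Scan for the first/last row and column containing a non-transparent
   pixel; returns 1 if any such pixel exists, 0 if fully transparent. */
static int find_effective_region(const uint8_t *bmp, int w, int h,
                                 Region *r) {
    r->top = h; r->bottom = -1; r->left = w; r->right = -1;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (bmp[y * w + x] != 0) {   /* non-transparent pixel (assumed) */
                if (y < r->top)    r->top = y;
                if (y > r->bottom) r->bottom = y;
                if (x < r->left)   r->left = x;
                if (x > r->right)  r->right = x;
            }
    return r->bottom >= 0;
}
```

Cropping the bitmap to this region before conversion reduces the data that must be copied and displayed.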
In a possible implementation manner, the converting the original subtitle bitmap data into subtitle bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information includes:
creating palette information suitable for the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information and the custom palette information;
searching the palette information suitable for the vehicle-mounted multimedia system, and converting color values of all pixel points corresponding to the original subtitle bitmap data into target color values of all pixel points;
and generating caption bitmap data suitable for the vehicle-mounted multimedia system according to the target color values of the pixel points.
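The lookup-and-convert step above can be sketched as a per-pixel palette lookup. The 2-bit pixel indices and 4-entry 32-bit palette are assumptions consistent with the 4-entry color map and alpha arrays, not values stated by the patent:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Convert each source pixel's palette index into its target color value by
   looking it up in the palette built for the target system. */
static void convert_pixels(const uint8_t *src_idx, uint32_t *dst,
                           size_t n, const uint32_t palette[4]) {
    for (size_t i = 0; i < n; i++)
        dst[i] = palette[src_idx[i] & 3];  /* mask to the 4-entry range */
}
```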
In a possible implementation manner, the creating palette information applicable to the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information, and the custom palette information includes:
respectively converting the ith element value in the color mapping information into a first color value by searching custom palette information, wherein i is an integer from 0 to n, and n is the number of the element values in the color mapping information minus 1;
taking the ith first color value as a low bit, and taking the ith element value in the alpha channel information as a high bit to form an ith second color value;
creating palette information applicable to the in-vehicle multimedia system comprising the n+1 second color values.
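Under the assumption of 24-bit RGB "first color values" and ARGB8888 "second color values" (the patent does not fix the pixel formats), the palette construction in the steps above can be sketched as:

```c
#include <stdint.h>
#include <assert.h>

/* Look up the color-map value in the custom palette to get the first color
   value (low bits), then place the 4-bit alpha, scaled to 8 bits, in the
   high byte to form the second color value. Formats are assumptions. */
static uint32_t make_second_color(uint8_t map_value, uint8_t alpha4,
                                  const uint32_t *custom_palette) {
    uint32_t first = custom_palette[map_value] & 0x00FFFFFFu; /* low bits  */
    uint32_t a     = (uint32_t)(alpha4 * 255 / 15) << 24;     /* high bits */
    return a | first;
}

/* Build the target palette from the 4-entry color map and alpha arrays. */
static void build_palette(const uint8_t color_map[4], const uint8_t alpha[4],
                          const uint32_t *custom_palette, uint32_t out[4]) {
    for (int i = 0; i < 4; i++)
        out[i] = make_second_color(color_map[i], alpha[i], custom_palette);
}
```

Placing the alpha in the high bits and the looked-up color in the low bits mirrors the "low bit"/"high bit" composition the implementation describes.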
An apparatus for implementing caption conversion, the apparatus being applied to a vehicle-mounted multimedia system, the apparatus comprising:
an acquisition unit configured to acquire subtitle data;
a first determining unit configured to determine subtitle control domain data from the subtitle data;
the second determining unit is used for determining subtitle attribute information according to the subtitle control domain data, wherein the subtitle attribute information comprises subtitle starting time, subtitle ending time, color mapping information, alpha channel information, subtitle bitmap position information and compressed bitmap data offset addresses;
a decoding unit, configured to determine compressed bitmap data from the subtitle data according to the compressed bitmap data offset address, and decode original subtitle bitmap data from the compressed bitmap data;
the conversion unit is used for converting the original subtitle bitmap data into subtitle bitmap data suitable for the vehicle-mounted multimedia system by utilizing the color mapping information and the alpha channel information;
and the construction unit is used for constructing the caption starting time, the caption ending time, the caption bitmap position information and the caption bitmap data into vehicle-mounted multimedia system caption data and performing caption display according to the vehicle-mounted multimedia system caption data.
A device for implementing subtitle conversion, comprising: a processor and a memory, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the above method for implementing subtitle conversion.
A computer-readable storage medium having stored therein instructions which, when run on a terminal device, cause the terminal device to execute the above-described method of implementing subtitle conversion.
Therefore, the embodiment of the application has the following beneficial effects:
the method for realizing the subtitle conversion is applied to a vehicle-mounted multimedia system, the subtitle control domain data is determined through the acquired subtitle data, and the subtitle attribute information is determined according to the subtitle control domain data. The subtitle attribute information includes subtitle start time, subtitle end time, color mapping information, alpha channel information, subtitle bitmap position information, and compressed bitmap data offset addresses. According to the offset address of the compressed bitmap data, the compressed bitmap data can be determined from the subtitle data and decoded to obtain the original subtitle bitmap data. And converting the original caption bitmap data by utilizing the color mapping information and the alpha channel information to obtain the caption bitmap data suitable for the vehicle-mounted multimedia system. And finally, constructing vehicle-mounted multimedia system subtitle data by using the subtitle starting time, the subtitle ending time, the subtitle bitmap position information and the subtitle bitmap data, and displaying subtitles according to the vehicle-mounted multimedia system subtitle data. Therefore, in the embodiment of the application, the caption control domain data is determined from the acquired caption data, the corresponding caption attribute information is determined according to the caption control domain data, the original caption bitmap data is further determined, the caption bitmap data can be obtained through conversion, and the vehicle-mounted multimedia system caption data is constructed. 
The vehicle-mounted multimedia system caption data obtained through the caption data can be suitable for a vehicle-mounted multimedia system, the conversion between the caption data and the vehicle-mounted multimedia system caption data can be realized, the design cost of the vehicle-mounted multimedia system is reduced, and the overall performance of the vehicle-mounted multimedia system is improved.
Drawings
Fig. 1 is a schematic diagram of a framework of an exemplary application scenario provided in an embodiment of the present application;
fig. 2 is a flowchart of a method for implementing subtitle conversion according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a two-screen subtitle provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a subtitle data structure according to an embodiment of the present application;
fig. 5 is a schematic diagram of a subtitle data structure according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a start segment of subtitle data according to an embodiment of the present application;
fig. 7 is a schematic diagram of a partial segment of subtitle data for one screen according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of subtitle control field data according to an embodiment of the present application;
fig. 9 is a schematic sequence diagram of subtitle control field data according to an embodiment of the present application;
fig. 10 is a schematic diagram illustrating a determination of a position of a valid subtitle region according to an embodiment of the present application;
fig. 11 is a flowchart of a method for generating subtitle bitmap data suitable for use in the vehicle-mounted multimedia system according to an embodiment of the present application;
fig. 12 is a flowchart illustrating a method for obtaining palette information applicable to the in-vehicle multimedia system according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an apparatus for implementing subtitle conversion according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the drawings are described in detail below.
In order to facilitate understanding of the technical solutions provided in the present application, the background art related to the present application is first described below.
After studying the subtitle display of conventional vehicle-mounted multimedia systems, the inventors found that, for a vehicle-mounted multimedia system developed on the GStreamer open-source framework, subtitle data is obtained from the multimedia file through GStreamer and provided to the vehicle-mounted multimedia system, which reads the subtitle data to display the subtitles. However, the format of the subtitle data that GStreamer obtains from the multimedia file differs from the format of subtitle data the vehicle-mounted multimedia system can read, so the system cannot normally process the subtitle data obtained by GStreamer from the multimedia file, nor normally display the corresponding subtitles.
Based on this, the embodiment of the present application provides a method for implementing subtitle conversion, which is applied to a vehicle-mounted multimedia system, and determines subtitle control domain data through acquired subtitle data, and determines subtitle attribute information according to the subtitle control domain data. The subtitle attribute information includes subtitle start time, subtitle end time, color mapping information, alpha channel information, subtitle bitmap position information, and compressed bitmap data offset addresses. According to the offset address of the compressed bitmap data, the compressed bitmap data can be determined from the subtitle data and decoded to obtain the original subtitle bitmap data. And converting the original caption bitmap data by utilizing the color mapping information and the alpha channel information to obtain the caption bitmap data suitable for the vehicle-mounted multimedia system. And finally, constructing vehicle-mounted multimedia system subtitle data by using the subtitle starting time, the subtitle ending time, the subtitle bitmap position information and the subtitle bitmap data, and displaying subtitles according to the vehicle-mounted multimedia system subtitle data.
In order to facilitate understanding of the method for implementing subtitle conversion provided in the embodiments of the present application, the following description is made with reference to a scene example shown in fig. 1. Referring to fig. 1, the figure is a schematic diagram of a framework of an exemplary application scenario provided in an embodiment of the present application. The method for realizing the subtitle conversion provided by the embodiment of the application can be applied to a vehicle-mounted multimedia system.
In practical application, the vehicle-mounted multimedia system firstly acquires the caption data, determines the caption control domain data from the caption data, and then determines the caption attribute information from the caption control domain data. And determining compressed bitmap data in the subtitle data through the subtitle attribute information, and decoding to obtain original subtitle bitmap data. By converting the original subtitle bitmap data, subtitle bitmap data can be obtained. And constructing caption data of the vehicle-mounted multimedia system for caption display by utilizing the caption bitmap data, the caption starting time, the caption ending time and the caption bitmap position information.
Those skilled in the art will appreciate that the block diagram shown in fig. 1 is only one example in which embodiments of the present application may be implemented. The scope of applicability of the embodiments of the present application is not limited in any way by this framework.
It should be noted that, in the embodiments of the present application, the vehicle-mounted multimedia system may be any existing vehicle-mounted multimedia system, or one under development or developed in the future, that is capable of implementing subtitle display. The embodiments of the present application are not limited in this respect.
Based on the above description, the method for implementing subtitle conversion provided by the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, which is a flowchart of a method for implementing subtitle conversion according to an embodiment of the present disclosure, as shown in fig. 2, the method is applied to a vehicle-mounted multimedia system, and the method may include S201 to S206:
the method for realizing the subtitle conversion provided by the embodiment of the application is applied to a vehicle-mounted multimedia system. The vehicle-mounted multimedia system can operate in a vehicle machine and can be a system for multimedia display arranged on a vehicle. The vehicle-mounted multimedia system has the functions of playing videos and displaying corresponding subtitles. The embodiment of the application does not limit the specific type of the vehicle-mounted multimedia system, and the vehicle-mounted multimedia system can be developed based on a GStreamer open source framework.
S201: and acquiring subtitle data.
The subtitle data is the data in the multimedia file that the vehicle-mounted multimedia system uses to display subtitles. By acquiring and reading the subtitle data, the vehicle-mounted multimedia system obtains the subtitles for display.
It should be noted that subtitle display and video display may be independent of each other: when a video picture is played, the vehicle-mounted multimedia system reads the subtitle data that needs to be displayed at the corresponding time and displays the corresponding subtitle together with the video picture.
The subtitle data may be divided according to the subtitles to be displayed: the subtitle shown in one display interface has a corresponding screen of subtitle data. For example, refer to fig. 3, which is a schematic diagram of two screens of subtitles provided in an embodiment of the present application. The subtitle "long ago" displayed in one display interface has a corresponding screen of subtitle data, and the subtitle "she bought a bunch of flowers" displayed in the next display interface has another corresponding screen of subtitle data.
When the vehicle-mounted multimedia system is developed on the GStreamer open-source framework, it can acquire the subtitle data through GStreamer, but the format of the acquired subtitle data may differ from the format the vehicle-mounted multimedia system needs for display, so format conversion is required.
When acquiring subtitle data, the vehicle-mounted multimedia system can acquire it one screen at a time; after a screen of subtitle data is acquired, it is cached in a buffer of the vehicle-mounted multimedia system, so that the corresponding screen of subtitle data can later be read from the buffer and converted into the corresponding vehicle-mounted multimedia system subtitle data for display.
S202: and determining caption control domain data from the caption data.
The subtitle data includes subtitle control field data, which is data of a subtitle display control instruction. Through the caption control domain data, the vehicle-mounted multimedia system can obtain related caption attribute information corresponding to the caption, and the caption data can be correspondingly read and converted by utilizing the caption attribute information, so that the vehicle-mounted multimedia system caption data for display can be obtained.
The manner in which the on-board multimedia system determines the caption control field data from the caption data is different for different configurations of the caption data. The embodiment of the present application provides a method for determining subtitle control domain data from subtitle data, which is described in detail below.
S203: and determining subtitle attribute information according to the subtitle control domain data, wherein the subtitle attribute information comprises subtitle starting time, subtitle ending time, color mapping information, alpha channel information, subtitle bitmap position information and compressed bitmap data offset addresses.
The caption control domain data has caption attribute information representing caption related attributes, and the related information of the caption can be determined when the caption is displayed through the caption attribute information. Specifically, the subtitle attribute information includes subtitle start time, subtitle end time, color mapping information, alpha channel information, subtitle bitmap position information, and compressed bitmap data offset addresses. Referring to fig. 4, fig. 4 is a schematic diagram of a subtitle data structure according to an embodiment of the present application.
The subtitle start time represents the time at which subtitle playback begins; the moment the subtitle starts playing can be determined from it. In fig. 4, the subtitle start time is "uint32_t start_display_time". The subtitle end time represents the time at which subtitle playback ends; in fig. 4, it is "uint32_t end_display_time". The duration for which the subtitle is played can be determined from the subtitle start time and the subtitle end time. The subtitle start time may be zero, in which case the corresponding subtitle end time is the duration for which the subtitle is played. For example, when the subtitle start time corresponding to one screen of the subtitle "long ago" is zero and the subtitle end time is 4 seconds, the subtitle "long ago" is displayed for 4 seconds during video playback. In addition, for one screen of subtitle data, the subtitle start time is independent of the moment the subtitle is played in the video: the subtitle start time and end time relate only to how long that screen of subtitle data is played, and may be independent of when it is played during the overall video playback.
The color mapping information represents the display color of the subtitle; the subtitle color can be obtained through the color mapping information. In fig. 4, the color mapping information is "uint8_t color_map[4]", where 4 indicates that the color mapping information holds 4 entries.
The alpha channel information represents the alpha channel; together with the color mapping information, it enables format conversion of the display colors between the subtitle data and the vehicle-mounted multimedia system subtitle data. In fig. 4, the alpha channel information is "uint8_t alpha[4]", where 4 indicates that the alpha channel information holds 4 entries.
The subtitle bitmap position information represents the display position of the subtitle in the display interface. Through the subtitle bitmap position information, the display position of the subtitle corresponding to the subtitle data can be determined. The subtitle bitmap position information may be given by the position coordinates of the subtitle in the display interface, or by the width and height of the subtitle display. In fig. 4, the subtitle bitmap position information is "uint8_t w, h", where w is the width of the subtitle display and h is the height of the subtitle display.
The compressed bitmap data offset address represents the start address of the compressed bitmap data within the subtitle data. The compressed bitmap data is the compressed subtitle bitmap data, that is, the data determining how each pixel in the subtitle picture is displayed. Through the compressed bitmap data offset address, the compressed bitmap data can be obtained, and the display mode of each pixel in the subtitle picture further determined. Specifically, in one possible implementation, the displayed pixel rows may be divided into even and odd lines and compressed separately, so that the compressed bitmap data comprises compressed bitmap even line data and compressed bitmap odd line data, with a corresponding even line data offset address and odd line data offset address. In fig. 4, the compressed bitmap data offset addresses are "int64_t rleOffset1, rleOffset2", where rleOffset1 is the compressed bitmap even line data offset address and rleOffset2 is the compressed bitmap odd line data offset address.
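Piecing the quoted field names together, the Fig. 4 structure can be reconstructed approximately as follows. The exact types, field ordering, and packing are assumptions based on the quoted fragments, not a verbatim copy of the patent figure:

```c
#include <stdint.h>
#include <assert.h>

/* Approximate reconstruction of the Fig. 4 subtitle attribute structure. */
typedef struct {
    uint32_t start_display_time; /* subtitle start time                  */
    uint32_t end_display_time;   /* subtitle end time                    */
    uint8_t  color_map[4];       /* color mapping information, 4 entries */
    uint8_t  alpha[4];           /* alpha channel information, 4 entries */
    uint8_t  w, h;               /* subtitle bitmap width and height     */
    int64_t  rleOffset1;         /* even-line compressed data offset     */
    int64_t  rleOffset2;         /* odd-line compressed data offset      */
} SubtitleAttr;
```

A converter would fill one such structure per screen of subtitle data before decoding and converting the bitmap.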
S204: and determining compressed bitmap data from the subtitle data according to the offset address of the compressed bitmap data, and decoding original subtitle bitmap data from the compressed bitmap data.
After determining the offset address of the compressed bitmap data from the subtitle attribute information, the compressed bitmap data is determined from the subtitle data using the offset address of the compressed bitmap data. And decoding the compressed bitmap data to obtain original caption bitmap data.
When the offset addresses of the compressed bitmap data include offset addresses of even row data of the compressed bitmap and offset addresses of odd row data of the compressed bitmap, the embodiment of the present application further provides a specific implementation manner of S204, please refer to the following.
It should be noted that the original caption bitmap data is obtained by reading the caption data, and the format of the original caption bitmap data may be different from the format that can be displayed by the vehicle-mounted multimedia system, and it is necessary to further perform format conversion on the original caption bitmap data.
S205: and converting the original caption bitmap data into caption bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information.
Using the color mapping information and the alpha channel information obtained from the caption attribute information, the original caption bitmap data can be converted into caption bitmap data suitable for the vehicle-mounted multimedia system.
The embodiment of the present application provides a specific implementation manner of S205, please refer to the following.
S206: and constructing the caption starting time, the caption ending time, the caption bitmap position information and the caption bitmap data into vehicle-mounted multimedia system caption data, and displaying the caption according to the vehicle-mounted multimedia system caption data.
After the original caption bitmap data is converted, caption bitmap data suitable for the vehicle-mounted multimedia system is obtained, and the vehicle-mounted multimedia system caption data is constructed from the caption start time, the caption end time, the caption bitmap position information and the caption bitmap data. The display time of the caption is determined by the caption start time and caption end time in the vehicle-mounted multimedia system caption data, the display position of the caption is determined by the caption bitmap position information, and each pixel to be displayed is determined by the caption bitmap data, so that the caption can be displayed according to the vehicle-mounted multimedia system caption data.
In one possible implementation, the vehicle-mounted multimedia system provides a graphic system canvas, namely a graphic display buffer area, and data meeting format requirements are directly sent to the buffer area, so that the vehicle-mounted display screen can display subtitle effects.
Based on the contents of S201-S206, in the embodiment of the present application, by determining the subtitle control field data from the acquired subtitle data and determining the corresponding subtitle attribute information according to the subtitle control field data, the original subtitle bitmap data can be determined. The subtitle bitmap data can be obtained by converting the original subtitle bitmap data. And then the obtained caption bitmap data, the caption starting time, the caption ending time and the caption bitmap position information are jointly constructed to obtain the caption data of the vehicle-mounted multimedia system, which is suitable for the vehicle-mounted multimedia system. Therefore, format conversion between the caption data and the caption data of the vehicle-mounted multimedia system can be realized, so that the vehicle-mounted multimedia system can read and convert the caption data, display of captions is realized, the overall performance of the vehicle-mounted multimedia system is improved, and the design cost of the vehicle-mounted multimedia system is reduced.
The contents of data included in subtitle data may be different for subtitle data of different structures. In a possible implementation manner, an embodiment of the present application provides a method for determining subtitle control field data from subtitle data, which specifically includes:
determining an offset address of a caption control field from the caption data; and determining caption control domain data from the caption data according to the offset address of the caption control domain.
The subtitle data may be composed of a subtitle structure data size, a control field offset value, compressed bitmap data, and subtitle control field data. Referring to fig. 5, fig. 5 is a schematic diagram of a subtitle data structure according to an embodiment of the present application.
The subtitle structure data size is used for representing the size of one screen of subtitle data; it occupies 2 bytes in the standard definition format and 4 bytes in the high definition format.
The control domain offset value is used for representing the offset address of the caption control domain data, and the start address of the caption control domain data in the caption data can be determined according to the control domain offset value, so that the caption control domain data is determined. The control field offset value in standard definition format occupies 2 bytes, and the control field offset value in high definition format occupies 4 bytes.
The offset address of the caption control domain can be determined through the offset value of the control domain in the caption data, the initial position of the caption control domain data can be determined according to the obtained offset address of the caption control domain, and the caption control domain data can be read and obtained.
Referring to fig. 6, fig. 6 is a schematic diagram of a start segment of one-screen subtitle data according to an embodiment of the present application. The caption data corresponds to a standard definition format; the first row and the first column are scales required for reading the caption data and used for locating data addresses, and the other data are the caption data. Here, the data "0C C8" in the black dotted box is the subtitle structure data size, indicating the size of the entire one-screen subtitle data, specifically 3272 bytes corresponding to hexadecimal 0CC8. The data "0CAA" in the black solid-line box is the control field offset value, indicating that the offset address of the subtitle control field is "0CAA". According to the determined offset address of the caption control field and the scales, the start position of the caption control field data can be located.
Referring to fig. 7, fig. 7 is a schematic diagram of a partial segment of one-screen subtitle data according to an embodiment of the present application. The caption data corresponds to a standard definition format; the first row and the first column are scales required for reading the caption data and used for locating data addresses, and the other data are the caption data. The subtitle control field data may be determined according to the offset address "0CAA" of the subtitle control field determined in the start segment of the one-screen subtitle data. The start of the subtitle control field data is located from the row scale "00000CA0" and the column scale "0A", where the start data is "00", and the subtitle control field data, that is, the data in the black dotted box in fig. 7, can then be read.
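As a rough illustration, locating the control field in a standard definition screen of subtitle data can be sketched as follows (an illustrative sketch only; the function name, and the assumption that the 2-byte control field offset value immediately follows the 2-byte subtitle structure data size, are not taken verbatim from the patent):

```python
def locate_control_field(data: bytes):
    """Read the SD-format header of one screen of subtitle data:
    bytes 0-1 hold the subtitle structure data size and (assumed here)
    bytes 2-3 hold the control field offset value, both big-endian."""
    size = (data[0] << 8) | data[1]          # e.g. 0x0CC8 -> 3272 bytes
    ctrl_offset = (data[2] << 8) | data[3]   # e.g. 0x0CAA
    return size, ctrl_offset
```

For the example above, a header beginning "0C C8 0C AA" yields a one-screen size of 3272 bytes and a control field offset of 0x0CAA.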
In the embodiment of the application, the subtitle control domain data can be determined by determining the offset address of the subtitle control domain from the subtitle data, the subtitle attribute information can be further determined according to the subtitle control domain data, the original subtitle bitmap data can be obtained and converted, and finally the subtitle data of the vehicle-mounted multimedia system capable of being used for subtitle display is obtained.
According to the above, the subtitle control domain data includes various subtitle attribute information, and the embodiment of the present application further provides a method for determining subtitle attribute information according to the subtitle control domain data, which specifically includes the following six steps:
a1: and determining the subtitle starting time according to the data of the 0 th byte and the 1 st byte in the subtitle control domain data.
In a specific implementation, the related subtitle attribute information included in the subtitle control field data is subtitle start time, subtitle end time, color mapping information, alpha channel information, subtitle bitmap position information, and compressed bitmap data offset address. Referring to fig. 8, fig. 8 is a schematic structural diagram of subtitle control field data according to an embodiment of the present application.
Bytes 0 and 1 of the caption control field data are used to indicate the caption start time, and bytes 2 and 3 are used to indicate the offset address of the end sequence data. These are followed by several pieces of sequence data, each called a [ctrl seq], each representing corresponding caption attribute information. The caption control field data ends with end sequence data called [end seq]. To facilitate data partitioning, FF is used as a separator; [end seq] is separated from the preceding [ctrl seq] by FF.
Referring to fig. 9, fig. 9 is a schematic sequence diagram of subtitle control field data according to an embodiment of the present application.
The sequence name corresponding to the sequence type of 0x01 is the caption start time, the data corresponding to the sequence are the 0 th byte and the 1 st byte of the caption control domain data, and the caption start time can be calculated through the first two bytes of the caption control domain data.
A specific calculation method is ((p[0] << 8 | p[1]) << 10) / 90, where p denotes the head address of the caption control field data. Byte 0 of the caption control field data is shifted left by 8 bits and ORed with byte 1; the result is shifted left by 10 bits and divided by 90 to obtain the caption start time.
As an example, see the subtitle control field data in fig. 7, where the scales "00000CA0" and "0E" correspond to the sequence type of the subtitle start time. Correspondingly, the "00" at scales "00000CA0" and "0A" and the "00" at scales "00000CA0" and "0B" are the data indicating the subtitle start time. From "0000", the caption start time calculated according to ((p[0] << 8 | p[1]) << 10) / 90 is 0 ms.
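The time calculation above can be sketched as follows (illustrative; the function name and the use of integer division are assumptions):

```python
def subtitle_time_ms(p0: int, p1: int) -> int:
    """Caption start/end time in milliseconds from two control field bytes,
    per the formula ((p[0] << 8 | p[1]) << 10) / 90 described above."""
    return ((p0 << 8 | p1) << 10) // 90
```

For the bytes "00 00" this gives 0 ms; the same formula later yields the caption end time from the first two bytes of the end sequence.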
A2: and determining the offset address of the ending sequence according to the data of the 2 nd byte and the 3 rd byte in the caption control domain data.
A3: and determining the ending sequence data from the caption control domain data according to the offset address of the ending sequence.
The subtitle control field data has an end sequence, by which the subtitle end time can be determined. Before determining the subtitle end time, it is first necessary to determine an end sequence in the subtitle control field data.
The 2 nd byte and the 3 rd byte in the caption control field data are offset addresses of the ending sequence, and the ending sequence can be determined by the data of the 2 nd byte and the 3 rd byte in the caption control field data.
As an example, see the subtitle control field data in fig. 7, where the data "0CC2" at scale "00000CA0", columns "0C" and "0D", is the offset address of the end sequence. The head address of the end sequence determined by "0CC2" is at scales "00000CC0" and "02". Specifically, the end sequence is "01 33 0C C2 02 FF".
A4: and determining the subtitle ending time according to the data of the 0 th byte and the 1 st byte in the ending sequence data.
Referring to fig. 9, the sequence name corresponding to the sequence type of 0x02 is the subtitle end time, and the data corresponding to the sequence is the first two bytes in the end sequence in the subtitle control field data.
After the ending sequence is determined by the offset address of the ending sequence, the subtitle ending time is determined according to the first two bytes of the ending sequence.
The calculation formula of the subtitle ending time is the same as that of the subtitle starting time, and is not described herein again.
Still taking the subtitle control field data shown in fig. 7 as an example, the first two bytes of the end sequence are "01 33". Substituting "01 33" into the formula ((p[0] << 8 | p[1]) << 10) / 90 gives ((0x01 << 8 | 0x33) << 10) / 90 = 3492. The calculated subtitle end time is 3492 ms.
And obtaining the display duration of the corresponding one-screen caption data through the calculated caption starting time and the caption ending time.
A5: and respectively determining color mapping information sequence data, alpha channel information sequence data, subtitle bitmap position sequence data and compressed bitmap data offset address sequence data from the subtitle control domain data according to each sequence type identifier in the subtitle control domain data.
A6: determining color mapping information from the color mapping information sequence data, determining an alpha channel information sequence from the alpha channel information sequence data, determining subtitle bitmap position information from the subtitle bitmap position sequence data, and determining a compressed bitmap data offset address from the compressed bitmap data offset address sequence data.
First, referring to fig. 9, a sequence name corresponding to a sequence type of 0x03 is referred to as a color mapping information sequence, corresponding color mapping information sequence data is determined by a 03 sequence type identifier in subtitle control field data, and color mapping information is determined from the corresponding color mapping information sequence data.
Wherein the color mapping information sequence data occupies 3 bytes. The color mapping information sequence data may be represented as [03 wxyz], each character representing 4 bits; each character after the sequence type identifier has a corresponding color.
Taking the subtitle control field data shown in fig. 7 as an example, the sequence type identifier 03 is located at "00000CA0" and "0F". The "0F23" after "03" forms, together with the sequence type identifier, the color mapping information sequence data "030F23", wherein each character of "0F23" has a corresponding color to be displayed.
Next, referring to fig. 9, a sequence name corresponding to the sequence type 0x04 is an alpha channel information sequence, the corresponding alpha channel information sequence data is determined by the 04 sequence type identifier in the subtitle control field data, and the alpha channel information sequence is determined from the corresponding alpha channel information sequence data.
Wherein the alpha channel information sequence data occupies 3 bytes. The alpha channel information sequence data may be represented as [04 wxyz], where w, x, y and z each represent one character.
Taking the subtitle control field data shown in fig. 7 as an example, the sequence type identifier 04 is located at "00000CB0" and "02". The "FFF0" after "04" forms, together with the sequence type identifier, the alpha channel information sequence data "04FFF0". The alpha channel information sequence is "FFF0".
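A minimal parsing sketch of the 0x04 sequence (the function name is hypothetical, and scaling each 4-bit alpha value to 8 bits by multiplying by 17 is an assumption not stated in the text):

```python
def parse_alpha_sequence(seq_hex: str):
    """Parse a 0x04 [ctrl seq] such as "04FFF0" into four alpha values,
    scaled from 4 bits to 8 bits (assumed scaling: value * 17)."""
    assert seq_hex[0:2] == "04"
    return [int(c, 16) * 17 for c in seq_hex[2:6]]  # 0x0 -> 0, 0xF -> 255
```

For "04FFF0" this gives three fully opaque values followed by one fully transparent value.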
Again, referring to fig. 9, the sequence name corresponding to the sequence type of 0x05 is called a subtitle bitmap position sequence, the corresponding subtitle bitmap position sequence data is determined by the 05 sequence type identifier in the subtitle control field data, and the subtitle bitmap position information is determined from the corresponding subtitle bitmap position sequence data.
Wherein the subtitle bitmap position sequence data occupies 7 bytes. The subtitle bitmap position sequence data may be represented as [05 xxxXXXyyyYYY], where xxxXXXyyyYYY represents 12 characters: xxx represents the position of the first column of the subtitle, XXX the position of the last column, yyy the position of the first row, and YYY the position of the last row.
Taking the subtitle control field data shown in fig. 7 as an example, the sequence type identifier 05 is located at "00000CB0" and "05". The "0002CF0021DF" after "05" forms, together with the sequence type identifier, the subtitle bitmap position sequence data "050002CF0021DF". Here "000" represents the position of the first column of the subtitle, "2CF" the position of the last column, "002" the position of the first row, and "1DF" the position of the last row.
Further, the width and height of the subtitle, that is, the subtitle bitmap position information, can be calculated from the subtitle bitmap position sequence data: the width is (XXX - xxx + 1) and the height is (YYY - yyy + 1).
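Under the layout described above, the width and height can be computed directly from the 0x05 sequence; a sketch (the function name is hypothetical):

```python
def parse_bitmap_position(seq_hex: str):
    """Parse a 0x05 [ctrl seq] like "050002CF0021DF": four 3-hex-digit fields
    for first column, last column, first row, last row; return (width, height)."""
    assert seq_hex[0:2] == "05"
    x1, x2 = int(seq_hex[2:5], 16), int(seq_hex[5:8], 16)    # first/last column
    y1, y2 = int(seq_hex[8:11], 16), int(seq_hex[11:14], 16)  # first/last row
    return x2 - x1 + 1, y2 - y1 + 1
```

For "050002CF0021DF" this yields a width of 0x2CF - 0x000 + 1 = 720 and a height of 0x1DF - 0x002 + 1 = 478.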
Finally, referring to fig. 9, the sequence name corresponding to the sequence type of 0x06 is the offset address of the compressed bitmap data, the corresponding offset address sequence data of the compressed bitmap data is determined by the 06 sequence type identifier in the subtitle control field data, and the offset address of the compressed bitmap data is determined from the corresponding offset address sequence data of the compressed bitmap data.
Wherein the compressed bitmap data offset address sequence data occupies 5 bytes. The compressed bitmap data offset address sequence data may be represented as [06 xxxxyyyy], where xxxxyyyy represents 8 characters: xxxx is the compressed bitmap even row data offset address and yyyy is the compressed bitmap odd row data offset address.
Taking the subtitle control field data shown in fig. 7 as an example, the sequence type identifier 06 is located at "00000CB0" and "0C". The "0004065E" after "06" and the sequence type identifier together constitute compressed bitmap data offset address sequence data "060004065E", and the compressed bitmap data offset address "0004065E" is obtained.
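The two 16-bit offsets can be split out of the 0x06 sequence as follows (illustrative sketch; the function name is hypothetical):

```python
def parse_rle_offsets(seq_hex: str):
    """Parse a 0x06 [ctrl seq] like "060004065E" into the even row data
    offset address (rleOffset1) and odd row data offset address (rleOffset2)."""
    assert seq_hex[0:2] == "06"
    return int(seq_hex[2:6], 16), int(seq_hex[6:10], 16)
```

For "060004065E" this gives an even row data offset of 0x0004 and an odd row data offset of 0x065E.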
In the embodiment of the present application, by interpreting the subtitle control field data, each subtitle attribute information can be determined. And acquiring subtitle attribute information from the subtitle control domain data through the structure of the subtitle control domain data and the sequence type identifier included in the subtitle control domain data, and converting original subtitle bitmap data according to the subtitle attribute information.
In one possible implementation, the bitmap data may be compressed by RLE (Run-Length Encoding) compression to obtain compressed bitmap data. The compressed bitmap data obtained by the RLE compression mode comprises compressed bitmap even line data and compressed bitmap odd line data, and the corresponding compressed bitmap data offset addresses comprise compressed bitmap even line data offset addresses and compressed bitmap odd line data offset addresses.
For compressed bitmap data obtained by an RLE compression mode, including compressed bitmap data offset addresses of even line data of a compressed bitmap and compressed bitmap data offset addresses of odd line data of the compressed bitmap, the embodiment of the application provides a corresponding method for obtaining original subtitle bitmap data, which includes the following five steps:
b1: and determining the even line data of the compressed bitmap from the caption data according to the offset address of the even line data of the compressed bitmap.
The RLE compression mode is to compress even line data and odd line data of bitmap data respectively, and the obtained compressed bitmap data comprises compressed bitmap even line data and compressed bitmap odd line data.
Before obtaining the even line data and the odd line data of the compressed bitmap for decompression, the corresponding compressed bitmap data is determined by the offset address of the compressed bitmap data.
The compressed bitmap data offset address may be obtained when the subtitle attribute information is obtained from the subtitle control field data. The offset addresses of the compressed bitmap data comprise offset addresses of even row data and offset addresses of odd row data of the compressed bitmap. According to the offset address of the even line data of the compressed bitmap, the address of the even line data of the compressed bitmap in the caption data can be determined, and then the even line data of the compressed bitmap can be determined.
Taking the subtitle control field data provided in fig. 7 above as an example, the compressed bitmap data offset address is "0004065E", wherein the compressed bitmap even line data offset address is "0004". After the offset address of the even line data of the compressed bitmap is obtained, the even line data of the compressed bitmap is correspondingly determined in the caption data.
B2: and decompressing the data of the even line data of the compressed bitmap by using a run length encoding RLE (run length encoding) decompression algorithm to obtain the even line data of the bitmap data of the original subtitle.
After determining the compressed bitmap even line data, decompression is performed using the RLE decompression algorithm. Specifically, the pointer of the reader of the RLE decompression algorithm may be pointed to the first byte of the even line data of the compressed bitmap, and the bit-by-bit operation may be performed to obtain the even line data of the original subtitle bitmap data.
It should be noted that, for the compressed bitmap even line data in the standard definition format and the compressed bitmap even line data in the high definition format, there are corresponding decompression methods.
The decompression method of the compressed bitmap even line data in the standard definition format comprises the following steps:
(1) Read 4 bits of compressed bitmap even line data, denoted (0000 abcd), where a, b, c and d each represent 1 bit of the read data.
(2) If (0000 abcd) >= (0000 0100), the pixels corresponding to the even line data display a repeated color: the repeat count is (0000 00ab), denoted N, and (0000 00cd) is the repeated color value, which maps to a preset display color. Write N copies of (0000 00cd) into the decompression buffer and read the next 4 bits of compressed bitmap even line data.
(3) If (0000 abcd) < (0000 0100), then ab is 00 and no run has been found yet. Shift (0000 00cd) left by 4 bits and OR it with the next 4 bits of the even line data; denote the result (00cd efgh).
(4) If (00cd efgh) >= (0001 0000), a repeated color is found: (0000 cdef) is the repeat count N and (0000 00gh) is the repeated color value. Write N copies of (0000 00gh) directly into the decompression buffer and read the next 4 bits of compressed bitmap even line data.
(5) If (00cd efgh) < (0001 0000), then cd is 00 and there is still no run. Shift (0000 efgh) left by 4 bits and OR it with the next 4 bits of compressed bitmap even line data; denote the result (efgh ijkl).
(6) If (efgh ijkl) >= (0100 0000), a repeated color is found: (00ef ghij) is the repeat count N and (0000 00kl) is the repeated color value. Write N copies of (0000 00kl) directly into the decompression buffer and read the next 4 bits of compressed bitmap even line data.
(7) If (efgh ijkl) < (0100 0000), then ef is 00, that is, the value is (00gh ijkl). Shifting again and ORing with the next 4 bits would produce (ghij klmn op), which exceeds 8 bits in length and cannot be processed normally. Therefore, at this point (0000 ghij) is taken directly as the repeat count N and (0000 00kl) as the repeated color value; N copies of (0000 00kl) are written into the decompression buffer and the next 4 bits of compressed bitmap even line data are read.
Decompression of the compressed bitmap even line data ends when N = 0 in any of the above operations, or when all of the compressed bitmap even line data has been read.
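The steps above can be sketched as follows (an illustrative sketch, not the patented implementation; the nibble-reading loop, the `max_pixels` bound and the function name are assumptions, while the thresholds and the count/color split follow the description):

```python
def rle_decode_sd_field(data: bytes, max_pixels: int):
    """Decompress one field (even or odd lines) of SD compressed bitmap data.
    Reads 4 bits at a time, extending to 8 bits at most twice, then splits
    the value into a repeat count (high bits) and a 2-bit color value."""
    nibbles = []
    for byte in data:
        nibbles += [byte >> 4, byte & 0x0F]
    out, i = [], 0
    while i < len(nibbles) and len(out) < max_pixels:
        v = nibbles[i]; i += 1                     # (1) read 4 bits: 0000abcd
        if v < 0x04 and i < len(nibbles):          # (3) no run yet, extend
            v = (v << 4) | nibbles[i]; i += 1      #     -> 00cd efgh
            if v < 0x10 and i < len(nibbles):      # (5) still no run, extend
                v = (v << 4) | nibbles[i]; i += 1  #     -> efgh ijkl
        n, color = v >> 2, v & 0x03                # repeat count, color value
        if n == 0:                                 # N = 0 ends decompression
            break
        out += [color] * n
    return out
```

For example, the nibbles 6 / 1,2 / F decode to one pixel of color 2, four pixels of color 2, and three pixels of color 3 respectively.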
The decompression method of the high-definition compressed bitmap even line data comprises the following steps:
(1) Acquire the highest 1 bit of the compressed bitmap even line data, denoted A, and determine whether A is 1 or 0: A = 1 indicates a repeated color, and A = 0 indicates no repeated color.
(2) When a =0, the number of repetitions of color N =1. And acquiring even line data of the second highest 1-bit compressed bitmap, recording the even line data as B, and judging whether the color is represented by 2 bits after B or 8 bits after B. Determining that the value of B is 1 or 0, B =1 then B is followed by 8bit compressed bitmap even line data to represent color, and B =0 then B is followed by 2bit compressed bitmap even line data to represent color.
(3) When A =1, the second highest 1-bit compressed bitmap even line data is acquired, which is marked as B and used for judging whether B is followed by 2 bits to represent color or B is followed by 8 bits to represent color. Determining that the value of B is 1 or 0, B =1 then B is followed by 8bit compressed bitmap even line data to represent color, and B =0 then B is followed by 2bit compressed bitmap even line data to represent color.
The 1 bit of compressed bitmap even line data following the color bits is denoted C and is used to determine the repeat count. When B = 1, C is the 9th bit after B; when B = 0, C is the 3rd bit after B.
When C is 0, take the 3 bits of compressed bitmap even line data after C, denoted abc, whose value ranges from 0 to (2^3 - 1). To ensure the repeat count is at least 2, N = (0000 0abc) + 2.
When C is 1, take the 7 bits of compressed bitmap even line data after C, denoted abcdefg, whose value ranges from 0 to (2^7 - 1). This encoding is only used when the repeat count is at least ((2^3 - 1) + 2); otherwise using 7 bits would be meaningless. Therefore, N = (0abc defg) + ((2^3 - 1) + 2).
And repeating the steps until the reading of the even line data of the compressed bitmap is finished, and decompressing the even line data of the compressed bitmap.
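The repeat count rules for the two cases can be summarized in a small sketch (the function name is hypothetical):

```python
def hd_repeat_count(c: int, value: int) -> int:
    """Repeat count N in the high definition scheme: when C = 0, value is the
    3-bit field and N = value + 2; when C = 1, value is the 7-bit field and
    N = value + ((2**3 - 1) + 2), so the 7-bit range starts where 3-bit ends."""
    return value + 2 if c == 0 else value + ((2 ** 3 - 1) + 2)
```

The 3-bit form covers counts 2 through 9, and the 7-bit form covers counts 9 through 136.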
B3: and determining the odd line data of the compressed bitmap from the subtitle data according to the offset address of the odd line data of the compressed bitmap.
The method for obtaining the offset address of the odd line data of the compressed bitmap and determining the odd line data of the compressed bitmap from the caption data is the same as the method for obtaining the offset address of the even line data of the compressed bitmap and determining the even line data of the compressed bitmap from the caption data, and is not repeated herein.
Taking the subtitle control field data provided in fig. 7 above as an example, the compressed bitmap data offset address is "0004065E", wherein the compressed bitmap odd line data offset address is "065E". After the offset address of the odd line data of the compressed bitmap is obtained, the odd line data of the compressed bitmap is correspondingly determined in the caption data.
B4: and performing data decompression on the compressed bitmap odd line data by using the RLE decompression algorithm to obtain original caption bitmap odd line data.
The method for performing data decompression on the compressed bitmap odd line data by using the RLE decompression algorithm to obtain the original subtitle bitmap odd line data is the same as the method for performing data decompression on the compressed bitmap even line data by using the RLE decompression algorithm to obtain the original subtitle bitmap even line data, and the method is not repeated herein.
It should be noted that the order of decompressing the compressed bitmap even line data and the compressed bitmap odd line data is not limited in the embodiments of the present application; the two are decompressed independently of each other. In a possible implementation manner, the even line data may be decompressed first, the odd line data may be decompressed first, or the even line data and the odd line data may be decompressed simultaneously.
B5: and according to the subtitle bitmap position information, carrying out cross storage on the even line data and the odd line data of the original subtitle bitmap data to generate original subtitle bitmap data.
According to the subtitle bitmap position information, the amount of data to be stored per line of the subtitle bitmap can be determined. And then according to the even line data and the odd line data of the original caption bitmap data obtained by decoding, carrying out cross storage according to the sequence of the even line and the odd line to generate corresponding original caption bitmap data. In a possible implementation manner, the data of the 0 th line may be stored first, and then the data of the 1 st line is stored, and the data is stored in a crossed manner until the even line data and the odd line data of the original caption bitmap data are correspondingly stored, so as to obtain the original caption bitmap data.
Specifically, the amount of data to be stored per line of the subtitle bitmap may be determined according to the width and height in the subtitle bitmap position information and the format type of the subtitle data. The width and height determine the number of pixels contained in the subtitle bitmap, and the format type determines the number of bytes per pixel; multiplying the two gives the amount of data per line. For example, if the width and height in the subtitle bitmap position information are both 10, each line of the subtitle bitmap has 10 pixels. When the subtitle data uses a 24-bit RGB format, each pixel corresponds to 3 bytes of data, so each line of the subtitle bitmap has 30 bytes of data. Based on this, the cross storage may proceed as follows: store 30 bytes of even line data of the original subtitle bitmap data to line 0, 30 bytes of odd line data to line 1, the next 30 bytes of even line data to line 2, and so on, until the original subtitle bitmap data is obtained.
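The cross storage of decoded even and odd lines can be sketched as follows (illustrative; lines are represented as lists of byte values and the function name is hypothetical):

```python
def interleave_rows(even_rows, odd_rows):
    """Cross-store decoded even line data and odd line data: line 0 from the
    even field, line 1 from the odd field, and so on; a trailing unmatched
    even line (odd total height) is appended at the end."""
    rows = []
    for e, o in zip(even_rows, odd_rows):
        rows += [e, o]
    rows += even_rows[len(odd_rows):]  # leftover even line, if any
    return rows
```

For two even lines and two odd lines this restores the original top-to-bottom line order.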
In the embodiment of the application, the even line data and the odd line data of the compressed bitmap are decompressed respectively to obtain the corresponding even line data and odd line data of the original caption bitmap data. And generating original caption bitmap data by performing cross storage on even line data and odd line data of the original caption bitmap data. Thus, decompression of the compressed bitmap data can be realized, and the original subtitle bitmap data for conversion can be obtained.
The caption bitmap data includes the color value of each pixel; by setting the colors of the pixels to be displayed accordingly, the pixels compose the caption to be displayed. However, so as not to block the normal playback of the video, all portions other than the subtitles consist of transparent pixels. Before the original subtitle bitmap data is converted, the area where the subtitles are actually displayed may therefore be determined, and the corresponding original subtitle bitmap data converted only for that area.
Before converting the original caption bitmap data into caption bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information, the method further comprises the following three steps:
c1: and searching a non-transparent color value in each pixel point color value corresponding to the original caption bitmap data to determine the pixel points with the non-transparent color value.
In order to determine the pixel points actually used for displaying the caption, the color values of the non-transparent colors can be found in the color values of the pixel points corresponding to the bitmap data of the original caption, and then the pixel points with the color values of the non-transparent colors can be determined.
The setting method of the color value may be different for different types of subtitle data. As an example, the transparent color value may be 00 or FF.
C2: and determining the position of the effective caption area according to the position of the first line of the pixel point with the non-transparent color value, the position of the last line of the pixel point with the non-transparent color value, the position of the first column of the pixel point with the non-transparent color value and the position of the last column of the pixel point with the non-transparent color value.
After the pixel points with the non-transparent color values are found, the positions of the effective caption areas can be determined according to the appearing ranges of the pixel points.
Specifically, according to the first row position, the last row position, the first column position and the last column position of the pixel point having the non-transparent color value, the region of the pixel point having the non-transparent color value, that is, the position of the effective caption region, is determined.
The embodiment of the present application does not limit the manner of determining the edge positions. In a possible implementation manner, a criss-cross scan with corner-point positioning may be adopted. Referring to fig. 10, fig. 10 is a schematic diagram for determining the position of a valid subtitle region according to an embodiment of the present application. When searching for pixel points with non-transparent color values, the bitmap may be scanned vertically and horizontally from the upper left corner toward the lower right corner; when the first pixel point with a non-transparent color value is encountered, y0 of the crossing point (x0, y0) is the first row position of the pixel points having non-transparent color values, and x0 is their first column position. Similarly, scanning vertically and horizontally from the lower right corner toward the upper left corner, when the first pixel point with a non-transparent color value is encountered, y1 of the crossing point (x1, y1) is the last row position, and x1 is the last column position. The valid subtitle region position may then be determined from (x0, y0) and (x1, y1).
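A minimal sketch of the effective-region search, assuming a row-major pixel buffer and a single transparent color value of 0x00 (both illustrative assumptions). It uses a plain full scan rather than the early-terminating corner scans of fig. 10, but determines the same (x0, y0) and (x1, y1):

```python
def effective_region(pixels, width, height, transparent=0x00):
    """Return (x0, y0, x1, y1) bounding the non-transparent pixels,
    or None when every pixel is transparent."""
    x0 = y0 = x1 = y1 = None
    for y in range(height):
        for x in range(width):
            if pixels[y * width + x] != transparent:
                # Track first/last row and column where opaque pixels appear.
                x0 = x if x0 is None else min(x0, x)
                y0 = y if y0 is None else min(y0, y)
                x1 = x if x1 is None else max(x1, x)
                y1 = y if y1 is None else max(y1, y)
    if x0 is None:
        return None
    return (x0, y0, x1, y1)
```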
C3: and updating the subtitle bitmap position information according to the effective subtitle area position, and updating original subtitle bitmap data in the effective subtitle area position into subtitle bitmap data.
After the position of the effective caption area is determined, the subtitle bitmap position information is updated according to it. Specifically, the width in the original subtitle bitmap position information may be updated to the width of the effective subtitle region, and the height to the height of the effective subtitle region. The first row position and first column position of the pixel points with non-transparent color values are added to the subtitle bitmap position information. The position of the effective caption area can then be recovered from these first row and column positions together with the updated width and height.
Correspondingly, the original caption bitmap data in the effective caption area position is updated into caption bitmap data for subsequent conversion of the caption bitmap data suitable for the vehicle-mounted multimedia system.
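Step C3 — cropping the bitmap to the effective region and recording the updated position information — might look like the following sketch; the function name and dictionary keys are illustrative assumptions:

```python
def crop_to_region(pixels, width, region):
    """Keep only the effective subtitle area; region is (x0, y0, x1, y1).

    Returns the cropped pixel list plus the updated position info: the
    new width/height and the (x0, y0) offset recorded alongside them.
    """
    x0, y0, x1, y1 = region
    new_w, new_h = x1 - x0 + 1, y1 - y0 + 1
    cropped = []
    for y in range(y0, y1 + 1):
        # Copy one row of the effective region from the row-major buffer.
        cropped.extend(pixels[y * width + x0 : y * width + x0 + new_w])
    return cropped, {"width": new_w, "height": new_h, "x0": x0, "y0": y0}
```

Only the cropped pixels are handed to the later conversion steps, which is where the memory and conversion-time savings mentioned below come from.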
In the embodiment of the application, by determining the effective caption area, processing of caption bitmap data corresponding to the pixel points with the transparent color values can be reduced, memory consumption is reduced, and conversion efficiency of the caption bitmap data is improved.
When format conversion is performed on the original caption bitmap data, the color value of each pixel of the caption display represented in the original caption bitmap data needs to be converted into a color value that the vehicle-mounted multimedia system can read. The original caption bitmap data here may be the caption bitmap data updated after the effective caption area position was determined, or the original caption bitmap data that has not been updated.
In a possible implementation manner, the converting the original caption bitmap data into the caption bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information may specifically include the following three steps:
d1: and creating palette information suitable for the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information and the custom palette information.
In order to convert color values of each pixel point corresponding to original caption bitmap data and realize conversion of the original caption bitmap data, custom palette information can be established in advance.
The custom palette information holds color values corresponding to the color mapping information; through the custom palette information, the element values in the color mapping information can be converted into color values, which is convenient for obtaining color values that the vehicle-mounted multimedia system can read.

Using the color mapping information and the alpha channel information in the subtitle data together with the custom palette information, the color mapping information in the subtitle data can be converted into palette information suitable for the vehicle-mounted multimedia system. The palette suitable for the vehicle-mounted multimedia system holds color values that the vehicle-mounted multimedia system can read.
It should be noted that the palette information suitable for the vehicle multimedia system is correspondingly established according to the color mapping information in the one-screen subtitle data. That is, the palette information applicable to the vehicle-mounted multimedia system corresponds to the color mapping information in the one-screen subtitle data, and different palette information applicable to the vehicle-mounted multimedia system is established by different one-screen subtitle data.
D2: and searching the palette information suitable for the vehicle-mounted multimedia system, and converting the color value of each pixel point corresponding to the original caption bitmap data into the target color value of each pixel point.
And converting each pixel point color value corresponding to the original caption bitmap data into each pixel point target color value according to the created palette information suitable for the vehicle-mounted multimedia system. And the target color value of the pixel point is a color value which can be read by the vehicle-mounted multimedia system.
D3: and generating caption bitmap data suitable for the vehicle-mounted multimedia system according to the target color values of the pixel points.
And generating subtitle bitmap data suitable for the vehicle-mounted multimedia system according to the converted target color values of the pixel points. And specifically, corresponding data format conversion can be performed on the target color value of each pixel point. The manner of conversion may be determined by the format of the caption data and the caption data suitable for the in-vehicle multimedia system.
In one possible implementation, the subtitle data may be in RGB format, and the subtitle data suitable for the vehicle multimedia system may be in ARGB format. Referring to fig. 11, fig. 11 is a flowchart of a method for generating subtitle bitmap data suitable for the vehicle-mounted multimedia system according to an embodiment of the present disclosure.
Wherein, i is the pixel index, y is the row where the pixel is located, x is the column where the pixel is located, w is the width in the subtitle bitmap position information, and h is the height in the subtitle bitmap position information. By comparing y with h and x with w, the pixels can be traversed, so that all pixels in the original caption bitmap data are converted. v = argb_palette[i] is the target color value of the ith pixel, and Bmp32Data[i] is the subtitle bitmap data, suitable for the vehicle-mounted multimedia system, corresponding to the ith pixel. Since the color value of each pixel in ARGB format is represented by 4 bytes of data, the ith pixel corresponds to the four bytes Bmp32Data[i+0], Bmp32Data[i+1], Bmp32Data[i+2] and Bmp32Data[i+3]. By shifting v and applying an AND operation, the byte in each position can be obtained. For example, when v = 0x01020304, the calculation gives Bmp32Data[i+0] = 0x01, Bmp32Data[i+1] = 0x02, Bmp32Data[i+2] = 0x03 and Bmp32Data[i+3] = 0x04. In this way, the subtitle bitmap data suitable for the vehicle-mounted multimedia system corresponding to the ith pixel can be obtained.
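The byte splitting described for fig. 11 can be expressed with shifts and AND masks; the function names are illustrative, and the loop below indexes pixels directly rather than through the x/y traversal of the figure:

```python
def unpack_argb(v):
    """Split a 32-bit ARGB color value into its 4 bytes, high byte first."""
    return [(v >> 24) & 0xFF, (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF]

def fill_bmp32(indices, argb_palette):
    """Write 4 bytes per pixel into a Bmp32Data-style buffer."""
    bmp32 = bytearray(4 * len(indices))
    for p, idx in enumerate(indices):
        v = argb_palette[idx]                 # target color value of pixel p
        bmp32[4 * p : 4 * p + 4] = unpack_argb(v)
    return bmp32
```

For v = 0x01020304 this yields the bytes 0x01, 0x02, 0x03, 0x04, matching the worked example in the text.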
In the embodiment of the application, the target color values of the pixel points are generated, so that the caption bitmap data suitable for the vehicle-mounted multimedia system is generated, and the conversion of the original caption bitmap data is realized.
In a possible implementation manner, the color mapping information may be converted by using the custom palette information, and then combined with the alpha channel information to obtain the palette information suitable for the vehicle-mounted multimedia system.
Creating palette information suitable for the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information and the custom palette information, and specifically comprising the following three steps:
f1: and respectively converting the ith element value in the color mapping information into a first color value by searching the custom palette information, wherein i is an integer from 0 to n, and n is the number of the element values in the color mapping information minus 1.
The color mapping information has a plurality of element values, and each element value in the color mapping information is converted into a corresponding first color value in the custom palette information by searching the corresponding custom palette information. Wherein, the first color value may be 1 byte.
Referring to fig. 12, a flowchart of a method for obtaining palette information suitable for the in-vehicle multimedia system according to an embodiment of the present application is provided.
In fig. 12, colormap(i) represents the ith element value in the color mapping information, and palette_table(colormap(i)) represents the converted ith first color value.
Take the color mapping information "0F23" in fig. 7 as an example, where "0", "F", "2" and "3" are each an element value. Since the color mapping information is hexadecimal and thus corresponds to at most 16 colors, the custom palette information may hold color values corresponding to those 16 colors. In a specific implementation manner, the color value corresponding to an element value in the color mapping information may be determined by using the element value as an index into the custom palette information. In addition, to match the data size read for a color value in actual use, each element value in the color mapping information can be expanded without changing its value. For example, "0", "F", "2" and "3" are correspondingly expanded to "00", "0F", "02" and "03", so that the element values in the color mapping information can be read and used to look up the corresponding custom palette information.
F2: and taking the ith first color value as a low bit, and taking the ith element value in the alpha channel information as a high bit to form the ith second color value.
After the first color value is obtained, its bit position needs to be adjusted so that it can be combined with the ith element value in the alpha channel information.

The ith first color value is taken as the low bits and the ith element value in the alpha channel information as the high bits, and the two are combined to obtain the ith second color value. How the ith first color value and the ith alpha element value are positioned is determined by the byte width of the second color value.
Specifically, referring to fig. 12, when the second color value is 4 bytes wide, the ith first color value may be AND-ed with 0x00FFFFFF, placing it in the low bits. The ith element value in the alpha channel information, alpha[i], is then shifted left by 24 bits, placing it in the high bits. OR-ing the ith first color value (as the low bits) with the shifted ith alpha element value (as the high bits) yields the ith second color value, argb_palette[i].
F3: creating palette information applicable to the in-vehicle multimedia system including n second color values.
After the second color values corresponding to the element values in all the color mapping information are obtained through calculation, the palette information which includes the n second color values and is applicable to the vehicle-mounted multimedia system is created.
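Steps F1 to F3 can be sketched as follows; the container types (a dict for the custom palette, lists for the colormap and alpha values) are illustrative assumptions:

```python
def build_argb_palette(colormap, alpha, palette_table):
    """F1-F3: build the ARGB palette for the multimedia system.

    colormap      - element values (hex digits 0..15) from the subtitle data
    alpha         - per-element alpha values, one byte each
    palette_table - custom palette mapping element value -> RGB color value
    """
    argb_palette = []
    for i, element in enumerate(colormap):
        first = palette_table[element]        # F1: look up custom palette
        low = first & 0x00FFFFFF              # keep RGB in the low 24 bits
        high = (alpha[i] & 0xFF) << 24        # F2: alpha byte in the high bits
        argb_palette.append(high | low)       # ith second color value
    return argb_palette                       # F3: the created palette
```

For the "0F23"-style data above, an opaque black entry (element 0x0, alpha 0xFF) becomes 0xFF000000 and a half-transparent white entry (element 0xF, alpha 0x80) becomes 0x80FFFFFF.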
In the embodiment of the application, by customizing the palette information and the alpha channel information, corresponding palette information suitable for the vehicle-mounted multimedia system can be created based on the color mapping information, so that the conversion of subtitle data is realized.
Based on the method for realizing subtitle conversion provided by the above method embodiment, the embodiment of the present application further provides a device for realizing subtitle conversion, and the device for realizing subtitle conversion will be described with reference to the accompanying drawings.
Referring to fig. 13, this figure is a schematic structural diagram of an apparatus for implementing subtitle conversion according to an embodiment of the present application, where the apparatus is applied to a vehicle-mounted multimedia system, and the apparatus includes:
an obtaining unit 1301, configured to obtain subtitle data;
a first determining unit 1302, configured to determine subtitle control domain data from the subtitle data;
a second determining unit 1303, configured to determine subtitle attribute information according to the subtitle control field data, where the subtitle attribute information includes subtitle start time, subtitle end time, color mapping information, alpha channel information, subtitle bitmap position information, and compressed bitmap data offset address;
a decoding unit 1304, configured to determine compressed bitmap data from the subtitle data according to the offset address of the compressed bitmap data, and decode original subtitle bitmap data from the compressed bitmap data;
a converting unit 1305, configured to convert the original subtitle bitmap data into subtitle bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information;
the constructing unit 1306 is configured to construct the subtitle start time, the subtitle end time, the subtitle bitmap position information, and the subtitle bitmap data into vehicle-mounted multimedia system subtitle data, and perform subtitle display according to the vehicle-mounted multimedia system subtitle data.
Optionally, the first determining unit 1302 includes:
a first determining subunit, configured to determine an offset address of a subtitle control field from the subtitle data;
and the second determining subunit is used for determining the subtitle control domain data from the subtitle data according to the offset address of the subtitle control domain.
Optionally, the second determining unit 1303 includes:
a third determining subunit, configured to determine a subtitle start time according to the data of the 0th byte and the 1st byte in the subtitle control field data;
a fourth determining subunit, configured to determine an offset address of an ending sequence according to data of the 2nd byte and the 3rd byte in the subtitle control field data;
a fifth determining subunit, configured to determine end sequence data from the subtitle control field data according to the offset address of the end sequence;
a sixth determining subunit, configured to determine a subtitle end time according to data of the 0th byte and the 1st byte in the end sequence data;
a seventh determining subunit, configured to determine color mapping information sequence data, alpha channel information sequence data, subtitle bitmap position sequence data, and compressed bitmap data offset address sequence data from the subtitle control domain data, respectively, according to each sequence type identifier in the subtitle control domain data;
an eighth determination subunit operable to determine color mapping information from the color mapping information sequence data, determine an alpha channel information sequence from the alpha channel information sequence data, determine subtitle bitmap position information from the subtitle bitmap position sequence data, and determine a compressed bitmap data offset address from the compressed bitmap data offset address sequence data.
Optionally, the compressed bitmap data offset addresses include a compressed bitmap even row data offset address and a compressed bitmap odd row data offset address, and the decoding unit 1304 includes:
a ninth determining subunit, configured to determine, according to the even line data offset address of the compressed bitmap, even line data of the compressed bitmap from the subtitle data;
the first decompression subunit is used for performing data decompression on the even line data of the compressed bitmap by using a run-length encoding (RLE) decompression algorithm to obtain the even line data of the original subtitle bitmap data;
a tenth determining subunit, configured to determine odd line data of the compressed bitmap from the subtitle data according to the odd line data offset address of the compressed bitmap;
the second decompression subunit is configured to perform data decompression on the odd line data of the compressed bitmap by using the RLE decompression algorithm to obtain odd line data of the original subtitle bitmap data;
and the storage subunit is used for performing cross storage on the even line data and the odd line data of the original caption bitmap data according to the caption bitmap position information to generate original caption bitmap data.
Optionally, the apparatus further comprises:
the searching subunit is used for searching a non-transparent color value from each pixel point color value corresponding to the original subtitle bitmap data so as to determine a pixel point with the non-transparent color value;
a position determining subunit, configured to determine a position of an effective subtitle area according to a first row position at which the pixel having the non-transparent color value appears, a last row position at which the pixel having the non-transparent color value appears, a first column position at which the pixel having the non-transparent color value appears, and a last column position at which the pixel having the non-transparent color value appears;
and the updating subunit is used for updating the subtitle bitmap position information according to the effective subtitle area position and updating the original subtitle bitmap data in the effective subtitle area position into subtitle bitmap data.
Optionally, the converting unit 1305 includes:
the creating subunit is configured to create palette information applicable to the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information, and the custom palette information;
the conversion subunit is configured to search the palette information applicable to the vehicle-mounted multimedia system, and convert color values of each pixel point corresponding to the original subtitle bitmap data into target color values of each pixel point;
and the generation subunit is used for generating subtitle bitmap data suitable for the vehicle-mounted multimedia system according to the target color values of the pixels.
Optionally, the creating subunit includes:
a first color value conversion subunit, configured to convert an ith element value in the color mapping information into a first color value by searching for custom palette information, where i is an integer from 0 to n, and n is a value obtained by subtracting 1 from the number of element values in the color mapping information;
a second color value forming subunit, configured to form an ith second color value by using the ith first color value as a low bit and using an ith element value in the alpha channel information as a high bit;
a palette information creating subunit for creating palette information including n second color values applicable to the in-vehicle multimedia system.
An embodiment of the present application further provides a device for implementing subtitle conversion, including a processor, where the processor executes a computer program to implement the above method for implementing subtitle conversion.
The embodiment of the present application also provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is caused to execute the above method for implementing subtitle conversion.
According to the embodiment of the application, the caption control domain data is determined through the acquired caption data, and the caption attribute information is determined according to the caption control domain data. The subtitle attribute information comprises subtitle start time, subtitle end time, color mapping information, alpha channel information, subtitle bitmap position information and compressed bitmap data offset addresses. According to the offset address of the compressed bitmap data, the compressed bitmap data can be determined from the subtitle data and decoded to obtain the original subtitle bitmap data. And converting the original caption bitmap data by utilizing the color mapping information and the alpha channel information to obtain the caption bitmap data suitable for the vehicle-mounted multimedia system. And finally, constructing vehicle-mounted multimedia system subtitle data by using the subtitle starting time, the subtitle ending time, the subtitle bitmap position information and the subtitle bitmap data, and displaying subtitles according to the vehicle-mounted multimedia system subtitle data. Therefore, in the embodiment of the application, the caption control domain data is determined from the acquired caption data, the corresponding caption attribute information is determined according to the caption control domain data, the original caption bitmap data is further determined, the caption bitmap data can be obtained through conversion, and the caption data of the vehicle-mounted multimedia system is constructed. The vehicle-mounted multimedia system subtitle data obtained through the subtitle data can be suitable for a vehicle-mounted multimedia system, the conversion between the subtitle data and the vehicle-mounted multimedia system subtitle data can be achieved, the design cost of the vehicle-mounted multimedia system is reduced, and the overall performance of the vehicle-mounted multimedia system is improved.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the system or the device disclosed by the embodiment, the description is simple because the system or the device corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for realizing subtitle conversion is applied to a vehicle-mounted multimedia system and comprises the following steps:
acquiring subtitle data;
determining caption control domain data from the caption data;
determining subtitle attribute information according to the subtitle control domain data, wherein the subtitle attribute information comprises subtitle starting time, subtitle ending time, color mapping information, alpha channel information, subtitle bitmap position information and compressed bitmap data offset addresses;
determining compressed bitmap data from the subtitle data according to the offset address of the compressed bitmap data, and decoding original subtitle bitmap data from the compressed bitmap data;
converting the original caption bitmap data into caption bitmap data suitable for the vehicle-mounted multimedia system by utilizing the color mapping information and the alpha channel information;
constructing the caption starting time, the caption ending time, the caption bitmap position information and the caption bitmap data suitable for the vehicle-mounted multimedia system into vehicle-mounted multimedia system caption data, and displaying captions according to the vehicle-mounted multimedia system caption data;
the converting the original caption bitmap data into caption bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information comprises:
creating palette information suitable for the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information and the custom palette information;
searching the palette information suitable for the vehicle-mounted multimedia system, and converting color values of all pixel points corresponding to the original subtitle bitmap data into target color values of all pixel points;
generating subtitle bitmap data suitable for the vehicle-mounted multimedia system according to the target color values of the pixel points;
the creating palette information suitable for the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information and the custom palette information comprises:
respectively converting the ith element value in the color mapping information into a first color value by searching custom palette information, wherein i is an integer from 0 to n, and n is the number of the element values in the color mapping information minus 1;
taking the ith first color value as a low bit, and taking the ith element value in the alpha channel information as a high bit to form an ith second color value;
creating palette information applicable to the in-vehicle multimedia system including n second color values.
2. The method of claim 1, wherein determining the subtitle control field data from the subtitle data comprises:
determining an offset address of the subtitle control field from the subtitle data;
and determining the subtitle control field data from the subtitle data according to the offset address of the subtitle control field.
3. The method of claim 1, wherein determining subtitle attribute information according to the subtitle control field data comprises:
determining the subtitle start time according to the data of the 0th byte and the 1st byte in the subtitle control field data;
determining an offset address of an end sequence according to the data of the 2nd byte and the 3rd byte in the subtitle control field data;
determining end sequence data from the subtitle control field data according to the offset address of the end sequence;
determining the subtitle end time according to the data of the 0th byte and the 1st byte in the end sequence data;
determining color mapping information sequence data, alpha channel information sequence data, subtitle bitmap position sequence data and compressed bitmap data offset address sequence data from the subtitle control field data according to each sequence type identifier in the subtitle control field data;
determining the color mapping information from the color mapping information sequence data, determining the alpha channel information from the alpha channel information sequence data, determining the subtitle bitmap position information from the subtitle bitmap position sequence data, and determining the compressed bitmap data offset address from the compressed bitmap data offset address sequence data.
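As a non-normative sketch, the byte-level parsing in claim 3 can be illustrated in Python. Big-endian 16-bit fields are an assumption; the claim only names byte positions, not the byte order or time units.

```python
def parse_control_field(ctrl):
    """Parse start/end times from subtitle control field bytes (assumed big-endian)."""
    start_time = (ctrl[0] << 8) | ctrl[1]      # bytes 0-1: subtitle start time
    end_seq_offset = (ctrl[2] << 8) | ctrl[3]  # bytes 2-3: offset of end sequence
    end_seq = ctrl[end_seq_offset:]            # end sequence within the control field
    end_time = (end_seq[0] << 8) | end_seq[1]  # bytes 0-1 of the end sequence
    return start_time, end_time

# Hypothetical control field: start 0x002A, end sequence at offset 6, end 0x0190
ctrl = bytes([0x00, 0x2A, 0x00, 0x06, 0xAA, 0xBB, 0x01, 0x90])
```

The sequence-type-identifier dispatch of the last two steps would follow the same pattern: read an identifier byte, then slice out the sequence data it introduces.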
4. The method of claim 1, wherein the compressed bitmap data offset address comprises a compressed bitmap even line data offset address and a compressed bitmap odd line data offset address, and wherein determining compressed bitmap data from the subtitle data according to the compressed bitmap data offset address and decoding original subtitle bitmap data from the compressed bitmap data comprises:
determining compressed bitmap even line data from the subtitle data according to the compressed bitmap even line data offset address;
decompressing the compressed bitmap even line data by using a run-length encoding (RLE) decompression algorithm to obtain original subtitle bitmap even line data;
determining compressed bitmap odd line data from the subtitle data according to the compressed bitmap odd line data offset address;
decompressing the compressed bitmap odd line data by using the RLE decompression algorithm to obtain original subtitle bitmap odd line data;
and interleaving the original subtitle bitmap even line data and the original subtitle bitmap odd line data according to the subtitle bitmap position information to generate the original subtitle bitmap data.
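Purely to illustrate claim 4's even/odd-line decoding, here is a Python sketch. The simple (count, value) run-length scheme is an assumption; the claim does not specify the actual RLE bit layout.

```python
def rle_decode(runs):
    """Expand (count, value) pairs into a flat list of pixel indices."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

def interleave(even_lines, odd_lines):
    """Cross-store even and odd lines into one top-to-bottom bitmap."""
    bitmap = []
    for e, o in zip(even_lines, odd_lines):
        bitmap.append(e)  # lines 0, 2, 4, ...
        bitmap.append(o)  # lines 1, 3, 5, ...
    return bitmap

# Two even and two odd lines of a hypothetical 4x4 two-color bitmap
even = [rle_decode([(4, 0)]), rle_decode([(2, 1), (2, 0)])]
odd = [rle_decode([(1, 0), (3, 1)]), rle_decode([(4, 1)])]
img = interleave(even, odd)  # 4 lines of 4 pixels each
```

A bitmap of odd height would have one extra even line; the position information from the claim would tell the decoder the true line count, which this sketch leaves out.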
5. The method of claim 1, wherein before converting the original subtitle bitmap data into subtitle bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information, the method further comprises:
searching for non-transparent color values among the color values of the pixel points corresponding to the original subtitle bitmap data, to determine the pixel points with non-transparent color values;
determining an effective subtitle region position according to the first row, the last row, the first column and the last column of the pixel points with non-transparent color values;
and updating the subtitle bitmap position information according to the effective subtitle region position, and taking the original subtitle bitmap data within the effective subtitle region position as the original subtitle bitmap data to be converted.
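To illustrate the effective-region step of claim 5, a Python sketch that crops the bitmap to the bounding box of non-transparent pixels. Treating index 0 as the transparent color is an assumption; the claim does not say which color value is transparent.

```python
def crop_effective_region(bitmap, transparent=0):
    """Return (top, bottom, left, right) bounds and the cropped bitmap."""
    rows = [y for y, line in enumerate(bitmap)
            if any(p != transparent for p in line)]
    cols = [x for x in range(len(bitmap[0]))
            if any(line[x] != transparent for line in bitmap)]
    top, bottom = rows[0], rows[-1]    # first and last non-transparent row
    left, right = cols[0], cols[-1]    # first and last non-transparent column
    cropped = [line[left:right + 1] for line in bitmap[top:bottom + 1]]
    return (top, bottom, left, right), cropped

# 4x4 bitmap whose visible subtitle content occupies a 2x2 region
bmp = [[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0]]
bounds, cropped = crop_effective_region(bmp)
```

The returned bounds would drive the update of the subtitle bitmap position information, and the cropped array is what proceeds to palette conversion.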
6. An apparatus for implementing subtitle conversion, the apparatus being applied to a vehicle-mounted multimedia system, the apparatus comprising:
an acquisition unit configured to acquire subtitle data;
a first determining unit configured to determine subtitle control field data from the subtitle data;
a second determining unit configured to determine subtitle attribute information according to the subtitle control field data, wherein the subtitle attribute information comprises a subtitle start time, a subtitle end time, color mapping information, alpha channel information, subtitle bitmap position information and a compressed bitmap data offset address;
a decoding unit configured to determine compressed bitmap data from the subtitle data according to the compressed bitmap data offset address, and to decode original subtitle bitmap data from the compressed bitmap data;
a conversion unit configured to convert the original subtitle bitmap data into subtitle bitmap data suitable for the vehicle-mounted multimedia system by using the color mapping information and the alpha channel information;
a construction unit configured to construct the subtitle start time, the subtitle end time, the subtitle bitmap position information and the subtitle bitmap data suitable for the vehicle-mounted multimedia system into vehicle-mounted multimedia system subtitle data, and to display subtitles according to the vehicle-mounted multimedia system subtitle data;
wherein the conversion unit comprises:
a creating subunit configured to create palette information suitable for the vehicle-mounted multimedia system by using the color mapping information, the alpha channel information and custom palette information;
a conversion subunit configured to look up the palette information suitable for the vehicle-mounted multimedia system and convert the color value of each pixel point corresponding to the original subtitle bitmap data into a target color value of each pixel point;
a generating subunit configured to generate subtitle bitmap data suitable for the vehicle-mounted multimedia system according to the target color values of the pixel points;
wherein the creating subunit comprises:
a first color value conversion subunit configured to convert the ith element value in the color mapping information into an ith first color value by looking up the custom palette information, wherein i is an integer from 0 to n, and n is the number of element values in the color mapping information minus 1;
a second color value forming subunit configured to form an ith second color value by taking the ith first color value as the low bits and the ith element value in the alpha channel information as the high bits;
a palette information creating subunit configured to create the palette information suitable for the vehicle-mounted multimedia system comprising the n+1 second color values.
7. An apparatus for implementing subtitle conversion, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for implementing subtitle conversion according to any one of claims 1-5 when executing the computer program.
8. A computer-readable storage medium having instructions stored therein, wherein the instructions, when executed on a terminal device, cause the terminal device to execute the method for implementing subtitle conversion according to any one of claims 1-5.
CN202011052845.7A 2020-09-29 2020-09-29 Method, device and equipment for realizing subtitle conversion Active CN112235521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011052845.7A CN112235521B (en) 2020-09-29 2020-09-29 Method, device and equipment for realizing subtitle conversion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011052845.7A CN112235521B (en) 2020-09-29 2020-09-29 Method, device and equipment for realizing subtitle conversion

Publications (2)

Publication Number Publication Date
CN112235521A CN112235521A (en) 2021-01-15
CN112235521B true CN112235521B (en) 2023-03-10

Family

ID=74120823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011052845.7A Active CN112235521B (en) 2020-09-29 2020-09-29 Method, device and equipment for realizing subtitle conversion

Country Status (1)

Country Link
CN (1) CN112235521B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117241068B (en) * 2023-11-15 2024-01-19 北京医百科技有限公司 Video subtitle generating method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1561094A (en) * 2004-02-26 2005-01-05 晶晨半导体(上海)有限公司 Method of overlaying caption in digital compression video code stream
KR100559964B1 (en) * 2004-08-26 2006-03-15 디브이에스 코리아 주식회사 DVD player capable of converting audio data in DVD disk to MP3 format file
JP4634410B2 (en) * 2007-03-16 2011-02-16 日本電信電話株式会社 Video playback device, video playback method, and caption superimposing device
JP4856041B2 (en) * 2007-10-10 2012-01-18 パナソニック株式会社 Video / audio recording and playback device
CN103220474A (en) * 2013-03-22 2013-07-24 深圳市九洲电器有限公司 Subtitle displaying method and system
CN103414936B (en) * 2013-08-02 2017-07-25 深圳Tcl新技术有限公司 Caption presentation method and device
CN107736032A (en) * 2015-06-30 2018-02-23 索尼公司 Reception device, method of reseptance, transmitting device and transmission method
US10757361B2 (en) * 2016-10-11 2020-08-25 Sony Corporation Transmission apparatus, transmission method, reception apparatus, and reception method
CN107172351B (en) * 2017-06-16 2020-04-03 福建星网智慧科技股份有限公司 Method for rapidly superposing subtitles in real time by using camera
CN109121015A (en) * 2018-10-17 2019-01-01 武汉斗鱼网络科技有限公司 A kind of method and relevant apparatus for converting barrage format

Also Published As

Publication number Publication date
CN112235521A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
US6452641B1 (en) Method and apparatus for providing and on-screen display with variable resolution capability
EP1628477A1 (en) Reproduction device and program
US20080273804A1 (en) Image Transformation
US6819331B2 (en) Method and apparatus for updating a color look-up table
JPH06303423A (en) Coupling system for composite mode-composite signal source picture signal
US20130038613A1 (en) Method and apparatus for generating and playing animated message
JPH10187126A (en) On-screen display coprocessor
US7421130B2 (en) Method and apparatus for storing image data using an MCU buffer
CN112235521B (en) Method, device and equipment for realizing subtitle conversion
JP2647033B2 (en) Lookup table creation method and lookup table creation device
CN110187858B (en) Image display method and system
US6459433B1 (en) Method and apparatus for compression of a two dimensional video object
US20050212969A1 (en) Apparatus and method of image display with real-time compression
JP6821269B2 (en) Image processing device and image processing method
US8238674B2 (en) Method for encoding and decoding images
JP3014935B2 (en) Interlaced scanning on-screen display circuit
KR100417278B1 (en) Apparatus and method for generating on-screen-display messages using one-bit pixels
US8064709B2 (en) Method and apparatus for buffering output pixel data of a joint photographic experts group image
US7333668B2 (en) Image data compressing apparatus and image data extracting apparatus
US20070165047A1 (en) Graphic rendering system capable of performing real-time compression and decompression
CN113014834B (en) Picture subtitle display method and device and related device
KR101158586B1 (en) Method for generating and playing animations by using a run length coding in a mobile communication terminal
KR20000049172A (en) Apparatus and method for generating on-screen-display messages using true color mode
JP2010245769A (en) Image processing apparatus and image processing method
CN100521753C (en) Screen display device with color circulation function and its display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant