CN106604107A - Caption processing method and apparatus - Google Patents


Info

Publication number
CN106604107A
CN106604107A (application CN201611247681.7A)
Authority
CN
China
Prior art keywords
captions
color
distortion value
given
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611247681.7A
Other languages
Chinese (zh)
Other versions
CN106604107B (en)
Inventor
宁超
曹谦
安慎华
苏文华
姚键
杨伟东
潘柏宇
王冀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youku Culture Technology Beijing Co ltd
Original Assignee
Heyi Intelligent Technology (shenzhen) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heyi Intelligent Technology (shenzhen) Co Ltd
Priority to CN201611247681.7A
Publication of CN106604107A
Application granted
Publication of CN106604107B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H04N21/4854 End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8126 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention relates to a caption processing method and apparatus. The method includes the following steps: determining a caption-associated pixel set, which includes pixels located in the region where the caption is displayed; determining a color difference value between a designated color and the caption-associated pixel set, where the color difference value characterizes the difference between the designated color and the colors of the pixels in the caption-associated pixel set; and determining, based on the color difference value, the color to be used for displaying the caption. By designating a color and selecting the caption color according to the color difference value between the designated color and the caption-associated pixel set, the method and apparatus ensure that the caption is displayed clearly while keeping the caption color as consistent as possible, prevent the discomfort caused by frequent caption color changes, allow users to customize the caption color, and improve the user experience.

Description

Caption processing method and apparatus
Technical field
The present disclosure relates to the field of display technology, and in particular to a caption processing method and apparatus.
Background
Video captions convey information to users more accurately; for videos in different dialects or foreign-language versions, captions are even more important. Many players currently provide a single default caption color. When watching a video, the caption color may sometimes be similar to the colors of the picture content, so that the user cannot read the captions.
In the related art, the colors of all pixels in the video display area are analyzed statistically to obtain a representative color of the area, and the inverse of this representative color in a certain color space (such as the HSV space) is used as the caption display color. This can achieve a good contrast effect, but it may cause the caption color to change frequently and makes the caption color uncontrollable, which degrades the user experience.
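For concreteness, the related-art approach described above could be sketched roughly as follows. This is only an illustration, not code from the patent: it assumes the representative color is the mean color of the display area and that the "inverse" is a 180-degree hue rotation in HSV space, which is one possible reading of the description.

```python
import colorsys
from typing import Sequence, Tuple

RGB = Tuple[int, int, int]

def related_art_caption_color(display_pixels: Sequence[RGB]) -> RGB:
    """Take the mean color of the display area and invert it in HSV (rotate hue by 180 degrees)."""
    n = len(display_pixels)
    r = sum(p[0] for p in display_pixels) / n
    g = sum(p[1] for p in display_pixels) / n
    b = sum(p[2] for p in display_pixels) / n
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    ri, gi, bi = colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)
    return int(ri * 255), int(gi * 255), int(bi * 255)
```

Because the representative color changes with the picture content, the color returned by such an approach can change from frame to frame, which is exactly the drawback the present disclosure aims to avoid.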
Summary of the Invention
In view of this, the present disclosure provides a caption processing method, including: determining a caption-associated pixel set, the caption-associated pixel set including pixels located in the region where the caption is displayed; determining a color difference value between a given color and the caption-associated pixel set, where the color difference value characterizes the difference between the given color and the colors of the pixels in the caption-associated pixel set; and determining, according to the color difference value, the color used for displaying the caption.
According to one aspect of the present disclosure, a caption processing apparatus is provided, including: a pixel set determining module configured to determine a caption-associated pixel set, the caption-associated pixel set including pixels located in the region where the caption is displayed; a color difference value determining module configured to determine a color difference value between a given color and the caption-associated pixel set, where the color difference value characterizes the difference between the given color and the colors of the pixels in the caption-associated pixel set; and a caption color determining module configured to determine, according to the color difference value, the color used for displaying the caption.
According to another aspect of the present disclosure, a caption processing apparatus is provided, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: determine a caption-associated pixel set, the caption-associated pixel set including pixels located in the region where the caption is displayed; determine a color difference value between a given color and the caption-associated pixel set, where the color difference value characterizes the difference between the given color and the colors of the pixels in the caption-associated pixel set; and determine, according to the color difference value, the color used for displaying the caption.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided. When the instructions in the storage medium are executed by a processor of a terminal and/or a server, the terminal and/or the server is enabled to perform a caption processing method, the method including: determining a caption-associated pixel set, the caption-associated pixel set including pixels located in the region where the caption is displayed; determining a color difference value between a given color and the caption-associated pixel set, where the color difference value characterizes the difference between the given color and the colors of the pixels in the caption-associated pixel set; and determining, according to the color difference value, the color used for displaying the caption.
By setting one or more given colors and selecting the caption display color according to the color difference value between a given color and the caption-associated pixel set, the caption processing method and apparatus of the above embodiments of the present disclosure can keep the caption color as consistent as possible while ensuring that the caption is displayed clearly, avoid the discomfort caused to users by frequently changing caption colors, make user-customized caption colors possible, and improve the user experience.
Further features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the present disclosure together with the specification, and serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a caption processing method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a caption display area according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of the selection of preset representative regions according to an embodiment of the present disclosure.
Fig. 4 is a flowchart of a caption processing method according to an embodiment of the present disclosure.
Fig. 5 is a flowchart of a caption processing method according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a caption processing method according to an embodiment of the present disclosure.
Fig. 7 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 9 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 10 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 11 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 12 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 13 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 14 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 15 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 16 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description in order to better explain the present disclosure. Those skilled in the art will understand that the present disclosure may be practiced without some of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Embodiment 1
Fig. 1 is a flowchart of a caption processing method according to an embodiment of the present disclosure. The method can be applied to a terminal, for example a mobile phone, a computer, a tablet computer, or a smart television, and can also be applied to a server. Fig. 2 is a schematic diagram of a caption display area according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
Step S11: determining a caption-associated pixel set, the caption-associated pixel set including pixels located in the region where the caption is displayed.
The caption region may refer to the region of the display picture in which the caption is located, in other words the local region covered by the caption. The region may be located, for example, at the bottom, top, or side of the picture, and its extent may be a rectangle that just covers the caption or a region of arbitrary shape; the present disclosure is not limited in this respect.
The caption region in the display picture can be determined using existing techniques. For example, the display position of the caption can be obtained from the video data, and the caption region can be delimited according to a predetermined rule (for example, a rectangle that just covers the caption); the present disclosure is not limited in this respect.
For example, as shown in Fig. 2, the caption is located at the bottom of the display picture. The colors of the pixels around the caption in the display area affect the display effect of the caption, and the pixels in the rectangular area covering the caption shown in Fig. 2 (the caption region) can be determined as the caption-associated pixel set.
Step S12: determining a color difference value between a given color and the caption-associated pixel set, where the color difference value characterizes the difference between the given color and the colors of the pixels in the caption-associated pixel set.
In a possible implementation, the given color may be preset by the system or set by the system according to a user instruction, and multiple given colors may be included. Those skilled in the art will understand that the color difference value can be determined using related algorithms in the prior art. For example, the caption-associated pixel set can be analyzed statistically to obtain a representative color of the caption-associated pixel set, and the difference between the representative color and the given color can be determined as the color difference value; other methods may also be used, and no limitation is imposed here.
In a possible implementation, the color difference value can be calculated using the formula
Diff = \frac{1}{n} \sum_{i=1}^{n} \left[ (R_i - R_0)^2 + (G_i - G_0)^2 + (B_i - B_0)^2 \right]
or
Diff = \max_{1 \le i \le n} \left[ (R_i - R_0)^2 + (G_i - G_0)^2 + (B_i - B_0)^2 \right],
where R_0, G_0, and B_0 are the pixel values (RGB color space values) of the given color, R_i, G_i, and B_i are the pixel values (RGB color space values) of the i-th pixel in the caption-associated pixel set, n is the number of pixels in the caption-associated pixel set, Diff is the color difference value, and max denotes taking the maximum over the pixel set.
Here, Diff characterizes the difference between the given color and the colors of the pixels in the caption-associated pixel set (in other words, the display area formed by these pixels, i.e., the caption region). A larger Diff indicates a larger color difference, so the user can see the caption more easily; a smaller Diff indicates a smaller color difference, so the caption is harder to see. With the above formulas, the difference between each given color and the colors of the pixels in the caption-associated pixel set can be obtained accurately.
It should be noted that although the calculation of the color difference value is described above with these formulas as an example, those skilled in the art will understand that the present disclosure is not limited thereto. In fact, the user can flexibly set the way the color difference value is calculated according to personal preference and/or the actual application scenario, as long as it measures the difference between a given color and the colors of the pixels in the caption-associated pixel set (in other words, the color difference from the caption region).
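As an illustration of the two formulas above, the following Python sketch computes the averaged and the maximum variants of the color difference value for a given RGB color over a caption-associated pixel set. The function and variable names are illustrative, not taken from the patent.

```python
from typing import Iterable, Tuple

RGB = Tuple[int, int, int]

def color_difference(given: RGB, pixels: Iterable[RGB], mode: str = "mean") -> float:
    """Color difference value Diff between a given color and a set of pixels.

    mode="mean" averages the squared RGB distance over all pixels;
    mode="max" takes the maximum squared RGB distance over the pixels.
    """
    r0, g0, b0 = given
    dists = [(r - r0) ** 2 + (g - g0) ** 2 + (b - b0) ** 2 for r, g, b in pixels]
    if not dists:
        raise ValueError("caption-associated pixel set is empty")
    if mode == "mean":
        return sum(dists) / len(dists)
    if mode == "max":
        return max(dists)
    raise ValueError(f"unknown mode: {mode}")
```

For example, color_difference((255, 255, 255), pixels) is large when the caption region is dark and small when it is bright, matching the interpretation of Diff given above.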
Step S13: determining, according to the color difference value, the color used for displaying the caption.
The color difference value reflects the difference between the given color and the color of the caption region, so the color used for displaying the caption can be determined accordingly, such that the caption color is distinguished from the color of the caption region as much as possible. There are many specific ways of determining the color used for displaying the caption; the present disclosure is not limited in this respect, and those skilled in the art can choose as needed.
For example, the caption can be displayed with the given color corresponding to the largest color difference value, which allows the user to see the caption more clearly; or the given color with a middle-ranked color difference value can be selected to display the caption, which avoids the eye discomfort caused by an excessive color difference while keeping the caption legible, and so on.
In this way, by setting given colors and selecting the caption display color according to the color difference value between a given color and the caption-associated pixel set, the caption processing method of the above embodiment of the present disclosure can keep the caption color as consistent as possible while ensuring that the caption is displayed clearly, avoid the discomfort caused to users by frequently changing caption colors, make user-customized caption colors possible, and improve the user experience.
In a possible implementation, determining the caption-associated pixel set may include: taking all the pixels in the caption region as the caption-associated pixel set.
For example, as shown in Fig. 2, all the pixels in the box in which the caption is located can be taken as the caption-associated pixel set.
Alternatively, in another possible implementation, determining the caption-associated pixel set may include: taking some of the pixels in the caption region as the caption-associated pixel set.
For example, the pixels in the caption region other than the pixels occupied by the caption itself can be taken as the caption-associated pixel set. The selected region necessarily contains the caption, and the caption covers part of the picture content; since there is little practical value in including these covered pixels in the calculation, the pixels covered by the caption can be excluded. Those skilled in the art will understand that, after the image or video is decoded, the pixels covered by the caption can be excluded by related algorithms in the prior art, which are not limited here.
In a possible implementation, in order to avoid an excessive amount of computation, the pixels in the caption region can be sampled, and the sampled pixels are taken as the caption-associated pixel set. The sampling can be performed, for example, by taking every other row, taking every other column, or taking every other pixel.
In a possible implementation, in order to avoid an excessive amount of computation, the pixels included in preset representative regions within the caption region can also be taken as the caption-associated pixel set. For example, as shown in Fig. 3, three rectangular representative regions can be selected in the caption region. It should be noted that although the selection of representative regions is described with three rectangles as an example, those skilled in the art will understand that the present disclosure is not limited thereto: the regions may also have other shapes, for example other polygons, circles, or irregular shapes; the shapes of the representative regions may be the same or different; and the positions of the representative regions may be evenly or unevenly distributed. In fact, the user can flexibly set the shapes, positions, and number of the representative regions according to personal preference and/or the actual application scenario, as long as they represent the color properties of the caption region.
The above several ways of determining the caption-associated pixel set can characterize the color difference between the caption and the surrounding display area while taking the amount of computation into account, thereby increasing the computation speed as much as possible while ensuring that the caption is displayed clearly, and improving the user experience.
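As a rough illustration of the sampling and representative-region variants described above (not the patent's own code; the region coordinates, sampling stride, and function names are assumptions), the caption-associated pixel set might be built like this:

```python
from typing import List, Sequence, Tuple

RGB = Tuple[int, int, int]
Frame = Sequence[Sequence[RGB]]   # frame[y][x] -> (R, G, B)
Rect = Tuple[int, int, int, int]  # (left, top, width, height)

def sampled_caption_pixels(frame: Frame, region: Rect, step: int = 2) -> List[RGB]:
    """Take every step-th pixel in both directions inside the caption region."""
    left, top, width, height = region
    return [
        frame[y][x]
        for y in range(top, top + height, step)
        for x in range(left, left + width, step)
    ]

def representative_region_pixels(frame: Frame, regions: Sequence[Rect]) -> List[RGB]:
    """Collect all pixels from several small preset representative regions."""
    pixels: List[RGB] = []
    for left, top, width, height in regions:
        for y in range(top, top + height):
            pixels.extend(frame[y][left:left + width])
    return pixels
```

Either list can then be passed to the color difference calculation sketched earlier in place of the full caption region.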
Fig. 4 is a flowchart of a caption processing method according to an embodiment of the present disclosure. The steps in Fig. 4 with the same reference numerals as in Fig. 1 have the same functions, and a detailed description of these steps is omitted for brevity. As shown in Fig. 4, step S13 includes:
Step S131: when the color difference value between one or more given colors and the caption-associated pixel set is greater than a preset threshold, determining, from the one or more given colors, the given color used for displaying the caption.
In a possible implementation, a threshold Diff0 of the color difference value can be preset (different preset thresholds can be set for different ways of calculating the color difference value). When the color difference value between a given color and the caption-associated pixel set is greater than the preset threshold Diff0, the caption can be displayed with that given color. When the color difference values between multiple given colors and the caption-associated pixel set are all greater than the preset threshold Diff0, the caption can be displayed with the given color having the largest color difference value, as in the foregoing implementation, or the user can be notified to select the given color used for displaying the caption.
In a possible implementation, when the color difference values of multiple given colors are all less than the preset threshold Diff0, the caption can be displayed with the given color having the largest color difference value, as described in the foregoing implementation, or the original caption display color can be kept unchanged.
In this way, by setting a threshold for the color difference value, the caption processing method of the above embodiment of the present disclosure can keep the caption color as consistent as possible while ensuring that the caption is displayed clearly and avoid the discomfort caused to users by frequently changing captions; meanwhile, the user can select the display color according to his or her own preference, which improves the user experience.
Fig. 5 is a flowchart of a caption processing method according to an embodiment of the present disclosure. The steps in Fig. 5 with the same reference numerals as in Fig. 1 have the same functions, and a detailed description of these steps is omitted for brevity. Fig. 6 is a schematic diagram of a caption processing method according to an embodiment of the present disclosure. As shown in Fig. 5 and Fig. 6, step S13 includes:
Step S132: for multiple given colors, calculating, in descending order of the priorities of the given colors, the color difference value between a given color and the caption-associated pixel set, and judging whether the color difference value is greater than a preset threshold.
Step S133: when the color difference value is greater than the preset threshold, displaying the caption with the given color corresponding to that color difference value.
In a possible implementation, multiple given colors can be set for the caption according to user preference, and a priority can be set for each given color. In this case, the color difference values can be calculated in order of priority and compared with the preset threshold Diff0. If a color difference value is less than the preset threshold Diff0, the next color difference value is calculated and compared with the preset threshold Diff0, until a color difference value is greater than the preset threshold Diff0, and the caption is displayed with the given color corresponding to that color difference value.
If the color difference values of all the given colors are less than the preset threshold Diff0, the color used for displaying the caption can be determined in various ways. For example, in a possible implementation, as shown in Fig. 5, step S13 may further include step S134: when the color difference values between all the given colors and the caption-associated pixel set are less than the preset threshold, notifying the user to select the given color used for displaying the caption. In a possible implementation, as shown in Fig. 5, step S13 may further include step S135: when the color difference values between all the given colors and the caption-associated pixel set are less than the preset threshold, displaying the caption with the given color having the highest priority. In a possible implementation, as shown in Fig. 5, step S13 may further include step S136: when the color difference values between all the given colors and the caption-associated pixel set are less than the preset threshold, displaying the caption with the given color having the largest color difference value.
For example, as shown in Fig. 6, a 1st color, a 2nd color, and a 3rd color are set in descending order of priority, and the preset threshold is Diff0. First, the first color difference value Diff1 between the 1st color and the caption-associated pixel set is calculated and compared with the preset threshold Diff0. If Diff1 is greater than Diff0, the caption is displayed with the 1st color. If Diff1 is less than Diff0, the 1st color is close to the color of the caption region and hard to distinguish, so the second color difference value Diff2 between the 2nd color and the caption-associated pixel set is calculated and compared with Diff0. If Diff2 is greater than Diff0, the caption is displayed with the 2nd color. If Diff2 is less than Diff0, the 2nd color is also close to the color of the caption region and hard to distinguish; at this point the caption can be displayed directly with the 3rd color, or the third color difference value Diff3 between the 3rd color and the caption-associated pixel set can be calculated and compared with Diff0. If Diff3 is greater than Diff0, the caption is displayed with the 3rd color. If Diff3 is less than Diff0, the 3rd color is also close to the color of the caption region and hard to distinguish; in this case the user can be notified to select the given color used for displaying the caption, or the caption can be displayed with the given color having the highest priority (i.e., the 1st color), or the caption can be displayed with the given color having the largest color difference value among the 1st color, the 2nd color, and the 3rd color.
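The priority-based flow of Fig. 5 and Fig. 6 can be sketched as the loop below (again reusing the color_difference helper from the earlier sketch). The early return means lower-priority colors are never evaluated once a color passes the threshold, which is where the reduction in computation comes from; the fallback used when no color passes is the step S136 option, chosen here only for illustration.

```python
from typing import Sequence, Tuple

RGB = Tuple[int, int, int]

def pick_caption_color_by_priority(
    prioritized_colors: Sequence[RGB],  # ordered from highest to lowest priority
    caption_pixels: Sequence[RGB],
    diff0: float,
) -> RGB:
    """Walk the given colors in priority order; use the first one whose Diff exceeds diff0."""
    best_color, best_diff = prioritized_colors[0], float("-inf")
    for color in prioritized_colors:
        diff = color_difference(color, caption_pixels)
        if diff > diff0:
            return color  # step S133: threshold exceeded, stop here
        if diff > best_diff:  # remember the largest Diff seen so far as a fallback
            best_color, best_diff = color, diff
    # No color exceeded the threshold: fall back to the largest Diff (step S136).
    return best_color
```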
It should be noted that although the caption processing method is described above with three colors as an example, those skilled in the art will understand that the present disclosure is not limited thereto. In fact, the user can flexibly set the number and kinds of given colors according to personal preference and/or the actual application scenario.
In this way, by setting priorities for the given colors, calculating the color difference values in descending order of priority, and selecting the caption color with a preset threshold, the caption processing method of the above embodiment of the present disclosure can reduce the amount of computation and increase the processing speed, keep the caption color as consistent as possible while ensuring that the caption is displayed clearly, and avoid the discomfort caused to users by frequently changing captions. Meanwhile, the user can select the display color according to his or her own preference, and the caption is displayed, as far as possible, with the color the user prefers and has given a higher priority, which improves the user experience.
Embodiment 2
Fig. 7 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The apparatus can be used in a terminal, for example a mobile phone, a computer, or a tablet computer, and can also be used in a server. As shown in Fig. 7, the apparatus mainly includes: a pixel set determining module 81, a color difference value determining module 82, and a caption color determining module 83.
The pixel set determining module 81 is configured to determine a caption-associated pixel set, the caption-associated pixel set including pixels located in the region where the caption is displayed.
The color difference value determining module 82 is configured to determine a color difference value between a given color and the caption-associated pixel set, where the color difference value characterizes the difference between the given color and the colors of the pixels in the caption-associated pixel set.
The caption color determining module 83 is configured to determine, according to the color difference value, the color used for displaying the caption.
In this way, by setting given colors and selecting the caption display color according to the color difference value between a given color and the caption-associated pixel set, the caption processing apparatus of the above embodiment of the present disclosure can keep the caption color as consistent as possible while ensuring that the caption is displayed clearly, avoid the discomfort caused to users by frequently changing captions, and improve the user experience.
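A minimal object-oriented sketch of how the three modules of Fig. 7 might fit together is shown below; the class and method names are illustrative assumptions, and the helpers come from the earlier sketches rather than from the patent.

```python
from typing import List, Sequence

class PixelSetDeterminingModule:
    def determine(self, frame: "Frame", region: "Rect") -> List["RGB"]:
        # For example, sample the caption region as in the earlier sketch.
        return sampled_caption_pixels(frame, region)

class ColorDifferenceValueDeterminingModule:
    def determine(self, given: "RGB", pixels: Sequence["RGB"]) -> float:
        return color_difference(given, pixels)

class CaptionColorDeterminingModule:
    def __init__(self, given_colors: Sequence["RGB"], diff0: float):
        self.given_colors = given_colors
        self.diff0 = diff0

    def determine(self, pixels: Sequence["RGB"]) -> "RGB":
        # For example, the priority-based selection sketched earlier.
        return pick_caption_color_by_priority(self.given_colors, pixels, self.diff0)
```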
Fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The components in Fig. 8 with the same reference numerals as in Fig. 7 have the same functions, and a detailed description of these components is omitted for brevity.
As shown in Fig. 8, the pixel set determining module 81 includes any one or more of a first determining unit 81a and a second determining unit 81b.
The first determining unit 81a is configured to take all the pixels in the caption region as the caption-associated pixel set.
The second determining unit 81b is configured to take some of the pixels in the caption region as the caption-associated pixel set.
Fig. 9 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The components in Fig. 9 with the same reference numerals as in Fig. 7 have the same functions, and a detailed description of these components is omitted for brevity.
As shown in Fig. 9, the caption color determining module 83 includes a first display unit 831.
The first display unit 831 is configured to display the caption with the given color having the largest color difference value.
Fig. 10 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The components in Fig. 10 with the same reference numerals as in Fig. 7 have the same functions, and a detailed description of these components is omitted for brevity.
As shown in Fig. 10, the caption color determining module 83 includes a third determining unit 832.
The third determining unit 832 is configured to, when the color difference value between one or more given colors and the caption-associated pixel set is greater than a preset threshold, determine, from the one or more given colors, the given color used for displaying the caption.
Fig. 11 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The components in Fig. 11 with the same reference numerals as in Fig. 7 have the same functions, and a detailed description of these components is omitted for brevity.
As shown in Fig. 11, the caption color determining module 83 includes a first judging unit 833 and a second display unit 834.
The first judging unit 833 is configured to, for multiple given colors, calculate, in descending order of the priorities of the given colors, the color difference value between a given color and the caption-associated pixel set, and judge whether the color difference value is greater than a preset threshold.
The second display unit 834 is configured to, when the color difference value is greater than the preset threshold, display the caption with the given color corresponding to that color difference value.
Fig. 12 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The components in Fig. 12 with the same reference numerals as in Fig. 11 have the same functions, and a detailed description of these components is omitted for brevity.
As shown in Fig. 12, the caption color determining module 83 may further include a second notification unit 835.
The second notification unit 835 is configured to, when the color difference values between all the given colors and the caption-associated pixel set are less than the preset threshold, notify the user to select the given color used for displaying the caption.
The caption color determining module may further include a third display unit 836.
The third display unit 836 is configured to, when the color difference values between all the given colors and the caption-associated pixel set are less than the preset threshold, display the caption with the given color having the largest color difference value, or display the caption with the given color having the highest priority.
Fig. 13 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The components in Fig. 13 with the same reference numerals as in Fig. 7 have the same functions, and a detailed description of these components is omitted for brevity.
As shown in Fig. 13, the color difference value determining module 82 includes a calculating unit 821.
The calculating unit 821 is configured to calculate the color difference value using the formula
Diff = \frac{1}{n} \sum_{i=1}^{n} \left[ (R_i - R_0)^2 + (G_i - G_0)^2 + (B_i - B_0)^2 \right]
or
Diff = \max_{1 \le i \le n} \left[ (R_i - R_0)^2 + (G_i - G_0)^2 + (B_i - B_0)^2 \right],
where R_0, G_0, and B_0 are the pixel values of the given color, R_i, G_i, and B_i are the pixel values of the i-th pixel in the caption-associated pixel set, n is the number of pixels in the caption-associated pixel set, Diff is the color difference value, and max denotes taking the maximum over the pixel set.
Fig. 14 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The components in Fig. 14 with the same reference numerals as in Fig. 8 have the same functions, and a detailed description of these components is omitted for brevity.
As shown in Fig. 14, the second determining unit 81b includes any one or more of a first determining sub-unit 8121, a second determining sub-unit 8122, and a third determining sub-unit 8123.
The first determining sub-unit 8121 is configured to take the pixels in the caption region other than the pixels occupied by the caption as the caption-associated pixel set.
The second determining sub-unit 8122 is configured to sample the pixels in the caption region and take the sampled pixels as the caption-associated pixel set.
The third determining sub-unit 8123 is configured to take the pixels included in preset representative regions within the caption region as the caption-associated pixel set.
Embodiment 3
Fig. 15 is a block diagram of an apparatus 800 for caption processing according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 15, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording operation. The processing component 802 may include one or more processors 820 to execute instructions so as to complete all or some of the steps of the above methods. In addition, the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the apparatus 800. Examples of such data include instructions of any application or method operated on the apparatus 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power to the various components of the apparatus 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the apparatus 800 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 800 is in an operation mode such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the apparatus 800. For example, the sensor component 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components, for example the display and keypad of the apparatus 800; the sensor component 814 may also detect a change in the position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in the temperature of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-volatile computer-readable storage medium including instructions is also provided, for example the memory 804 including instructions, which can be executed by the processor 820 of the apparatus 800 to complete the above methods.
Fig. 16 is a block diagram of a video processing apparatus 1900 according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to Fig. 16, the apparatus 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, for example an application program. The application program stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the method described in Embodiment 2 above.
The apparatus 1900 may also include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium including instructions is also provided, for example the memory 1932 including instructions, which can be executed by the processing component 1922 of the apparatus 1900 to complete the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electric storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from the computer-readable storage medium to respective computing/processing devices, or downloaded to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the scenario involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or actions, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The above description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (19)

1. A caption processing method, characterized by comprising:
determining a caption-associated pixel set, the caption-associated pixel set comprising pixels located in the region where the caption is displayed;
determining a color difference value between a given color and the caption-associated pixel set, wherein the color difference value is used to characterize the difference between the given color and the colors of the pixels in the caption-associated pixel set; and
determining, according to the color difference value, the color used for displaying the caption.
2. The caption processing method according to claim 1, characterized in that determining the caption-associated pixel set comprises:
taking all the pixels in the caption region as the caption-associated pixel set;
or taking some of the pixels in the caption region as the caption-associated pixel set.
3. The caption processing method according to claim 1, characterized in that determining, according to the color difference value, the color used for displaying the caption comprises:
displaying the caption with the given color having the largest color difference value.
4. The caption processing method according to claim 1, characterized in that determining, according to the color difference value, the color used for displaying the caption comprises:
when the color difference value between one or more given colors and the caption-associated pixel set is greater than a preset threshold, determining, from the one or more given colors, the given color used for displaying the caption.
5. The caption processing method according to claim 1, characterized in that determining, according to the color difference value, the color used for displaying the caption comprises:
for multiple given colors, calculating, in descending order of the priorities of the given colors, the color difference value between a given color and the caption-associated pixel set, and judging whether the color difference value is greater than a preset threshold; and
when the color difference value is greater than the preset threshold, displaying the caption with the given color corresponding to the color difference value.
6. The caption processing method according to claim 5, characterized in that determining, according to the color difference value, the color used for displaying the caption comprises:
when the color difference values between all the given colors and the caption-associated pixel set are less than the preset threshold, notifying the user to select the given color used for displaying the caption.
7. The caption processing method according to claim 5, characterized in that determining, according to the color difference value, the color used for displaying the caption comprises:
when the color difference values between all the given colors and the caption-associated pixel set are less than the preset threshold, displaying the caption with the given color having the largest color difference value, or displaying the caption with the given color having the highest priority.
8. The caption processing method according to any one of claims 1 to 7, wherein determining the color difference value between the given color and the caption-associated pixel set comprises:
calculating the color difference value using the formula
Diff = (1/n) Σ_{i=1}^{n} [ (Ri − R0)² + (Gi − G0)² + (Bi − B0)² ]
or the formula
Diff = MAX_{1≤i≤n} [ (Ri − R0)² + (Gi − G0)² + (Bi − B0)² ],
wherein R0, G0, B0 are the pixel values of the given color; Ri, Gi, Bi are the pixel values of the i-th pixel in the caption-associated pixel set; n is the number of pixels in the caption-associated pixel set; Diff is the color difference value; and MAX denotes taking the maximum.
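Both claimed formulas translate directly into code; the sketch below assumes the pixel set is an n x 3 array of RGB values (diff_mean is the averaged form, diff_max the MAX form defined at the end of the claim):

    import numpy as np

    def diff_mean(given_color, pixels):
        # Diff = (1/n) * sum_i [(Ri - R0)^2 + (Gi - G0)^2 + (Bi - B0)^2]
        d = pixels.astype(np.float64) - np.asarray(given_color, dtype=np.float64)
        return float(np.mean(np.sum(d ** 2, axis=1)))

    def diff_max(given_color, pixels):
        # Diff = MAX_i [(Ri - R0)^2 + (Gi - G0)^2 + (Bi - B0)^2]
        d = pixels.astype(np.float64) - np.asarray(given_color, dtype=np.float64)
        return float(np.max(np.sum(d ** 2, axis=1)))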
9. The caption processing method according to claim 2, wherein taking a subset of the pixels in the region where the caption is located as the caption-associated pixel set comprises:
taking, as the caption-associated pixel set, the pixels in the caption region other than the pixels occupied by the caption itself;
or sampling the pixels in the caption region and taking the sampled pixels as the caption-associated pixel set;
or taking, as the caption-associated pixel set, the pixels included in a preset representative region within the caption region.
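The three ways of building a partial pixel set in claim 9 can be sketched as follows; the boolean text mask, the sampling stride, and the representative sub-region are illustrative parameters rather than values taken from the patent:

    import numpy as np

    def pixels_outside_text(region, text_mask):
        # Variant 1: every pixel of the caption region except the pixels covered
        # by the caption text itself. region: H x W x 3; text_mask: H x W bool.
        return region[~text_mask]

    def sampled_pixels(region, stride=4):
        # Variant 2: sample the caption region on a regular grid.
        return region[::stride, ::stride].reshape(-1, 3)

    def representative_region_pixels(region, rep_box):
        # Variant 3: use only a preset representative sub-region of the caption
        # region. rep_box: (top, left, bottom, right) within the region.
        top, left, bottom, right = rep_box
        return region[top:bottom, left:right].reshape(-1, 3)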
10. A caption processing apparatus, characterized in that it comprises:
a pixel set determining module, configured to determine a caption-associated pixel set, the caption-associated pixel set comprising pixels located in a region where the caption is located;
a color difference value determining module, configured to determine a color difference value between a given color and the caption-associated pixel set, wherein the color difference value is used to characterize the difference between the given color and the colors of the pixels in the caption-associated pixel set; and
a caption color determining module, configured to determine, according to the color difference value, the color to be used for displaying the caption.
11. The caption processing apparatus according to claim 10, wherein the pixel set determining module comprises any one or more of the following units:
a first determining unit, configured to take all pixels in the region where the caption is located as the caption-associated pixel set;
a second determining unit, configured to take a subset of the pixels in the region where the caption is located as the caption-associated pixel set.
12. The caption processing apparatus according to claim 10, wherein the caption color determining module comprises:
a first display unit, configured to display the caption with the given color having the largest color difference value.
13. The caption processing apparatus according to claim 10, wherein the caption color determining module comprises:
a third determining unit, configured to, when the color difference value between one or more given colors and the caption-associated pixel set is greater than a preset threshold, determine, from the one or more given colors, the given color to be used for displaying the caption.
14. The caption processing apparatus according to claim 10, wherein the caption color determining module comprises:
a first judging unit, configured to, for a plurality of given colors, in order of priority of the given colors from high to low, calculate the color difference value between each given color and the caption-associated pixel set and judge whether the color difference value is greater than a preset threshold;
a second display unit, configured to, when the color difference value is greater than the preset threshold, display the caption with the given color corresponding to that color difference value.
15. The caption processing apparatus according to claim 14, wherein the caption color determining module comprises:
a second notification unit, configured to, when the color difference values between all given colors and the caption-associated pixel set are less than the preset threshold, notify the user to select the given color to be used for displaying the caption.
16. The caption processing apparatus according to claim 14, wherein the caption color determining module comprises:
a third display unit, configured to, when the color difference values between all given colors and the caption-associated pixel set are less than the preset threshold, display the caption with the given color having the largest color difference value, or display the caption with the given color having the highest priority.
17. The caption processing apparatus according to any one of claims 10 to 16, wherein the color difference value determining module comprises:
a computing unit, configured to calculate the color difference value using the formula
Diff = (1/n) Σ_{i=1}^{n} [ (Ri − R0)² + (Gi − G0)² + (Bi − B0)² ]
or the formula
Diff = MAX_{1≤i≤n} [ (Ri − R0)² + (Gi − G0)² + (Bi − B0)² ],
wherein R0, G0, B0 are the pixel values of the given color; Ri, Gi, Bi are the pixel values of the i-th pixel in the caption-associated pixel set; n is the number of pixels in the caption-associated pixel set; Diff is the color difference value; and MAX denotes taking the maximum.
18. The caption processing apparatus according to claim 11, wherein the second determining unit comprises any one or more of the following sub-units:
a first determining sub-unit, configured to take, as the caption-associated pixel set, the pixels in the caption region other than the pixels occupied by the caption itself;
a second determining sub-unit, configured to sample the pixels in the caption region and take the sampled pixels as the caption-associated pixel set;
a third determining sub-unit, configured to take, as the caption-associated pixel set, the pixels included in a preset representative region within the caption region.
19. A caption processing apparatus, characterized in that it comprises:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determine a caption-associated pixel set, the caption-associated pixel set comprising pixels located in a region where the caption is located;
determine a color difference value between a given color and the caption-associated pixel set, wherein the color difference value is used to characterize the difference between the given color and the colors of the pixels in the caption-associated pixel set; and
determine, according to the color difference value, the color to be used for displaying the caption.
CN201611247681.7A 2016-12-29 2016-12-29 Subtitle processing method and device Active CN106604107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611247681.7A CN106604107B (en) 2016-12-29 2016-12-29 Subtitle processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611247681.7A CN106604107B (en) 2016-12-29 2016-12-29 Subtitle processing method and device

Publications (2)

Publication Number Publication Date
CN106604107A true CN106604107A (en) 2017-04-26
CN106604107B CN106604107B (en) 2019-12-17

Family

ID=58605136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611247681.7A Active CN106604107B (en) 2016-12-29 2016-12-29 Subtitle processing method and device

Country Status (1)

Country Link
CN (1) CN106604107B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742129A (en) * 2008-11-18 2010-06-16 中兴通讯股份有限公司 Method for processing caption
JP2010217898A (en) * 2010-04-05 2010-09-30 Hitachi Ltd Caption display method
CN103491416A (en) * 2013-09-29 2014-01-01 深圳Tcl新技术有限公司 Single-layer displaying method and device of subtitle data
CN104967923A (en) * 2015-06-30 2015-10-07 北京奇艺世纪科技有限公司 Subtitle color setting method and device
CN104967922A (en) * 2015-06-30 2015-10-07 北京奇艺世纪科技有限公司 Subtitle adding position determining method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115550714A (en) * 2021-06-30 2022-12-30 花瓣云科技有限公司 Subtitle display method and related equipment
CN115834972A (en) * 2022-12-20 2023-03-21 安徽听见科技有限公司 Subtitle color adjusting method and device, electronic equipment and storage medium
CN115834972B (en) * 2022-12-20 2024-10-18 安徽听见科技有限公司 Subtitle color adjustment method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN106604107B (en) 2019-12-17

Similar Documents

Publication Publication Date Title
CN109089170A (en) Barrage display methods and device
CN106941624B (en) Processing method and device for network video trial viewing
CN106791893A (en) Net cast method and device
CN106792170A (en) Method for processing video frequency and device
CN106028143A (en) Video live broadcasting method and device
CN109257645A (en) Video cover generation method and device
CN106993229A (en) Interactive attribute methods of exhibiting and device
CN106375772A (en) Video playing method and device
CN107707954A (en) Video broadcasting method and device
CN107333170A (en) The control method and device of intelligent lamp
CN106454336A (en) Method and device for detecting whether camera of terminal is covered or not, and terminal
CN109961747A (en) Electronic ink screen display methods, device and electronic equipment
CN108260020A (en) The method and apparatus that interactive information is shown in panoramic video
CN105744133A (en) Video fill-in light method and apparatus
CN105653032A (en) Display adjustment method and apparatus
CN108737891A (en) Video material processing method and processing device
CN108924644A (en) Video clip extracting method and device
CN106550252A (en) The method for pushing of information, device and equipment
CN107943550A (en) Method for showing interface and device
CN107797741A (en) Method for showing interface and device
CN106897399A (en) Character displaying method and device
CN110121106A (en) Video broadcasting method and device
CN106792255A (en) Video playback window framework display methods and device
CN104853223B (en) The inserting method and terminal device of video flowing
CN106599191A (en) User attribute analysis method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
  Effective date of registration: 20200519
  Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province
  Patentee after: Alibaba (China) Co.,Ltd.
  Address before: 518030 no.19l02 sannuo wisdom building, no.3012 Binhai Avenue, Nanshan District, Shenzhen City, Guangdong Province
  Patentee before: HEYI INTELLIGENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
TR01 Transfer of patent right
  Effective date of registration: 20240621
  Address after: 101400 Room 201, 9 Fengxiang East Street, Yangsong Town, Huairou District, Beijing
  Patentee after: Youku Culture Technology (Beijing) Co.,Ltd.
  Country or region after: China
  Address before: 310052 room 508, 5th floor, building 4, No. 699 Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province
  Patentee before: Alibaba (China) Co.,Ltd.
  Country or region before: China