CN104410929A - Processing method and device for caption images - Google Patents

Processing method and device for caption images

Info

Publication number: CN104410929A
Application number: CN201410798220.3A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Prior art keywords: image, interpolation, pixel, subtitling, subtitling image
Inventors: 张义轮, 侯天峰, 朱春波
Current and original assignee: Samsung Electronics China R&D Center; Samsung Electronics Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Samsung Electronics China R&D Center and Samsung Electronics Co Ltd
Priority to CN201410798220.3A; publication of CN104410929A
Classifications

    All within H — Electricity; H04 — Electric communication technique; H04N — Pictorial communication, e.g. television; H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N 21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]:
    • H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4314 — The same, for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H04N 21/4351 — Processing of additional data involving reassembling additional data, e.g. rebuilding an executable program from recovered modules
    • H04N 21/4353 — Processing of additional data involving decryption of additional data
    • H04N 21/4725 — End-user interface for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • H04N 21/4858 — End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Abstract

The invention discloses a processing method and device for caption images. One implementation of the method comprises the following steps: obtaining the magnification factor of a first caption image; performing edge-directed interpolation upsampling on the YUV components of the first caption image according to the magnification factor to obtain first YUV components; performing interpolation upsampling on the transparency component of the first caption image according to the magnification factor to obtain a first transparency component; combining the first YUV components with the first transparency component to obtain a second caption image; and obtaining the image for display based on the second caption image. This implementation produces display images with smooth boundaries, reduces aliasing and blurring, and improves the fineness of the captions.

Description

Processing method and device for subtitle images
Technical field
The present application relates to the field of computer technology, specifically to the field of computer image processing, and particularly to a processing method and device for subtitle images.
Background art
With the popularization of high-definition and ultra-high-definition televisions, subtitles rendered in graphic form often have a lower resolution than the video being displayed. To give viewers a better visual experience, the subtitles need to be enlarged accordingly.
The prior art mainly contains two classes of enlargement methods. The first uses a pre-stored library of enlarged fonts: after subtitle information is received, the characters in the subtitles are recognized and the corresponding glyphs are looked up in the font library for display. The second treats the subtitles as an image and enlarges them, in most cases using bilinear or bicubic interpolation.
Summary of the invention
Among the above techniques, enlarging subtitles with a pre-stored library of enlarged fonts is impractical because font styles vary widely and subtitles may mix Chinese and foreign languages; pre-storing every font style of every kind of language is infeasible, and the matching accuracy against the font library is a further concern. Enlarging subtitles with bilinear or bicubic interpolation applies only simple interpolation; because the original resolution of subtitles is low and their boundaries are coarse, simple interpolation gives poor visual results, with aliasing and blurring that become especially severe at large magnification factors, for example a factor of 3 or more.
The present application provides a processing method and device for subtitle images.
In one aspect, the application provides a processing method for subtitle images, the method comprising: obtaining the magnification factor of a first subtitle image; performing edge-directed interpolation upsampling on the YUV components of the first subtitle image according to the magnification factor to obtain first YUV components; performing interpolation upsampling on the transparency component of the first subtitle image according to the magnification factor to obtain a first transparency component; combining the first YUV components and the first transparency component to obtain a second subtitle image; and obtaining the image for display based on the second subtitle image.
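The overall pipeline of the claimed method can be sketched as follows. This is a minimal illustration, not the patented implementation: the nearest-neighbour interpolator stands in for the edge-directed interpolator that the claims apply to the YUV planes and for the plain interpolator applied to the transparency plane, and all function names are assumptions.

```python
import numpy as np

def nearest_upsample(plane, factor):
    # Placeholder interpolator: nearest-neighbour repeat. The claimed method
    # would use edge-directed interpolation for YUV and a plain interpolation
    # for the transparency (alpha) plane.
    return np.repeat(np.repeat(plane, factor, axis=0), factor, axis=1)

def upscale_subtitle(yuv_planes, alpha, factor,
                     upsample_yuv=nearest_upsample,
                     upsample_alpha=nearest_upsample):
    """Upsample the YUV planes and the transparency plane separately,
    then recombine them into the enlarged ("second") subtitle image."""
    yuv_up = [upsample_yuv(p, factor) for p in yuv_planes]
    alpha_up = upsample_alpha(alpha, factor)
    return np.stack(yuv_up + [alpha_up])  # four planes: Y, U, V, A
```

The key structural point the claims make is that the colour information and the transparency information travel through separate interpolators before being recombined.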
In some embodiments, performing edge-directed interpolation upsampling on the YUV components of the first subtitle image according to the magnification factor to obtain the first YUV components comprises: performing edge-directed interpolation upsampling, correction, and downsampling on the YUV components of the first subtitle image to obtain preprocessed YUV components; and performing edge-directed interpolation upsampling on the preprocessed YUV components to obtain the first YUV components.
In some embodiments, the edge-directed interpolation upsampling comprises: taking each pixel of the image to be interpolated as a first-interpolation reference pixel, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around each first-interpolation pixel, and interpolating along the direction perpendicular to the largest gradient difference to obtain the first-interpolation pixels; then taking the pixels of the image to be interpolated together with the first-interpolation pixels as second-interpolation reference pixels, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around each second-interpolation pixel, and interpolating along the direction perpendicular to the largest gradient difference to obtain the second-interpolation pixels.
In some embodiments, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around a first-interpolation pixel and interpolating along the direction perpendicular to the largest gradient difference to obtain the first-interpolation pixel comprises: if the interpolation direction lies at 45° or 135° relative to the first-interpolation pixel, interpolating between the two first-interpolation reference pixels that lie on the interpolation direction and are adjacent to the first-interpolation pixel; and if the interpolation direction lies at 0° or 90° relative to the first-interpolation pixel, interpolating among the four first-interpolation reference pixels adjacent to the first-interpolation pixel.
In some embodiments, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around a second-interpolation pixel and interpolating along the direction perpendicular to the largest gradient difference to obtain the second-interpolation pixel comprises: if the interpolation direction lies at 45° or 135° relative to the second-interpolation pixel, interpolating among the four second-interpolation reference pixels adjacent to the second-interpolation pixel; and if the interpolation direction lies at 0° or 90° relative to the second-interpolation pixel, interpolating between the two second-interpolation reference pixels that lie on the interpolation direction and are adjacent to the second-interpolation pixel.
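The core idea of the edge-directed interpolation can be sketched for the diagonal case: a new pixel at the centre of a 2×2 block compares the gradients along the two diagonals and interpolates perpendicular to the largest gradient difference, i.e. along the diagonal that an edge runs along. This simplified sketch covers only the 45°/135° case and averages two reference pixels; the full scheme in the claims also handles the 0°/90° case with four reference pixels.

```python
import numpy as np

def diagonal_interpolate(img):
    """For each new pixel at the centre of a 2x2 block of reference pixels,
    interpolate along the diagonal with the smaller gradient (perpendicular
    to the largest gradient difference) so that edges stay sharp."""
    h, w = img.shape
    out = np.zeros((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            a, b = img[i, j], img[i, j + 1]
            c, d = img[i + 1, j], img[i + 1, j + 1]
            g135 = abs(a - d)  # gradient along the 135-degree diagonal
            g45 = abs(b - c)   # gradient along the 45-degree diagonal
            if g135 > g45:     # strong 135-deg gradient: edge runs at 45 deg
                out[i, j] = (b + c) / 2.0
            else:
                out[i, j] = (a + d) / 2.0
    return out
```

On a block with a diagonal edge such as [[0, 10], [10, 10]], the centre pixel is interpolated along the 45° diagonal to 10, instead of the blurred value 7.5 that plain bilinear averaging would give.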
In some embodiments, the correction performed after the edge-directed interpolation upsampling of the YUV components of the first subtitle image comprises: in the upsampled first subtitle image, obtaining the horizontal gradient G1 and the vertical gradient G2 at each pixel of the original first subtitle image; establishing a gradient-to-weight mapping model; obtaining, from G1, G2, and the gradient-to-weight mapping model, the horizontal weight W1 and the vertical weight W2 for each pixel of the original first subtitle image; and correcting the pixels of the original first subtitle image by weighting with W1, W2, and the second-interpolation pixels of the interpolated first subtitle image.
In some embodiments, obtaining the image for display based on the second subtitle image comprises: applying iterative back-projection (IBP) correction to the second subtitle image to obtain a third subtitle image, and obtaining the image for display based on the third subtitle image.
In some embodiments, applying back-projection (IBP) correction to the second subtitle image to obtain the third subtitle image comprises: obtaining a simulated low-resolution image from an initial estimate image, the initial estimate image being the second subtitle image; comparing the simulated low-resolution image with the first subtitle image; obtaining a simulation-error image from the comparison; and iteratively correcting the second subtitle image according to the simulation-error image to obtain the third subtitle image.
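The IBP loop just described can be sketched as follows. The down-sampling model (block averaging), the back-projection operator (nearest-neighbour upsampling of the error), the step size, and the iteration count are all assumptions; the patent specifies only the simulate-compare-correct structure. The estimate's dimensions are assumed to be exact multiples of the factor.

```python
import numpy as np

def ibp_refine(estimate, low_res, factor, iterations=5, step=0.5):
    """Iterative back-projection: simulate a low-resolution image from the
    current estimate, compare it with the original low-resolution subtitle
    image, and feed the simulation error back into the estimate."""
    est = estimate.astype(float).copy()
    h, w = low_res.shape
    for _ in range(iterations):
        # Simulate the low-resolution image by block averaging.
        sim = est.reshape(h, factor, w, factor).mean(axis=(1, 3))
        err = low_res - sim  # the simulation-error image
        # Back-project the error onto the high-resolution estimate.
        est += step * np.repeat(np.repeat(err, factor, axis=0),
                                factor, axis=1)
    return est
```

Each iteration reduces the discrepancy between the simulated low-resolution image and the actual first subtitle image, pulling the enlarged image back toward consistency with the original.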
In some embodiments, obtaining the image for display based on the third subtitle image comprises: applying bilateral filtering to the third subtitle image to obtain a fourth subtitle image, and using the fourth subtitle image as the image for display.
In some embodiments, applying bilateral filtering to the third subtitle image to obtain the fourth subtitle image comprises: filtering the third subtitle image in the horizontal direction with a spatial-domain filter to obtain a once-filtered subtitle image, then filtering the once-filtered subtitle image in the vertical direction to obtain a spatially filtered subtitle image; in the spatially filtered subtitle image, for each pixel within a window of side length r centered on the pixel to be filtered, computing the absolute value of its actual difference from the pixel to be filtered; establishing in advance a lookup table mapping each possible absolute difference to a pixel-domain filter weight; and, based on the actual absolute differences and the lookup table, applying pixel-domain filtering to each pixel of the spatially filtered subtitle image to obtain the fourth subtitle image.
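The two-stage bilateral filter of this embodiment can be sketched as below: a separable spatial-domain pass (horizontal, then vertical), followed by a pixel-domain pass whose weights are read from a table precomputed over all possible absolute intensity differences. The Gaussian kernels, the 0–255 table range, and the parameter values are illustrative assumptions.

```python
import numpy as np

def _smooth_axis(img, g, axis):
    # Edge-padded 1-D convolution along one axis (the spatial-domain pass).
    r = len(g) // 2
    pad = [(0, 0), (0, 0)]
    pad[axis] = (r, r)
    p = np.pad(img, pad, mode='edge')
    return np.apply_along_axis(lambda v: np.convolve(v, g, 'valid'), axis, p)

def bilateral_sketch(img, r=1, sigma_s=1.0, sigma_r=10.0):
    """Separable spatial filtering followed by pixel-domain (range)
    filtering with a precomputed |difference| -> weight lookup table."""
    xs = np.arange(-r, r + 1)
    g = np.exp(-xs**2 / (2 * sigma_s**2))
    g /= g.sum()
    sp = _smooth_axis(_smooth_axis(img, g, axis=1), g, axis=0)
    # Lookup table indexed by the absolute intensity difference (0..255).
    lut = np.exp(-np.arange(256.0)**2 / (2 * sigma_r**2))
    padded = np.pad(sp, r, mode='edge')
    out = np.empty_like(sp)
    h, w = sp.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * r + 1, j:j + 2 * r + 1]
            idx = np.minimum(np.abs(win - sp[i, j]).astype(int), 255)
            wts = lut[idx]
            out[i, j] = (wts * win).sum() / wts.sum()
    return out
```

Splitting the spatial pass into horizontal and vertical 1-D filters and tabulating the range weights are the two cost-saving choices this embodiment describes: both avoid recomputing per-pixel exponentials inside the main loop.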
In some embodiments, the first subtitle image comprises: the original subtitle image; and/or a region of interest selected by the user within the original subtitle image.
In some embodiments, the magnification factor comprises: the ratio of the full-screen playback size of the video to the original video size; and/or a magnification factor set by the user and received by the device.
In a second aspect, the application provides a processing device for subtitle images, the device comprising: an acquisition unit for obtaining the magnification factor of a first subtitle image; a YUV-component upsampling unit for performing edge-directed interpolation upsampling on the YUV components of the first subtitle image according to the magnification factor to obtain first YUV components; a transparency-component upsampling unit for performing interpolation upsampling on the transparency component of the first subtitle image according to the magnification factor to obtain a first transparency component; a combining unit for combining the first YUV components and the first transparency component to obtain a second subtitle image; and a generation unit for obtaining the image for display based on the second subtitle image.
In some embodiments, the YUV-component upsampling unit comprises: a preprocessing unit for performing edge-directed interpolation upsampling, correction, and downsampling on the YUV components of the first subtitle image to obtain preprocessed YUV components; and a preprocessed-YUV upsampling unit for performing edge-directed interpolation upsampling on the preprocessed YUV components to obtain the first YUV components.
In some embodiments, the edge-directed interpolation upsampling comprises: taking each pixel of the image to be interpolated as a first-interpolation reference pixel, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around each first-interpolation pixel, and interpolating along the direction perpendicular to the largest gradient difference to obtain the first-interpolation pixels; then taking the pixels of the image to be interpolated together with the first-interpolation pixels as second-interpolation reference pixels, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around each second-interpolation pixel, and interpolating along the direction perpendicular to the largest gradient difference to obtain the second-interpolation pixels.
In some embodiments, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around a first-interpolation pixel and interpolating along the direction perpendicular to the largest gradient difference to obtain the first-interpolation pixel comprises: if the interpolation direction lies at 45° or 135° relative to the first-interpolation pixel, interpolating between the two first-interpolation reference pixels that lie on the interpolation direction and are adjacent to the first-interpolation pixel; and if the interpolation direction lies at 0° or 90° relative to the first-interpolation pixel, interpolating among the four first-interpolation reference pixels adjacent to the first-interpolation pixel.
In some embodiments, obtaining the gradient differences in the 0°, 45°, 90°, and 135° directions around a second-interpolation pixel and interpolating along the direction perpendicular to the largest gradient difference to obtain the second-interpolation pixel comprises: if the interpolation direction lies at 45° or 135° relative to the second-interpolation pixel, interpolating among the four second-interpolation reference pixels adjacent to the second-interpolation pixel; and if the interpolation direction lies at 0° or 90° relative to the second-interpolation pixel, interpolating between the two second-interpolation reference pixels that lie on the interpolation direction and are adjacent to the second-interpolation pixel.
In some embodiments, the correction performed after the edge-directed interpolation upsampling of the YUV components of the first subtitle image comprises: in the upsampled first subtitle image, obtaining the horizontal gradient G1 and the vertical gradient G2 at each pixel of the original first subtitle image; establishing a gradient-to-weight mapping model; obtaining, from G1, G2, and the gradient-to-weight mapping model, the horizontal weight W1 and the vertical weight W2 for each pixel of the original first subtitle image; and correcting the pixels of the original first subtitle image by weighting with W1, W2, and the second-interpolation pixels of the interpolated first subtitle image.
In some embodiments, the generation unit comprises: a correction subunit for applying back-projection (IBP) correction to the second subtitle image to obtain a third subtitle image; and a first generation subunit for obtaining the image for display based on the third subtitle image.
In some embodiments, the correction subunit comprises: a first obtaining subunit for obtaining a simulated low-resolution image from an initial estimate image, the initial estimate image being the second subtitle image; a comparison subunit for comparing the simulated low-resolution image with the first subtitle image; a second obtaining subunit for obtaining a simulation-error image from the comparison; and an iterative-correction subunit for iteratively correcting the second subtitle image according to the simulation-error image to obtain the third subtitle image.
In some embodiments, the first generation subunit comprises a bilateral-filtering subunit for applying bilateral filtering to the third subtitle image to obtain a fourth subtitle image, and using the fourth subtitle image as the image for display.
In some embodiments, applying bilateral filtering to the third subtitle image to obtain the fourth subtitle image comprises: filtering the third subtitle image in the horizontal direction with a spatial-domain filter to obtain a once-filtered subtitle image, then filtering the once-filtered subtitle image in the vertical direction to obtain a spatially filtered subtitle image; in the spatially filtered subtitle image, for each pixel within a window of side length r centered on the pixel to be filtered, computing the absolute value of its actual difference from the pixel to be filtered; establishing in advance a lookup table mapping each possible absolute difference to a pixel-domain filter weight; and, based on the actual absolute differences and the lookup table, applying pixel-domain filtering to each pixel of the spatially filtered subtitle image to obtain the fourth subtitle image.
In some embodiments, the first subtitle image comprises: the original subtitle image; and/or a region of interest selected by the user within the original subtitle image.
In some embodiments, the magnification factor comprises: the ratio of the full-screen playback size of the video to the original video size; and/or a magnification factor set by the user and received by the device.
The processing method and device for subtitle images provided by the application obtain the magnification factor of a first subtitle image, then perform edge-directed interpolation upsampling on the YUV components of the first subtitle image according to that factor to obtain first YUV components, perform interpolation upsampling on the transparency component of the first subtitle image to obtain a first transparency component, combine the first YUV components and the first transparency component to obtain a second subtitle image, and finally obtain the image for display based on the second subtitle image. This yields smooth subtitle edges in the displayed image, reduces aliasing and blurring, and improves the fineness of the subtitles.
Brief description of the drawings
Other features, objects, and advantages of the application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 shows an exemplary flowchart of a processing method for subtitle images according to an embodiment of the present application;
Figs. 2(a) and 2(b) show, respectively, a schematic diagram of a video frame containing an original subtitle image and a schematic diagram of a video frame containing the enlarged original subtitle image;
Fig. 3 shows a schematic diagram of a region of interest in an enlarged original subtitle image;
Fig. 4 shows a schematic diagram of the image after second interpolation;
Fig. 5 shows a schematic diagram of correcting the pixels of the original first subtitle image according to the second-interpolation pixels;
Fig. 6 shows a schematic diagram of original subtitles containing a transparency component;
Fig. 7 shows a schematic flowchart of the IBP algorithm;
Figs. 8(a), 8(b), and 8(c) show, respectively, schematic diagrams of the edge image of a noisy image, of a bilateral filter, and of the filtered output image;
Figs. 9(a), 9(b), and 9(c) show, respectively, schematic diagrams of the edge image of a noisy image; of the x-direction and y-direction filters and the synthesized xy-direction filter; and of the output of the original two-dimensional bilateral filter;
Figs. 10(a), 10(b), and 10(c) show, respectively, an original subtitle image, the subtitle image enlarged by prior-art bicubic interpolation, and the subtitle image enlarged by the processing method for subtitle images of an embodiment of the present application;
Fig. 11 shows an exemplary structural diagram of a processing device for subtitle images according to an embodiment of the present application.
Detailed description of the embodiments
The application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, where there is no conflict, the embodiments in the application and the features in the embodiments may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary flowchart of a processing method 100 for subtitle images according to an embodiment of the present application. The method may be performed by various image display devices equipped with a processor, including but not limited to televisions, computers, mobile phones, watches, wearable devices, and in-vehicle devices.
As shown in Fig. 1, in step 101, the magnification factor of a first subtitle image is obtained.
The first subtitle image is the subtitle image that the user has selected for enlargement. The images the user may select include, but are not limited to, the original subtitle image and/or a region of interest selected by the user within the original subtitle image. For example, the user may choose to enlarge the original subtitle image; since the original subtitle image is usually what is selected, it may be taken as the first subtitle image by default, and alternatively or additionally the region of interest selected by the user within the original subtitle image may serve as the first subtitle image. The selection may be made by direct operation, or the image display device may offer options for the user to choose from. The region of interest is the part of the subtitles the user cares about, delineated for further processing, for example a person's name, a place name, or an action the user is interested in. The operation of selecting a region of interest may include, but is not limited to: the user choosing a setting option that enlarges a local region of the image, or, if the terminal has a touch screen, the user tapping the screen with a finger to select the region of interest of the subtitle image.
Multiplication factor can be obtained via diverse ways.In certain embodiments, multiplication factor can be the size of video played in full screen and the ratio of video original size.This kind of amplification mode makes captions self adaptation amplify according to the ratio of displaying video and original video size, also self adaptation amplification mode can be called, one video caption browse mode easily, without the need to considering whether it surmounts video boundaries scope after amplifying captions in this mode.
Fig. 2(a) and Fig. 2(b) respectively show schematic diagrams of a video frame containing the original subtitling image and a video frame containing the magnified original subtitling image. As shown in Fig. 2(a) and Fig. 2(b), in one concrete application scenario, the user can adaptively magnify the original subtitling image according to the size at which the video is played on a TV. Fig. 2(a) shows the original video size, denoted Size1; Fig. 2(b) shows the video size during full-screen playback, denoted Size2. The magnification factor ratio = Size2/Size1 is the factor by which the original subtitling image needs to be magnified. In this mode, the subtitles always keep a fixed proportion to the video size.
Alternatively or additionally, in other embodiments, the magnification factor may be a magnification factor set by the user and received by the device.
When obtaining the magnification factor of the first subtitling image, the user can set, through predetermined operations, rules that associate operations with predetermined actions; when the image display device receives a predetermined operation input by the user, it triggers the predetermined action associated with that operation. For example, rule one may be set as: the image display device receives a first operation, and the selected first subtitling image is the original subtitling image; the image display device then receives a second operation and adaptively magnifies the first subtitling image; finally, the image display device displays the magnified subtitling image. As another example, rule two may be set as: the image display device first receives a third operation, selecting the region of interest of the original subtitling image as the first subtitling image; it then receives a fourth operation, setting the magnification factor of the region of interest; finally, it displays the magnified region of interest. After rule one and rule two are set, when the image display device receives the first and second operations input by the user, it adaptively magnifies and displays the original subtitling image according to rule one; when it receives the third and fourth operations, it magnifies the region of interest by the magnification factor and displays it according to rule two.
When obtaining the magnification factor of the first subtitling image, the first subtitling image and the magnification factor may each be set directly by the user, may be chosen by the user from options provided by the image display device, or may be defaults preset by the user or the image display device. For example, when the second or fourth operation above is predefined as a default magnification factor, only the first or third operation needs to be performed to obtain the final magnified image for display.
When obtaining the magnification factor of the first subtitling image, the first subtitling image and the magnification factor may also be combined into shortcuts configured according to user habits, thereby speeding up the user's setting of the magnification factor of the first subtitling image.
Fig. 3 shows a schematic diagram of a magnified region of interest in the original subtitling image.
As shown in Fig. 3, in one concrete application scenario, the adaptive magnification mode may not render the subtitles entirely clearly, and the subtitles may contain certain words that the user cares about, such as important personal names or place names, which the user cannot distinguish by hearing alone. In this case, a transparent elliptical magnifier 301 appears at the original position of the video subtitles. The user moves the magnifier 301 with the up/down/left/right keys of the remote control, while a customizable transparent display region 302 is opened elsewhere in the video; the information under the magnifier 301 is magnified in real time according to the preset magnification factor and presented in the transparent display region 302. This mode may be called the magnifier mode.
Returning to Fig. 1, in step 102, edge-directed interpolation up-sampling is performed on the YUV component of the first subtitling image according to the magnification factor, obtaining a first YUV component.
Further, performing edge-directed interpolation up-sampling on the YUV component of the first subtitling image according to the magnification factor to obtain the first YUV component may include, but is not limited to: performing edge-directed interpolation up-sampling, correction and down-sampling on the YUV component of the first subtitling image to obtain a preprocessed YUV component; and performing edge-directed interpolation up-sampling on the preprocessed YUV component to obtain the first YUV component.
When preprocessing the original subtitle image, it is first magnified by the edge-directed interpolation method, the low-resolution pixels are then corrected, and the interpolated image is down-sampled to obtain a low-resolution image of the same size as the input image. This image is then used as the source image for a second interpolation up-sampling, which yields the first YUV component.
Before the edge-directed interpolation up-sampling, correction and down-sampling are performed on the YUV component of the first subtitling image, the edges of the picture-format subtitles in the program source are often rough. Performing interpolation up-sampling, correction and down-sampling on the program source to obtain the preprocessed YUV component can eliminate some of this roughness around the subtitle edges.
Further, the edge-directed interpolation up-sampling may include, but is not limited to: taking each pixel of the interpolation image as a first-pass interpolation reference pixel, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a first-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the first-pass interpolation pixel; then taking the pixels of the interpolation image and the first-pass interpolation pixels as second-pass interpolation reference pixels, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a second-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the second-pass interpolation pixel.
In this embodiment, when the edge-directed interpolation up-sampling is performed on the YUV component of the first subtitling image, the interpolation image used is the YUV component of the first subtitling image; when it is performed on the preprocessed YUV component, the interpolation image used is the preprocessed YUV component.
Further, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a first-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the first-pass interpolation pixel may include, but is not limited to: if the interpolation direction is the 45° or 135° direction of the first-pass interpolation pixel, interpolating between the two first-pass interpolation reference pixels that lie on the interpolation direction and are adjacent to the first-pass interpolation pixel; if the interpolation direction is the 0° or 90° direction of the first-pass interpolation pixel, interpolating among the four first-pass interpolation reference pixels adjacent to the first-pass interpolation pixel.
Further, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a second-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the second-pass interpolation pixel may include, but is not limited to: if the interpolation direction is the 45° or 135° direction of the second-pass interpolation pixel, interpolating among the four second-pass interpolation reference pixels adjacent to the second-pass interpolation pixel; if the interpolation direction is the 0° or 90° direction of the second-pass interpolation pixel, interpolating between the two second-pass interpolation reference pixels that lie on the interpolation direction and are adjacent to the second-pass interpolation pixel.
Fig. 4 shows a schematic diagram of the image after the second-pass interpolation.
As shown in Fig. 4, edge-directed interpolation up-sampling is performed on the pixels of the image to be interpolated (LR pixels) to obtain the first-pass interpolation pixels (the pixels modified in step 1), and edge-directed interpolation up-sampling is then performed on the pixels of the image to be interpolated (LR pixels) together with the first-pass interpolation pixels to obtain the second-pass interpolation pixels (the pixels modified in step 2), as follows:
Step 1) The image is magnified by interpolation: for each pixel of the original first subtitling image (LR pixel), three high-resolution pixels (HR pixels) are interpolated. The first-pass interpolation produces the grey high-resolution pixels shown in Fig. 4 (the pixels modified in step 1). For each such high-resolution pixel, the gradients in the four directions 0°, 45°, 90° and 135° are computed, and the interpolation direction is taken perpendicular to the maximum gradient. If this direction is 45° or 135°, interpolation directly uses the two pixels of the original first subtitling image (LR pixels) in that direction; if it is 0° or 90°, the four surrounding pixels of the original first subtitling image (LR pixels) are averaged.
Step 2) The high-resolution pixels obtained in step 1) (the pixels modified in step 1) serve as known pixels in step 2). The gradients of the second-pass interpolation pixel (the pixel modified in step 2) in the four directions 0°, 45°, 90° and 135° are likewise computed, and the interpolation direction is taken perpendicular to the maximum gradient. If this direction is 0° or 90°, interpolation directly uses the two high-resolution pixels solved in step 1 or the two pixels of the original first subtitling image (LR pixels) lying in that direction; if it is 45° or 135°, the four surrounding known pixels, namely two pixels of the original first subtitling image (LR pixels) and two pixels solved in step 1, are averaged.
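The step 1) interpolation above can be sketched as follows. This is a minimal illustration for a single grayscale plane and a 2x factor: only the diagonal HR pixels of step 1 are filled, only the two diagonal gradients are compared (the 0°/90° tests and the step 2 pass are omitted for brevity), and the function name and the equal-gradient fallback are assumptions, not taken from the text.

```python
import numpy as np

def step1_diagonal_interp(lr):
    """Fill the diagonal HR pixels of a 2x upsample by interpolating
    along the direction perpendicular to the maximum gradient."""
    h, w = lr.shape
    hr = np.zeros((2 * h, 2 * w), dtype=np.float64)
    hr[0::2, 0::2] = lr                      # copy LR pixels onto the HR grid
    for y in range(h - 1):
        for x in range(w - 1):
            nw, ne = lr[y, x], lr[y, x + 1]
            sw, se = lr[y + 1, x], lr[y + 1, x + 1]
            g45 = abs(ne - sw)               # gradient along the 45-degree diagonal
            g135 = abs(nw - se)              # gradient along the 135-degree diagonal
            if g45 > g135:                   # max gradient at 45: interpolate along 135
                hr[2 * y + 1, 2 * x + 1] = (nw + se) / 2.0
            elif g135 > g45:                 # max gradient at 135: interpolate along 45
                hr[2 * y + 1, 2 * x + 1] = (ne + sw) / 2.0
            else:                            # no dominant direction: average all four
                hr[2 * y + 1, 2 * x + 1] = (nw + ne + sw + se) / 4.0
    return hr
```

Interpolating perpendicular to the strongest gradient means interpolating along the edge, which is why this scheme keeps diagonal edges sharp where plain bilinear averaging would blur them.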
Further, after the edge-directed interpolation up-sampling of the YUV component of the first subtitling image, the correction may include, but is not limited to: in the first subtitling image after interpolation up-sampling, obtaining the gradient G1 in the horizontal direction and the gradient G2 in the vertical direction of each pixel of the original first subtitling image; establishing a gradient-to-weight mapping model; based on G1, G2 and the gradient-to-weight mapping model, obtaining the weight W1 of the horizontal direction and the weight W2 of the vertical direction for the pixel of the original first subtitling image; and, based on W1, W2 and the second-pass interpolation pixels obtained with the first subtitling image as the interpolation image, correcting the pixel of the original first subtitling image by weighting.
In the first subtitling image after interpolation up-sampling, the pixels of the original first subtitling image are corrected by weighting according to the weight W1 of the horizontal direction, the weight W2 of the vertical direction and the second-pass interpolation pixels. Since all of the known pixels from step 1) and step 2) corresponding to Fig. 4 have been obtained, the high-resolution pixels can be fed back to correct the low-resolution pixels, making the interpolated subtitling image look more natural. In this step, each LR pixel in Fig. 4 is taken as a target pixel, and the pixels obtained in step 1) and step 2) are used to compute the gradients G1 and G2 of the low-resolution pixel in its horizontal and vertical directions. A gradient-to-weight mapping model T based on an exponential distribution is established, in which the exponent decreases as the gradient increases. Using the gradients G1 and G2 and the weight model T, the weighting weights W1 and W2 of the low-resolution pixel in the horizontal and vertical directions are computed respectively, and the low-resolution pixel is finally corrected by weighting.
Fig. 5 shows a schematic diagram of correcting a pixel of the original first subtitling image according to the second-pass interpolation pixels.
As shown in Fig. 5, the correction target pixel is the LR pixel in Fig. 4, and the reference pixels are those solved in step 2) above. For the horizontal and vertical directions respectively, the weight coefficients [-1/16, 9/16, 9/16, -1/16] are applied to the four reference pixels in a weighted sum to obtain the corresponding values P1 and P2, and W1 × P1 + W2 × P2 is taken as the final correction value.
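The feedback correction just described can be sketched for one LR pixel as follows. The 4-tap coefficients and the W1 × P1 + W2 × P2 combination come from the text; the concrete exponential model exp(-g/sigma), the value of sigma, and the normalization of W1 and W2 to sum to one are illustrative assumptions.

```python
import numpy as np

def correct_lr_pixel(horiz, vert, g1, g2, sigma=32.0):
    """Weight the 4-tap horizontal/vertical estimates of an LR pixel
    by an exponential gradient-to-weight model (model and sigma assumed)."""
    taps = np.array([-1 / 16, 9 / 16, 9 / 16, -1 / 16])
    p1 = float(np.dot(taps, horiz))          # horizontal estimate P1
    p2 = float(np.dot(taps, vert))           # vertical estimate P2
    w1 = np.exp(-g1 / sigma)                 # weight shrinks as gradient grows
    w2 = np.exp(-g2 / sigma)
    w1, w2 = w1 / (w1 + w2), w2 / (w1 + w2)  # assumed: normalize so W1 + W2 = 1
    return w1 * p1 + w2 * p2                 # final correction value W1*P1 + W2*P2
```

With this weighting, the direction with the smaller gradient (the direction running along an edge) dominates the corrected value, which is the intent of the feedback step.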
Returning to Fig. 1, in step 103, interpolation up-sampling is performed on the transparency component of the first subtitling image according to the magnification factor, obtaining a first transparency component.
This embodiment does not limit the method used for interpolation up-sampling of the transparency component of the first subtitling image. Interpolation methods of the prior art, such as bicubic interpolation and bilinear interpolation, can all be used; these are techniques well known to those skilled in the art and are not repeated here.
The transparency information determines the display region of the subtitles. When the program-source subtitles are small, they often contain many small jagged edges, part of which is removed along the horizontal scan lines. Since the transparency information is binary, it is magnified by interpolation and then clipped with a set threshold.
Fig. 6 shows a schematic diagram of original subtitles containing a transparency component.
As shown in Fig. 6, the black region in the image is the transparent region of the subtitles. The transparency component of the subtitles determines which parts are displayed and which are not. The transparency component is a binary image that determines the final displayed shape of the font; it is magnified with bicubic interpolation and clipped with a set threshold back to a binary image.
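The interpolate-then-threshold handling of the transparency plane can be sketched as follows. For brevity this sketch uses bilinear interpolation where the text uses bicubic; the threshold-and-rebinarize step, which is the point being illustrated, is the same either way, and the function name and default threshold are assumptions.

```python
import numpy as np

def upscale_alpha(alpha, k, thresh=0.5):
    """Interpolate a binary transparency plane by factor k,
    then clip it back to a binary mask with a threshold."""
    h, w = alpha.shape
    ys = np.linspace(0, h - 1, h * k)
    xs = np.linspace(0, w - 1, w * k)
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]                  # fractional offsets for blending
    fx = (xs - x0)[None, :]
    a = alpha[np.ix_(y0, x0)] * (1 - fy) * (1 - fx) \
      + alpha[np.ix_(y0 + 1, x0)] * fy * (1 - fx) \
      + alpha[np.ix_(y0, x0 + 1)] * (1 - fy) * fx \
      + alpha[np.ix_(y0 + 1, x0 + 1)] * fy * fx
    return (a >= thresh).astype(np.uint8)    # clip back to a binary mask
```

Interpolating before thresholding smooths the jagged staircase of the low-resolution mask, so the re-binarized boundary follows the glyph outline more closely than nearest-neighbor replication would.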
Returning to Fig. 1, in step 104, the first YUV component and the first transparency component are synthesized, obtaining a second subtitling image.
The technique for synthesizing the first YUV component and the first transparency component is well known to those skilled in the art and is not repeated here.
In step 105, the image for display is obtained based on the second subtitling image.
When displaying the final image, the display position and display mode of the magnified image may be set directly by the user, may be chosen by the user from positions and modes predefined by the image display device, or may be defaults set by the image display device. For example, after the original subtitling image is adaptively magnified, the default display position set by the image display device may be the display position of the subtitling image in the original video, so the magnified subtitling image is displayed at the position of the original subtitling image.
The display position of the final image may be any region of the screen, for example the top, bottom, side or a corner of the screen, or a newly opened transparent display frame.
The display mode of the final image may also take various forms, such as a rectangle, a cloud shape or a transparent ellipse, which are not enumerated here.
Further, obtaining the image for display based on the second subtitling image may include, but is not limited to: performing iterative back projection (IBP) correction on the second subtitling image to obtain a third subtitling image, and obtaining the image for display based on the third subtitling image.
Applying the IBP constraint to the second subtitling image allows clear subtitles to be reconstructed.
Further, performing iterative back projection (IBP) correction on the second subtitling image to obtain the third subtitling image may include, but is not limited to: obtaining a simulated low-resolution image of an initial estimation image, the initial estimation image being the second subtitling image; comparing the simulated low-resolution image with the first subtitling image; obtaining a simulation error image according to the comparison result; and iteratively correcting the second subtitling image according to the simulation error image to obtain the third subtitling image.
IBP is a classical spatial-domain super-resolution reconstruction algorithm. Its reconstruction process iterates continuously on an initial estimate, and its core step is the back projection of the error. In this method, the HR image is obtained by iteratively back-projecting the error between the simulated LR image and the observed LR image.
The quality of the initial estimate strongly affects the IBP reconstruction, so the result of the two-pass interpolation up-sampling is used as the initial estimate for the IBP iteration.
Fig. 7 shows a schematic flowchart of the IBP algorithm.
As shown in Fig. 7, let the input observed image be L with resolution [M × N], and let the high-resolution image to be estimated be H, enlarged by a factor of k in both the x (horizontal) and y (vertical) directions, i.e. with resolution [k·M × k·N]. The formula for estimating the HR image by the IBP method can be expressed as:
Ĥ_{n+1}(s, t) = Ĥ_n(s, t) + Σ_{(x, y) ∈ Ω} ( L(x, y) − L̂_n(x, y) ) × p_BP(s, t; x, y)
In this formula, (s, t) are the pixel coordinates in the high-resolution image H, (x, y) are the coordinates of a pixel in the low-resolution image, and L̂_n is the simulated LR image of the n-th iteration, generated by degrading the currently estimated HR image. Ω denotes the set of (x, y) locations. p_BP is the back-projection kernel, which determines how the error influences the estimate; it is usually chosen as a fixed constant for each iteration.
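The iteration above can be sketched as follows for an integer factor k. The degradation model (a k×k box average), the pixel-replication initial estimate (the text uses the two-pass interpolation result instead), and the constant back-projection kernel beta are illustrative assumptions.

```python
import numpy as np

def ibp(lr, k, n_iters=10, beta=0.2):
    """Minimal iterative back-projection sketch: simulate an LR image
    from the HR estimate, back-project the error, repeat."""
    h, w = lr.shape
    hr = np.kron(lr, np.ones((k, k)))        # assumed initial estimate: replicate LR pixels
    for _ in range(n_iters):
        # degrade the current HR estimate to a simulated LR image (box average)
        sim = hr.reshape(h, k, w, k).mean(axis=(1, 3))
        err = lr - sim                       # observation minus simulation
        hr += beta * np.kron(err, np.ones((k, k)))   # back-project the error
    return hr
```

At convergence the simulated LR image matches the observation, i.e. the HR estimate is consistent with the input under the assumed degradation model, which is exactly the constraint the formula expresses.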
Further, obtaining the image for display based on the third subtitling image may include, but is not limited to: performing bilateral filtering on the third subtitling image to obtain a fourth subtitling image, and using the fourth subtitling image as the image for display.
Bilateral filtering of the third subtitling image can eliminate subtitle noise.
Further, performing bilateral filtering on the third subtitling image to obtain the fourth subtitling image may include, but is not limited to: filtering the third subtitling image in the horizontal direction with the spatial-domain filter to obtain a once-filtered subtitling image, then filtering the once-filtered subtitling image in the vertical direction to obtain a spatially filtered subtitling image; in the spatially filtered subtitling image, for each pixel within a window of side length r centered on the pixel to be filtered, computing the absolute value of its actual difference from the pixel to be filtered; establishing in advance a table mapping the absolute values of all possible differences to the weights of the pixel-domain filter; and, based on the absolute differences and the table, applying pixel-domain filtering to each pixel of the spatially filtered subtitling image to obtain the fourth subtitling image.
In practice, bilateral filtering is commonly used to denoise images. The output pixel value of a bilateral filter is a weighted average of the surrounding pixels; compared with median filtering and Gaussian low-pass filtering, bilateral filtering preserves edges. This is because the bilateral filter is composed of two filters: a spatial-domain filter that weights spatial distance, whose weight decreases as spatial distance increases, and a pixel-domain filter, whose weight decreases as the difference between two grey-level pixel values increases. The advantage of the bilateral filter is that it denoises without blurring edges.
Fig. 8(a), Fig. 8(b) and Fig. 8(c) respectively show schematic diagrams of the edge image of a noisy image, the bilateral filter, and the filtered output image.
As shown in Fig. 8(a), Fig. 8(b) and Fig. 8(c), the advantage of the bilateral filter is that it denoises without blurring edges.
The spatial filter has the same form as a Gaussian filter, and Gaussian filters are often applied separably in practice, so the separability of Gaussian filtering can be used to decompose the spatial filter into an x-direction pass and a y-direction pass. The image is first filtered in the x direction, the filtered image is taken as an intermediate result, and the intermediate result is then filtered in the y direction. The computational complexity drops from O(r^d · N) multiplications and O(r^d · N) additions to O(d · r · N) multiplications and O(d · r · N) additions, where N is the number of pixels in the image, r is the spatial range of the filter, and d is the dimensionality of the image. The separated spatial filter can greatly improve the running speed. However, the pixel-domain filter is not spatially separable, so applying bilateral filtering separately in the x and y directions is not exactly equal to the original bilateral filtering.
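The separable x-then-y scheme can be sketched as follows. Function names, default parameters and the clamped border handling are illustrative assumptions; as the text notes, this two-pass form is an approximation of the full 2-D bilateral filter.

```python
import numpy as np

def bilateral_1d(line, r, sigma_s, sigma_r):
    """One 1-D bilateral pass over a single row or column."""
    n = len(line)
    out = np.empty(n)
    offs = np.arange(-r, r + 1)
    ws = np.exp(-offs ** 2 / (2 * sigma_s ** 2))     # spatial (Gaussian) kernel
    for i in range(n):
        idx = np.clip(i + offs, 0, n - 1)            # clamp at the borders
        wr = np.exp(-(line[idx] - line[i]) ** 2 / (2 * sigma_r ** 2))  # range kernel
        w = ws * wr
        out[i] = np.sum(w * line[idx]) / np.sum(w)   # normalized weighted average
    return out

def separable_bilateral(img, r=2, sigma_s=1.5, sigma_r=30.0):
    """Approximate 2-D bilateral filtering: x-direction pass, then y."""
    rows = np.apply_along_axis(bilateral_1d, 1, img, r, sigma_s, sigma_r)
    return np.apply_along_axis(bilateral_1d, 0, rows, r, sigma_s, sigma_r)
```

Because the range kernel nearly zeroes the weight of pixels across a strong intensity step, a sharp edge contributes little to the average on the other side, which is how the filter smooths noise without blurring the subtitle outline.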
Fig. 9(a), Fig. 9(b) and Fig. 9(c) respectively show schematic diagrams of the edge image of a noisy image, the result of composing the x-direction and y-direction filters into an xy-direction filter, and the result of the original two-dimensional bilateral filter.
As shown in Fig. 9(a), Fig. 9(b) and Fig. 9(c), the separable bilateral filtering result is a good approximation of the original bilateral filtering result, and it gives good results even for edges inclined at 45°.
To further improve the running speed, the amount of computation in practice is reduced by table lookup. The one-dimensional spatial filtering kernel w_s is computed from the preset parameters δ_s and δ_r controlling the Gaussian kernels and the window length r. The value of the pixel-domain filtering kernel w_r is determined by |f(i, j) − f(x, y)|, where f(x, y) is the pixel value at coordinates (x, y), the window centered on (x, y) is denoted S_{x,y}, and (i, j) ranges over the pixels in the window. Clearly the range of |f(i, j) − f(x, y)| is [0, 255], so every value of w_r can be obtained in advance. In a practical application, an array const double bil[256] is established for the values of w_r. The values of this array are obtained at compile time, and the corresponding w_r value is obtained by looking up the table according to |f(i, j) − f(x, y)|.
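The lookup-table idea mirrors the `const double bil[256]` array in the text and can be sketched as follows; the Gaussian form of the range kernel and the value of SIGMA_R are illustrative assumptions.

```python
import numpy as np

# Precompute the range-kernel weight for every possible absolute
# difference |f(i,j) - f(x,y)| in [0, 255], once, up front.
SIGMA_R = 30.0
BIL = np.exp(-np.arange(256) ** 2 / (2 * SIGMA_R ** 2))

def range_weight(a, b):
    """Look the pixel-domain weight up instead of calling exp() per pair."""
    return BIL[abs(int(a) - int(b))]
```

Since every pixel pair in every window would otherwise cost one exponential, replacing it with a 256-entry table turns the inner loop of the pixel-domain filter into a single array access.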
Fig. 10(a), Fig. 10(b) and Fig. 10(c) respectively show the original subtitling image, the subtitling image magnified by prior-art bicubic interpolation, and the subtitling image magnified by the subtitling-image processing method of the embodiment of the present application.
As shown in Fig. 10(a), Fig. 10(b) and Fig. 10(c), compared with Fig. 10(a), Fig. 10(c) has higher resolution and stronger ability to express detail; compared with Fig. 10(b), the displayed subtitles have smooth edges, reduced jaggedness and blurring, and improved fineness.
In the subtitling-image processing method provided by the present application, the magnification factor of the first subtitling image is obtained; edge-directed interpolation up-sampling is then performed on the YUV component of the first subtitling image according to the magnification factor, obtaining the first YUV component; interpolation up-sampling is performed on the transparency component of the first subtitling image according to the magnification factor, obtaining the first transparency component; the first YUV component and the first transparency component are then synthesized, obtaining the second subtitling image; and finally the image for display is obtained based on the second subtitling image. As a result, the displayed subtitles have smooth edges, reduced jaggedness and blurring, and improved fineness. The method can not only provide users with different subtitle magnification modes to meet different subtitle reading requirements, but also deliver subtitle information with a high-quality visual effect. In the super-resolution reconstruction of the subtitles, the pixel-domain filter is implemented by table lookup, which greatly reduces the complexity of the algorithm and requires little running time, so the method can stay fully synchronized with the video during playback and can be used in practical embedded multimedia playing systems, for example in high-definition or ultra-high-definition televisions.
Figure 11 shows an exemplary block diagram of a processing unit 1100 for subtitling images according to an embodiment of the present application.
As shown in Figure 11, the processing unit 1100 for subtitling images may include, but is not limited to: an acquiring unit 1101, a YUV component up-sampling unit 1102, a transparency component up-sampling unit 1103, a synthesis unit 1104 and a generation unit 1105. Those skilled in the art should appreciate that the acquiring unit 1101, the YUV component up-sampling unit 1102, the transparency component up-sampling unit 1103, the synthesis unit 1104 and the generation unit 1105 can be arranged in the same processor or distributed across different networked processors.
The acquiring unit 1101 may be used to obtain the magnification factor of the first subtitling image. The YUV component up-sampling unit 1102 may be used to perform edge-directed interpolation up-sampling on the YUV component of the first subtitling image according to the magnification factor, obtaining the first YUV component. The transparency component up-sampling unit 1103 may be used to perform interpolation up-sampling on the transparency component of the first subtitling image according to the magnification factor, obtaining the first transparency component. The synthesis unit 1104 may be used to synthesize the first YUV component and the first transparency component, obtaining the second subtitling image. The generation unit 1105 may be used to obtain the image for display based on the second subtitling image.
Further, when the acquiring unit 1101 obtains the magnification factor of the first subtitling image, the first subtitling image obtained may include, but is not limited to: the original subtitling image and/or a region of interest within the original subtitling image selected by the user; the magnification factor obtained may include, but is not limited to: the ratio of the full-screen video size to the original video size and/or a received magnification factor set by the user.
Further, the YUV component up-sampling unit 1102 may include, but is not limited to, a preprocessing unit 1106 and a preprocessed YUV component up-sampling unit 1107. The preprocessing unit 1106 may be used to perform edge-directed interpolation up-sampling, correction and down-sampling on the YUV component of the first subtitling image, obtaining the preprocessed YUV component. The preprocessed YUV component up-sampling unit 1107 may be used to perform edge-directed interpolation up-sampling on the preprocessed YUV component, obtaining the first YUV component.
Further, when the preprocessing unit 1106 performs edge-directed interpolation up-sampling on the YUV component of the first subtitling image, or the preprocessed YUV component up-sampling unit 1107 performs edge-directed interpolation up-sampling on the preprocessed YUV component, the edge-directed interpolation up-sampling may include, but is not limited to: taking each pixel of the interpolation image as a first-pass interpolation reference pixel, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a first-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the first-pass interpolation pixel; then taking the pixels of the interpolation image and the first-pass interpolation pixels as second-pass interpolation reference pixels, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a second-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the second-pass interpolation pixel.
Further, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a first-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the first-pass interpolation pixel may include, but is not limited to: if the interpolation direction is the 45° or 135° direction of the first-pass interpolation pixel, interpolating between the two first-pass interpolation reference pixels that lie on the interpolation direction and are adjacent to the first-pass interpolation pixel; if the interpolation direction is the 0° or 90° direction of the first-pass interpolation pixel, interpolating among the four first-pass interpolation reference pixels adjacent to the first-pass interpolation pixel.
Further, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions around a second-pass interpolation pixel, interpolating along the direction perpendicular to the maximum gradient difference, and obtaining the second-pass interpolation pixel may include, but is not limited to: if the interpolation direction is the 45° or 135° direction of the second-pass interpolation pixel, interpolating among the four second-pass interpolation reference pixels adjacent to the second-pass interpolation pixel; if the interpolation direction is the 0° or 90° direction of the second-pass interpolation pixel, interpolating between the two second-pass interpolation reference pixels that lie on the interpolation direction and are adjacent to the second-pass interpolation pixel.
Further, after the preprocessing unit 1106 performs edge-directed interpolation up-sampling on the YUV component of the first subtitling image, the correction it performs on the first subtitling image after interpolation up-sampling may include, but is not limited to: first, in the first subtitling image after interpolation up-sampling, obtaining the gradient G1 in the horizontal direction and the gradient G2 in the vertical direction of each pixel of the original first subtitling image; then establishing the gradient-to-weight mapping model; next, based on G1, G2 and the gradient-to-weight mapping model, obtaining the weight W1 of the horizontal direction and the weight W2 of the vertical direction for the pixel of the original first subtitling image; and finally, based on W1, W2 and the second-pass interpolation pixels obtained with the first subtitling image as the interpolation image, correcting the pixel of the original first subtitling image by weighting.
Further, the generation unit 1105 may include but is not limited to: a correction subunit 1108 for performing iterative back-projection (IBP) correction on the second caption image to obtain a third caption image; and a first generation subunit 1109 for obtaining the image for display based on the third caption image.
Further, the correction subunit 1108 may include but is not limited to: a first obtaining subunit 1110 for obtaining a simulated low-resolution image of an initial estimate image, the initial estimate image being the second caption image; a comparison subunit 1111 for comparing the simulated low-resolution image with the first caption image; a second obtaining subunit for obtaining a simulation error image according to the comparison result; and an iterative correction subunit 1112 for iteratively correcting the second caption image according to the simulation error image, to obtain the third caption image.
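The IBP loop above can be sketched as follows. Block averaging stands in for the patent's unspecified degradation model, and `beta` is a hypothetical step size:

```python
import numpy as np

def ibp_refine(lr, hr_init, scale=2, iters=10, beta=0.5):
    """Iterative back-projection: simulate a low-resolution image from the
    current high-resolution estimate, compare it with the real low-resolution
    caption image `lr`, and feed the error back up. `hr_init` must have shape
    (lr.shape[0] * scale, lr.shape[1] * scale)."""
    hr = hr_init.astype(np.float64).copy()
    for _ in range(iters):
        # Simulated LR image: average each scale x scale block of the estimate.
        sim = hr.reshape(lr.shape[0], scale, lr.shape[1], scale).mean(axis=(1, 3))
        err = lr.astype(np.float64) - sim  # simulation error image
        # Back-project: up-sample the error by pixel replication and apply it.
        up = np.repeat(np.repeat(err, scale, axis=0), scale, axis=1)
        hr += beta * up                    # iterative correction step
    return hr
```

Each pass shrinks the residual between the simulated and the real low-resolution image, so the high-resolution estimate converges toward one that is consistent with the first caption image.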
Further, the first generation subunit 1109 may include but is not limited to: a bilateral filtering subunit 1113 for performing bilateral filtering on the third caption image to obtain a fourth caption image, and using the fourth caption image as the image for display.
Further, performing bilateral filtering on the third caption image to obtain the fourth caption image may include but is not limited to: filtering the third caption image in the horizontal direction with a spatial-domain filter to obtain a once-filtered caption image, then filtering the once-filtered caption image in the vertical direction to obtain a spatially filtered caption image; in the spatially filtered caption image, for each pixel in a window of side length r centered on the pixel to be filtered, computing the absolute value of its actual difference from the pixel to be filtered; establishing in advance a lookup table mapping every possible absolute difference to a pixel-domain filter weight; and, based on the actual absolute differences and the lookup table, applying pixel-domain filtering to each pixel of the spatially filtered caption image, to obtain the fourth caption image.
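The steps above, separable spatial filtering followed by table-driven range filtering, can be sketched like this; the Gaussian kernels and the sigma values are assumed choices, not values from the patent:

```python
import numpy as np

def bilateral_sketch(img, r=1, sigma_s=1.0, sigma_r=10.0):
    """Bilateral filtering: horizontal then vertical spatial-domain passes,
    followed by pixel-domain (range) filtering using a precomputed
    |difference| -> weight lookup table for 8-bit differences."""
    img = img.astype(np.float64)
    # 1) Separable spatial filter: horizontal pass, then vertical pass.
    offs = np.arange(-r, r + 1)
    k = np.exp(-(offs ** 2) / (2 * sigma_s ** 2))
    k /= k.sum()
    conv = lambda v: np.convolve(np.pad(v, r, mode='edge'), k, mode='valid')
    tmp = np.apply_along_axis(conv, 1, img)   # horizontal direction
    sp = np.apply_along_axis(conv, 0, tmp)    # vertical direction
    # 2) Lookup table: range weight for every possible absolute difference.
    table = np.exp(-(np.arange(256) ** 2) / (2 * sigma_r ** 2))
    # 3) Range filtering in a (2r+1) x (2r+1) window around each pixel.
    pad = np.pad(sp, r, mode='edge')
    out = np.empty_like(sp)
    h, w = sp.shape
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * r + 1, x:x + 2 * r + 1]
            d = np.minimum(np.abs(win - sp[y, x]).astype(int), 255)
            wts = table[d]  # weight looked up from the precomputed table
            out[y, x] = (wts * win).sum() / wts.sum()
    return out
```

The lookup table is what makes the range pass cheap: the exponential is evaluated once for the 256 possible differences instead of once per pixel pair, which matters on set-top-box class hardware.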
In the caption-image processing method and device provided by the present application, an acquisition unit obtains the magnification factor of a first caption image; a YUV-component up-sampling unit then performs edge-directed interpolation up-sampling on the YUV components of the first caption image according to the magnification factor, to obtain first YUV components; a transparency-component up-sampling unit performs interpolation up-sampling on the transparency component of the first caption image according to the magnification factor, to obtain a first transparency component; a synthesis unit then synthesizes the first YUV components and the first transparency component, to obtain a second caption image; finally, a generation unit obtains the image for display based on the second caption image. The displayed captions thus have smooth edges, reduced jagging and blurring, and improved fineness.
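The overall pipeline just summarized can be sketched end to end as below. Plain pixel replication stands in for the edge-directed interpolation, IBP correction and bilateral filtering of the full method, and the function name and (Y, U, V, alpha) interface are assumptions:

```python
import numpy as np

def process_caption(y, u, v, alpha, factor):
    """Up-sample the Y, U, V components and the transparency (alpha)
    component of a caption image by `factor`, then synthesize them into
    one (H*factor, W*factor, 4) caption image for display."""
    up = lambda c: np.repeat(np.repeat(c, factor, axis=0), factor, axis=1)
    # YUV components: edge-directed up-sampling in the full method.
    y2, u2, v2 = up(y), up(u), up(v)
    # Transparency component: up-sampled with the same magnification factor.
    a2 = up(alpha)
    # Synthesis: the second caption image combines YUV and transparency.
    return np.stack([y2, u2, v2, a2], axis=-1)
```

Keeping the transparency component in the same pipeline as YUV is the point of the synthesis step: both are magnified by the same factor, so the alpha mask stays registered with the enlarged glyph edges.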
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be arranged in a processor; for example, a processor may include but is not limited to an acquisition unit, a YUV-component up-sampling unit, a transparency-component up-sampling unit, a synthesis unit and a generation unit. The names of these units do not, in certain cases, limit the units themselves; for example, the acquisition unit may also be described as "a unit for obtaining the magnification factor of a first caption image".
In another aspect, the present application further provides a computer-readable storage medium, which may be the computer-readable storage medium included in the device of the above embodiments, or may exist separately, not assembled into a terminal. The computer-readable storage medium stores one or more programs, which are executed by one or more processors to perform the caption-image processing method described in the present application.
The above is only a description of the preferred embodiments of the present application and of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the particular combinations of the above technical features; it should also cover, without departing from the inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, solutions in which the above features are interchanged with technical features of similar function disclosed in (but not limited to) the present application.

Claims (24)

1. A processing method for a caption image, characterized in that the method comprises:
obtaining a magnification factor of a first caption image;
performing edge-directed interpolation up-sampling on YUV components of the first caption image according to the magnification factor, to obtain first YUV components;
performing interpolation up-sampling on a transparency component of the first caption image according to the magnification factor, to obtain a first transparency component;
synthesizing the first YUV components and the first transparency component, to obtain a second caption image;
obtaining an image for display based on the second caption image.
2. The method according to claim 1, characterized in that performing edge-directed interpolation up-sampling on the YUV components of the first caption image according to the magnification factor to obtain the first YUV components comprises:
performing edge-directed interpolation up-sampling, correction and down-sampling on the YUV components of the first caption image, to obtain preprocessed YUV components;
performing edge-directed interpolation up-sampling on the preprocessed YUV components, to obtain the first YUV components.
3. The method according to claim 2, characterized in that the edge-directed interpolation up-sampling comprises:
taking each pixel of the image to be interpolated as a first-interpolation reference pixel, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of a first-interpolation pixel, and performing interpolation along the direction perpendicular to the maximum gradient difference, to obtain the first-interpolation pixel;
taking the pixels of the image to be interpolated and the first-interpolation pixels as second-interpolation reference pixels, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of a second-interpolation pixel, and performing interpolation along the direction perpendicular to the maximum gradient difference, to obtain the second-interpolation pixel.
4. The method according to claim 3, characterized in that obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of the first-interpolation pixel and performing interpolation along the direction perpendicular to the maximum gradient difference to obtain the first-interpolation pixel comprises:
if the interpolation direction is the 45° or 135° direction of the first-interpolation pixel, performing interpolation on the two first-interpolation reference pixels located on the interpolation direction and adjacent to the first-interpolation pixel, to obtain the first-interpolation pixel;
if the interpolation direction is the 0° or 90° direction of the first-interpolation pixel, performing interpolation on the four first-interpolation reference pixels adjacent to the first-interpolation pixel, to obtain the first-interpolation pixel.
5. The method according to claim 3 or 4, characterized in that obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of the second-interpolation pixel and performing interpolation along the direction perpendicular to the maximum gradient difference to obtain the second-interpolation pixel comprises:
if the interpolation direction is the 45° or 135° direction of the second-interpolation pixel, performing interpolation on the four second-interpolation reference pixels adjacent to the second-interpolation pixel, to obtain the second-interpolation pixel;
if the interpolation direction is the 0° or 90° direction of the second-interpolation pixel, performing interpolation on the two second-interpolation reference pixels located on the interpolation direction and adjacent to the second-interpolation pixel, to obtain the second-interpolation pixel.
6. The method according to any one of claims 3 to 5, characterized in that the correction performed after the edge-directed interpolation up-sampling of the YUV components of the first caption image comprises:
in the first caption image after interpolation up-sampling, obtaining the horizontal gradient G1 and the vertical gradient G2 of each pixel of the original first caption image;
establishing a gradient-to-weight mapping model;
obtaining the horizontal weight W1 and the vertical weight W2 of each pixel of the original first caption image based on G1, G2 and the gradient-to-weight mapping model;
applying a weighted correction to the pixels of the original first caption image based on W1, W2 and the second-interpolation pixels of the interpolation image derived from the first caption image.
7. The method according to any one of claims 1 to 6, characterized in that obtaining the image for display based on the second caption image comprises:
performing iterative back-projection (IBP) correction on the second caption image to obtain a third caption image, and obtaining the image for display based on the third caption image.
8. The method according to claim 7, characterized in that performing iterative back-projection (IBP) correction on the second caption image to obtain the third caption image comprises:
obtaining a simulated low-resolution image of an initial estimate image, the initial estimate image being the second caption image;
comparing the simulated low-resolution image with the first caption image;
obtaining a simulation error image according to the comparison result;
iteratively correcting the second caption image according to the simulation error image, to obtain the third caption image.
9. The method according to claim 7, characterized in that obtaining the image for display based on the third caption image comprises:
performing bilateral filtering on the third caption image, to obtain a fourth caption image;
using the fourth caption image as the image for display.
10. The method according to claim 9, characterized in that performing bilateral filtering on the third caption image to obtain the fourth caption image comprises:
filtering the third caption image in the horizontal direction with a spatial-domain filter to obtain a once-filtered caption image, and filtering the once-filtered caption image in the vertical direction to obtain a spatially filtered caption image;
in the spatially filtered caption image, for each pixel in a window of side length r centered on the pixel to be filtered, computing the absolute value of its actual difference from the pixel to be filtered;
establishing in advance a lookup table mapping every possible absolute difference to a pixel-domain filter weight;
applying pixel-domain filtering to each pixel of the spatially filtered caption image based on the actual absolute differences and the lookup table, to obtain the fourth caption image.
11. The method according to claim 1, characterized in that the first caption image comprises:
an original caption image; and/or
a region of interest selected by a user in the original caption image.
12. The method according to claim 11, characterized in that the magnification factor comprises:
the ratio of the size of the video played in full screen to the original size of the video; and/or
a received user-set magnification factor.
13. A processing device for a caption image, characterized in that the device comprises:
an acquisition unit for obtaining a magnification factor of a first caption image;
a YUV-component up-sampling unit for performing edge-directed interpolation up-sampling on YUV components of the first caption image according to the magnification factor, to obtain first YUV components;
a transparency-component up-sampling unit for performing interpolation up-sampling on a transparency component of the first caption image according to the magnification factor, to obtain a first transparency component;
a synthesis unit for synthesizing the first YUV components and the first transparency component, to obtain a second caption image;
a generation unit for obtaining an image for display based on the second caption image.
14. The device according to claim 13, characterized in that the YUV-component up-sampling unit comprises:
a preprocessing unit for performing edge-directed interpolation up-sampling, correction and down-sampling on the YUV components of the first caption image, to obtain preprocessed YUV components;
a preprocessed-YUV-component up-sampling unit for performing edge-directed interpolation up-sampling on the preprocessed YUV components, to obtain the first YUV components.
15. The device according to claim 14, characterized in that the edge-directed interpolation up-sampling comprises:
taking each pixel of the image to be interpolated as a first-interpolation reference pixel, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of a first-interpolation pixel, and performing interpolation along the direction perpendicular to the maximum gradient difference, to obtain the first-interpolation pixel;
taking the pixels of the image to be interpolated and the first-interpolation pixels as second-interpolation reference pixels, obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of a second-interpolation pixel, and performing interpolation along the direction perpendicular to the maximum gradient difference, to obtain the second-interpolation pixel.
16. The device according to claim 15, characterized in that obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of the first-interpolation pixel and performing interpolation along the direction perpendicular to the maximum gradient difference to obtain the first-interpolation pixel comprises:
if the interpolation direction is the 45° or 135° direction of the first-interpolation pixel, performing interpolation on the two first-interpolation reference pixels located on the interpolation direction and adjacent to the first-interpolation pixel, to obtain the first-interpolation pixel;
if the interpolation direction is the 0° or 90° direction of the first-interpolation pixel, performing interpolation on the four first-interpolation reference pixels adjacent to the first-interpolation pixel, to obtain the first-interpolation pixel.
17. The device according to claim 15 or 16, characterized in that obtaining the gradient differences in the 0°, 45°, 90° and 135° directions of the second-interpolation pixel and performing interpolation along the direction perpendicular to the maximum gradient difference to obtain the second-interpolation pixel comprises:
if the interpolation direction is the 45° or 135° direction of the second-interpolation pixel, performing interpolation on the four second-interpolation reference pixels adjacent to the second-interpolation pixel, to obtain the second-interpolation pixel;
if the interpolation direction is the 0° or 90° direction of the second-interpolation pixel, performing interpolation on the two second-interpolation reference pixels located on the interpolation direction and adjacent to the second-interpolation pixel, to obtain the second-interpolation pixel.
18. The device according to any one of claims 15 to 17, characterized in that the correction performed after the edge-directed interpolation up-sampling of the YUV components of the first caption image comprises:
in the first caption image after interpolation up-sampling, obtaining the horizontal gradient G1 and the vertical gradient G2 of each pixel of the original first caption image;
establishing a gradient-to-weight mapping model;
obtaining the horizontal weight W1 and the vertical weight W2 of each pixel of the original first caption image based on G1, G2 and the gradient-to-weight mapping model;
applying a weighted correction to the pixels of the original first caption image based on W1, W2 and the second-interpolation pixels of the interpolation image derived from the first caption image.
19. The device according to any one of claims 13 to 18, characterized in that the generation unit comprises:
a correction subunit for performing iterative back-projection (IBP) correction on the second caption image, to obtain a third caption image;
a first generation subunit for obtaining the image for display based on the third caption image.
20. The device according to claim 19, characterized in that the correction subunit comprises:
a first obtaining subunit for obtaining a simulated low-resolution image of an initial estimate image, the initial estimate image being the second caption image;
a comparison subunit for comparing the simulated low-resolution image with the first caption image;
a second obtaining subunit for obtaining a simulation error image according to the comparison result;
an iterative correction subunit for iteratively correcting the second caption image according to the simulation error image, to obtain the third caption image.
21. The device according to claim 19, characterized in that the first generation subunit comprises:
a bilateral filtering subunit for performing bilateral filtering on the third caption image to obtain a fourth caption image, and using the fourth caption image as the image for display.
22. The device according to claim 21, characterized in that performing bilateral filtering on the third caption image to obtain the fourth caption image comprises:
filtering the third caption image in the horizontal direction with a spatial-domain filter to obtain a once-filtered caption image, and filtering the once-filtered caption image in the vertical direction to obtain a spatially filtered caption image;
in the spatially filtered caption image, for each pixel in a window of side length r centered on the pixel to be filtered, computing the absolute value of its actual difference from the pixel to be filtered;
establishing in advance a lookup table mapping every possible absolute difference to a pixel-domain filter weight;
applying pixel-domain filtering to each pixel of the spatially filtered caption image based on the actual absolute differences and the lookup table, to obtain the fourth caption image.
23. The device according to claim 13, characterized in that the first caption image comprises:
an original caption image; and/or
a region of interest selected by a user in the original caption image.
24. The device according to claim 23, characterized in that the magnification factor comprises:
the ratio of the size of the video played in full screen to the original size of the video; and/or
a received user-set magnification factor.
CN201410798220.3A 2014-12-19 2014-12-19 Processing method and device for caption images Pending CN104410929A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410798220.3A CN104410929A (en) 2014-12-19 2014-12-19 Processing method and device for caption images

Publications (1)

Publication Number Publication Date
CN104410929A true CN104410929A (en) 2015-03-11

Family

ID=52648513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410798220.3A Pending CN104410929A (en) 2014-12-19 2014-12-19 Processing method and device for caption images

Country Status (1)

Country Link
CN (1) CN104410929A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135415A (en) * 2017-04-11 2017-09-05 Qingdao Hisense Electric Co., Ltd. Video caption processing method and processing device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200611209A (en) * 2004-09-24 2006-04-01 Realtek Semiconductor Corp Method and apparatus for scaling image block
US20070177466A1 (en) * 2006-01-31 2007-08-02 Hideo Ando Information reproducing system using information storage medium
CN102804228A (en) * 2010-03-18 2012-11-28 皇家飞利浦电子股份有限公司 Functional image data enhancement and/or enhancer


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shen Liping: "Research on Interpolation-Based Digital Image Processing Technology", China Masters' Theses Full-text Database, Information Science and Technology *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150311