CN105163043A - Method and device for converting picture into output video - Google Patents

Method and device for converting picture into output video

Info

Publication number
CN105163043A
Authority
CN
China
Prior art keywords
block
information entropy
output video
target region
row
Prior art date
Legal status
Granted
Application number
CN201510549518.5A
Other languages
Chinese (zh)
Other versions
CN105163043B (en)
Inventor
李勇鹏
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201510549518.5A priority Critical patent/CN105163043B/en
Publication of CN105163043A publication Critical patent/CN105163043A/en
Application granted granted Critical
Publication of CN105163043B publication Critical patent/CN105163043B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

An embodiment of the invention provides a method and a device for converting a picture into an output video. The method comprises: dividing an original picture into a plurality of identification blocks; calculating the information entropy of each identification block and calculating a threshold from the information entropies; taking a region where the identification blocks whose information entropy is greater than the threshold are dense as a target region; and segmenting the target region and converting it into the output video. With the method and device of the embodiment, a visually salient region of the original picture can be determined automatically, and the target region is segmented and converted into the output video, enhancing the visual effect of the output video.

Description

Method and apparatus for converting a picture into an output video
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and apparatus for converting a picture into an output video.
Background technology
With the progress of electronic technology, smartphones and tablet computers have become increasingly widespread, and users can take photos anywhere, at any time. Video-generation applications can organize photos into an ordered sequence and produce an output video on a given theme, improving viewability.
Users shoot many photos, but in most cases care about only some part of each one. When making an output video on a theme, the region of interest is usually cropped manually by the user on the original picture to serve as the displayed content. This is laborious and cannot be done quickly in batches.
The technical problem that those skilled in the art currently need to solve is therefore: how to determine the user's region of interest on the original picture automatically, and present the content of that region in the form of a video.
Summary of the invention
Embodiments of the present invention provide a method and apparatus for converting a picture into an output video, to solve the problem of automatically determining the region of interest and presenting its content as a video. The invention discloses a method for converting a picture into an output video, comprising:
dividing an original picture into a plurality of identification blocks;
calculating the information entropy of each identification block, and calculating a threshold from the information entropies;
taking a region where the identification blocks whose information entropy is greater than the threshold are dense as a target region;
segmenting the target region, and converting the target region into an output video.
Preferably, calculating the information entropy of each identification block and calculating the threshold from the information entropies comprises:
calculating the information entropy of each identification block;
calculating the mean value of the information entropies;
calculating the threshold from the mean value;
wherein the information entropy E(i, j) of each identification block is calculated as

E(i, j) = -\sum_{m=0}^{255} p(m) \log p(m)

where i is the row and j the column of the identification block, m is the brightness value of a pixel with range 0-255, and p(m) is the probability that a pixel of the identification block has brightness value m;
the mean value Eavg of the information entropies is calculated as

Eavg = \frac{\sum_{i=0,\,j=0}^{i<W,\,j<H} E(i, j)}{W \times H}

where W is the number of identification blocks per row of the original picture and H is the number of identification blocks per column;
the threshold T is calculated from the mean value as

T = a \times Eavg

where a is a constant in (0, 1).
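As an illustrative sketch only (not part of the claims; the function names and the choice a = 0.5 are assumptions, since the patent only requires 0 < a < 1), the entropy and threshold formulas above can be computed as:

```python
import math
from collections import Counter

def block_entropy(pixels):
    """E(i, j) = -sum_m p(m) * log p(m) for one identification block.

    `pixels` is a flat list of brightness values in 0-255; p(m) is the
    empirical probability of brightness m among the block's pixels.
    Brightness values that do not occur contribute nothing to the sum.
    """
    n = len(pixels)
    return -sum((c / n) * math.log(c / n) for c in Counter(pixels).values())

def entropy_threshold(entropies, a=0.5):
    """T = a * Eavg, where Eavg is the mean entropy over all blocks."""
    return a * (sum(entropies) / len(entropies))
```

A single-valued block has entropy 0, and a block split evenly between two brightness values has entropy log 2, as the formula predicts.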
Preferably, after calculating the information entropy of each identification block and the threshold, and before taking the dense region of identification blocks whose information entropy is greater than the threshold as the target region, the method further comprises:
comparing the information entropy of each identification block with the threshold; if the information entropy is greater than the threshold, the identification block is a visually salient block;
filtering the visually salient blocks to obtain object blocks.
Preferably, taking the region where the identification blocks whose information entropy is greater than the threshold are dense as the target region comprises:
counting the object blocks in each row, and determining the rows in which object blocks exist;
within those rows, counting the object blocks in each column, and determining the columns in which object blocks exist;
taking the region where the rows and columns containing object blocks intersect as the target region.
Preferably, segmenting the target region and converting it into the output video comprises:
obtaining the parameters of the target region and the parameters of the output video, and determining the output-video effect from them;
calculating a scaling ratio from the parameters of the target region, the parameters of the output video, and the output-video effect;
scaling the target region by the scaling ratio to generate a region to be segmented;
segmenting the region to be segmented, and generating the output video in segmentation order.
In another aspect, an embodiment of the present invention further provides an apparatus for converting a picture into an output video, comprising:
an identification-block division module, for dividing an original picture into a plurality of identification blocks;
an information-entropy calculation module, for calculating the information entropy of each identification block and calculating a threshold from the information entropies;
a target-region determination module, for taking a region where the identification blocks whose information entropy is greater than the threshold are dense as a target region;
a video conversion module, for segmenting the target region and converting it into an output video.
Preferably, the information-entropy calculation module comprises:
an information-entropy calculation unit, for calculating the information entropy of each identification block;
a mean-value calculation unit, for calculating the mean value of the information entropies;
a threshold calculation unit, for calculating the threshold from the mean value;
wherein the information-entropy calculation unit calculates the information entropy E(i, j) of each identification block as

E(i, j) = -\sum_{m=0}^{255} p(m) \log p(m)

where i is the row and j the column of the identification block, m is the brightness value of a pixel with range 0-255, and p(m) is the probability that a pixel of the identification block has brightness value m;
the mean-value calculation unit calculates the mean value Eavg of the information entropies as

Eavg = \frac{\sum_{i=0,\,j=0}^{i<W,\,j<H} E(i, j)}{W \times H}

where W is the number of identification blocks per row of the original picture and H is the number of identification blocks per column;
the threshold calculation unit calculates the threshold T from the mean value as

T = a \times Eavg

where a is a constant in (0, 1).
Preferably, the apparatus further comprises:
a visually-salient-block determination module, for comparing the information entropy of each identification block with the threshold; if the information entropy is greater than the threshold, the identification block is a visually salient block;
an object-block acquisition module, for filtering the visually salient blocks to obtain object blocks.
Preferably, the target-region determination module comprises:
a row determination unit, for counting the object blocks in each row and determining the rows in which object blocks exist;
a column determination unit, for counting, within those rows, the object blocks in each column, and determining the columns in which object blocks exist;
an intersection-region determination unit, for taking the region where the rows and columns containing object blocks intersect as the target region.
Preferably, the video conversion module comprises:
a video-effect determination unit, for obtaining the parameters of the target region and of the output video, and determining the output-video effect from them;
a scaling-ratio calculation unit, for calculating a scaling ratio from the parameters of the target region, the parameters of the output video, and the output-video effect;
a region-to-be-segmented generation unit, for scaling the target region by the scaling ratio to generate a region to be segmented;
an output-video generation unit, for segmenting the region to be segmented and generating the output video in segmentation order.
Compared with the prior art, embodiments of the present invention have the following advantage:
Because the information entropy of a visually salient region tends to be large, the original picture can be divided into a plurality of identification blocks; the information entropy of each block and a threshold are calculated; the region where blocks whose entropy is greater than the threshold are dense is taken as the target region, so that the visually salient region of the original picture is determined automatically; and the target region is segmented and converted into the output video, improving the visual effect of the output video.
Accompanying drawing explanation
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the method for converting a picture into an output video provided by Embodiment 1;
Fig. 2 is a diagram of identification-block division provided by Embodiment 1;
Fig. 3 is a flow diagram of the method for converting a picture into an output video provided by Embodiment 2;
Fig. 4 is a flow diagram of the method for converting a picture into an output video provided by Embodiment 3;
Fig. 5 is a diagram of a target region provided by Embodiment 3;
Fig. 6 is a scaling diagram provided by Embodiment 3;
Fig. 7 is a structural diagram of the apparatus for converting a picture into an output video provided by Embodiment 4;
Fig. 8 is a structural diagram of the apparatus for converting a picture into an output video provided by Embodiment 5;
Fig. 9 is a structural diagram of a video conversion module provided by Embodiment 6.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.
Embodiment 1
Embodiment 1 of the present invention provides a method for converting a picture into an output video which, as shown in Fig. 1, can comprise the following steps:
Step S101: divide the original picture into a plurality of identification blocks.
In this step, the original picture can be divided into identification blocks of equal area. As shown in Fig. 2, w is the width of the original picture, h is its height, and the identification blocks produced by the division are squares of side length N.
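As a rough illustration of the block division above (a sketch only; the function name is illustrative, and the patent does not say how edge remainders are handled, so partial blocks at the right and bottom edges are simply dropped here; i indexes columns and j rows):

```python
def divide_into_blocks(w, h, n):
    """Index the n x n square identification blocks of a w x h picture.

    Returns (i, j) pairs with i the block column and j the block row;
    W = w // n blocks fit per row and H = h // n blocks per column.
    """
    W, H = w // n, h // n
    return [(i, j) for j in range(H) for i in range(W)]
```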
Step S102: calculate the information entropy of each identification block, and calculate a threshold from the information entropies.
In this step, the information entropy of each identification block can be calculated, then the mean value of the entropies, and then the threshold from the mean value.
Step S103: take the region where the identification blocks whose information entropy is greater than the threshold are dense as the target region.
In this step, the threshold serves as the criterion for judging whether an identification block is visually salient. In general, identification blocks whose entropy is greater than the threshold are visually salient, so a region where such blocks are dense is a visually salient region.
Step S104: segment the target region, and convert the target region into the output video.
In this step, the parameters of the target region and of the output video can be obtained and the output-video effect determined from them; a scaling ratio is calculated from the parameters of the target region, the parameters of the output video, and the output-video effect; the target region is scaled by the ratio to generate a region to be segmented; and the region to be segmented is split, with the output video generated in segmentation order.
With the method of Embodiment 1, because the information entropy of a visually salient region tends to be large, the original picture is divided into identification blocks; the entropy of each block and a threshold are calculated; the dense region of blocks whose entropy is greater than the threshold is taken as the target region, automatically locating the visually salient region of the original picture; and the target region is segmented and converted into an output video, improving the subjective quality of the output video.
Embodiment 2
Embodiment 2 of the present invention provides a method for converting a picture into an output video which, as shown in Fig. 3, can comprise the following steps:
Step S301: divide the original picture into a plurality of identification blocks.
Step S302: calculate the information entropy of each identification block.
In this step, the information entropy E(i, j) of each identification block is calculated by formula (1):

E(i, j) = -\sum_{m=0}^{255} p(m) \log p(m)    (1)

where i is the row and j the column of identification block B(i, j), m is the brightness value of a pixel with range 0-255, and p(m) is the probability that a pixel of the identification block has brightness value m.
Step S303: calculate the mean value of the information entropies.
In this step, the mean value Eavg of the information entropies is calculated by formula (2):

Eavg = \frac{\sum_{i=0,\,j=0}^{i<W,\,j<H} E(i, j)}{W \times H}    (2)

where W is the number of identification blocks per row of the original picture, specifically W = w/N with w the width of the original picture; H is the number of identification blocks per column, specifically H = h/N with h the height of the original picture; and N is the side length of an identification block.
Step S304: calculate the threshold from the mean value.
In this step, the threshold T is calculated from the mean value by formula (3):

T = a \times Eavg    (3)

where a is a constant in (0, 1).
Step S305: compare the information entropy of each identification block with the threshold; if the entropy is greater than the threshold, the block is a visually salient block.
In this step, whether an identification block is visually salient can be judged by the salient-block map map(i, j) of formula (4):

map(i, j) = 1 if E(i, j) > T, else map(i, j) = 0    (4)

That is, if the entropy of an identification block is greater than the threshold, the block is a visually salient block; if its entropy is less than or equal to the threshold, it is not.
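The thresholding of formula (4) can be sketched as follows (illustrative only; the dict-based representation of the map is an assumption, not the patent's data structure):

```python
def salient_map(entropy, threshold):
    """Binary salient-block map: map(i, j) = 1 where E(i, j) > T, else 0.

    `entropy` maps block coordinates (i, j) to their information entropy.
    """
    return {ij: 1 if e > threshold else 0 for ij, e in entropy.items()}
```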
Step S306: filter the visually salient blocks to obtain object blocks.
In this step, the visually salient block map map(i, j) can be filtered according to formula (5) to obtain the object-block map fmap(i, j):
where p is the abscissa of an identification block and q its ordinate.
Step S307: count the object blocks in each row, and determine the rows in which object blocks exist.
In this step, the number v(j) of object blocks in each row can be calculated by formula (6):

v(j) = \sum_{p=0}^{W-1} fmap(p, j)    (6)

and the maximum vmax of v(j) by formula (7):

vmax = max(v(j))    (7)

where j ranges over (0, H-1).
The starting row lu_y in which object blocks exist can be determined by formula (8), and the ending row br_y by formula (9); the rows between lu_y and br_y form the target-region candidate:

lu_y = min(j)    (8)
br_y = max(j)    (9)

taken over the rows satisfying v(j) > vmax \times b, where b is a constant between 0 and 1.
Step S308: within the rows in which object blocks exist, count the object blocks in each column, and determine the columns in which object blocks exist.
In this step, the number h(i) of object blocks in each column of the target-region candidate can be calculated by formula (10):

h(i) = \sum_{p=lu\_y}^{br\_y} fmap(i, p)    (10)

and the maximum of the per-column counts by formula (11):

hmax = max(h(i))    (11)

where i ranges over (0, W-1).
The starting column lu_x in which object blocks exist can be determined by formula (12), and the ending column br_x by formula (13); the columns between lu_x and br_x form the target-region candidate:

lu_x = min(i)    (12)
br_x = max(i)    (13)

taken over the columns satisfying h(i) > hmax \times c, where c is a constant between 0 and 1.
Step S309: take the region where the rows and columns in which object blocks exist intersect as the target region.
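Steps S307 to S309 can be sketched together as one projection routine (illustrative only; the default b = c = 0.5 is an assumption, the patent only requiring constants in (0, 1), and the dict-based fmap is likewise an assumption):

```python
def target_region(fmap, W, H, b=0.5, c=0.5):
    """Bound the dense region of object blocks.

    fmap[(i, j)] is 1 for an object block at column i, row j.  Rows are
    kept where the per-row count v(j) exceeds b * vmax; then, within those
    rows, columns are kept where the per-column count h(i) exceeds
    c * hmax.  Returns (lu_x, lu_y, br_x, br_y).
    """
    v = [sum(fmap.get((i, j), 0) for i in range(W)) for j in range(H)]
    vmax = max(v)
    rows = [j for j in range(H) if v[j] > vmax * b]
    lu_y, br_y = min(rows), max(rows)
    h = [sum(fmap.get((i, j), 0) for j in range(lu_y, br_y + 1))
         for i in range(W)]
    hmax = max(h)
    cols = [i for i in range(W) if h[i] > hmax * c]
    return min(cols), lu_y, max(cols), br_y
```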
Step S310: segment the target region, and convert the target region into the output video.
With the method of Embodiment 2, because the information entropy of a visually salient region tends to be large, the original picture is divided into identification blocks; the entropy of each block and a threshold are calculated; the dense region of blocks whose entropy is greater than the threshold is taken as the target region, locating the visually salient region of the original picture; and the target region is segmented and converted into an output video, improving the visual effect of the output video.
Embodiment 3
Embodiment 3 of the present invention provides a method for converting a picture into an output video which, as shown in Fig. 4, refines step S104 of Embodiment 1 ("segment the target region, and convert the target region into the output video") into the following steps:
Step S401: obtain the parameters of the target region and the parameters of the output video, and determine the output-video effect from them.
As shown in Fig. 5, 1 is the original picture and 2 is the target region. The parameters of the target region can comprise: its height h1, specifically h1 = (br_y - lu_y) * N; its width w1, specifically w1 = (br_x - lu_x) * N; and its coordinates (x, y) in the original picture. The parameters of the output video v can comprise: duration t, width vw, height vh, and frame rate fps. The output-video effect is either a left-right pan or an up-down pan, and can be determined from the parameters of the target region and of the output video by running the following process:

if w1/h1 > (vw + fps×t)/vh and w1/h1 ≥ vw/(vh + fps×t):
    scheme = a
elif w1/h1 ≤ (vw + fps×t)/vh and w1/h1 ≥ vw/(vh + fps×t) and w1/h1 > (vw² + vw×fps×t)/(vh² + vh×fps×t):
    scheme = a
elif w1/h1 > (vw + fps×t)/vh and w1/h1 < vw/(vh + fps×t) and w1/h1 < (vw² + vw×fps×t)/(vh² + vh×fps×t):
    scheme = a
else:
    scheme = b

In the process above, if scheme = a, an output video with a left-right panning effect is generated; if scheme = b, an output video with an up-down panning effect is generated.
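The decision process above can be sketched as follows (illustrative only; the conditions are transcribed as they appear in the text, whose notation is ambiguous in places, and the function name is an assumption):

```python
def output_scheme(w1, h1, vw, vh, fps, t):
    """Pick the pan direction: 'a' = left-right, 'b' = up-down.

    w1, h1: width and height of the target region; vw, vh, fps, t:
    width, height, frame rate, and duration of the output video.
    """
    r = w1 / h1  # aspect ratio of the target region
    if r > (vw + fps * t) / vh and r >= vw / (vh + fps * t):
        return 'a'
    if (r <= (vw + fps * t) / vh and r >= vw / (vh + fps * t)
            and r > (vw ** 2 + vw * fps * t) / (vh ** 2 + vh * fps * t)):
        return 'a'
    if (r > (vw + fps * t) / vh and r < vw / (vh + fps * t)
            and r < (vw ** 2 + vw * fps * t) / (vh ** 2 + vh * fps * t)):
        return 'a'
    return 'b'
```

For instance, a very wide target region falls into the first branch (left-right pan), while a very tall one falls through to the up-down pan.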
Step S402: calculate the scaling ratio from the parameters of the target region, the parameters of the output video, and the output-video effect.
In this step, if the output-video effect is a left-right pan, the scaling ratio r is calculated from the parameters of the target region and the output video by formula (1):

r = max((vw + fps*t)/w1, vh/h1)    (1)

where vw is the width of the output video, fps its frame rate, t its duration, vh its height, w1 the width of the target region, and h1 the height of the target region.
If the output-video effect is an up-down pan, the scaling ratio r is calculated by formula (2):

r = max((vh + fps*t)/h1, vw/w1)    (2)

with the same notation.
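Formulas (1) and (2) of this embodiment can be sketched directly (illustrative only; the function name and the scheme labels 'a'/'b' follow the process above):

```python
def scale_ratio(scheme, w1, h1, vw, vh, fps, t):
    """Scaling ratio r: the vw x vh crop, panned one pixel per frame
    over fps*t frames, must fit inside the scaled target region."""
    if scheme == 'a':                          # left-right pan, formula (1)
        return max((vw + fps * t) / w1, vh / h1)
    return max((vh + fps * t) / h1, vw / w1)   # up-down pan, formula (2)
```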
Step S403: scale the target region by the scaling ratio to generate the region to be segmented.
As shown in Fig. 6, 2 is the target region and 3 is the region to be segmented, generated by scaling according to the scaling ratio r.
Step S404: segment the region to be segmented, and generate the output video in segmentation order.
In this step, the left-to-right and right-to-left segmentation orders yield an output video with a left-right panning effect, and the top-to-bottom and bottom-to-top segmentation orders yield an output video with an up-down panning effect.
With the method of Embodiment 3, the region of interest fills the whole frame of the output video while keeping its natural proportions, without padding black borders, so the content of the region of interest is presented in an optimal way.
For clarity, the embodiment provides four segmentation orders: left to right, right to left, top to bottom, and bottom to top.
Left to right: on the region to be segmented, take (0, 0), (1, 0), (2, 0), ..., (fps*t-1, 0) in turn as the top-left vertex and crop pictures c0, c1, ..., c_{fps*t-1} of width vw and height vh; combining c0, c1, ..., c_{fps*t-1} in order yields the output video v. Here fps is the frame rate of the output video, t its duration, vw its width, and vh its height.
Right to left: take (gw, 0), (gw-1, 0), (gw-2, 0), ..., (gw-fps*t+1, 0) in turn as the top-left vertex, crop pictures c0, c1, ..., c_{fps*t-1} of width vw and height vh, and combine them in order to obtain the output video v, where gw is the width of the region to be segmented.
Top to bottom: take (0, 0), (0, 1), (0, 2), ..., (0, fps*t-1) in turn as the top-left vertex, crop pictures c0, c1, ..., c_{fps*t-1} of width vw and height vh, and combine them in order to obtain the output video v.
Bottom to top: take (0, gh), (0, gh-1), (0, gh-2), ..., (0, gh-fps*t+1) in turn as the top-left vertex, crop pictures c0, c1, ..., c_{fps*t-1} of width vw and height vh, and combine them in order to obtain the output video v, where gh is the height of the region to be segmented.
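The four traversal orders can be sketched as one routine generating the top-left vertex of each crop (illustrative only; the function and mode names are assumptions, and gw and gh denote the width and height of the region to be segmented; in the vertical case the text writes gw, which is read here as the region height):

```python
def pan_crops(fps, t, gw, gh, mode):
    """Top-left vertices of the fps*t crops c0 .. c_{fps*t-1}, one per
    frame; combining the vw x vh crops in order yields the output video."""
    k = fps * t
    if mode == 'left_to_right':
        return [(x, 0) for x in range(k)]
    if mode == 'right_to_left':
        return [(gw - x, 0) for x in range(k)]
    if mode == 'top_to_bottom':
        return [(0, y) for y in range(k)]
    if mode == 'bottom_to_top':
        return [(0, gh - y) for y in range(k)]
    raise ValueError(mode)
```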
Embodiment 4
Embodiment 4 of the present invention provides an apparatus for converting a picture into an output video that can perform the method of Embodiment 1. As shown in Fig. 7, it can comprise the following modules: an identification-block division module 71, an information-entropy calculation module 72, a target-region determination module 73, and a video conversion module 74.
The identification-block division module 71 divides the original picture into a plurality of identification blocks; the information-entropy calculation module 72 calculates the information entropy of each identification block and a threshold from the entropies; the target-region determination module 73 takes the dense region of identification blocks whose entropy is greater than the threshold as the target region; the video conversion module 74 segments the target region and converts it into the output video.
In the identification-block division module 71, the original picture can be divided into identification blocks of equal area.
In the information-entropy calculation module 72, the entropy of each identification block can be calculated, then the mean value of the entropies, and then the threshold from the mean value.
In the target-region determination module 73, the threshold can serve as the criterion for judging whether an identification block is visually salient; in general, blocks whose entropy is greater than the threshold are visually salient, so a region where such blocks are dense is a visually salient region.
In the video conversion module 74, the parameters of the target region and of the output video can be obtained and the output-video effect determined from them; a scaling ratio is calculated from the parameters of the target region, the parameters of the output video, and the output-video effect; the target region is scaled by the ratio to generate a region to be segmented; and the region to be segmented is split, with the output video generated in segmentation order.
With the apparatus of Embodiment 4, because the information entropy of a visually salient region tends to be large, the original picture is divided into identification blocks; the entropy of each block and a threshold are calculated; the dense region of blocks whose entropy is greater than the threshold is taken as the target region, automatically locating the visually salient region; and the target region is segmented and converted into an output video, improving the visual effect of the output video.
Embodiment 5
The embodiment of the present invention five provides the device that a kind of picture is converted to output video, and the picture that can perform the embodiment of the present invention two provides is converted to the method for output video, as shown in Figure 8, can comprise with lower module:
Home block divides module 81, comentropy computing module 82, the remarkable block determination module 83 of vision, object block acquisition module 84, target area determination module 85 and video conversion module 86; Wherein, comentropy computing module 82 comprises: comentropy computing unit 821, average calculation unit 822 and threshold computation unit 823; Target area determination module 85 comprises: row determining unit 851, row determining unit 852 and intersection region determining unit 853.
The identification block division module 81 is configured to divide the original picture into multiple identification blocks.
The information entropy calculation unit 821 is configured to calculate the information entropy of each identification block;
the mean value calculation unit 822 is configured to calculate the mean value of the information entropies according to each information entropy;
the threshold calculation unit 823 is configured to calculate the threshold according to the mean value.
The visually salient block determination module 83 is configured to compare the information entropy of each identification block with the threshold; if the information entropy is greater than the threshold, the identification block is a visually salient block.
The object block acquisition module 84 is configured to filter the visually salient blocks to obtain object blocks.
The row determination unit 851 is configured to calculate the number of object blocks in each row respectively, and determine the rows in which object blocks exist.
The column determination unit 852 is configured to calculate, within the rows in which object blocks exist, the number of object blocks in each column respectively, and determine the columns in which object blocks exist.
The intersection region determination unit 853 is configured to take the region where the rows in which object blocks exist and the columns in which object blocks exist intersect as the target region.
The video conversion module 86 is configured to segment the target region and convert the target region into the output video.
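As a rough illustration (not part of the patent), the per-block entropy and the threshold computed by units 821–823 can be sketched in Python with NumPy; the block size of 32 pixels and the constant a = 0.5 are hypothetical choices, the patent only requires a to lie in (0, 1), and the logarithm base is likewise unspecified (natural log is used here, which only rescales both Eavg and T together):

```python
import numpy as np

def block_entropy(block):
    """E(i, j) = -sum over m in 0..255 of p(m) * log(p(m)) for one 8-bit luminance block."""
    hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()          # p(m): sampled probability that a pixel's luminance is m
    nz = p > 0                     # skip p(m) = 0 terms (the limit of p*log p is 0)
    return float(-np.sum(p[nz] * np.log(p[nz])))

def entropy_threshold(gray, block=32, a=0.5):
    """Divide a grayscale picture into identification blocks, compute E(i, j) per block,
    the mean Eavg over the block grid, and the threshold T = a * Eavg."""
    h_blocks = gray.shape[0] // block   # identification blocks per column
    w_blocks = gray.shape[1] // block   # identification blocks per row
    E = np.empty((h_blocks, w_blocks))
    for i in range(h_blocks):
        for j in range(w_blocks):
            E[i, j] = block_entropy(gray[i*block:(i+1)*block, j*block:(j+1)*block])
    Eavg = E.mean()
    return E, Eavg, a * Eavg
```

A flat single-luminance block yields entropy 0, while a block with many distinct luminance values approaches log 256, which is why detailed (visually salient) regions score high.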
With the device for converting a picture into an output video provided by Embodiment Five of the present invention, because the information entropy of a visually salient region is often relatively large, the original picture can be divided into multiple identification blocks; the information entropy of each identification block is calculated, and a threshold of the information entropies is calculated according to each information entropy; a region where the identification blocks whose information entropy is greater than the threshold are dense is taken as the target region, thereby determining the visually salient region in the original picture; and the target region is segmented and converted into the output video, thereby improving the visual effect of the output video.
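Continuing the illustration, the salient-block determination, filtering, and row/column intersection of modules 83–85 can be sketched as follows. The patent does not specify which filtering operation module 84 applies, so removal of isolated salient blocks (a block is kept only if at least one of its 8 neighbours is also salient) is assumed here as one plausible choice:

```python
import numpy as np

def object_blocks(E, T):
    """Modules 83-84: blocks with entropy above the threshold are visually salient;
    a salient block is kept as an object block only if at least one of its 8
    neighbours is also salient (assumed filter, not fixed by the patent)."""
    salient = E > T
    H, W = salient.shape
    padded = np.pad(salient, 1)
    neighbours = sum(padded[1+di:1+di+H, 1+dj:1+dj+W]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
    return salient & (neighbours > 0)

def target_region(obj):
    """Module 85: intersect the rows that contain object blocks with the columns
    (within those rows) that contain object blocks; returns block-grid slices."""
    rows = np.flatnonzero(obj.any(axis=1))           # rows in which object blocks exist
    if rows.size == 0:
        return None
    cols = np.flatnonzero(obj[rows, :].any(axis=0))  # columns with object blocks in those rows
    return (slice(rows.min(), rows.max() + 1),
            slice(cols.min(), cols.max() + 1))
```

With this filter, a lone high-entropy block (e.g. sensor noise) is discarded, while clusters of high-entropy blocks survive and define the dense region used as the target region.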
Embodiment six
Embodiment Six of the present invention provides a video conversion module. As shown in Figure 9, the video conversion module can comprise the following units:
a video effect determination unit 91, a scaling ratio calculation unit 92, a region-to-be-segmented generation unit 93, and an output video generation unit 94.
The video effect determination unit 91 is configured to obtain the parameters of the target region and the parameters of the output video, and determine the output video effect according to the parameters of the target region and the parameters of the output video;
the scaling ratio calculation unit 92 is configured to calculate a scaling ratio according to the parameters of the target region, the parameters of the output video, and the output video effect;
the region-to-be-segmented generation unit 93 is configured to scale the target region according to the scaling ratio to generate a region to be segmented;
the output video generation unit 94 is configured to segment the region to be segmented and generate the output video according to the segmentation order.
With the video conversion module provided by Embodiment Six of the present invention, the region of interest to the user can fill the entire picture of the output video without adding black borders, while the natural shape of the region of interest is preserved, so that the content of the region of interest is presented in an optimal manner.
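A minimal sketch of the flow through units 91–94, under the assumption that the output video effect is a left-to-right pan of a frame-sized window over the scaled target region (the patent leaves the concrete effect and the segmentation order open, so the pan direction and the step size are illustrative):

```python
def plan_pan(region_w, region_h, out_w, out_h, step=16):
    """Units 92-94: compute a scaling ratio that fits the target region's height
    to the output frame height, scale the region, then segment it into frame
    crops by panning from left to right (the assumed segmentation order).
    Returns the scaling ratio and the x-offset of each output-video frame."""
    scale = out_h / region_h             # scaling ratio from the two parameter sets
    scaled_w = round(region_w * scale)   # width of the region to be segmented
    last = max(scaled_w - out_w, 0)      # right-most window offset
    offsets = list(range(0, last, step)) + [last]
    return scale, offsets
```

For example, a 1600×600 target region rendered into 640×720 frames gives a scaling ratio of 1.2 and a pan from offset 0 to 1280; because the region height is scaled exactly to the frame height, every frame is filled without black borders.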
The method and device for converting a picture into an output video provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementations and the scope of application according to the idea of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (10)

1. A method for converting a picture into an output video, characterized by comprising:
dividing an original picture into multiple identification blocks;
calculating the information entropy of each of the identification blocks, and calculating a threshold of the information entropies according to each of the information entropies;
taking a region where the identification blocks whose information entropy is greater than the threshold are dense as a target region; and
segmenting the target region, and converting the target region into the output video.
2. The method according to claim 1, characterized in that calculating the information entropy of each of the identification blocks, and calculating the threshold of the information entropies according to each of the information entropies, comprises:
calculating the information entropy of each of the identification blocks;
calculating the mean value of the information entropies according to each of the information entropies; and
calculating the threshold according to the mean value;
wherein the formula for calculating the information entropy E(i, j) of each of the identification blocks is:
E(i, j) = -Σ_{m=0}^{255} p(m)·log(p(m))
wherein i is the row in which the identification block is located, j is the column in which the identification block is located, m is the luminance value of a pixel, the value range of m is 0 to 255, and p(m) is the probability, obtained by sampling the pixels contained in the identification block, that the luminance value is m;
the formula for calculating the mean value Eavg of the information entropies according to each of the information entropies is:
Eavg = (Σ_{i=0, j=0}^{i<W, j<H} E(i, j)) / (W × H)
wherein W is the number of identification blocks in each row of the original picture, and H is the number of identification blocks in each column of the original picture; and
the formula for calculating the threshold T according to the mean value is:
T = a × Eavg
wherein a is a constant in the interval (0, 1).
3. The method according to claim 1 or 2, characterized in that, after calculating the information entropy of each of the identification blocks and calculating the threshold of the information entropies according to each of the information entropies, and before taking the region where the identification blocks whose information entropy is greater than the threshold are dense as the target region, the method further comprises:
comparing the information entropy of each of the identification blocks with the threshold, wherein if the information entropy is greater than the threshold, the identification block is a visually salient block; and
filtering the visually salient blocks to obtain object blocks.
4. The method according to claim 3, characterized in that taking the region where the identification blocks whose information entropy is greater than the threshold are dense as the target region comprises:
calculating the number of the object blocks in each row respectively, and determining the rows in which the object blocks exist;
within the rows in which the object blocks exist, calculating the number of the object blocks in each column respectively, and determining the columns in which the object blocks exist; and
taking the region where the rows in which the object blocks exist and the columns in which the object blocks exist intersect as the target region.
5. The method according to claim 1, characterized in that segmenting the target region and converting the target region into the output video comprises:
obtaining the parameters of the target region and the parameters of the output video, and determining the output video effect according to the parameters of the target region and the parameters of the output video;
calculating a scaling ratio according to the parameters of the target region, the parameters of the output video, and the output video effect;
scaling the target region according to the scaling ratio to generate a region to be segmented; and
segmenting the region to be segmented, and generating the output video according to the segmentation order.
6. A device for converting a picture into an output video, characterized by comprising:
an identification block division module, configured to divide an original picture into multiple identification blocks;
an information entropy calculation module, configured to calculate the information entropy of each of the identification blocks, and calculate a threshold of the information entropies according to each of the information entropies;
a target region determination module, configured to take a region where the identification blocks whose information entropy is greater than the threshold are dense as a target region; and
a video conversion module, configured to segment the target region and convert the target region into the output video.
7. The device according to claim 6, characterized in that the information entropy calculation module comprises:
an information entropy calculation unit, configured to calculate the information entropy of each of the identification blocks;
a mean value calculation unit, configured to calculate the mean value of the information entropies according to each of the information entropies; and
a threshold calculation unit, configured to calculate the threshold according to the mean value;
wherein the formula by which the information entropy calculation unit calculates the information entropy E(i, j) of each of the identification blocks is:
E(i, j) = -Σ_{m=0}^{255} p(m)·log(p(m))
wherein i is the row in which the identification block is located, j is the column in which the identification block is located, m is the luminance value of a pixel, the value range of m is 0 to 255, and p(m) is the probability, obtained by sampling the pixels contained in the identification block, that the luminance value is m;
the formula by which the mean value calculation unit calculates the mean value Eavg of the information entropies according to each of the information entropies is:
Eavg = (Σ_{i=0, j=0}^{i<W, j<H} E(i, j)) / (W × H)
wherein W is the number of identification blocks in each row of the original picture, and H is the number of identification blocks in each column of the original picture; and
the formula by which the threshold calculation unit calculates the threshold T according to the mean value is:
T = a × Eavg
wherein a is a constant in the interval (0, 1).
8. The device according to claim 6 or 7, characterized by further comprising:
a visually salient block determination module, configured to compare the information entropy of each of the identification blocks with the threshold, wherein if the information entropy is greater than the threshold, the identification block is a visually salient block; and
an object block acquisition module, configured to filter the visually salient blocks to obtain object blocks.
9. The device according to claim 8, characterized in that the target region determination module comprises:
a row determination unit, configured to calculate the number of the object blocks in each row respectively, and determine the rows in which the object blocks exist;
a column determination unit, configured to calculate, within the rows in which the object blocks exist, the number of the object blocks in each column respectively, and determine the columns in which the object blocks exist; and
an intersection region determination unit, configured to take the region where the rows in which the object blocks exist and the columns in which the object blocks exist intersect as the target region.
10. The device according to claim 6, characterized in that the video conversion module comprises:
a video effect determination unit, configured to obtain the parameters of the target region and the parameters of the output video, and determine the output video effect according to the parameters of the target region and the parameters of the output video;
a scaling ratio calculation unit, configured to calculate a scaling ratio according to the parameters of the target region, the parameters of the output video, and the output video effect;
a region-to-be-segmented generation unit, configured to scale the target region according to the scaling ratio to generate a region to be segmented; and
an output video generation unit, configured to segment the region to be segmented and generate the output video according to the segmentation order.
CN201510549518.5A 2015-08-31 2015-08-31 The method and apparatus that a kind of picture is converted to output video Active CN105163043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510549518.5A CN105163043B (en) 2015-08-31 2015-08-31 The method and apparatus that a kind of picture is converted to output video

Publications (2)

Publication Number Publication Date
CN105163043A true CN105163043A (en) 2015-12-16
CN105163043B CN105163043B (en) 2018-04-13

Family

ID=54803785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510549518.5A Active CN105163043B (en) 2015-08-31 2015-08-31 The method and apparatus that a kind of picture is converted to output video

Country Status (1)

Country Link
CN (1) CN105163043B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101242474A (en) * 2007-02-09 2008-08-13 中国科学院计算技术研究所 A dynamic video browse method for phone on small-size screen
CN101447078A (en) * 2008-12-10 2009-06-03 东软集团股份有限公司 Method for obstacle segmentation and device thereof
JP2010093452A (en) * 2008-10-06 2010-04-22 Toshiba Corp Video server, signal conversion circuit and signal converting method
CN102663391A (en) * 2012-02-27 2012-09-12 安科智慧城市技术(中国)有限公司 Image multifeature extraction and fusion method and system
CN104202661A (en) * 2014-09-15 2014-12-10 厦门美图之家科技有限公司 Automatic picture-to-video conversion method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111479070A (en) * 2019-01-24 2020-07-31 杭州海康机器人技术有限公司 Image brightness determination method, device and equipment
CN111479070B (en) * 2019-01-24 2022-02-01 杭州海康机器人技术有限公司 Image brightness determination method, device and equipment

Also Published As

Publication number Publication date
CN105163043B (en) 2018-04-13


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant