CN105163043B - Method and apparatus for converting a picture into an output video - Google Patents

Method and apparatus for converting a picture into an output video

Info

Publication number
CN105163043B
CN105163043B (application CN201510549518.5A)
Authority
CN
China
Prior art keywords
information block
target area
output video
information entropy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510549518.5A
Other languages
Chinese (zh)
Other versions
CN105163043A (en)
Inventor
李勇鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201510549518.5A priority Critical patent/CN105163043B/en
Publication of CN105163043A publication Critical patent/CN105163043A/en
Application granted granted Critical
Publication of CN105163043B publication Critical patent/CN105163043B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

An embodiment of the present invention provides a method and apparatus for converting a picture into an output video. The method includes: dividing an original picture into multiple information blocks; calculating the information entropy of each information block, and calculating an entropy threshold from the individual entropies; taking the region where information blocks whose entropy exceeds the threshold are densely clustered as the target area; and segmenting the target area and converting it into an output video. With the method and apparatus provided by the embodiments of the present invention, the visually salient region of the original picture can be determined automatically, and by segmenting the target area and converting it into an output video, the visual effect of the output video is improved.

Description

Method and apparatus for converting a picture into an output video
Technical field
The present invention relates to the technical field of image processing, and more particularly to a method and apparatus for converting a picture into an output video.
Background technology
With the progress of electronic technology, smartphones and tablet computers have become increasingly popular, and users can take photos anytime and anywhere. An output-video production application can organize photos in an orderly way into an output video with a certain theme, thereby improving their ornamental value.
Users shoot a large number of photos, but in most cases are interested in only parts of them. When producing an output video on a certain theme, the region of interest is usually cropped manually by the user from the original picture to serve as the video content. This approach is time-consuming and laborious, and cannot process pictures quickly or in batches.
Therefore, a technical problem urgently to be solved by those skilled in the art is how to automatically determine the region of the original picture that the user is interested in, and to present the content of that region in the form of a video.
Summary of the invention
An embodiment of the present invention provides a method and apparatus for converting a picture into an output video, so as to solve the problem of how to automatically determine the region of interest and present its content in the form of a video. The invention discloses a method for converting a picture into an output video, including:
dividing an original picture into multiple information blocks;
calculating the information entropy of each information block, and calculating an entropy threshold from the individual entropies;
taking the region where information blocks whose entropy exceeds the threshold are densely clustered as the target area;
segmenting the target area, and converting the target area into an output video.
Preferably, calculating the information entropy of each information block and calculating the entropy threshold from the individual entropies includes:
calculating the information entropy of each information block;
calculating the average value of the information entropies;
calculating the threshold from the average value.
The information entropy E(i, j) of each information block is calculated as:
E(i, j) = -Σ_{m=0}^{255} p(m)·log₂ p(m)
where i is the row of the information block, j is its column, m is a pixel brightness value in the range 0-255, and p(m) is the probability that a pixel's brightness equals m, estimated from the pixels contained in the information block.
The average value Eavg of the information entropies is calculated as:
Eavg = (1 / (W·H)) · Σ_{j=0}^{H-1} Σ_{i=0}^{W-1} E(i, j)
where W is the number of information blocks in each row of the original picture, and H is the number of information blocks in each column.
The threshold T is calculated from the average value as:
T = a × Eavg
where a is a constant in the interval (0, 1).
Preferably, after calculating the information entropy of each information block and the entropy threshold, and before taking the region where the over-threshold blocks are densely clustered as the target area, the method further includes:
comparing the information entropy of each information block with the threshold; if the entropy is greater than the threshold, the block is a visually salient block;
filtering the visually salient blocks to obtain object blocks.
Preferably, taking the region where information blocks whose entropy exceeds the threshold are densely clustered as the target area includes:
counting the number of object blocks in each row, and determining the rows that contain object blocks;
within those rows, counting the number of object blocks in each column, and determining the columns that contain object blocks;
taking the region where the rows containing object blocks and the columns containing object blocks intersect as the target area.
Preferably, segmenting the target area and converting the target area into an output video includes:
obtaining the parameters of the target area and the parameters of the output video, and determining the output video effect from them;
calculating a scaling ratio from the parameters of the target area, the parameters of the output video, and the output video effect;
scaling the target area according to the scaling ratio to generate a region to be split;
splitting the region to be split, and generating the output video from the segments in order.
In another aspect, an embodiment of the present invention further provides an apparatus for converting a picture into an output video, including:
an information block division module, configured to divide an original picture into multiple information blocks;
an information entropy calculation module, configured to calculate the information entropy of each information block and to calculate an entropy threshold from the individual entropies;
a target area determination module, configured to take the region where information blocks whose entropy exceeds the threshold are densely clustered as the target area;
a video conversion module, configured to segment the target area and convert the target area into an output video.
Preferably, the information entropy calculation module includes:
an entropy calculation unit, configured to calculate the information entropy of each information block;
an average calculation unit, configured to calculate the average value of the information entropies;
a threshold calculation unit, configured to calculate the threshold from the average value.
The entropy calculation unit calculates the information entropy E(i, j) of each information block as:
E(i, j) = -Σ_{m=0}^{255} p(m)·log₂ p(m)
where i is the row of the information block, j is its column, m is a pixel brightness value in the range 0-255, and p(m) is the probability that a pixel's brightness equals m, estimated from the pixels contained in the information block.
The average calculation unit calculates the average value Eavg of the information entropies as:
Eavg = (1 / (W·H)) · Σ_{j=0}^{H-1} Σ_{i=0}^{W-1} E(i, j)
where W is the number of information blocks in each row of the original picture, and H is the number of information blocks in each column.
The threshold calculation unit calculates the threshold T from the average value as:
T = a × Eavg
where a is a constant in the interval (0, 1).
Preferably, the apparatus further includes:
a visually salient block determination module, configured to compare the information entropy of each information block with the threshold, the block being a visually salient block if its entropy is greater than the threshold;
an object block obtaining module, configured to filter the visually salient blocks to obtain object blocks.
Preferably, the target area determination module includes:
a row determination unit, configured to count the number of object blocks in each row and determine the rows that contain object blocks;
a column determination unit, configured to count, within those rows, the number of object blocks in each column and determine the columns that contain object blocks;
an intersection region determination unit, configured to take the region where the rows containing object blocks and the columns containing object blocks intersect as the target area.
Preferably, the video conversion module includes:
a video effect determination unit, configured to obtain the parameters of the target area and the parameters of the output video, and to determine the output video effect from them;
a scaling ratio calculation unit, configured to calculate a scaling ratio from the parameters of the target area, the parameters of the output video, and the output video effect;
a to-be-split region generation unit, configured to scale the target area according to the scaling ratio and generate a region to be split;
an output video generation unit, configured to split the region to be split and generate the output video from the segments in order.
Compared with the prior art, the embodiments of the present invention have the following advantages:
Since the information entropy of a visually salient region is usually large, the visually salient region of the original picture can be determined automatically by dividing the original picture into multiple information blocks, calculating the information entropy of each block and an entropy threshold from the individual entropies, and taking the region where over-threshold blocks are densely clustered as the target area. By segmenting the target area and converting it into the output video, the visual effect of the output video is improved.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the method for converting a picture into an output video provided by Embodiment one of the present invention;
Fig. 2 is a schematic diagram of information block division provided by Embodiment one of the present invention;
Fig. 3 is a schematic flowchart of the method for converting a picture into an output video provided by Embodiment two of the present invention;
Fig. 4 is a schematic flowchart of the method for converting a picture into an output video provided by Embodiment three of the present invention;
Fig. 5 is a schematic diagram of a target area provided by Embodiment three of the present invention;
Fig. 6 is a schematic diagram of scaling provided by Embodiment three of the present invention;
Fig. 7 is a structural diagram of the apparatus for converting a picture into an output video provided by Embodiment four of the present invention;
Fig. 8 is a structural diagram of the apparatus for converting a picture into an output video provided by Embodiment five of the present invention;
Fig. 9 is a structural diagram of the video conversion module provided by Embodiment six of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment one
Embodiment one of the present invention provides a method for converting a picture into an output video. As shown in Fig. 1, the method may include the following steps:
Step S101, divide the original picture into multiple information blocks.
In this step, the original picture can be divided into multiple information blocks of equal area. As shown in Fig. 2, w is the width of the original picture, h is its height, and the information blocks produced by the division are squares with side length N.
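Step S101 can be sketched as follows. This is a minimal illustration, not the patent's actual code; the function name `divide_into_blocks` is an assumption, and edge rows/columns that do not fill a whole N×N block are simply dropped here, since the patent does not specify how edges are handled.

```python
def divide_into_blocks(image, n):
    """Split a 2-D list of pixel rows into n x n square blocks,
    keyed by (block_row, block_col). Partial edge blocks are dropped."""
    h = len(image)
    w = len(image[0])
    blocks = {}
    for i in range(h // n):          # block row index
        for j in range(w // n):      # block column index
            blocks[(i, j)] = [row[j * n:(j + 1) * n]
                              for row in image[i * n:(i + 1) * n]]
    return blocks

# A 4x4 picture split with N = 2 yields four 2x2 blocks.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
blocks = divide_into_blocks(img, 2)
```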
Step S102, calculate the information entropy of each information block, and calculate an entropy threshold from the individual entropies.
In this step, the information entropy of each information block can be calculated separately, the average of the entropies computed, and the threshold then calculated from the average.
Step S103, take the region where information blocks whose entropy exceeds the threshold are densely clustered as the target area.
In this step, the threshold serves as the basis for judging whether an information block is visually salient. In general, information blocks whose entropy exceeds the threshold are more visually salient, so the region where such blocks are densely clustered is likely to be the visually most salient region.
Step S104, segment the target area and convert the target area into an output video.
In this step, the parameters of the target area and of the output video can be obtained and the output video effect determined from them; a scaling ratio is calculated from the parameters of the target area, the parameters of the output video, and the output video effect; the target area is scaled according to the scaling ratio to generate a region to be split; and the region to be split is split, the output video being generated from the segments in order.
With the method for converting a picture into an output video provided by Embodiment one of the present invention, since the information entropy of a visually salient region is usually large, the visually salient region of the original picture can be determined automatically by dividing the original picture into multiple information blocks, calculating the information entropy of each block and an entropy threshold from the individual entropies, and taking the region where over-threshold blocks are densely clustered as the target area. By segmenting the target area and converting it into the output video, the visual effect of the output video is improved.
Embodiment two
Embodiment two of the present invention provides a method for converting a picture into an output video. As shown in Fig. 3, the method may include the following steps:
Step S301, divide the original picture into multiple information blocks.
Step S302, calculate the information entropy of each information block.
In this step, the information entropy E(i, j) of each information block is calculated according to formula (1):
E(i, j) = -Σ_{m=0}^{255} p(m)·log₂ p(m)   (1)
where i is the row of information block B(i, j), j is its column, m is a pixel brightness value in the range 0-255, and p(m) is the probability that a pixel's brightness equals m, estimated from the pixels contained in the information block.
Step S303, calculate the average value of the information entropies.
In this step, the average value Eavg of the information entropies is calculated according to formula (2):
Eavg = (1 / (W·H)) · Σ_{j=0}^{H-1} Σ_{i=0}^{W-1} E(i, j)   (2)
where W is the number of information blocks in each row of the original picture (W = w/N, w being the width of the original picture), H is the number of information blocks in each column (H = h/N, h being the height of the original picture), and N is the side length of an information block.
Step S304, calculate the threshold from the average value.
In this step, the threshold T is calculated from the average value according to formula (3):
T = a × Eavg   (3)
where a is a constant in the interval (0, 1).
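Steps S302-S304 can be sketched as below, assuming a pure-Python representation of a block as a list of pixel rows. The function names and the default a = 0.5 are illustrative assumptions; the patent only requires a in (0, 1).

```python
import math

def block_entropy(block):
    """Shannon entropy E(i, j) of one block, formula (1): p(m) is the
    empirical probability of brightness m among the block's own pixels."""
    pixels = [m for row in block for m in row]
    total = len(pixels)
    counts = {}
    for m in pixels:
        counts[m] = counts.get(m, 0) + 1
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_threshold(entropies, a=0.5):
    """Formulas (2) and (3): T = a * Eavg, Eavg the mean block entropy."""
    e_avg = sum(entropies) / len(entropies)
    return a * e_avg

# A uniform block has zero entropy; a block whose four pixels all differ
# has entropy log2(4) = 2 bits.
flat = [[7, 7], [7, 7]]
varied = [[0, 1], [2, 3]]
```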
Step S305, compare the information entropy of each information block with the threshold; if the entropy is greater than the threshold, the block is a visually salient block.
In this step, whether an information block is visually salient can be judged according to formula (4):
map(i, j) = 1 if E(i, j) > T, and map(i, j) = 0 otherwise   (4)
That is, if the information entropy of a block is greater than the threshold, the block is a visually salient block; if its entropy is less than or equal to the threshold, it is not.
Step S306, filter the visually salient blocks to obtain object blocks.
In this step, the salient-block map map(i, j) can be filtered according to formula (5) to obtain the object-block map fmap(i, j), where p is the abscissa and q the ordinate of the information blocks in the neighbourhood considered by the filter.
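One plausible reading of the filtering step — an assumption, since the body of formula (5) is not reproduced here — is a neighbourhood filter that keeps a salient block only if enough of its neighbours (indexed by p, q) are also salient, suppressing isolated noisy blocks:

```python
def filter_salient(smap, min_neighbors=2):
    """Hypothetical neighbourhood filter for step S306: a salient block
    (value 1 in smap) survives into fmap only if at least min_neighbors
    of its 8 neighbours are also salient. smap is a 2-D list of 0/1."""
    rows, cols = len(smap), len(smap[0])
    fmap = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if not smap[i][j]:
                continue
            neighbors = sum(
                smap[p][q]
                for p in range(max(0, i - 1), min(rows, i + 2))
                for q in range(max(0, j - 1), min(cols, j + 2))
                if (p, q) != (i, j)
            )
            fmap[i][j] = 1 if neighbors >= min_neighbors else 0
    return fmap

# The isolated salient block at (2, 2) is filtered out.
smap = [[1, 0, 0],
        [1, 1, 0],
        [0, 0, 1]]
fmap = filter_salient(smap)
```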
Step S307, count the number of object blocks in each row, and determine the rows that contain object blocks.
In this step, the number v(j) of object blocks in each row can be calculated according to formula (6):
v(j) = Σ_{i=0}^{W-1} fmap(i, j)   (6)
and the maximum vmax of v(j) can be calculated by formula (7):
vmax = max(v(j))   (7)
where j ranges over (0, H-1).
The starting row lu_y and ending row br_y that contain object blocks can be determined according to formulas (8) and (9); the rows between lu_y and br_y form the candidate region of the target area:
lu_y = min{ j : v(j) > vmax × b }   (8)
br_y = max{ j : v(j) > vmax × b }   (9)
where b is a constant between 0 and 1.
Step S308, within the rows that contain object blocks, count the number of object blocks in each column, and determine the columns that contain object blocks.
In this step, the number h(i) of object blocks contained in each column of the candidate region of the target area can be calculated by formula (10):
h(i) = Σ_{j=lu_y}^{br_y} fmap(i, j)   (10)
and its maximum hmax can be determined according to formula (11):
hmax = max(h(i))   (11)
where i ranges over (0, W-1).
The starting column lu_x and ending column br_x that contain object blocks can be determined according to formulas (12) and (13); the columns between lu_x and br_x form the other candidate region of the target area:
lu_x = min{ i : h(i) > hmax × c }   (12)
br_x = max{ i : h(i) > hmax × c }   (13)
where c is a constant between 0 and 1.
Step S309, take the region where the rows containing object blocks and the columns containing object blocks intersect as the target area.
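Steps S307-S309 can be sketched as one projection-and-intersection pass. The function name and the default values b = c = 0.3 are illustrative assumptions; for simplicity this sketch projects the full map rather than restricting the column counts to the candidate rows.

```python
def target_region(fmap, b=0.3, c=0.3):
    """Project the object-block map onto rows and columns, keep rows
    (columns) whose count exceeds b (c) times the maximum, and intersect
    them. Returns (lu_x, lu_y, br_x, br_y) in block coordinates."""
    v = [sum(row) for row in fmap]        # v(j): object blocks per row
    h = [sum(col) for col in zip(*fmap)]  # h(i): object blocks per column
    vmax, hmax = max(v), max(h)
    rows = [j for j, n in enumerate(v) if n > vmax * b]
    cols = [i for i, n in enumerate(h) if n > hmax * c]
    lu_y, br_y = min(rows), max(rows)     # formulas (8), (9)
    lu_x, br_x = min(cols), max(cols)     # formulas (12), (13)
    return lu_x, lu_y, br_x, br_y

# A dense 2x2 cluster of object blocks in the middle of a 4x4 map.
fmap = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
region = target_region(fmap)
```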
Step S310, segment the target area and convert the target area into an output video.
With the method for converting a picture into an output video provided by Embodiment two of the present invention, since the information entropy of a visually salient region is usually large, the visually salient region of the original picture can be determined by dividing the original picture into multiple information blocks, calculating the information entropy of each block and an entropy threshold from the individual entropies, and taking the region where over-threshold blocks are densely clustered as the target area. By segmenting the target area and converting it into the output video, the visual effect of the output video is improved.
Embodiment three
Embodiment three of the present invention provides a method for converting a picture into an output video. As shown in Fig. 4, step S104 of Embodiment one ("segment the target area and convert the target area into an output video") is refined into the following steps:
Step S401, obtain the parameters of the target area and the parameters of the output video, and determine the output video effect from them.
As shown in Fig. 5, 1 is the original picture and 2 is the target area. The parameters of the target area can include its height h1 = (br_y - lu_y) × N, its width w1 = (br_x - lu_x) × N, and its coordinates (x, y) in the original picture. The parameters of the output video v can include its duration t, width vw, height vh, and frame rate fps. The output video effect can be either a left-right pan or an up-down pan, and can be determined from the parameters of the target area and of the output video by running the following procedure:
if (…): scheme = a
elif (…): scheme = a
elif (…): scheme = a
else: scheme = b
In the above procedure, if scheme = a, an output video with a left-right pan effect is generated; if scheme = b, an output video with an up-down pan effect is generated.
Step S402, calculate the scaling ratio from the parameters of the target area, the parameters of the output video, and the output video effect.
In this step, if the output video effect is a left-right pan, the scaling ratio r is calculated from the parameters of the target area and of the output video according to formula (1):
r = max((vw + fps × t) / w1, vh / h1)   (1)
where vw is the width of the output video, fps its frame rate, t its duration, w1 the width of the target area, vh the height of the output video, and h1 the height of the target area.
If the output video effect is an up-down pan, the scaling ratio r is calculated from the parameters of the target area and of the output video according to formula (2):
r = max((vh + fps × t) / h1, vw / w1)   (2)
where vh is the height of the output video, fps its frame rate, t its duration, h1 the height of the target area, vw the width of the output video, and w1 the width of the target area.
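Step S402 can be sketched directly from formulas (1) and (2). The function name and the "horizontal"/"vertical" effect labels are illustrative assumptions; the fps × t term reserves one extra pixel of pan travel per output frame.

```python
def scaling_ratio(w1, h1, vw, vh, fps, t, effect):
    """Scaling ratio r per formulas (1)/(2):
    left-right pan: r = max((vw + fps*t) / w1, vh / h1)
    up-down pan:    r = max((vh + fps*t) / h1, vw / w1)"""
    if effect == "horizontal":
        return max((vw + fps * t) / w1, vh / h1)
    return max((vh + fps * t) / h1, vw / w1)

# Target area 400x300, output video 320x240 at 25 fps for 4 s:
# r = max((320 + 100) / 400, 240 / 300) = max(1.05, 0.8)
r = scaling_ratio(w1=400, h1=300, vw=320, vh=240, fps=25, t=4,
                  effect="horizontal")
```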
Step S403, scale the target area according to the scaling ratio to generate the region to be split.
As shown in Fig. 6, 2 is the target area and 3 is the region to be split generated by scaling it with ratio r.
Step S404, split the region to be split, and generate the output video from the segments in order.
In this step, the left-to-right and right-to-left splitting modes produce an output video with a left-right pan effect, while the top-to-bottom and bottom-to-top splitting modes produce an output video with an up-down pan effect.
With the method for converting a picture into an output video provided by Embodiment three of the present invention, the region of interest fills the whole frame of the output video while its natural form is preserved, without padding with black borders, so that the content of the region of interest is presented in an optimal manner.
For clarity, the embodiment of the present invention provides four picture-splitting modes: left to right, right to left, top to bottom, and bottom to top.
The left-to-right splitting mode is as follows: on the region to be split, pictures c0, c1, …, c(fps×t-1) of width vw and height vh are cropped with top-left vertices (0, 0), (1, 0), (2, 0), …, (fps×t-1, 0) respectively, and c0, c1, …, c(fps×t-1) are combined in order to obtain the output video v. Here fps is the frame rate of the output video, t its duration, vw its width, and vh its height.
The right-to-left splitting mode is as follows: on the region to be split, pictures c0, c1, …, c(fps×t-1) of width vw and height vh are cropped with top-left vertices (gw, 0), (gw-1, 0), (gw-2, 0), …, (gw-fps×t+1, 0) respectively, and combined in order to obtain the output video v.
The top-to-bottom splitting mode is as follows: on the region to be split, pictures c0, c1, …, c(fps×t-1) of width vw and height vh are cropped with top-left vertices (0, 0), (0, 1), (0, 2), …, (0, fps×t-1) respectively, and combined in order to obtain the output video v.
The bottom-to-top splitting mode is as follows: on the region to be split, pictures c0, c1, …, c(fps×t-1) of width vw and height vh are cropped with top-left vertices (0, gw), (0, gw-1), (0, gw-2), …, (0, gw-fps×t+1) respectively, and combined in order to obtain the output video v.
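The left-to-right mode can be sketched as below, representing the region to be split as a 2-D list of pixel rows; the function name is an illustrative assumption. The other three modes differ only in how the top-left vertex of each crop advances.

```python
def crop_frames_left_to_right(region, vw, vh, fps, t):
    """Left-to-right splitting: frame k is the vw x vh crop whose
    top-left vertex is (k, 0), for k = 0 .. fps*t - 1; playing the
    crops in order yields the left-right pan."""
    frames = []
    for k in range(fps * t):
        frames.append([row[k:k + vw] for row in region[0:vh]])
    return frames

# A 1x5 toy region, 1x2 output frames, fps = 3 and t = 1 giving 3 frames.
region = [[10, 11, 12, 13, 14]]
frames = crop_frames_left_to_right(region, vw=2, vh=1, fps=3, t=1)
```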
Example IV
Embodiment four of the present invention provides an apparatus for converting a picture into an output video, which can perform the method provided by Embodiment one. As shown in Fig. 7, the apparatus may include the following modules: an information block division module 71, an information entropy calculation module 72, a target area determination module 73, and a video conversion module 74.
The information block division module 71 is configured to divide the original picture into multiple information blocks; the information entropy calculation module 72 is configured to calculate the information entropy of each information block and to calculate an entropy threshold from the individual entropies; the target area determination module 73 is configured to take the region where information blocks whose entropy exceeds the threshold are densely clustered as the target area; the video conversion module 74 is configured to segment the target area and convert the target area into an output video.
In the information block division module 71, the original picture can be divided into multiple information blocks of equal area.
In the information entropy calculation module 72, the information entropy of each information block can be calculated separately, the average of the entropies computed, and the threshold then calculated from the average.
In the target area determination module 73, the threshold serves as the basis for judging whether an information block is visually salient. In general, information blocks whose entropy exceeds the threshold are more visually salient, so the region where such blocks are densely clustered is likely to be the visually most salient region.
In the video conversion module 74, the parameters of the target area and of the output video can be obtained and the output video effect determined from them; a scaling ratio is calculated from the parameters of the target area, the parameters of the output video, and the output video effect; the target area is scaled according to the scaling ratio to generate a region to be split; and the region to be split is split, the output video being generated from the segments in order.
With the apparatus for converting a picture into an output video provided by Embodiment four of the present invention, since the information entropy of a visually salient region is usually large, the visually salient region of the original picture can be determined automatically by dividing the original picture into multiple information blocks, calculating the information entropy of each block and an entropy threshold from the individual entropies, and taking the region where over-threshold blocks are densely clustered as the target area. By segmenting the target area and converting it into the output video, the visual effect of the output video is improved.
Embodiment five
Embodiment five of the present invention provides an apparatus for converting a picture into an output video, which can perform the method provided by Embodiment two. As shown in Fig. 8, the apparatus may include the following modules:
an information block division module 81, an information entropy calculation module 82, a visually salient block determination module 83, an object block obtaining module 84, a target area determination module 85, and a video conversion module 86. The information entropy calculation module 82 includes an entropy calculation unit 821, an average calculation unit 822, and a threshold calculation unit 823; the target area determination module 85 includes a row determination unit 851, a column determination unit 852, and an intersection region determination unit 853.
The information block division module 81 is configured to divide the original picture into multiple information blocks.
The entropy calculation unit 821 is configured to calculate the information entropy of each information block;
the average calculation unit 822 is configured to calculate the average value of the information entropies;
the threshold calculation unit 823 is configured to calculate the threshold from the average value;
the visually salient block determination module 83 is configured to compare the information entropy of each information block with the threshold, the block being a visually salient block if its entropy is greater than the threshold;
the object block obtaining module 84 is configured to filter the visually salient blocks to obtain object blocks;
the row determination unit 851 is configured to count the number of object blocks in each row and determine the rows that contain object blocks;
the column determination unit 852 is configured to count, within those rows, the number of object blocks in each column and determine the columns that contain object blocks;
the intersection region determination unit 853 is configured to take the region where the rows containing object blocks and the columns containing object blocks intersect as the target area.
The video conversion module 86 is configured to segment the target area and convert the target area into an output video.
With the apparatus for converting a picture into an output video provided by Embodiment five of the present invention, since the information entropy of a visually salient region is usually large, the visually salient region of the original picture can be determined by dividing the original picture into multiple information blocks, calculating the information entropy of each block and an entropy threshold from the individual entropies, and taking the region where over-threshold blocks are densely clustered as the target area. By segmenting the target area and converting it into the output video, the visual effect of the output video is improved.
Embodiment six
Embodiment six of the present invention provides a video conversion module. As shown in Figure 9, the video conversion module may comprise the following units: a video effect determination unit 91, a scaling computing unit 92, a region-to-be-segmented generation unit 93 and an output video generation unit 94.
Video effect determination unit 91, configured to obtain parameters of the target area and parameters of the output video, and determine an output video effect according to the parameters of the target area and the parameters of the output video;
Scaling computing unit 92, configured to calculate a scaling ratio according to the parameters of the target area, the parameters of the output video and the output video effect;
Region-to-be-segmented generation unit 93, configured to scale the target area according to the scaling ratio to generate a region to be segmented;
Output video generation unit 94, configured to segment the region to be segmented and sequentially generate the output video from the segmentation.
The video conversion module provided by embodiment six of the present invention can, on the premise of keeping the natural shape of the user's region of interest, make the region of interest fill the whole picture of the output video without padding black borders, so that the content elements of the user's region of interest are presented in an optimal manner.
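The "fill the frame without black borders" behaviour implies a cover-style scaling ratio. The patent does not state the formula, so the following is a hypothetical sketch: scale by the larger of the two axis ratios, so the scaled target area covers the output frame along both axes while keeping its aspect ratio, and the overflow along one axis is what the segmentation step can sweep through.

```python
def cover_scale(target_w, target_h, out_w, out_h):
    # Scale by the larger axis ratio so the scaled target area covers
    # the whole output frame (no black borders), preserving aspect ratio.
    scale = max(out_w / target_w, out_h / target_h)
    return scale, round(target_w * scale), round(target_h * scale)
```

For a 640×360 target and a 1280×720 output, the ratio is 2.0 and the scaled area matches the frame exactly; for an 800×600 target the ratio is 1.6, giving a 1280×960 scaled area whose 240 extra pixels of height overflow vertically and can be panned across.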
The method and apparatus for converting a picture into an output video provided by the present invention have been introduced in detail above. Specific examples are used herein to set forth the principle and embodiments of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core concept. Meanwhile, those of ordinary skill in the art will, according to the idea of the present invention, make changes to the specific embodiments and the scope of application. In conclusion, the contents of this specification should not be construed as limiting the present invention.

Claims (8)

1. A method for converting a picture into an output video, characterized by comprising:
dividing an original image into multiple home blocks;
calculating the information entropy of each home block, and calculating a threshold of information entropy according to each information entropy, which comprises: calculating the information entropy of each home block;
calculating the average value of the information entropies according to each information entropy;
calculating the threshold according to the average value;
wherein the formula for calculating the information entropy E(i, j) of each home block is:
E(i, j) = -Σ_{m=0..255} p(m) log(p(m))
wherein i is the row of the home block, j is the column of the home block, m is the brightness value of a pixel with a value range of 0 to 255, and p(m) is the probability that the brightness value of a pixel sampled from the home block is m;
the formula for calculating the average value Eavg of the information entropies according to each information entropy is:
Eavg = ( Σ_{i=0..W-1, j=0..H-1} E(i, j) ) / (W × H)
wherein W is the number of home blocks in each row of the original image, and H is the number of home blocks in each column of the original image;
the formula for calculating the threshold T according to the average value is:
T = a × Eavg
wherein a is a constant in the interval (0, 1);
taking the region in which the home blocks whose information entropy is greater than the threshold are dense as a target area, the region in which the home blocks whose information entropy is greater than the threshold are dense being a visually significant region;
segmenting the target area, and converting the target area into the output video.
2. The method according to claim 1, characterized in that, after calculating the information entropy of each home block and calculating the threshold of information entropy according to each information entropy, and before taking the region in which the home blocks whose information entropy is greater than the threshold are dense as the target area, the method further comprises:
comparing the information entropy of each home block with the threshold, and if the information entropy is greater than the threshold, determining the home block to be a visually notable block;
filtering the visually notable blocks to obtain object blocks.
3. The method according to claim 2, characterized in that taking the region in which the home blocks whose information entropy is greater than the threshold are dense as the target area comprises:
calculating the number of object blocks in each row, and determining the rows in which object blocks exist;
calculating, within the rows in which object blocks exist, the number of object blocks in each column, and determining the columns in which object blocks exist;
taking the region where the rows in which object blocks exist intersect the columns in which object blocks exist as the target area.
4. The method according to claim 1, characterized in that segmenting the target area and converting the target area into the output video comprises:
obtaining parameters of the target area and parameters of the output video, and determining an output video effect according to the parameters of the target area and the parameters of the output video;
calculating a scaling ratio according to the parameters of the target area, the parameters of the output video and the output video effect;
scaling the target area according to the scaling ratio to generate a region to be segmented;
segmenting the region to be segmented, and sequentially generating the output video from the segmentation.
5. A device for converting a picture into an output video, characterized by comprising:
a home block division module, configured to divide an original image into multiple home blocks;
an information entropy computing module, configured to calculate the information entropy of each home block and calculate a threshold of information entropy according to each information entropy, and comprising: an information entropy computing unit, configured to calculate the information entropy of each home block;
an average calculation unit, configured to calculate the average value of the information entropies according to each information entropy;
a threshold computation unit, configured to calculate the threshold according to the average value;
wherein the formula by which the information entropy computing unit calculates the information entropy E(i, j) of each home block is:
E(i, j) = -Σ_{m=0..255} p(m) log(p(m))
wherein i is the row of the home block, j is the column of the home block, m is the brightness value of a pixel with a value range of 0 to 255, and p(m) is the probability that the brightness value of a pixel sampled from the home block is m;
the formula by which the average calculation unit calculates the average value Eavg of the information entropies according to each information entropy is:
Eavg = ( Σ_{i=0..W-1, j=0..H-1} E(i, j) ) / (W × H)
wherein W is the number of home blocks in each row of the original image, and H is the number of home blocks in each column of the original image;
the formula by which the threshold computation unit calculates the threshold T according to the average value is:
T = a × Eavg
wherein a is a constant in the interval (0, 1);
a target area determining module, configured to take the region in which the home blocks whose information entropy is greater than the threshold are dense as a target area, the region in which the home blocks whose information entropy is greater than the threshold are dense being a visually significant region; and
a video conversion module, configured to segment the target area and convert the target area into the output video.
6. The device according to claim 5, characterized by further comprising:
a visually notable block determining module, configured to compare the information entropy of each home block with the threshold, wherein if the information entropy is greater than the threshold, the home block is a visually notable block; and
an object block obtaining module, configured to filter the visually notable blocks to obtain object blocks.
7. The device according to claim 6, characterized in that the target area determining module comprises:
a row determination unit, configured to calculate the number of object blocks in each row and determine the rows in which object blocks exist;
a column determination unit, configured to calculate, within the rows in which object blocks exist, the number of object blocks in each column and determine the columns in which object blocks exist; and
an intersection region determination unit, configured to take the region where the rows in which object blocks exist intersect the columns in which object blocks exist as the target area.
8. The device according to claim 5, characterized in that the video conversion module comprises:
a video effect determination unit, configured to obtain parameters of the target area and parameters of the output video, and determine an output video effect according to the parameters of the target area and the parameters of the output video;
a scaling computing unit, configured to calculate a scaling ratio according to the parameters of the target area, the parameters of the output video and the output video effect;
a region-to-be-segmented generation unit, configured to scale the target area according to the scaling ratio to generate a region to be segmented; and
an output video generation unit, configured to segment the region to be segmented and sequentially generate the output video from the segmentation.
CN201510549518.5A 2015-08-31 2015-08-31 The method and apparatus that a kind of picture is converted to output video Active CN105163043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510549518.5A CN105163043B (en) 2015-08-31 2015-08-31 The method and apparatus that a kind of picture is converted to output video


Publications (2)

Publication Number Publication Date
CN105163043A CN105163043A (en) 2015-12-16
CN105163043B true CN105163043B (en) 2018-04-13

Family

ID=54803785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510549518.5A Active CN105163043B (en) 2015-08-31 2015-08-31 The method and apparatus that a kind of picture is converted to output video

Country Status (1)

Country Link
CN (1) CN105163043B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111479070B (en) * 2019-01-24 2022-02-01 杭州海康机器人技术有限公司 Image brightness determination method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101242474A (en) * 2007-02-09 2008-08-13 中国科学院计算技术研究所 A dynamic video browse method for phone on small-size screen
CN101447078A (en) * 2008-12-10 2009-06-03 东软集团股份有限公司 Method for obstacle segmentation and device thereof
JP2010093452A (en) * 2008-10-06 2010-04-22 Toshiba Corp Video server, signal conversion circuit and signal converting method
CN102663391A (en) * 2012-02-27 2012-09-12 安科智慧城市技术(中国)有限公司 Image multifeature extraction and fusion method and system
CN104202661A (en) * 2014-09-15 2014-12-10 厦门美图之家科技有限公司 Automatic picture-to-video conversion method


Also Published As

Publication number Publication date
CN105163043A (en) 2015-12-16


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant