CN105827963B - Scene change detection method during photographing, and mobile terminal - Google Patents
Scene change detection method during photographing, and mobile terminal
- Publication number
- CN105827963B CN105827963B CN201610169640.4A CN201610169640A CN105827963B CN 105827963 B CN105827963 B CN 105827963B CN 201610169640 A CN201610169640 A CN 201610169640A CN 105827963 B CN105827963 B CN 105827963B
- Authority
- CN
- China
- Prior art keywords
- image
- pixels
- block
- matching
- local image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/68—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects
- H04N25/683—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects by defect estimation performed on the scene signal, e.g. real time or on the fly detection
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a scene change detection method during photographing, belonging to the field of communication technology, comprising: obtaining a first preview image and a second preview image acquired by a camera; determining a first local image and a second local image according to the first preview image and the second preview image; and then performing gray-value matching on the first local image and the second local image to determine whether the scene has changed; wherein the first preview image and the second preview image are Y-channel images. The method provided by the invention only needs to perform the matching operation on local regions of the preview images, which reduces the computation of image matching and improves the efficiency of scene change detection. Moreover, performing gray-value matching on Y-channel data further reduces the complexity of the matching operation and further improves the efficiency of scene change detection, solving the problem that image scene change detection is inefficient and cannot be performed quickly and in real time during photographing.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to a scene change detection method during photographing, and a mobile terminal.
Background art
During photographing or imaging on a mobile terminal, it is necessary to detect whether the scene in the images acquired by the camera of the mobile terminal has changed. For example, when taking photos or shooting video, scene changes in the acquired images often require adjusting the focal length, sensitivity and the like during image acquisition.
Currently, image scene change is detected as follows: motion vector information at corresponding positions of partial images is extracted from a continuous image sequence, and the scene change is detected according to the variation of the motion vector information; or essential attribute information of partial images, such as gradient histograms or hue histograms, is extracted from a continuous image sequence, and whether the image scene has changed is detected according to the matching degree of each attribute between the extracted images. Existing image scene detection methods are computationally intensive, which leads to low scene detection efficiency and cannot satisfy the demand of mobile terminal users for quick, real-time scene change detection during photographing.
Summary of the invention
Embodiments of the present invention provide a scene change detection method during photographing, and a mobile terminal, to solve the problems in the prior art that computing information such as motion vectors, gradient histograms or hue histograms is computationally intensive, resulting in low detection efficiency and the inability to perform scene change detection quickly and in real time during photographing.
In a first aspect, an embodiment of the present invention provides a scene change detection method during photographing, applied to a mobile terminal having a camera, the scene change detection method during photographing comprising:
obtaining a first preview image and a second preview image acquired by the camera;
determining a first local image and a second local image according to the first preview image and the second preview image;
performing gray-value matching on the first local image and the second local image to determine whether the scene has changed;
wherein the first preview image and the second preview image are Y-channel images.
In a second aspect, an embodiment of the present invention also provides a mobile terminal, including a camera, the mobile terminal further comprising:
an image obtaining module, configured to obtain a first preview image and a second preview image acquired by the camera;
an image determining module, configured to determine a first local image and a second local image according to the first preview image and the second preview image obtained by the image obtaining module;
a scene change determining module, configured to perform gray-value matching on the first local image and the second local image determined by the image determining module, to determine whether the scene has changed;
wherein the first preview image and the second preview image are Y-channel images.
In this way, in the embodiments of the present invention, a first preview image and a second preview image acquired by the camera are obtained; a first local image and a second local image are determined according to the first preview image and the second preview image; gray-value matching is then performed on the first local image and the second local image to determine whether the scene has changed; wherein the first preview image and the second preview image are Y-channel images. Compared with the prior art, the embodiments of the present invention only need to perform the matching operation on local regions of the preview images, which reduces the computation of image matching and improves the efficiency of scene change detection. Moreover, performing gray-value matching on Y-channel data further reduces the complexity of the matching operation and improves the efficiency of scene change detection, so that scene change detection can be performed quickly and in real time during photo preview.
Brief description of the drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without any creative effort.
Fig. 1 is a flowchart of the scene change detection method of Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of pixel block division in the scene change detection method of Embodiment 1 of the present invention;
Fig. 3 is a flowchart of performing gray-value matching on the local images in the scene change detection method of Embodiment 1 of the present invention;
Fig. 4 is a first structural diagram of the mobile terminal of Embodiment 2 of the present invention;
Fig. 5 is a second structural diagram of the mobile terminal of Embodiment 2 of the present invention;
Fig. 6 is a structural diagram of the mobile terminal of Embodiment 3 of the present invention;
Fig. 7 is a structural diagram of the mobile terminal of Embodiment 4 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Embodiment 1:
This embodiment provides a scene change detection method during photographing, applied to a mobile terminal having a camera. As shown in Fig. 1, the method comprises steps 101 to 103.
Step 101: obtain a first preview image and a second preview image acquired by the camera.
In the embodiment of the present invention, the image data is obtained from the raw image acquired by a smartphone after image processing such as dead pixel removal, gamma correction, color correction, color enhancement and denoising, yielding image data in YUV format, from which the Y-channel image data is extracted for matching. The first preview image and the second preview image are Y-channel images.
The technical solution of the present invention is described in detail below, taking as an example the application scenario of focusing during photographing on a smartphone.
After the camera application is started on the smartphone, the preview mode starts to display the acquired images, and the terminal then auto-focuses on an object in the acquired image, or triggers manual focusing when the user clicks on a person or object in the image. During preview, the photographed object often moves out of the focal range; only by detecting scene changes in the acquired images in real time during preview can the focal length be adjusted in time.
In a specific implementation of the present invention, a first preview image and a second preview image to be detected are first selected, wherein the first preview image and the second preview image are selected from the image sequence acquired by the camera, and the frame number of the first preview image is smaller than that of the second preview image. In a specific implementation, for a still image, the first preview frame after focusing can be selected as the first preview image for judging scene changes, and the current frame subsequently acquired during preview is selected in real time as the second preview image; whether the image scene has changed is judged by comparing the content of the second preview image with that of the first preview image, so that the camera application can further adjust the focal length in time according to the judgment result. Usually, after focusing starts, the first frame of the preview represents the scene the user wants to shoot, so this embodiment performs image matching between the first frame of the preview and the current frame of the preview. For application scenarios where the content changes faster than in still photographing, such as video shooting, the number of image frames between the selected second preview image and the first preview image is less than a preset number. In a specific implementation, the number of image frames between the second preview image and the first preview image may be 0, and the preset number is set according to the detection requirements of the application scenario. Performing image matching on two preview frames separated by fewer than the preset number of frames makes it possible to detect scene changes in time in dynamic scenes.
Step 102: determine a first local image and a second local image according to the first preview image and the second preview image.
In a specific implementation, overly large preview image data increases the time of the matching operation. In order to reduce the computation of image matching and improve the efficiency of scene detection, preferably, partial image data of the first preview image and the second preview image is selected for matching. Therefore, the first local image and the second local image are first determined according to the first preview image and the second preview image. The first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, wherein the preset size is c × d pixels, and c and d are positive integers. As shown in Fig. 2, taking a preview image 201 acquired by a smartphone as an example, with the upper-left corner of the image as the coordinate origin, a rectangular region 202 centered on the center pixel p of the image is determined, the width of the rectangular region being d pixels and the height being c pixels, where c and d are positive integers, d is less than the pixel width of the preview image, and c is less than the pixel height of the preview image. Preferably, according to the resolution of images acquired by smartphones in the prior art and the timeliness requirements of scene detection, d is set to 480 and c to 640, and the image data in the 640 × 480 central region of the first preview image and the second preview image is selected for scene change detection, which is not only suitable for most application scenarios but also accelerates the computation of image scene detection.
In other embodiments of the present invention, the first local image of the first preview image and the second local image of the second preview image may also be determined according to the user's manual setting. Specifically, the first local image is the image of a region specified by the user in the first preview image, and the second local image is the image of the region in the second preview image at the position corresponding to the first local image. For example, a touch gesture of the user on the first preview image is detected, a user-specified region is determined according to the touch gesture, and the first local image is the image of the user-specified region in the first preview image, whose position and size are determined according to the touch gesture; in a specific implementation, the touch gesture may be a two-point slide, a three-point slide or the like. For another example, a click operation of the user on the first preview image is detected, a region of a preset size centered on the position of the click operation is determined, and the first local image is the image of this preset-size region in the first preview image. In a specific implementation, a configuration interface may also be provided to guide the user to set the specified region for scene change detection, and the image of the specified region in the first preview image is then selected as the first local image. The present invention does not limit the specific method for setting the specified region. Matching the region image specified by the user is more targeted and can improve the accuracy of scene change detection.
After the first local image is determined according to the region specified by the user, the second local image is the image of the region in the second preview image at the position corresponding to the first local image.
In a specific implementation, the specified region may be a rectangle, or another regular shape such as a circle. For convenience of computation, the following embodiments illustrate the technical solution with the specified region being a rectangular region.
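The central-region selection of step 102 can be sketched as follows. This is an illustrative sketch only, assuming the Y-channel plane is held in a NumPy array; the function name `center_crop` is not from the patent.

```python
import numpy as np

def center_crop(y_plane: np.ndarray, c: int = 640, d: int = 480) -> np.ndarray:
    # Return the c (height) by d (width) region centered on the image's
    # center pixel, i.e. the "local image" of the preset size c × d.
    h, w = y_plane.shape
    if c > h or d > w:
        raise ValueError("preset size exceeds the preview image size")
    top = (h - c) // 2
    left = (w - d) // 2
    return y_plane[top:top + c, left:left + d]
```

Applied to both preview frames, this yields the first and second local images at corresponding positions, as the method requires.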
Step 103: perform gray-value matching on the first local image and the second local image to determine whether the scene has changed.
In a specific implementation, the present invention performs the image comparison with the pixel block as the basic unit. As shown in Fig. 3, the step of performing gray-value matching on the first local image and the second local image to determine whether the scene has changed comprises steps 1031 to 1037.
Step 1031: divide the first local image and the second local image into a × b pixel blocks, respectively.
Wherein, multiple a × b pixel blocks are spaced between the selected a × b pixel blocks in the first local image and the second local image, and a and b are positive integers.
In step 1031, the first local image and the second local image are each divided into a × b pixel blocks, wherein multiple a × b pixel blocks are spaced between the selected a × b pixel blocks in the first local image and the second local image. Here, a and b are positive integers and may be the same or different. Image optimization processing usually requires hardware acceleration; when performing hardware register reads and writes, register operations are in units of bytes, and one byte corresponds to 8 bits. In order to further improve the computation speed, in the embodiment of the present invention, considering both the computation amount and subsequent optimization, a and b are chosen as positive integers divisible by 8, for example a = b = 8, so that the size of a pixel block is a × b = 64 pixels.
A specific implementation of dividing the first local image and the second local image into a × b pixel blocks, with multiple a × b pixel blocks spaced between the selected a × b pixel blocks, is as follows: the first local image and the second local image are each divided into a second quantity of adjacently arranged a × b pixel blocks; the second quantity of a × b pixel blocks obtained by dividing the first local image is down-sampled according to a preset position distribution rule to obtain a first quantity of a × b pixel blocks, and the second quantity of a × b pixel blocks obtained by dividing the second local image is down-sampled according to the same preset position distribution rule to obtain a first quantity of a × b pixel blocks; wherein the first quantity is less than the second quantity. Taking a specified region of 640 × 480 and pixel blocks of 8 × 8 as an example, the determination of the first quantity of reference pixel blocks and pixel blocks to be compared in the preferred embodiment is further explained below.
As shown in Fig. 2, the first local image 202 of the first preview image 201 and the second local image 204 of the second preview image 203 are first divided into a second quantity of adjacently arranged pixel blocks of the preset size. The first local image 202 is divided into adjacently arranged 8 × 8 pixel blocks in order from left to right and from top to bottom, such as pixel blocks 2021, 2022, 2023 and 2024, obtaining 80 rows of pixel blocks with 60 adjacently arranged pixel blocks per row, i.e. 4800 pixel blocks (the second quantity is 4800). According to the same division method, the second local image 204 is divided to obtain 4800 pixel blocks, such as pixel blocks 2041, 2042, 2043 and 2044. The pixel blocks obtained by dividing the first local image correspond one-to-one in position to the pixel blocks obtained by dividing the second local image 204; for example, the positions of pixel block 2021 and pixel block 2041 correspond, and the positions of pixel block 2023 and pixel block 2043 correspond.
Then, the second quantity of pixel blocks obtained by dividing the first local image 202 is down-sampled according to a preset position distribution rule to obtain a first quantity of pixel blocks to be matched, such as pixel blocks 2021, 2023 and 2024 in Fig. 2; and the second quantity of pixel blocks obtained by dividing the second local image 204 is down-sampled according to the same preset position distribution rule to obtain a first quantity of pixel blocks to be compared, such as pixel blocks 2041, 2043 and 2044 in Fig. 2. The preset position distribution rule includes: selecting one pixel block every E pixel blocks in each row of pixel blocks, or selecting one pixel block every F pixel blocks in each column of pixel blocks, or selecting one pixel block every E pixel blocks in each row of pixel blocks and one pixel block every F pixel blocks in each column of pixel blocks. With the preset position distribution rule being: selecting one pixel block every E pixel blocks in each row of pixel blocks and one pixel block every F pixel blocks in each column of pixel blocks, the first quantity M is calculated as follows:
M = (W / ((E + 1) × a)) × (L / ((F + 1) × b))
wherein M is the number of pixel blocks to be matched, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks spaced in the width direction between selected a × b pixel blocks, F is the number of pixel blocks spaced in the length direction between selected a × b pixel blocks, a is the width of an a × b pixel block, and b is the length of an a × b pixel block; E and F may be the same or different. In a specific implementation, if E and F both take the value 3, and the preset position distribution rule is: selecting one pixel block every E pixel blocks in each row of pixel blocks and one pixel block every F pixel blocks in each column of pixel blocks, then down-sampling the 4800 8 × 8 pixel blocks obtained by dividing the first local image according to the preset position distribution rule yields 300 pixel blocks to be matched. Similarly, down-sampling the 4800 8 × 8 pixel blocks obtained by dividing the second local image according to the same preset position distribution rule yields 300 pixel blocks to be matched. In order to reduce the calculation amount of image matching without affecting the accuracy of scene detection, preferably, E and F are less than 10.
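The row/column down-sampling rule above can be sketched as follows, assuming the local image is a NumPy array; the helper name `sample_blocks` and the row-major traversal are illustrative, not from the patent.

```python
import numpy as np

def sample_blocks(local_img: np.ndarray, a: int = 8, b: int = 8,
                  E: int = 3, F: int = 3) -> list:
    # Divide local_img into a×b pixel blocks and down-sample them,
    # keeping one block every E+1 block positions along the width and
    # every F+1 block positions along the length (the preset position
    # distribution rule).
    L, W = local_img.shape  # length (rows) and width (columns)
    blocks = []
    for top in range(0, L - b + 1, b * (F + 1)):
        for left in range(0, W - a + 1, a * (E + 1)):
            blocks.append(local_img[top:top + b, left:left + a])
    return blocks
```

For a 640 × 480 local image with a = b = 8 and E = F = 3 this keeps 20 × 15 = 300 blocks, matching the worked example.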
Another implementation for determining the pixel blocks to be matched is: the first local image and the second local image are each divided into a × b pixel blocks, wherein multiple pixels are spaced between the a × b pixel blocks in the first local image and the second local image. A specific implementation is: a plane rectangular coordinate system is established with the upper-left corner of the first local image as the coordinate origin, the pixels in the first local image are down-sampled at intervals of a fifth quantity of pixels, and M a × b pixel blocks are determined with the sampled pixels as the upper-left vertex (or center point), wherein the fifth quantity is greater than a and b, and is preferably a positive integer divisible by 8. Taking the fifth quantity as 32 as an example, if a and b are both 8, dividing a 640 × 480 first local image yields 300 8 × 8 pixel blocks, i.e. M = 300. The second local image is divided into M mutually spaced pixel blocks to be matched using the same method as for the first local image.
Only two ways of determining the a × b pixel blocks to be matched are listed above; the pixel blocks to be matched may also be determined by other methods, so as to extract the pixel block data for scene change detection, which will not be repeated here. It should be understood that the above embodiments are merely intended to make the solution of the present invention easier to understand and implement, and should not be taken as limiting the present invention.
In the embodiment of the present invention, a first preview image and a second preview image acquired by the camera are obtained; a first local image and a second local image are determined according to the first preview image and the second preview image; the first local image and the second local image are then divided into a × b pixel blocks, and some of the pixel blocks are selected for gray-value matching to determine whether the scene has changed, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection.
Step 1032: calculate an image matching threshold according to the first preview image.
Before pixel block matching, an image matching threshold needs to be calculated according to the first preview image. In a specific implementation, there are two calculation methods: in the first, the matching threshold is calculated according to the average gray value of the first preview image and the size of the pixel block; in the second, the matching threshold is calculated according to the average gray value of the first preview image alone.
The first way of calculating the image matching threshold according to the first preview image comprises: averaging the gray values of all pixels of the first preview image to obtain an average gray value; obtaining the pixel size value of the a × b pixel block; and calculating the image matching threshold according to a formula in which y is the image matching threshold, size is the pixel size value of the a × b pixel block, and avg is the average gray value.
The second way of calculating the image matching threshold according to the first preview image comprises: averaging the gray values of all pixels of the first preview image to obtain an average gray value; and calculating the image matching threshold according to a formula in which y is the image matching threshold and avg is the average gray value.
The average gray value is calculated as follows:
avg = (1 / (W' × L')) × Σ_j p(j)
wherein W' is the width value of the first preview image, L' is the length value of the first preview image, p(j) is the gray value of the pixel at position j in the first preview image, and avg is the average gray value of the first preview image.
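The average-gray computation shared by both threshold methods can be sketched as follows; since the two threshold formulas themselves are not reproduced in this text, this illustrative helper (assuming a NumPy array for the Y channel) only returns avg.

```python
import numpy as np

def average_gray(preview_y: np.ndarray) -> float:
    # avg = (1 / (W' * L')) * sum_j p(j), summed over every pixel j
    # of the first preview image's Y channel.
    rows, cols = preview_y.shape
    return float(preview_y.sum()) / (rows * cols)
```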
Step 1033: calculate the number of matching operations M according to the first local image.
Before pixel block matching, the number of matching operations M also needs to be calculated according to the first local image. In a specific implementation, first the length value L and the width value W of the first local image are obtained; the number of matching operations M equals the quantity of a × b pixel blocks obtained by dividing the first local image, and is calculated as follows:
M = (W / ((E + 1) × a)) × (L / ((F + 1) × b))
wherein M is the number of matching operations, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks spaced in the width direction between selected a × b pixel blocks, F is the number of pixel blocks spaced in the length direction between selected a × b pixel blocks, a is the width of an a × b pixel block, and b is the length of an a × b pixel block; E and F may be the same or different.
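The count of matching operations can be sketched directly from the formula above; the expression here is inferred from the worked example (W = 480, L = 640, a = b = 8, E = F = 3, M = 300), so treat it as a reconstruction rather than the patent's exact equation.

```python
def matching_count(W: int, L: int, a: int = 8, b: int = 8,
                   E: int = 3, F: int = 3) -> int:
    # M = (W / ((E + 1) * a)) * (L / ((F + 1) * b)); integer division
    # is used because M counts whole pixel blocks.
    return (W // ((E + 1) * a)) * (L // ((F + 1) * b))
```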
Step 1034: determine the number of non-matching pixel blocks according to each a × b pixel block in the first local image and the second local image.
In the above step 1034, if the matching threshold is calculated in the above first way, the step of determining the number of non-matching pixel blocks according to each a × b pixel block in the first local image and the second local image comprises: calculating the total gray value of each a × b pixel block in the first local image and the second local image, respectively; computing the difference between the total gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image, to obtain total gray value differences; comparing the absolute value of each total gray value difference with the image matching threshold; and when the absolute value of a total gray value difference is greater than the image matching threshold, incrementing the number of non-matching pixel blocks by one.
If 300 pixel blocks to be matched in the first local image and 300 pixel blocks to be matched in the second local image have been determined in step 1031, then the number of matching operations calculated in step 1033 is 300, and in step 1034 the total gray values of the 300 pixel blocks of the first local image and the total gray values of the 300 pixel blocks of the second local image are calculated; then, the difference between the total gray value of each of the 300 pixel blocks in the first local image and the total gray value of the pixel block at the corresponding position in the second local image is calculated, obtaining the total gray value differences.
Let G1(i) denote the total gray value of pixel block i in the first local image and G2(i) denote the total gray value of pixel block i in the second local image, where i is the position identifier of the pixel block; G1(i) and G2(i) respectively denote the total gray values of the pixel blocks at corresponding positions in the first local image and the second local image. Computing the difference between the total gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image to obtain the total gray value differences is specifically: using the formula D(i) = G1(i) − G2(i) to calculate the difference between the total gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image. If it is determined in step 1031 that the first local image has 300 a × b pixel blocks, i.e. the number of matching operations determined in step 1033 is M = 300, then 300 total gray value differences D(i) are obtained, where 0 < i ≤ 300. Then, the absolute value of each of the M total gray value differences is compared with the image matching threshold; when the absolute value of a total gray value difference is greater than the image matching threshold, the number of non-matching pixel blocks is incremented by one, and finally the number of non-matching pixel blocks is obtained.
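The first comparison variant (total gray values) can be sketched as follows; this is a minimal illustrative sketch in which the threshold y is passed in as a parameter, since its exact formula is not reproduced in this text, and the block lists are assumed to hold NumPy arrays at corresponding positions.

```python
import numpy as np

def count_non_matching(blocks1, blocks2, threshold):
    # D(i) = G1(i) - G2(i), where G1/G2 are total gray values of the
    # blocks at corresponding positions; a block pair is non-matching
    # when |D(i)| exceeds the image matching threshold y.
    count = 0
    for b1, b2 in zip(blocks1, blocks2):
        d = int(b1.sum()) - int(b2.sum())
        if abs(d) > threshold:
            count += 1
    return count
```

The second variant described below differs only in comparing per-block average gray values instead of totals.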
If the matching threshold is calculated in the second way described above, then in step 1034 the step of determining the number of non-matching pixel blocks according to each a×b pixel block in the first partial image and the second partial image comprises: calculating the average gray value of each a×b pixel block in the first partial image and in the second partial image respectively; performing a difference operation on the average gray values of the a×b pixel blocks at corresponding positions in the first partial image and the second partial image to obtain average gray value differences; comparing the absolute value of each average gray value difference with the image matching threshold; and, when the absolute value of an average gray value difference is greater than the image matching threshold, incrementing the count of non-matching pixel blocks by one. If 300 pixel blocks to be matched are determined in the first partial image and 300 in the second partial image in step 1031, then the matching times calculated in step 1033 is 300, and in step 1034 the average gray values of the 300 pixel blocks of the first partial image and of the 300 pixel blocks of the second partial image are calculated respectively; then the difference between the average gray values of the pixel blocks at corresponding positions in the two partial images is calculated, giving the average gray value differences.
Let Ḡ1(i) denote the average gray value of pixel block i in the first partial image and Ḡ2(i) denote the average gray value of pixel block i in the second partial image, where i is the position identifier of a pixel block, so that Ḡ1(i) and Ḡ2(i) refer to pixel blocks at the same position in the two partial images. A difference operation is performed on the average gray values of the a×b pixel blocks at corresponding positions in the first partial image and the second partial image, specifically using the formula D(i) = Ḡ1(i) − Ḡ2(i). If it is determined in step 1031 that there are 300 a×b pixel blocks in the first partial image, i.e., the matching times M = 300 determined in step 1033, then 300 average gray value differences D(i) are obtained, where 0 < i ≤ 300. Then, the absolute values of the M average gray value differences are compared with the image matching threshold one by one; whenever the absolute value of an average gray value difference is greater than the image matching threshold, the count of non-matching pixel blocks is incremented by one. Finally, the number of non-matching pixel blocks is obtained.
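The average-gray variant differs from the total-gray variant only in normalizing by the block area before comparing. A sketch under the same assumptions as above (block size and threshold are illustrative):

```python
import numpy as np

def count_non_matching_blocks_avg(img1, img2, block=8, threshold=8):
    """Average-gray variant: compare per-block mean differences to the threshold.

    img1, img2: equally sized 2-D uint8 arrays (Y-channel partial images).
    """
    h, w = img1.shape
    count = 0
    for top in range(0, h - block + 1, block):
        for left in range(0, w - block + 1, block):
            m1 = img1[top:top + block, left:left + block].mean()  # avg gray of block i
            m2 = img2[top:top + block, left:left + block].mean()
            if abs(m1 - m2) > threshold:  # |D(i)| against the matching threshold
                count += 1
    return count
```

Because the mean is the total divided by the constant block area, a per-block average threshold corresponds to a total-gray threshold scaled by size.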
Step 1035: judge whether the number of non-matching pixel blocks is greater than M·P, the matching times M multiplied by a preset ratio P; if so, execute step 1036, and if not, execute step 1037.
Finally, the matching result is judged further: if the number of non-matching pixel blocks exceeds the preset ratio of the matching times, it is determined that the scene of the second preview image has changed relative to the first preview image, and step 1036 is executed; otherwise, it is determined that the scene of the second preview image is consistent with that of the first preview image, and step 1037 is executed. The preset ratio value P lies between 0 and 1; the closer P is to 1, the larger a scene change must be before it is detected. To guarantee the sensitivity of scene detection, the preset ratio value used in the present embodiment is preferably 1/3.
Step 1036: when the number of non-matching pixel blocks is greater than M·P, it is determined that the scene has changed.
When the number of non-matching pixel blocks is greater than M·P, the non-matching pixel blocks already exceed the preset ratio of the total number of pixel blocks, so it is determined that the scene of the first preview image and the second preview image has changed. Taking matching times M = 300 as an example, 300 pixel blocks need to be matched in the second preview image; if the number of non-matching pixel blocks exceeds 100, it is determined that the scene of the first preview image and the second preview image has changed.
Step 1037: when the number of non-matching pixel blocks is less than or equal to M·P, it is determined that the scene has not changed.
When the number of non-matching pixel blocks is less than or equal to M·P, the non-matching pixel blocks do not exceed the preset ratio of the total number of pixel blocks, so it is determined that the scene of the first preview image is consistent with that of the second preview image. Again taking matching times M = 300 as an example, 300 pixel blocks need to be matched in the second preview image; if the number of non-matching pixel blocks is less than or equal to 100, it is determined that the scene of the first preview image and the second preview image has not changed.
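The decision of steps 1035–1037 reduces to a single comparison; the default ratio of 1/3 below follows the worked example of M = 300 with a cutoff of 100:

```python
def scene_changed(non_matching: int, matching_times: int,
                  preset_ratio: float = 1 / 3) -> bool:
    """Return True when the non-matching block count exceeds M * P (steps 1035-1037)."""
    return non_matching > matching_times * preset_ratio
```

For example, with M = 300, a count of 101 non-matching blocks reports a scene change while a count of 100 does not.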
In the scene change detection method during photographing of the embodiment of the present invention, the first preview image and the second preview image collected by the camera are obtained; the first partial image and the second partial image are determined according to the first preview image and the second preview image; gray value matching is then performed on the first partial image and the second partial image to determine whether the scene has changed; the first preview image and the second preview image are Y channel images. Compared with the prior art, the embodiment of the present invention only needs to perform the matching operation on partial images of the preview images, which reduces the computation of image matching and improves the efficiency of scene change detection. Moreover, gray value matching is performed using Y channel data, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection, so that scene change detection can be performed quickly and in real time in the photographing preview. By dynamically calculating the gray threshold, the matching threshold for scene detection is adjusted adaptively, which improves the accuracy of scene change detection.
Embodiment two:
Correspondingly, another embodiment of the present invention discloses a mobile terminal 400. The mobile terminal 400 includes a camera (not shown) and, as shown in Figure 4, further includes:
an image acquisition module 410, configured to obtain the first preview image and the second preview image collected by the camera;
an image determining module 420, configured to determine the first partial image and the second partial image according to the first preview image and the second preview image obtained by the image acquisition module 410; and
a scene change determining module 430, configured to perform gray value matching on the first partial image and the second partial image determined by the image determining module 420, to determine whether the scene has changed.
The image data obtained in the embodiment of the present invention is image data in YUV format, obtained after the original image collected by the smartphone undergoes image processing such as dead pixel removal, gamma correction, color correction, color enhancement and denoising; the Y channel image data is extracted for matching. The first preview image and the second preview image are Y channel images.
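For a planar YUV frame, extracting the Y channel amounts to taking the first width×height bytes. A minimal sketch; the I420 planar layout is an assumption here, since actual preview formats vary by device:

```python
import numpy as np

def extract_y_plane(yuv_frame: bytes, width: int, height: int) -> np.ndarray:
    """Return the Y (luma) plane of a planar I420 frame as an H x W uint8 array.

    In I420 the Y plane (width*height bytes) precedes the subsampled U and V planes,
    so the luma data is simply the leading width*height bytes of the buffer.
    """
    y = np.frombuffer(yuv_frame, dtype=np.uint8, count=width * height)
    return y.reshape(height, width)
```

The matching described above then operates directly on this 2-D luma array, without touching the chroma planes.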
The mobile terminal 400 of the embodiment of the present invention obtains the first preview image and the second preview image collected by the camera; determines the first partial image and the second partial image according to the first preview image and the second preview image; and then performs gray value matching on the first partial image and the second partial image to determine whether the scene has changed, the first preview image and the second preview image being Y channel images. Compared with the prior art, the embodiment of the present invention only needs to perform the matching operation on partial images of the preview images, which reduces the computation of image matching and improves the efficiency of scene change detection. Moreover, gray value matching is performed using Y channel data, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection, so that scene change detection can be performed quickly and in real time in the photographing preview.
In specific implementation, too much preview image data increases the time of the matching operation. To reduce the computation of image matching and improve the efficiency of scene detection, partial image data of the first preview image and the second preview image is preferably selected for matching. In one way of selecting the partial image of the preview image, the first partial image is an image region of a preset size centered on the center pixel of the first preview image, and the second partial image is an image region of the preset size centered on the center pixel of the second preview image, the preset size being a c×d pixel block, where c and d are positive integers. Selecting the image data of the region at the image center for scene change detection not only suits most application scenes but also accelerates the calculation of image scene detection.
In another way of selecting the partial image of the preview image, the first partial image is an area image specified by the user in the first preview image, and the second partial image is the image of the region at the corresponding position in the second preview image. Matching the area image specified by the user is more targeted and can improve the accuracy of scene change detection.
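The center-region selection can be sketched as a simple crop around the center pixel (the even-split rounding is an illustrative choice):

```python
import numpy as np

def central_region(img: np.ndarray, c: int, d: int) -> np.ndarray:
    """Crop a c x d region centered on the center pixel of a 2-D image."""
    h, w = img.shape
    top = (h - c) // 2
    left = (w - d) // 2
    return img[top:top + c, left:left + d]
```

The user-specified variant would replace the computed `top`/`left` with the coordinates of the region the user touched on the preview.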
In specific implementation, as shown in Figure 5, the scene change determining module 430 further comprises:
a pixel block division unit 4301, configured to divide the first partial image and the second partial image determined by the image determining module 420 into a×b pixel blocks respectively;
a matching threshold computing unit 4302, configured to calculate the image matching threshold according to the first preview image obtained by the image acquisition module 410;
a matching times computing unit 4303, configured to calculate the matching times M according to the first partial image determined by the image determining module 420;
a matching unit 4304, configured to determine the number of non-matching pixel blocks according to each a×b pixel block in the first partial image and the second partial image; and
a judging determination unit 4305, configured to determine that the scene has changed when the number of non-matching pixel blocks determined by the matching unit 4304 is greater than M·P, the matching times M multiplied by a preset ratio P, and to determine that the scene has not changed when the number of non-matching pixel blocks determined by the matching unit 4304 is less than or equal to M·P;
wherein multiple a×b pixel blocks are spaced between the a×b pixel blocks in the first partial image and the second partial image, and a and b are positive integers.
For the specific implementation of the scene change determining module 430 and the units it includes, refer to the method embodiment; details are not repeated here.
By selecting some of the pixel blocks in the first partial image and the second partial image for matching to judge the scene change, the computation of image matching is effectively reduced and the efficiency of scene detection is improved.
In a specific embodiment, the matching threshold computing unit 4302 is further configured to: average the gray values of all pixels of the first preview image to obtain the average gray value; obtain the pixel size value of the a×b pixel block; and calculate the image matching threshold y according to a formula in terms of size and avg, where y is the image matching threshold, size is the pixel size value of the a×b pixel block, and avg is the average gray value.
Correspondingly, the matching unit 4304 is further configured to: calculate the total gray value of each a×b pixel block in the first partial image and in the second partial image respectively; perform a difference operation on the total gray values of the a×b pixel blocks at corresponding positions in the first partial image and the second partial image to obtain total gray value differences; compare the absolute value of each total gray value difference with the image matching threshold; and, when the absolute value of a total gray value difference is greater than the image matching threshold, increment the count of non-matching pixel blocks by one.
In another specific embodiment of the present invention, the matching threshold computing unit 4302 is further configured to: average the gray values of all pixels of the first preview image to obtain the average gray value, and calculate the image matching threshold y according to a formula in terms of avg, where y is the image matching threshold and avg is the average gray value. Correspondingly, the matching unit 4304 is further configured to: calculate the average gray value of each a×b pixel block in the first partial image and in the second partial image respectively; perform a difference operation on the average gray values of the a×b pixel blocks at corresponding positions in the first partial image and the second partial image to obtain average gray value differences; compare the absolute value of each average gray value difference with the image matching threshold; and, when the absolute value of an average gray value difference is greater than the image matching threshold, increment the count of non-matching pixel blocks by one.
The embodiment of the present invention dynamically calculates the gray threshold and adaptively adjusts the matching threshold of scene detection, which can improve the accuracy of scene change detection.
In specific implementation, the pixel block division unit 4301 may divide the first partial image and the second partial image determined by the image determining module 420 into a×b pixel blocks as follows: divide the first partial image and the second partial image each into a second quantity of adjacently arranged a×b pixel blocks; down-sample the second quantity of a×b pixel blocks divided from the first partial image according to a preset position distribution rule to obtain a first quantity of a×b pixel blocks; and down-sample the second quantity of a×b pixel blocks divided from the second partial image according to the same preset position distribution rule to obtain a first quantity of a×b pixel blocks, the first quantity being smaller than the second quantity. The matching times equals the first quantity. The matching times computing unit 4303 is further configured to:
obtain the length value and the width value of the first partial image; and
calculate the matching times M according to the formula M = ⌊W/((E+1)·a)⌋ × ⌊L/((F+1)·b)⌋, where M is the matching times, W is the width value of the first partial image, L is the length value of the first partial image, E is the number of pixel blocks spaced in the width direction between the a×b pixel blocks, F is the number of pixel blocks spaced in the length direction between the a×b pixel blocks, a is the width of the a×b pixel block, and b is the length of the a×b pixel block.
For different pixel block division methods the specific calculation of the matching times may differ; this is not repeated here.
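Under the reading that E blocks of width a lie between consecutive sampled blocks across the width, and F blocks of length b along the length, the matching times can be sketched as below. The exact formula in the original filing is an image, so this reconstruction from the stated parameter definitions is an assumption:

```python
def matching_times(W: int, L: int, a: int, b: int, E: int, F: int) -> int:
    """Number of sampled a x b blocks when E (resp. F) whole blocks are skipped
    between consecutive sampled blocks along the width (resp. length)."""
    return (W // ((E + 1) * a)) * (L // ((F + 1) * b))
```

For a 160×120 partial image with 8×8 blocks and no skipping (E = F = 0) this gives 20×15 = 300 blocks, matching the worked example of M = 300; skipping one block in each direction (E = F = 1) reduces it to 70.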
Through the above modules, the mobile terminal of the embodiment of the present invention obtains the first preview image and the second preview image collected by the camera; determines the first partial image and the second partial image according to the first preview image and the second preview image; then divides the first partial image and the second partial image into a×b pixel blocks and selects some of the pixel blocks for gray value matching to determine whether the scene has changed, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection.
Embodiment three:
Figure 6 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 600 shown in Figure 6 includes: at least one processor 601, a memory 602, at least one network interface 604, a user interface 603, and a photographing component 606, the photographing component 606 including a camera. The various components in the mobile terminal 600 are coupled through a bus system 605. It can be understood that the bus system 605 is used to realize the connection and communication between these components. In addition to a data bus, the bus system 605 also includes a power bus, a control bus and a status signal bus. For clarity of explanation, however, the various buses are all labeled as the bus system 605 in Figure 6.
The user interface 603 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch-sensitive plate, a touch screen or a trackpad, etc.).
It can be understood that the memory 602 in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of exemplary but not restrictive illustration, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM) and direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 602 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
In some embodiments, the memory 602 stores the following elements, executable modules or data structures, or subsets or supersets thereof: an operating system 6021 and application programs 6022.
The operating system 6021 includes various system programs, such as a framework layer, a core library layer and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 6022 include various application programs, such as a media player (Media Player), a browser (Browser) and an input method, for realizing various application services. A program implementing the method of the embodiment of the present invention may be included in the application programs 6022.
In the embodiment of the present invention, the method is carried out by calling programs or instructions stored in the memory 602, specifically programs or instructions stored in the application programs 6022. The operation of the user on an application program is detected through the touch screen in the user interface 603, for example a touch gesture with which the user sets the specified area image. The processor 601 is configured to obtain the first preview image and the second preview image collected by the camera; determine the first partial image and the second partial image according to the first preview image and the second preview image; and perform gray value matching on the first partial image and the second partial image to determine whether the scene has changed, the first preview image and the second preview image being Y channel images.
The methods disclosed in the embodiments of the present invention may be applied to, or realized by, the processor 601. The processor 601 may be an integrated circuit chip with signal processing capability. During realization, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in this field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory 602; the processor 601 reads the information in the memory 602 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein may be realized with hardware, software, firmware, middleware, microcode or a combination thereof. For hardware realization, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or combinations thereof.
For software realization, the techniques described herein may be realized by modules (such as processes and functions) that execute the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be realized in the processor or outside the processor.
The processor 601 is further configured to: divide the first partial image and the second partial image into a×b pixel blocks respectively; calculate the image matching threshold according to the first preview image; calculate the matching times M according to the first partial image; determine the number of non-matching pixel blocks according to each a×b pixel block in the first partial image and the second partial image; determine that the scene has changed when the number of non-matching pixel blocks is greater than M·P, the matching times M multiplied by a preset ratio P; and determine that the scene has not changed when the number of non-matching pixel blocks is less than or equal to M·P; wherein multiple a×b pixel blocks are spaced between the a×b pixel blocks in the first partial image and the second partial image, and a and b are positive integers.
Optionally, the processor 601 is configured to: average the gray values of all pixels of the first preview image to obtain the average gray value; obtain the pixel size value of the a×b pixel block; and calculate the image matching threshold y according to a formula in terms of size and avg, where y is the image matching threshold, size is the pixel size value of the a×b pixel block, and avg is the average gray value.
Correspondingly, the processor 601 is further configured to: calculate the total gray value of each a×b pixel block in the first partial image and in the second partial image respectively; perform a difference operation on the total gray values of the a×b pixel blocks at corresponding positions in the first partial image and the second partial image to obtain total gray value differences; compare the absolute value of each total gray value difference with the image matching threshold; and, when the absolute value of a total gray value difference is greater than the image matching threshold, increment the count of non-matching pixel blocks by one.
Optionally, the processor 601 is configured to: average the gray values of all pixels of the first preview image to obtain the average gray value, and calculate the image matching threshold y according to a formula in terms of avg, where y is the image matching threshold and avg is the average gray value.
Correspondingly, the processor 601 is further configured to: calculate the average gray value of each a×b pixel block in the first partial image and in the second partial image respectively; perform a difference operation on the average gray values of the a×b pixel blocks at corresponding positions in the first partial image and the second partial image to obtain average gray value differences; compare the absolute value of each average gray value difference with the image matching threshold; and, when the absolute value of an average gray value difference is greater than the image matching threshold, increment the count of non-matching pixel blocks by one.
Optionally, the processor 601 is configured to: obtain the length value and the width value of the first partial image, and calculate the matching times M according to the formula M = ⌊W/((E+1)·a)⌋ × ⌊L/((F+1)·b)⌋, where M is the matching times, W is the width value of the first partial image, L is the length value of the first partial image, E is the number of pixel blocks spaced in the width direction between the a×b pixel blocks, F is the number of pixel blocks spaced in the length direction between the a×b pixel blocks, a is the width of the a×b pixel block, and b is the length of the a×b pixel block.
Optionally, the first partial image is an image region of a preset size centered on the center pixel of the first preview image, and the second partial image is an image region of the preset size centered on the center pixel of the second preview image, the preset size being a c×d pixel block, where c and d are positive integers.
Optionally, the first partial image is an area image specified by the user in the first preview image, and the second partial image is the image of the region at the position corresponding to the first partial image in the second preview image.
The mobile terminal 600 can realize each process realized by the mobile terminal in the foregoing embodiments; to avoid repetition, details are not repeated here.
Through the above modules, the mobile terminal 600 of the embodiment of the present invention only needs to perform the matching operation on partial images of the preview images, which reduces the computation of image matching and improves the efficiency of scene change detection. Moreover, gray value matching is performed using Y channel data, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection, so that scene change detection can be performed quickly and in real time in the photographing preview. By dynamically calculating the gray threshold, the matching threshold for scene detection is adjusted adaptively, which can improve the accuracy of scene change detection.
Embodiment four:
Figure 7 is a structural schematic diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal in Figure 7 may be a smartphone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or the like.
The mobile terminal in Figure 7 includes a radio frequency (Radio Frequency, RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a processor 760, a photographing component 750, an audio circuit 770, a WiFi (Wireless Fidelity) module 780 and a power supply 790, the photographing component 750 including a camera.
The input unit 730 may be used to receive number or character information input by the user and to generate signal input related to the user settings and function control of the mobile terminal. Specifically, in the embodiment of the present invention, the input unit 730 may include a touch panel 731. The touch panel 731, also referred to as a touch screen, collects the touch operations of the user on or near it (for example, operations by the user on the touch panel 731 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. Optionally, the touch panel 731 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 760, and can receive and execute the commands sent by the processor 760. In addition, the touch panel 731 may be realized in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 731, the input unit 730 may also include other input devices 732, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick.
The display unit 740 may be used to display information input by the user or information provided to the user and the various menu interfaces of the mobile terminal 700. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of an LCD, an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
It should be noted that the touch panel 731 may cover the display panel 741 to form a touch display screen. After the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 760 to determine the type of the touch event, and the processor 760 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application program interface display area and a common control display area. The arrangement of the two display areas is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application program interface display area may be used to display the interfaces of application programs. Each interface may contain interface elements such as the icons of at least one application program and/or widget desktop controls. The application program interface display area may also be an empty interface containing no content. The common control display area is used to display controls with a high utilization rate, for example, application program icons such as a settings button, an interface number, a scroll bar and a phone book icon.
The processor 760 is the control center of the mobile terminal 700. It uses various interfaces and lines to connect the various parts of the entire smartphone, and executes the various functions of the mobile terminal 700 and processes data by running or executing the software programs and/or modules stored in the first memory 721 and calling the data stored in the second memory 722, thereby monitoring the mobile terminal 700 as a whole. Optionally, the processor 760 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 721 and/or the data stored in the second memory 722, the processor 760 is configured to obtain the first preview image and the second preview image collected by the camera; determine the first partial image and the second partial image according to the first preview image and the second preview image; and perform gray value matching on the first partial image and the second partial image to determine whether the scene has changed, the first preview image and the second preview image being Y channel images.
The processor 760 is further configured to: divide the first local image and the second local image into a × b pixel blocks respectively; calculate an image matching threshold according to the first preview image; calculate a matching count M according to the first local image; determine the number of non-matching pixel blocks according to each a × b pixel block in the first local image and the second local image; determine that the scene has changed when the number of non-matching pixel blocks is greater than the decision threshold; and determine that the scene has not changed when the number of non-matching pixel blocks is less than or equal to the decision threshold. The sampled a × b pixel blocks in the first local image and the second local image are separated from each other by multiple a × b pixel blocks, and a and b are positive integers.
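The sparse block-sampling and counting procedure described above can be sketched in Python. This is a minimal illustration, not the patented implementation: the exact matching-threshold formula and the decision threshold are given only as images in the source, so the default for `y` and the `matches / 2` comparison below are assumptions, and all function and parameter names are invented for illustration.

```python
import numpy as np

def scene_changed(local1, local2, a=8, b=8, step=4, y=None):
    """Sketch of block-wise gray-value matching between two local images.

    local1, local2: Y-channel local images (2-D arrays) of equal shape.
    a, b: block height/width; step: number of blocks skipped between
    sampled blocks (the method samples sparse blocks, not every block).
    y: per-block matching threshold; when not supplied, a hypothetical
    stand-in proportional to block size and mean gray value is used.
    """
    if y is None:
        y = a * b * local1.mean() / 8.0  # assumed placeholder threshold
    non_matching = 0
    matches = 0
    for i in range(0, local1.shape[0] - a + 1, a * (step + 1)):
        for j in range(0, local1.shape[1] - b + 1, b * (step + 1)):
            s1 = int(local1[i:i + a, j:j + b].sum())  # total gray value
            s2 = int(local2[i:i + a, j:j + b].sum())
            matches += 1
            if abs(s1 - s2) > y:
                non_matching += 1
    # assumed decision rule: more than half of the sampled blocks differ
    return non_matching > matches / 2
```

Because only sparse blocks are compared and only sums of Y-channel values are used, the per-frame cost stays far below a full-image comparison, which is the efficiency argument the text makes.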
Optionally, the processor 760 is configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; obtain the pixel-size value of an a × b pixel block; and calculate the image matching threshold according to the formula, where y is the image matching threshold, size is the pixel-size value of an a × b pixel block, and avg is the average gray value.
Correspondingly, the processor 760 is further configured to: calculate the total gray value of each a × b pixel block in the first local image and the second local image; subtract the total gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image to obtain a total gray-value difference; compare the absolute value of the total gray-value difference with the image matching threshold; and increment the number of non-matching pixel blocks by one whenever the absolute value of the total gray-value difference is greater than the image matching threshold.
Optionally, the processor 760 is configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; and calculate the image matching threshold according to the formula, where y is the image matching threshold and avg is the average gray value.
Correspondingly, the processor 760 is further configured to: calculate the average gray value of each a × b pixel block in the first local image and the second local image; subtract the average gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image to obtain an average gray-value difference; compare the absolute value of the average gray-value difference with the image matching threshold; and increment the number of non-matching pixel blocks by one whenever the absolute value of the average gray-value difference is greater than the image matching threshold.
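The second variant compares per-block averages rather than per-block totals. A short sketch (function name invented for illustration; the threshold y in this variant would be correspondingly smaller, since it is no longer scaled by the block size):

```python
import numpy as np

def block_matches_avg(blk1, blk2, y):
    """Return True when two corresponding a-by-b blocks match,
    using the average-gray-value variant of the comparison."""
    diff = abs(float(blk1.mean()) - float(blk2.mean()))
    return diff <= y  # within threshold: the block pair matches
```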
Optionally, the processor 760 is configured to: obtain the length value and the width value of the first local image; and calculate the matching count M according to the formula, where M is the matching count, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks by which adjacent a × b pixel blocks are spaced in the width direction, F is the number of pixel blocks by which adjacent a × b pixel blocks are spaced in the length direction, a is the width of an a × b pixel block, and b is the length of an a × b pixel block.
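The formula for M is also rendered only as an image here, so the following is a hypothetical reconstruction from the listed variables: M should count how many a × b blocks are actually sampled when E blocks are skipped in the width direction and F blocks in the length direction.

```python
def matching_count(W, L, a=8, b=8, E=4, F=4):
    """Assumed reconstruction of the matching count M.

    W, L: width and length of the first local image in pixels.
    a, b: block width and length; E, F: number of blocks skipped
    between sampled blocks in the width and length directions.
    """
    per_row = W // (a * (E + 1))  # sampled blocks across the width
    per_col = L // (b * (F + 1))  # sampled blocks along the length
    return per_row * per_col
```

Under these assumptions a 320 × 240 local image with 8 × 8 blocks and a spacing of 4 blocks in each direction yields 8 × 6 = 48 comparisons, instead of the 1200 a full block grid would require.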
Optionally, the first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, where the preset size is c × d pixel blocks and c and d are positive integers.
Optionally, the first local image is a region image specified by the user in the first preview image, and the second local image is the image of the region in the second preview image at the position corresponding to the first local image.
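The default local-image choice (a preset-size region centered on the preview's center pixel) can be sketched as a center crop. For simplicity the crop size is expressed directly in pixels here, whereas the text specifies the preset size in c × d pixel blocks; the function name is invented for illustration.

```python
def center_crop(preview, c, d):
    """Extract the local image: a c-by-d pixel region centered on the
    center pixel of the preview (the non-user-specified case)."""
    h, w = preview.shape
    top = max((h - c) // 2, 0)   # clamp in case the crop exceeds the image
    left = max((w - d) // 2, 0)
    return preview[top:top + c, left:left + d]
```

Cropping both previews with the same offsets guarantees that block (i, j) in the first local image and block (i, j) in the second cover the same scene region, which the corresponding-position comparison relies on.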
It can be seen that, through the above modules, the mobile terminal of this embodiment only needs to perform the matching operation on local images of the preview images, which reduces the computation of image matching and improves the efficiency of scene change detection. Moreover, performing gray-value matching on the Y-channel data further reduces the complexity of the matching operation, so that scene change detection can be carried out quickly and in real time during the photo preview. By dynamically calculating the gray threshold, the matching threshold for scene detection is adjusted adaptively, which improves the accuracy of scene change detection.
Those of ordinary skill in the art may appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. A skilled artisan may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the specific working process of the mobile terminal described above may refer to the corresponding process in the foregoing method embodiments, and details are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other ways of dividing them in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Further, the couplings, direct couplings, or communication connections shown or discussed between the components may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
All the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments may be referred to each other. Since the mobile terminal embodiments are basically similar to the method embodiments, they are described relatively simply, and the relevant parts may refer to the description of the method embodiments.
Claims (16)
- 1. A scene change detection method during photographing, applied to a mobile terminal with a camera, characterized in that the scene change detection method during photographing includes:
obtaining a first preview image and a second preview image acquired by the camera;
determining a first local image and a second local image according to the first preview image and the second preview image; and
performing gray-value matching on the first local image and the second local image to determine whether the scene has changed;
wherein the first preview image and the second preview image are Y-channel images;
wherein the step of performing gray-value matching on the first local image and the second local image to determine whether the scene has changed comprises:
dividing the first local image and the second local image into a × b pixel blocks respectively;
calculating an image matching threshold according to the first preview image;
calculating a matching count M according to the first local image;
determining the number of non-matching pixel blocks according to each a × b pixel block in the first local image and the second local image;
determining that the scene has changed when the number of non-matching pixel blocks is greater than the decision threshold; and
determining that the scene has not changed when the number of non-matching pixel blocks is less than or equal to the decision threshold;
wherein the a × b pixel blocks in the first local image and the second local image are separated from each other by multiple a × b pixel blocks, and a and b are positive integers.
- 2. The method according to claim 1, characterized in that the step of calculating the image matching threshold according to the first preview image comprises:
averaging the gray values of all pixels of the first preview image to obtain an average gray value;
obtaining the pixel-size value of an a × b pixel block; and
calculating the image matching threshold according to the formula, where y is the image matching threshold, size is the pixel-size value of an a × b pixel block, and avg is the average gray value.
- 3. The method according to claim 2, characterized in that the step of determining the number of non-matching pixel blocks according to each a × b pixel block in the first local image and the second local image comprises:
calculating the total gray value of each a × b pixel block in the first local image and the second local image;
subtracting the total gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image to obtain a total gray-value difference;
comparing the absolute value of the total gray-value difference with the image matching threshold; and
incrementing the number of non-matching pixel blocks by one when the absolute value of the total gray-value difference is greater than the image matching threshold.
- 4. The method according to claim 1, characterized in that the step of calculating the image matching threshold according to the first preview image comprises:
averaging the gray values of all pixels of the first preview image to obtain an average gray value; and
calculating the image matching threshold according to the formula, where y is the image matching threshold and avg is the average gray value.
- 5. The method according to claim 4, characterized in that the step of determining the number of non-matching pixel blocks according to each a × b pixel block in the first local image and the second local image comprises:
calculating the average gray value of each a × b pixel block in the first local image and the second local image;
subtracting the average gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image to obtain an average gray-value difference;
comparing the absolute value of the average gray-value difference with the image matching threshold; and
incrementing the number of non-matching pixel blocks by one when the absolute value of the average gray-value difference is greater than the image matching threshold.
- 6. The method according to claim 1, characterized in that the step of calculating the matching count M according to the first local image comprises:
obtaining the length value and the width value of the first local image; and
calculating the matching count M according to the formula, where M is the matching count, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks by which adjacent a × b pixel blocks are spaced in the width direction, F is the number of pixel blocks by which adjacent a × b pixel blocks are spaced in the length direction, a is the width of an a × b pixel block, and b is the length of an a × b pixel block.
- 7. The method according to claim 1, characterized in that the first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, where the preset size is c × d pixel blocks and c and d are positive integers.
- 8. The method according to claim 1, characterized in that the first local image is a region image specified by the user in the first preview image, and the second local image is the image of the region in the second preview image at the position corresponding to the first local image.
- 9. A mobile terminal including a camera, characterized in that the mobile terminal further includes:
an image acquisition module, configured to obtain a first preview image and a second preview image acquired by the camera;
an image determining module, configured to determine a first local image and a second local image according to the first preview image and the second preview image obtained by the image acquisition module; and
a scene change determining module, configured to perform gray-value matching on the first local image and the second local image determined by the image determining module, to determine whether the scene has changed;
wherein the first preview image and the second preview image are Y-channel images;
wherein the scene change determining module includes:
a pixel-block dividing unit, configured to divide the first local image and the second local image determined by the image determining module into a × b pixel blocks respectively;
a matching-threshold calculating unit, configured to calculate an image matching threshold according to the first preview image obtained by the image acquisition module;
a matching-count calculating unit, configured to calculate a matching count M according to the first local image determined by the image determining module;
a matching unit, configured to determine the number of non-matching pixel blocks according to each a × b pixel block in the first local image and the second local image; and
a judging unit, configured to determine that the scene has changed when the number of non-matching pixel blocks determined by the matching unit is greater than the decision threshold, and to determine that the scene has not changed when the number of non-matching pixel blocks determined by the matching unit is less than or equal to the decision threshold;
wherein the a × b pixel blocks in the first local image and the second local image are separated from each other by multiple a × b pixel blocks, and a and b are positive integers.
- 10. The mobile terminal according to claim 9, characterized in that the matching-threshold calculating unit is specifically configured to:
average the gray values of all pixels of the first preview image to obtain an average gray value;
obtain the pixel-size value of an a × b pixel block; and
calculate the image matching threshold according to the formula, where y is the image matching threshold, size is the pixel-size value of an a × b pixel block, and avg is the average gray value.
- 11. The mobile terminal according to claim 10, characterized in that the matching unit is specifically configured to:
calculate the total gray value of each a × b pixel block in the first local image and the second local image;
subtract the total gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image to obtain a total gray-value difference;
compare the absolute value of the total gray-value difference with the image matching threshold; and
increment the number of non-matching pixel blocks by one when the absolute value of the total gray-value difference is greater than the image matching threshold.
- 12. The mobile terminal according to claim 9, characterized in that the matching-threshold calculating unit is specifically configured to:
average the gray values of all pixels of the first preview image to obtain an average gray value; and
calculate the image matching threshold according to the formula, where y is the image matching threshold and avg is the average gray value.
- 13. The mobile terminal according to claim 12, characterized in that the matching unit is specifically configured to:
calculate the average gray value of each a × b pixel block in the first local image and the second local image;
subtract the average gray values of the a × b pixel blocks at corresponding positions in the first local image and the second local image to obtain an average gray-value difference;
compare the absolute value of the average gray-value difference with the image matching threshold; and
increment the number of non-matching pixel blocks by one when the absolute value of the average gray-value difference is greater than the image matching threshold.
- 14. The mobile terminal according to claim 9, characterized in that the matching-count calculating unit is specifically configured to:
obtain the length value and the width value of the first local image; and
calculate the matching count M according to the formula, where M is the matching count, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks by which adjacent a × b pixel blocks are spaced in the width direction, F is the number of pixel blocks by which adjacent a × b pixel blocks are spaced in the length direction, a is the width of an a × b pixel block, and b is the length of an a × b pixel block.
- 15. The mobile terminal according to claim 9, characterized in that the first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, where the preset size is c × d pixel blocks and c and d are positive integers.
- 16. The mobile terminal according to claim 9, characterized in that the first local image is a region image specified by the user in the first preview image, and the second local image is the image of the region in the second preview image at the position corresponding to the first local image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610169640.4A CN105827963B (en) | 2016-03-22 | 2016-03-22 | Scene-change detecting method and mobile terminal during one kind is taken pictures |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105827963A CN105827963A (en) | 2016-08-03 |
CN105827963B true CN105827963B (en) | 2019-05-17 |
Family
ID=56524926
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610169640.4A Active CN105827963B (en) | 2016-03-22 | 2016-03-22 | Scene-change detecting method and mobile terminal during one kind is taken pictures |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105827963B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106686452B (en) * | 2016-12-29 | 2020-03-27 | 北京奇艺世纪科技有限公司 | Method and device for generating dynamic picture |
CN109670492B (en) * | 2017-10-13 | 2021-03-09 | 深圳芯启航科技有限公司 | Biological characteristic information acquisition method, biological characteristic information acquisition device and terminal |
CN108030452A (en) * | 2017-11-30 | 2018-05-15 | 深圳市沃特沃德股份有限公司 | Vision sweeping robot and the method for establishing scene map |
CN110880003B (en) * | 2019-10-12 | 2023-01-17 | 中国第一汽车股份有限公司 | Image matching method and device, storage medium and automobile |
CN111654637B (en) * | 2020-07-14 | 2021-10-22 | RealMe重庆移动通信有限公司 | Focusing method, focusing device and terminal equipment |
CN113438480B (en) * | 2021-07-07 | 2022-11-11 | 北京小米移动软件有限公司 | Method, device and storage medium for judging video scene switching |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440612A (en) * | 2013-08-27 | 2013-12-11 | Huawei Technologies Co., Ltd. | Image processing method and device in GPU virtualization
CN103777865A (en) * | 2014-02-21 | 2014-05-07 | 联想(北京)有限公司 | Method, device, processor and electronic device for displaying information |
CN104519263A (en) * | 2013-09-27 | 2015-04-15 | 联想(北京)有限公司 | Method for acquiring image and electronic device |
- 2016-03-22: CN CN201610169640.4A patent/CN105827963B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440612A (en) * | 2013-08-27 | 2013-12-11 | Huawei Technologies Co., Ltd. | Image processing method and device in GPU virtualization
CN104519263A (en) * | 2013-09-27 | 2015-04-15 | 联想(北京)有限公司 | Method for acquiring image and electronic device |
CN103777865A (en) * | 2014-02-21 | 2014-05-07 | 联想(北京)有限公司 | Method, device, processor and electronic device for displaying information |
Also Published As
Publication number | Publication date |
---|---|
CN105827963A (en) | 2016-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105827963B (en) | Scene-change detecting method and mobile terminal during one kind is taken pictures | |
CN106131449B (en) | A kind of photographic method and mobile terminal | |
CN105898143B (en) | A kind of grasp shoot method and mobile terminal of moving object | |
CN106791400B (en) | A kind of image display method and mobile terminal | |
CN105872148B (en) | A kind of generation method and mobile terminal of high dynamic range images | |
CN106126108B (en) | A kind of generation method and mobile terminal of thumbnail | |
CN105827971B (en) | A kind of image processing method and mobile terminal | |
CN105847674B (en) | A kind of preview image processing method and mobile terminal based on mobile terminal | |
CN105827754B (en) | A kind of generation method and mobile terminal of high dynamic range images | |
CN107659769B (en) | A kind of image pickup method, first terminal and second terminal | |
CN110300264B (en) | Image processing method, image processing device, mobile terminal and storage medium | |
CN107231530B (en) | A kind of photographic method and mobile terminal | |
CN107155064B (en) | A kind of image pickup method and mobile terminal | |
CN105959544B (en) | A kind of image processing method and mobile terminal of mobile terminal | |
CN106101545B (en) | A kind of image processing method and mobile terminal | |
CN105827965A (en) | Image processing method based on mobile terminal and mobile terminal | |
CN106027900A (en) | Photographing method and mobile terminal | |
CN107566723B (en) | A kind of image pickup method, mobile terminal and computer readable storage medium | |
CN109691080B (en) | Image shooting method and device and terminal | |
CN107222681B (en) | A kind of processing method and mobile terminal of image data | |
CN107026982B (en) | A kind of photographic method and mobile terminal of mobile terminal | |
CN106101666B (en) | A kind of method and mobile terminal of image color reservation | |
CN107172346A (en) | A kind of weakening method and mobile terminal | |
CN107222737B (en) | A kind of processing method and mobile terminal of depth image data | |
CN106097398B (en) | A kind of detection method and mobile terminal of Moving Objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |